How to improve volatile GPU utilization in TensorFlow?

I am training a model with Keras, but the Volatile GPU-Util of both GPUs is too low.

Code block:

%%time
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

np.random.seed(seed)

model_d2v_01 = Sequential()
model_d2v_01.add(Dense(64, activation='relu', input_dim=400))
model_d2v_01.add(Dense(1, activation='sigmoid'))

model_d2v_01 = multi_gpu_model(model_d2v_01, gpus=2)
model_d2v_01.compile(loss='binary_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])

model_d2v_01.fit(train_vecs_dbow_dmm, y_train,
                 validation_data=(validation_vecs_dbow_dmm, y_validation),
                 epochs=5, batch_size=32*2, verbose=2)

How should I modify this code? Any suggestions would be appreciated.
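For context, a rough back-of-the-envelope FLOP estimate shows how little work each training step gives the GPUs with this model. (This is my own illustrative sketch, not part of the original post; it counts only the dense-layer multiply-adds and ignores biases and activations.)

```python
def dense_flops(in_dim, out_dim):
    # One multiply and one add per weight in a fully connected layer.
    return 2 * in_dim * out_dim

# The model above: Dense(64, input_dim=400) -> Dense(1)
flops_per_sample = dense_flops(400, 64) + dense_flops(64, 1)

# batch_size=32*2 in the fit() call above
flops_per_step = flops_per_sample * 32 * 2

print(flops_per_sample)  # 51328 FLOPs per sample
print(flops_per_step)    # 3284992 FLOPs (~3.3 MFLOPs) per step
```

A few MFLOPs per step is microseconds of work for a modern GPU, so most of each step is spent in Python overhead and host-to-device transfers rather than computation, which is exactly what a low Volatile GPU-Util reading reflects.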

I found that this behavior is normal. The cause is that the model has too few layers, so it is too simple and each training step gives the GPUs very little work. Utilization improves when you add more layers. For example:

%%time
np.random.seed(seed)
# with tf.device('/cpu:0'):
model_d2v_12 = Sequential()
model_d2v_12.add(Dense(512, activation='relu', input_dim=400))
model_d2v_12.add(Dense(512, activation='relu'))
model_d2v_12.add(Dense(512, activation='relu'))
model_d2v_12.add(Dense(1, activation='sigmoid'))

model_d2v_12 = multi_gpu_model(model_d2v_12, gpus=2)
model_d2v_12.compile(loss='binary_crossentropy',
                     optimizer='adam',
                     metrics=['accuracy'])

model_d2v_12.fit(train_vecs_dbow_dmm, y_train,
                 validation_data=(validation_vecs_dbow_dmm, y_validation),
                 epochs=10, batch_size=2048*2, verbose=2)
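The effect of the extra layers can be quantified with the same kind of rough FLOP estimate (again my own illustrative sketch, counting only dense-layer multiply-adds and ignoring biases and activations):

```python
def dense_flops(in_dim, out_dim):
    # One multiply and one add per weight in a fully connected layer.
    return 2 * in_dim * out_dim

# Original model: Dense(64) -> Dense(1), batch_size = 32 * 2
small = (dense_flops(400, 64) + dense_flops(64, 1)) * 32 * 2

# Deeper model: three Dense(512) layers -> Dense(1), batch_size = 2048 * 2
big = (dense_flops(400, 512) + dense_flops(512, 512)
       + dense_flops(512, 512) + dense_flops(512, 1)) * 2048 * 2

print(big // small)  # the deeper model does ~1800x more work per step
```

Note that the example above also raises batch_size from 32*2 to 2048*2, which accounts for a large share of the increase: bigger per-step compute amortizes the fixed Python and data-transfer overhead of each step, so the GPUs spend more time actually computing and Volatile GPU-Util rises.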