How to fix "OOM when allocating tensor with shape[XXX]" in TensorFlow (when training a GCN)


So... I've checked a few posts about this issue (there are probably many more I haven't checked, but I think it's reasonable to ask for help on just this one now), and I haven't found any solution that fits my situation.

This OOM error message always appears (without exception) in the second round of the training loop, and also when the training code is run again after the first run. So it may be an issue related to this post, but I'm not sure which function my problem lies in.

My NN is a GCN with two graph convolution layers, and I'm running the code on a server with several 10 GB Nvidia P102-100 GPUs. I've set the batch size to 1, but nothing changed. I'm also using Jupyter notebooks instead of running the Python script from the command line, because from the command line I can't even finish one round... By the way, does anyone know why some code can run without problems in Jupyter but hits an OOM on the command line? It seems a bit strange to me.

Update: after replacing Flatten() with GlobalMaxPool(), the error disappeared and I can run the code smoothly. But if I add one more GC layer, the error shows up in the first round. So I guess the core problem is still there.
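For reference, here is a minimal sketch of that swap (the plain Input is a hypothetical stand-in for the two GraphConv layers, and GlobalMaxPooling1D is the plain-Keras counterpart of the GlobalMaxPool layer mentioned above):

from keras.layers import Input, Flatten, GlobalMaxPooling1D, Dense
from keras.models import Model

# Stand-in for the (None, 13129, 32) output of graph_conv_2.
x_in = Input(shape=(13129, 32))

# Flatten -> Dense(512) needs a (13129*32) x 512 = 420128 x 512 weight
# matrix (~215M parameters) -- the exact shape in the OOM message.
flat_out = Dense(512, activation='relu')(Flatten()(x_in))

# Global max pooling collapses the 13129-node axis first, so the same
# Dense layer only needs a 32 x 512 matrix (~17K parameters).
pool_out = Dense(512, activation='relu')(GlobalMaxPooling1D()(x_in))

Model(x_in, flat_out).summary()   # Total params: ~215M
Model(x_in, pool_out).summary()   # Total params: ~17K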

Update 2: tried replacing tf.Tensor with tf.SparseTensor. Successful but useless. Also tried setting up the mirrored strategy as described in ML_Engine's answer, but it looks like one of the GPUs takes most of the load and OOM still shows up. Maybe it's a kind of "data parallelism" that can't solve my problem, since I've already set batch_size to 1.
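For what it's worth, the tf.SparseTensor swap described above looks roughly like this (a sketch only; the random matrix is a hypothetical stand-in for the real 13129 x 13129 adjacency input):

import numpy as np
import scipy.sparse as sp
import tensorflow as tf

# Hypothetical sparse adjacency matrix in place of the real input_1.
adj = sp.random(13129, 13129, density=1e-4, format='coo', dtype=np.float32)

# A SparseTensor stores only (indices, values, dense_shape) instead of
# all 13129**2 float32 entries (~690 MB when dense).
adj_sparse = tf.SparseTensor(
    indices=np.stack([adj.row, adj.col], axis=1).astype(np.int64),
    values=adj.data,
    dense_shape=adj.shape,
)

This saves memory on the input side but, as noted, does not touch the 420128 x 512 dense weights that actually trigger the OOM.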

Code (adapted from):

Model summary:

Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            (None, 13129, 2)     0                                            
__________________________________________________________________________________________________
input_1 (InputLayer)            (13129, 13129)       0                                            
__________________________________________________________________________________________________
graph_conv_1 (GraphConv)        (None, 13129, 32)    96          input_2[0][0]                    
                                                                 input_1[0][0]                    
__________________________________________________________________________________________________
graph_conv_2 (GraphConv)        (None, 13129, 32)    1056        graph_conv_1[0][0]               
                                                                 input_1[0][0]                    
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 420128)       0           graph_conv_2[0][0]               
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 512)          215106048   flatten_1[0][0]                  
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 1)            513         dense_1[0][0]                    
==================================================================================================
Total params: 215,107,713
Trainable params: 215,107,713
Non-trainable params: 0
__________________________________________________________________________________________________
batch size = 1
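A back-of-the-envelope check of the summary shows why dense_1 dominates (plain arithmetic, no TF needed):

# dense_1: Flatten output (13129 * 32 = 420128) fully connected to 512 units.
weights = 420128 * 512 + 512          # = 215,106,048, matching the summary
bytes_fp32 = weights * 4              # ~0.80 GiB for the weights alone
# Adam keeps two extra slots (m, v) per weight, and a gradient tensor of the
# same [420128, 512] shape is materialized as well -- roughly 4x in practice.
print(f"weights: {bytes_fp32 / 2**30:.2f} GiB")
print(f"with Adam slots + gradient: {4 * bytes_fp32 / 2**30:.2f} GiB")

That is already about a third of a 10 GB card before activations and the 13129 x 13129 adjacency input are counted, which fits the [420128,512] Adam tensor named in the error below.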
Error message (note that this message never appears in the first round after restarting the kernel and clearing the outputs):

Train on 2953 samples, validate on 739 samples
Epoch 1/1
---------------------------------------------------------------------------
ResourceExhaustedError                    Traceback (most recent call last)
<ipython-input-5-943385df49dc> in <module>()
     62     mem = psutil.virtual_memory()
     63     print("current mem " + str(round(mem.percent))+'%')
---> 64     history = model.fit(X_train,y_train,batch_size=batch_size,validation_data=validation_data,epochs=epochs,callbacks=callbacks)
     65     mem = psutil.virtual_memory()
     66     print("current mem " + str(round(mem.percent))+'%')

/public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1237                                         steps_per_epoch=steps_per_epoch,
   1238                                         validation_steps=validation_steps,
-> 1239                                         validation_freq=validation_freq)
   1240 
   1241     def evaluate(self,

/public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/keras/engine/training_arrays.py in fit_loop(model, fit_function, fit_inputs, out_labels, batch_size, epochs, verbose, callbacks, val_function, val_inputs, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq)
    194                     ins_batch[i] = ins_batch[i].toarray()
    195 
--> 196                 outs = fit_function(ins_batch)
    197                 outs = to_list(outs)
    198                 for l, o in zip(out_labels, outs):

/public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
   3290 
   3291     fetched = self._callable_fn(*array_vals,
-> 3292                                 run_metadata=self.run_metadata)
   3293     self._call_fetch_callbacks(fetched[-len(self._fetches):])
   3294     output_structure = nest.pack_sequence_as(

/public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
   1456         ret = tf_session.TF_SessionRunCallable(self._session._session,
   1457                                                self._handle, args,
-> 1458                                                run_metadata_ptr)
   1459         if run_metadata:
   1460           proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[420128,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[{{node training_1/Adam/mul_23}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[metrics_1/acc/Identity/_323]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  (1) Resource exhausted: OOM when allocating tensor with shape[420128,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[{{node training_1/Adam/mul_23}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.
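Following the hint in the message, the allocation report can be switched on like this (a sketch assuming standalone Keras on the TF 1.x graph backend; the toy model is only a stand-in for the compiled GCN):

import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

# Ask TF to dump the list of live allocations whenever an OOM is raised.
run_opts = tf.RunOptions(report_tensor_allocations_upon_oom=True)

# Toy stand-in model; in the question this would be the GCN above.
model = Sequential([Dense(1, input_shape=(4,), activation='sigmoid')])

# Keras forwards extra compile kwargs to session.run, so the RunOptions
# ride along with every fit/evaluate step.
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['acc'], options=run_opts)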

You can use distribution strategies in TensorFlow to make sure that your multi-GPU setup is being used appropriately:

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    for test_indel in range(1,11):
         <etc>
See the docs.

Mirrored strategy is used for synchronous distributed training across multiple GPUs on a single server, which sounds like the setup you're using.
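A slightly fuller sketch of what goes inside the scope (assuming tf.keras; the Dense layer is a hypothetical stand-in for GraphConv, and both variable creation and compile must happen under the scope):

import tensorflow as tf

mirrored_strategy = tf.distribute.MirroredStrategy()

with mirrored_strategy.scope():
    inputs = tf.keras.Input(shape=(13129, 2))
    x = tf.keras.layers.Dense(32, activation='relu')(inputs)  # stand-in for GraphConv
    x = tf.keras.layers.GlobalMaxPooling1D()(x)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

# model.fit(...) can run outside the scope. Note this is data parallelism:
# each replica gets a slice of the batch, so with batch_size=1 there is
# nothing to split across the GPUs.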