PyTorch Hugging Face - RuntimeError: Caught RuntimeError in replica 0 on device 0, on Azure Databricks


How do I run the run_language_modeling.py script from Hugging Face to fine-tune the pretrained roberta-base model with my own data on an Azure Databricks GPU cluster?

Using Transformers versions 2.9.1 and 3.0, Python 3.6, torch 1.5.0, torchvision 0.6.

Below is the script I run on Azure Databricks:

%run '/dbfs/FileStore/tables/dev/run_language_modeling.py' \
  --output_dir='/dbfs/FileStore/tables/final_train/models/roberta_base_reduce_n' \
  --model_type=roberta \
  --model_name_or_path=roberta-base \
  --do_train \
  --num_train_epochs 5 \
  --train_data_file='/dbfs/FileStore/tables/final_train/train_data/all_data_desc_list_full.txt' \
  --mlm 
This is the error that appears after running the above command:

/dbfs/FileStore/tables/dev/run_language_modeling.py in <module>
   279 
   280 if __name__ == "__main__":
--> 281     main()

/dbfs/FileStore/tables/dev/run_language_modeling.py in main()
   243             else None
   244         )
--> 245         trainer.train(model_path=model_path)
   246         trainer.save_model()
   247         # For convenience, we also re-save the tokenizer to the same directory,

/databricks/python/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path)
   497                     continue
   498 
--> 499                 tr_loss += self._training_step(model, inputs, optimizer)
   500 
   501                 if (step + 1) % self.args.gradient_accumulation_steps == 0 or (

/databricks/python/lib/python3.7/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer)
   620             inputs["mems"] = self._past
   621 
--> 622         outputs = model(**inputs)
   623         loss = outputs[0]  # model outputs are always tuple in transformers (see doc)
   624 

/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
   548             result = self._slow_forward(*input, **kwargs)
   549         else:
--> 550             result = self.forward(*input, **kwargs)
   551         for hook in self._forward_hooks.values():
   552             hook_result = hook(self, input, result)

/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
   153             return self.module(*inputs[0], **kwargs[0])
   154         replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
--> 155         outputs = self.parallel_apply(replicas, inputs, kwargs)
   156         return self.gather(outputs, self.output_device)
   157 

/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs)
   163 
   164     def parallel_apply(self, replicas, inputs, kwargs):
--> 165         return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
   166 
   167     def gather(self, outputs, output_device):

/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices)
    83         output = results[i]
    84         if isinstance(output, ExceptionWrapper):
---> 85             output.reraise()
    86         outputs.append(output)
    87     return outputs

/databricks/python/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
   393             # (https://bugs.python.org/issue2651), so we work around it.
   394             msg = KeyErrorMessage(msg)
--> 395         raise self.exc_type(msg)

RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
 File "/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
   output = module(*input, **kwargs)
 File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
   result = self.forward(*input, **kwargs)
 File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 239, in forward
   output_hidden_states=output_hidden_states,
 File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
   result = self.forward(*input, **kwargs)
 File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 762, in forward
   output_hidden_states=output_hidden_states,
 File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
   result = self.forward(*input, **kwargs)
 File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 439, in forward
   output_attentions,
 File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
   result = self.forward(*input, **kwargs)
 File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 371, in forward
   hidden_states, attention_mask, head_mask, output_attentions=output_attentions,
 File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
   result = self.forward(*input, **kwargs)
 File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 315, in forward
   hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions,
 File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
   result = self.forward(*input, **kwargs)
 File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 240, in forward
   attention_scores = attention_scores / math.sqrt(self.attention_head_size)
RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 11.17 GiB total capacity; 10.68 GiB already allocated; 95.31 MiB free; 10.77 GiB reserved in total by PyTorch)

Please, how do I resolve this?

The out-of-memory error is likely caused by not clearing the session and/or not freeing the GPU.
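If the notebook session from a failed run is still holding the old model, you can try releasing that memory before retrying. A minimal sketch, assuming objects named model and trainer from the previous run are still in scope (adjust to whatever is actually alive in your notebook):

import gc
import torch

# Hypothetical names: drop whatever references from the failed run are still alive
del model, trainer

gc.collect()              # release the Python-side references
torch.cuda.empty_cache()  # return cached CUDA blocks to the driver

# Optional: check how much memory is still allocated on GPU 0
print(torch.cuda.memory_allocated(0))

Alternatively, detaching and reattaching the notebook (or restarting the cluster) clears the session state on Databricks.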

From a similar question:

This happens because the mini-batch of data does not fit in GPU memory. Just reduce the batch size. I got the same error when I set a batch size of 256 for the CIFAR-10 dataset; then I used a batch size of 128 and that solved the problem.
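Applied to the command above, that means passing a smaller per-GPU batch size to run_language_modeling.py, optionally with gradient accumulation to keep the effective batch size. This is only a sketch: in Transformers 2.9.x the flag is --per_gpu_train_batch_size (renamed to --per_device_train_batch_size around 3.0), and the values 4 and 8 below are arbitrary starting points to tune. Reducing --block_size (the maximum sequence length) is another lever if lowering the batch size alone is not enough.

%run '/dbfs/FileStore/tables/dev/run_language_modeling.py' \
  --output_dir='/dbfs/FileStore/tables/final_train/models/roberta_base_reduce_n' \
  --model_type=roberta \
  --model_name_or_path=roberta-base \
  --do_train \
  --num_train_epochs 5 \
  --per_gpu_train_batch_size 4 \
  --gradient_accumulation_steps 8 \
  --train_data_file='/dbfs/FileStore/tables/final_train/train_data/all_data_desc_list_full.txt' \
  --mlm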