RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 10.92 GiB total capacity; 9.06 GiB reserved in total by PyTorch)

What does "9.06 GiB reserved in total by PyTorch" mean? If I run the same script on a smaller GPU (7.80 GiB total capacity), the message says "6.20 GiB reserved in total by PyTorch" instead. How does this reservation work in PyTorch, and why does the reserved amount change with the size of the GPU?
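For reference, my understanding is that "reserved" counts the memory held by PyTorch's caching allocator, which keeps freed blocks cached for reuse instead of returning them to the driver, so it is always at least the "allocated" figure. A minimal sketch for watching both counters (torch.cuda.memory_allocated and torch.cuda.memory_reserved are standard PyTorch APIs; on older versions the latter was called memory_cached):

import torch

device = torch.device('cuda:0')

# allocated: bytes currently held by live tensors
# reserved:  bytes held by the caching allocator (>= allocated)
print(f"allocated: {torch.cuda.memory_allocated(device) / 1024**3:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved(device) / 1024**3:.2f} GiB")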

To resolve the error message

RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 10.92 GiB total capacity; 9.02 GiB already allocated; 1.29 GiB free; 9.06 GiB reserved in total by PyTorch)

I have tried reducing the batch size from 10 to 5 to 3. I have tried deleting unused tensors with del x_train1, and I have also tried torch.cuda.empty_cache(). I also used torch.no_grad(), both when applying the pretrained model at x_train1 = bert_model(train_indices)[2] and when training and validating the new model. None of these worked.
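For reference, a condensed sketch of how I combined those attempts; the chunked loop and the .cpu() staging are illustrative rather than my exact script, and I index [0] here (the last hidden state, a single tensor) whereas my actual code takes element [2] of the output tuple:

import torch

x_parts = []
with torch.no_grad():  # no autograd graph is kept for the forward passes
    for chunk in torch.split(train_indices, 3):  # batch size reduced from 10 to 5 to 3
        hidden = bert_model(chunk)[0]  # last hidden state (a single tensor)
        x_parts.append(hidden.cpu())   # illustrative: stage results in host RAM
        del hidden                     # drop the GPU reference...
        torch.cuda.empty_cache()       # ...and return cached blocks to the driver
x_train1 = torch.cat(x_parts)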

Here is the trace:

cuda:0
    x_train1 = bert_model(train_indices)[2]  # Models outputs are tuples
  File "/home/kosimadukwe/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/kosimadukwe/miniconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 783, in forward
    input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
  File "/home/kosimadukwe/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/kosimadukwe/miniconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 177, in forward
    embeddings = inputs_embeds + position_embeddings + token_type_embeddings
RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 10.92 GiB total capacity; 9.02 GiB already allocated; 1.29 GiB free; 9.06 GiB reserved in total by PyTorch)
and nvidia-smi shows:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.36       Driver Version: 440.36       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:3B:00.0 Off |                  N/A |
| 54%   79C    P2   233W / 250W |   8613MiB / 11178MiB |    100%   E. Process |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:AF:00.0 Off |                  N/A |
| 58%   79C    P2   247W / 250W |   4545MiB / 11178MiB |      0%   E. Process |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:D8:00.0 Off |                  N/A |
| 23%   29C    P0    56W / 250W |      0MiB / 11178MiB |      2%   E. Process |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0   1025219      C   /usr/pkg/bin/python3.8                      8601MiB |
|    1   1024440      C   /usr/pkg/bin/python3.8                      4535MiB |
+-----------------------------------------------------------------------------+

I have also tried pointing the script at the idle GPU 2 with:

os.environ['CUDA_VISIBLE_DEVICES'] = '2'
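One caveat I am aware of: CUDA_VISIBLE_DEVICES is read when CUDA is first initialized, so the assignment only takes effect if it runs before the first CUDA call. A sketch of the ordering (setting the variable before importing torch is the safest pattern):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '2'  # must be set before CUDA initializes

import torch  # with the mask above, physical GPU 2 is exposed as cuda:0

device = torch.device('cuda:0')  # i.e. physical GPU 2
print(torch.cuda.get_device_name(device))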