Runtime error: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
Tags: java, python, pytorch

I am trying to convert code from Java to Python. I am using TransCoder, and all dependencies appear to be installed, but when I run the following command I get the error below.

Command:
!python TransCoder/translate.py --src_lang java --tgt_lang python \
  --BPE_path TransCoder/data/BPE_with_comments_codes --model_path model_1.pth < a.java
Error:
Loading codes from /content/TransCoder/data/BPE_with_comments_codes ...
Read 50000 codes from the codes file.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: indexSelectLargeIndex: block: [59,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: indexSelectLargeIndex: block: [59,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: indexSelectLargeIndex: block: [59,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
.
.
.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: indexSelectLargeIndex: block: [59,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "TransCoder/translate.py", line 179, in <module>
input, lang1=params.src_lang, lang2=params.tgt_lang, beam_size=params.beam_size)
File "TransCoder/translate.py", line 129, in translate
langs=langs1, causal=False)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/TransCoder/XLM/src/model/transformer.py", line 328, in forward
return self.fwd(**kwargs)
File "/content/TransCoder/XLM/src/model/transformer.py", line 400, in fwd
attn = self.attentions[i](tensor, attn_mask, use_cache=use_cache)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/TransCoder/XLM/src/model/transformer.py", line 182, in forward
q = shape(self.q_lin(input))
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
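A note on reading this log: the repeated `srcIndex < srcSelectDimSize` assertions fire before the cuBLAS error, and they typically mean an index lookup on the GPU received an index that is out of range (for example, a token id that is >= the embedding table size); the `CUBLAS_STATUS_ALLOC_FAILED` is then often just a downstream symptom of the corrupted CUDA context. A common way to diagnose this is to rerun with `CUDA_LAUNCH_BLOCKING=1`, or to run the failing lookup on CPU, where PyTorch raises a clear `IndexError` instead of an asynchronous device-side assert. A minimal sketch of that behavior (this is illustrative code, not the asker's model; the vocabulary size 50000 is taken from the "Read 50000 codes" line above):

```python
import torch
import torch.nn as nn

vocab_size = 50000                 # matches "Read 50000 codes" in the log
emb = nn.Embedding(vocab_size, 16)

# Valid ids are 0 .. vocab_size - 1; the last id here is out of range.
bad_ids = torch.tensor([[3, 42, vocab_size]])

try:
    emb(bad_ids)                   # on CPU this fails immediately and clearly
except IndexError as e:
    print("out-of-range token id:", e)
```

On CUDA the same lookup would instead trigger the `indexSelectLargeIndex` assertions seen above, with the Python-level error surfacing later at an unrelated call. Checking that the tokenized input (here, the BPE-encoded `a.java`) never produces ids outside the model's vocabulary is therefore a reasonable first step.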
Answer:

If you want to use PyTorch from Java, rather than going through its Python API, it may be more straightforward to use its C++ API, for example via the JavaCPP Presets for PyTorch.

Comment from the asker: I am trying to convert the code.