Python 3.x: I am trying to run the "GrouPy" code according to their instructions, but it gives me this error. I am using Python 3, CUDA 9 and gcc 6

Tags: python-3.x, gcc, cuda

I executed the command below, as the README asks. The main code is provided for hexaconv. I get the same error when I try to run GrouPy's code on its own.

python train_cifar.py --modelfn=experiments/CIFAR10/models/P4WideResNet.py --epoch 300 --save_freq=100 --gpu 0 --opt=MomentumSGD --lr_decay_factor=0.1 --lr_decay_schedule=50-100-150 --batchsize 125 --transformations='' --opt_kwargs="{'lr':0.05}" --datadir=/path/to/cifar10 --resultdir=/path/to/results
After executing the above, I receive the following error:

{'datadir': '/workspace/hexaconv-master/experiments/CIFAR10/DataCifar', 'resultdir': '/workspace/hexaconv-master/experiments/CIFAR10/DataCifarResults', 'modelfn': '/workspace/hexaconv-master/experiments/CIFAR10/models/P4WideResNet.py', 'trainfn': 'train_all.npz', 'valfn': 'test.npz', 'epochs': 300, 'batchsize': 125, 'opt': 'MomentumSGD', 'opt_kwargs': {'lr': 0.05}, 'net_kwargs': {}, 'weight_decay': 0.001, 'lr_decay_schedule': '50-100-150', 'lr_decay_factor': 0.1, 'transformations': '', 'val_freq': 40, 'save_freq': 100, 'gpu': 0, 'seed': 0, 'hex_sampling': ''}
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/cupy/cuda/compiler.py", line 241, in compile
    nvrtc.compileProgram(self.ptr, options)
  File "cupy/cuda/nvrtc.pyx", line 98, in cupy.cuda.nvrtc.compileProgram
  File "cupy/cuda/nvrtc.pyx", line 108, in cupy.cuda.nvrtc.compileProgram
  File "cupy/cuda/nvrtc.pyx", line 53, in cupy.cuda.nvrtc.check_status
cupy.cuda.nvrtc.NVRTCError: NVRTC_ERROR_COMPILATION (6)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train_cifar.py", line 291, in <module>
    val_error, model = train(logme=vargs, **vargs)
  File "train_cifar.py", line 154, in train
    model, optimizer = get_model_and_optimizer(resultdir, modelfn, opt, opt_kwargs, net_kwargs, gpu)
  File "train_cifar.py", line 46, in get_model_and_optimizer
    module = imp.load_source(model_name, modelfn)
  File "/opt/conda/lib/python3.6/imp.py", line 172, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 684, in _load
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/workspace/hexaconv-master/experiments/CIFAR10/models/P4WideResNet.py", line 8, in <module>
    from groupy.gconv.gconv_chainer.p4_conv import P4ConvZ2, P4ConvP4
  File "/workspace/hexaconv-master/groupy/gconv/gconv_chainer/p4_conv.py", line 1, in <module>
    from groupy.gconv.gconv_chainer.splitgconv2d import SplitGConv2D
  File "/workspace/hexaconv-master/groupy/gconv/gconv_chainer/splitgconv2d.py", line 10, in <module>
    from groupy.gconv.gconv_chainer.TransformFilter import TransformGFilter
  File "/workspace/hexaconv-master/groupy/gconv/gconv_chainer/TransformFilter.py", line 8, in <module>
    from groupy.gconv.gconv_chainer.kernels.integer_indexing_cuda_kernel import grad_index_group_func_kernel
  File "/workspace/hexaconv-master/groupy/gconv/gconv_chainer/kernels/integer_indexing_cuda_kernel.py", line 61, in <module>
    _index_group_func_kernel32 = compile_with_cache(_index_group_func_str.format('float')).get_function('indexing_kernel')
  File "cupy/core/carray.pxi", line 125, in cupy.core.core.compile_with_cache
  File "cupy/core/carray.pxi", line 146, in cupy.core.core.compile_with_cache
  File "/opt/conda/lib/python3.6/site-packages/cupy/cuda/compiler.py", line 164, in compile_with_cache
    ptx = compile_using_nvrtc(source, options, arch)
  File "/opt/conda/lib/python3.6/site-packages/cupy/cuda/compiler.py", line 82, in compile_using_nvrtc
    ptx = prog.compile(options)
  File "/opt/conda/lib/python3.6/site-packages/cupy/cuda/compiler.py", line 245, in compile
    raise CompileException(log, self.src, self.name, options)
cupy.cuda.compiler.CompileException: /tmp/tmp_vh4y1f6/kern.cu(14): error: a value of type "const ptrdiff_t *" cannot be used to initialize an entity of type "const int *"

/tmp/tmp_vh4y1f6/kern.cu(15): error: a value of type "const ptrdiff_t *" cannot be used to initialize an entity of type "const int *"

2 errors detected in the compilation of "/tmp/tmp_vh4y1f6/kern.cu".

I think this is related to . See the pull request at the bottom.
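
The compiler message at the bottom of the traceback says that the generated kern.cu still declares const int* pointers for values that the installed CuPy now hands over as const ptrdiff_t*. Below is a hypothetical, standalone CUDA sketch (the struct name, field, and kernel are invented for illustration; this is not the actual GrouPy kernel string and not the change in that pull request) that reproduces the same error class and shows the kind of one-line type change that makes it compile:

// Hypothetical sketch: reproduces the error class nvrtc reports for
// kern.cu lines 14-15 and shows the type change that removes it.
#include <cstddef>   // ptrdiff_t

// Stand-in for CuPy's array-descriptor struct; the assumption is that a
// newer CuPy exposes strides as ptrdiff_t while the kernel still declares
// the receiving pointer as const int*.
struct ArrayInfoSketch {
    const ptrdiff_t* strides;
};

__global__ void indexing_kernel_sketch(ArrayInfoSketch w, float* out)
{
    // const int* s = w.strides;      // error: a value of type "const ptrdiff_t *"
                                      // cannot be used to initialize "const int *"
    const ptrdiff_t* s = w.strides;   // fix: widen the local pointer type
    out[0] = static_cast<float>(s[0]);
}

Under that assumption, the real fix would be the corresponding edit inside the kernel string in groupy/gconv/gconv_chainer/kernels/integer_indexing_cuda_kernel.py (or picking up a GrouPy revision that already includes it), which is presumably what the pull request mentioned above addresses.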