CUDA: why does changing a kernel argument exhaust my resources?

I made a very simple kernel below to practice CUDA:
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
from pycuda.compiler import SourceModule
from pycuda import gpuarray
import cv2

def compile_kernel(kernel_code, kernel_name):
    mod = SourceModule(kernel_code)
    func = mod.get_function(kernel_name)
    return func

input_file = np.array(cv2.imread('clouds.jpg'))
height, width, channels = np.int32(input_file.shape)

my_kernel_code = """
__global__ void my_kernel(int width, int height) {
    // This kernel trivially does nothing! Hurray!
}
"""
kernel = compile_kernel(my_kernel_code, 'my_kernel')

if __name__ == '__main__':
    for i in range(0, 2):
        print 'o'
        kernel(width, height, block=(32, 32, 1), grid=(125, 71))
        # When I take this line away, the error goes bye bye.
        # What in the world?
        width -= 1
Now, if we run the code above, execution gets through the first iteration of the for loop without a problem. On the second iteration, however, I get the following error:
Traceback (most recent call last):
File "outOfResources.py", line 27, in <module>
kernel(width, height, block=(32, 32, 1), grid=(125, 71))
File "/software/linux/x86_64/epd-7.3-1-pycuda/lib/python2.7/site-packages/pycuda-2012.1-py2.7-linux-x86_64.egg/pycuda/driver.py", line 374, in function_call
func._launch_kernel(grid, block, arg_buf, shared, None)
pycuda._driver.LaunchError: cuLaunchKernel failed: launch out of resources
If I remove the line `width -= 1`, the error goes away. Why is that? Am I not allowed to change the kernel's arguments a second time? Here is clouds.jpg, for reference.
Although the error message is not especially helpful, note that you need to pass in a properly cast
width
variable. Something like:

width = np.int32(width - 1)

should work. What GPU do you have? I'm guessing that block is large!
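As a minimal sketch of why the re-cast matters (no GPU required, with the kernel launch elided as a placeholder): subtracting a plain Python int from a NumPy scalar can promote the result to a wider integer type, so on the second launch PyCUDA would pack more than the 4 bytes the `int width` parameter expects into the argument buffer. Re-casting with `np.int32` keeps the argument at a fixed 4 bytes on every iteration. The `width`/`height` values here are stand-ins for whatever `cv2.imread` returns for your image.

```python
import numpy as np

width = np.int32(4000)   # stand-in for the image width
height = np.int32(2268)  # stand-in for the image height

for i in range(2):
    # kernel(width, height, block=(32, 32, 1), grid=(125, 71))  # GPU launch elided
    # Re-cast after the arithmetic so the argument stays a 4-byte int32.
    # Without the cast, `width - 1` may come back as a wider NumPy integer,
    # and the bytes packed for `int width` would no longer match the
    # kernel's signature, triggering "launch out of resources".
    width = np.int32(width - 1)
    assert width.dtype.itemsize == 4  # still exactly 4 bytes each iteration
```

An alternative is to declare the parameters as fixed-size types on the device side and always construct fresh `np.int32` values right before each launch, so no in-place arithmetic can silently change the argument's width.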