Python 3.x: Why does my CUDA kernel (Numba) behave differently on successive calls with the same inputs?

Tags: python-3.x, cuda, gpu, race-condition, numba

I have a newbie bug in Python with Numba + CUDA. The Numba version is 0.51 and the CUDA version is 10.2. The code below gives wildly different outputs when called repeatedly with exactly the same inputs:

import numpy as np
from numba import cuda, jit

@cuda.jit()
def writeToArray(vec, array_in, array_out):
    ''' vec is a 3x1 vector, array_in is a 3D array, array_out is a 3D array of the shape of array in'''
    i,j,k = cuda.grid(3)
    value = array_in[i,j,k] * vec[0] + array_in[i,j,k] * vec[0] + array_in[i,j,k] * vec[0]
    cuda.atomic.max(array_out,(i,j,k), value)
    # cuda.synchronize()

def test():
    
    threadsperblock = (8,8,8)
    blockspergrid_x = ( 17 + threadsperblock[0]) // threadsperblock[0]
    blockspergrid_y = ( 21 + threadsperblock[1]) // threadsperblock[1]
    blockspergrid_z = ( 5 + threadsperblock[2]) // threadsperblock[2]
    blockspergrid = (blockspergrid_x, blockspergrid_y, blockspergrid_z)
    array_in = np.random.rand(17,21,5).astype(np.float_)
    vec = np.array([1.0, -1.0, 1.0]).astype(np.float_)
    d_array_in = cuda.to_device(array_in)
    d_vec = cuda.to_device(vec)
    while True:
        array_out_1 = -999.999*np.ones_like(array_in)
        array_out_2 = -999.999*np.ones_like(array_in)
        d_array_out_1= cuda.to_device(array_out_2)
        d_array_out_2 = cuda.to_device(array_out_2)
        writeToArray[blockspergrid, threadsperblock](d_vec, d_array_in, d_array_out_1)
        writeToArray[blockspergrid, threadsperblock](d_vec, d_array_in, d_array_out_2)
        array_out_1_host = d_array_out_1.copy_to_host()
        array_out_2_host = d_array_out_2.copy_to_host()
        assert(np.allclose(array_out_1_host, array_out_2_host))

if __name__ == "__main__":
    test()

This should never break, but after roughly 10 iterations of the while loop the assertion eventually fails. What am I doing wrong?

Your kernel code is making illegal out-of-bounds accesses. When you size the grid this way:

blockspergrid_x = ( 17 + threadsperblock[0]) // threadsperblock[0]
blockspergrid_y = ( 21 + threadsperblock[1]) // threadsperblock[1]
blockspergrid_z = ( 5 + threadsperblock[2]) // threadsperblock[2]
you create "extra" threads whose i, j, k indices fall outside the shape of the input array. You don't want those threads to do anything, and the usual approach is to put a "thread check" in the kernel code. (The corrected version below also uses the standard ceiling-division idiom, (dim + tpb - 1) // tpb, which avoids launching an unnecessary extra block when a dimension is an exact multiple of the block size.)

$ cat t31.py
import numpy as np
from numba import cuda, jit

@cuda.jit()
def writeToArray(vec, array_in, array_out):
    ''' vec is a 3x1 vector, array_in is a 3D array, array_out is a 3D array of the shape of array in'''
    i,j,k = cuda.grid(3)
    if i < array_in.shape[0] and j < array_in.shape[1] and k < array_in.shape[2]:
        value = array_in[i,j,k] * vec[0] + array_in[i,j,k] * vec[0] + array_in[i,j,k] * vec[0]
        cuda.atomic.max(array_out,(i,j,k), value)
    # cuda.synchronize()

def test():

    threadsperblock = (8,8,8)
    blockspergrid_x = ( 17 + threadsperblock[0] -1) // threadsperblock[0]
    blockspergrid_y = ( 21 + threadsperblock[1] -1) // threadsperblock[1]
    blockspergrid_z = (  5 + threadsperblock[2] -1) // threadsperblock[2]
    blockspergrid = (blockspergrid_x, blockspergrid_y, blockspergrid_z)
    array_in = np.random.rand(17,21,5).astype(np.float_)
    vec = np.array([1.0, -1.0, 1.0]).astype(np.float_)
    d_array_in = cuda.to_device(array_in)
    d_vec = cuda.to_device(vec)
    i=0
    while i<20:
        array_out_1 = -999.999*np.ones_like(array_in)
        array_out_2 = -999.999*np.ones_like(array_in)
        d_array_out_1 = cuda.to_device(array_out_1)
        d_array_out_2 = cuda.to_device(array_out_2)
        writeToArray[blockspergrid, threadsperblock](d_vec, d_array_in, d_array_out_1)
        writeToArray[blockspergrid, threadsperblock](d_vec, d_array_in, d_array_out_2)
        array_out_1_host = d_array_out_1.copy_to_host()
        array_out_2_host = d_array_out_2.copy_to_host()
        assert(np.allclose(array_out_1_host, array_out_2_host))
        i+=1
        print(i)

if __name__ == "__main__":
    test()
$ cuda-memcheck python t31.py
========= CUDA-MEMCHECK
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
========= ERROR SUMMARY: 0 errors
$
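To see concretely where the out-of-range threads come from, here is a small host-side sketch (plain Python, no GPU required) that compares the number of threads launched by this configuration against the number of array elements. The variable names are illustrative, not part of Numba's API.

```python
# Compare threads launched vs. array elements for the launch configuration
# from the question. Runs on the host; no GPU needed.
shape = (17, 21, 5)          # array_in.shape
tpb = (8, 8, 8)              # threadsperblock

# Grid sizing from the question vs. the standard ceiling-division idiom.
bpg_orig = tuple((d + t) // t for d, t in zip(shape, tpb))
bpg_fixed = tuple((d + t - 1) // t for d, t in zip(shape, tpb))

threads = 1
for b, t in zip(bpg_fixed, tpb):
    threads *= b * t
elements = shape[0] * shape[1] * shape[2]

print(bpg_orig, bpg_fixed)   # (3, 3, 1) (3, 3, 1) -- identical for these dims
print(threads, elements)     # 4608 1785: 2823 threads index out of bounds

# The two grid formulas only differ when a dimension divides evenly:
print((16 + 8) // 8, (16 + 8 - 1) // 8)   # 3 vs. 2 blocks for a dim of 16
```

Even with the corrected ceiling division, 4608 threads still cover only 1785 elements, which is why the thread check in the kernel is required regardless of which grid formula you use.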