C++: Using the CUDA thread index as a number
I am new to CUDA and GPGPU. I am trying to check a property of a large set of numbers (larger than 32 bits), and I want to do this on a Windows 7 64-bit machine equipped with an nVidia GTX 1080. deviceQuery reports:
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1080"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8192 MBytes (8589934592 bytes)
(20) Multiprocessors, (128) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1734 MHz (1.73 GHz)
Memory Clock rate: 5005 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
CUDA Device Driver Mode (TCC or WDDM): WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
When I run the following code, the value of `sum` is nonsense (28, 20, etc.), even though I can see threadId running from 0 to 4095:
#include <cuda.h>
#include <cuda_runtime.h>
#include "device_launch_parameters.h"
#include "stdio.h"
__global__ void Simple(unsigned long long int *sum)
{
unsigned long long int blockId = blockIdx.x + blockIdx.y * gridDim.x + gridDim.x * gridDim.y * blockIdx.z;
unsigned long long int threadId = blockId * (blockDim.x * blockDim.y * blockDim.z)
+ (threadIdx.z * (blockDim.x * blockDim.y))
+ (threadIdx.y * blockDim.x)
+ threadIdx.x;
printf("threadId = %llu.\n", threadId);
// Check threadId for property. Possibly introduce a grid stride for loop to give each thread a range to check.
sum[0]++;
}
int main(int argc, char **argv)
{
unsigned long long int sum[] = { 0 };
unsigned long long int *dev_sum;
cudaMalloc((void**)&dev_sum, sizeof(unsigned long long int));
cudaMemcpy(dev_sum, sum, sizeof(unsigned long long int), cudaMemcpyHostToDevice);
dim3 grid(2, 1, 1);
dim3 block(1024, 1, 1);
printf("--------- Start kernel ---------\n\n");
Simple <<< grid, block >>> (dev_sum);
cudaDeviceSynchronize();
cudaMemcpy(sum, dev_sum, sizeof(unsigned long long int), cudaMemcpyDeviceToHost);
printf("sum = %llu.\n", sum[0]);
cudaFree(dev_sum);
getchar();
return 0;
}
How can I modify this kernel invocation, by adding a grid-stride loop, so that the maximum number of threads (for my setup) runs over a large range of numbers, say 0 to 10^12?
dim3 grid(2, 1, 1);
dim3 block(1024, 1, 1);
Simple <<< grid, block >>> (dev_sum);
All threads increment the same location in memory, which causes a race condition; that is why the result is incorrect. You should use an atomic add to make it correct (CUDA provides a function for this): replace `sum[0]++;` with `atomicAdd(&sum[0], 1ULL);`.

Comment: Thanks, that helps. Could you also answer the second half of the question, about running the maximum number of threads over a large 1D range?
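For the second half of the question, a grid-stride loop lets a fixed-size launch cover an arbitrarily large range: each thread starts at its global index and advances by the total number of threads in the grid until it passes the end of the range. A minimal sketch, assuming a placeholder "is even" check stands in for the actual property test (the launch shape `40 × 1024` is chosen to match the 20 SMs × 2048 resident threads of a GTX 1080; `n` would be 10^12 in the real run, a small value is used here for illustration):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void CheckRange(unsigned long long int *count, unsigned long long int n)
{
    // Global 1D thread index and the total number of threads in the grid.
    unsigned long long int idx    = blockIdx.x * (unsigned long long int)blockDim.x + threadIdx.x;
    unsigned long long int stride = (unsigned long long int)gridDim.x * blockDim.x;

    // Grid-stride loop: this thread checks idx, idx + stride, idx + 2*stride, ...
    for (unsigned long long int i = idx; i < n; i += stride)
    {
        if ((i & 1ULL) == 0ULL)       // placeholder property: "i is even"
            atomicAdd(count, 1ULL);   // atomic increment avoids the race condition
    }
}

int main()
{
    unsigned long long int h_count = 0, *d_count;
    cudaMalloc(&d_count, sizeof(*d_count));
    cudaMemcpy(d_count, &h_count, sizeof(h_count), cudaMemcpyHostToDevice);

    // 40 blocks * 1024 threads = 40960 threads, enough to fill all 20 SMs
    // (2048 resident threads each) on a GTX 1080.
    CheckRange<<<40, 1024>>>(d_count, 1000000ULL);   // small n for a quick test

    cudaMemcpy(&h_count, d_count, sizeof(h_count), cudaMemcpyDeviceToHost);
    printf("count = %llu\n", h_count);
    cudaFree(d_count);
    return 0;
}
```

Note that if almost every candidate satisfies the property, the single atomic counter becomes a bottleneck; a common refinement is a per-thread local count that is atomically added to the global counter once, after the loop. Also drop the per-thread `printf` for a range like 10^12, since 10^12 lines of output is impractical.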