Python: how to get a faster 2D convolution using the GPU
I have recently been learning PyCuda and plan to replace some code in a camera system to speed up image processing. The part in question originally used cv2.filter2D. My goal is to accelerate the processing with the GPU:
Time for signal.convolve2d: 1.6639747619628906
Time for cusignal.convolve2d: 0.6955723762512207
Time for cv2.filter2D: 0.18787837028503418
However, cv2.filter2D still appears to be the fastest of the three. If the input were a long list of images, could a custom PyCuda kernel beat cv2.filter2D?
import time
import cv2
from cusignal.test.utils import array_equal
import cusignal
import cupy as cp
import numpy as np
from scipy import signal
from scipy import misc
ascent = misc.ascent()  # note: scipy.misc.ascent was removed in SciPy 1.12; newer versions provide scipy.datasets.ascent
ascent = np.array(ascent, dtype='int16')
ascentList = [ascent]*100
filterSize = 3
scharr = np.ones((filterSize, filterSize), dtype="float") * (1.0 / (filterSize*filterSize))  # despite the name, this is a normalized box (mean) filter
startTime = time.time()
for asc in ascentList:
    grad = signal.convolve2d(asc, scharr, boundary='symm', mode='same')
endTime = time.time()
print("Time for signal.convolve2d: "+str(endTime - startTime))
startTime = time.time()
for asc in ascentList:
    gpu_convolve2d = cp.asnumpy(cusignal.convolve2d(cp.asarray(asc), scharr, boundary='symm', mode='same'))
endTime = time.time()
print("Time for cusignal.convolve2d: "+str(endTime - startTime))
print("If signal equal to cusignal: "+ str(array_equal(grad, gpu_convolve2d)))
startTime = time.time()
for asc in ascentList:
    opencvOutput = cv2.filter2D(asc, -1, scharr)
endTime = time.time()
print("Time for cv2.filter2D: "+str(endTime - startTime))
print("If cv2 equal to cusignal: "+ str(array_equal(opencvOutput, gpu_convolve2d)))
You are timing the copy of asc to the GPU, the execution of convolve2d, and the transfer of the result back to the host. In general, host-to-device transfers are very slow. If you want a true comparison of the compute itself, profile just the convolve2d call.
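To illustrate the point above, here is a minimal sketch of timing only the GPU compute: the data is copied to the device once, outside the timed region, and the device is synchronized around the kernel so the launch overhead is not mistaken for completion. The image size and kernel here are placeholders, and the sketch assumes cupy and cusignal are installed (it falls back gracefully when they are not):

```python
import time
import numpy as np

try:
    import cupy as cp
    import cusignal
    HAVE_GPU = True
except ImportError:  # no GPU stack available; skip the GPU path
    HAVE_GPU = False

img = np.random.rand(512, 512).astype(np.float32)
kernel = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)  # same box filter as in the question

if HAVE_GPU:
    # Copy once, outside the timed region.
    d_img = cp.asarray(img)
    d_kernel = cp.asarray(kernel)
    cp.cuda.Device().synchronize()  # make sure the copies have finished
    start = time.time()
    d_out = cusignal.convolve2d(d_img, d_kernel, boundary='symm', mode='same')
    cp.cuda.Device().synchronize()  # wait for the kernel itself, not just the launch
    print("GPU compute only:", time.time() - start)
else:
    print("cupy/cusignal not available; skipping GPU timing")
```

Kernel launches in CuPy are asynchronous, so without the second synchronize the timer would often stop before the convolution has actually run.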
cuSignal's convolve2d is currently written in Numba. We are in the process of porting it to CuPy raw kernels, which should bring an improvement, but I have no ETA for convolve2d.

As a CPU-side alternative, scipy.ndimage.filters.convolve is also worth comparing.
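For completeness, a small sketch of the scipy.ndimage route mentioned above. With the question's symmetric box kernel, ndimage.convolve with mode='reflect' reproduces signal.convolve2d with boundary='symm' and mode='same'; the input array here is random placeholder data:

```python
import numpy as np
from scipy import signal, ndimage

rng = np.random.default_rng(0)
img = rng.random((64, 64))
k = np.full((3, 3), 1.0 / 9.0)  # same 3x3 box filter as in the question

a = signal.convolve2d(img, k, boundary='symm', mode='same')
# ndimage's 'reflect' boundary handling matches convolve2d's 'symm'
b = ndimage.convolve(img, k, mode='reflect')

print(np.allclose(a, b))  # → True
```

For small kernels, ndimage.convolve (like cv2.filter2D) is often noticeably faster than signal.convolve2d on the CPU, so it is a cheap baseline to check before reaching for a custom GPU kernel.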