2D convolution performance: Matlab vs. Python
I like to prototype algorithms in Matlab, but I need to deploy them on a server that runs a lot of Python code, so I quickly ported the code to Python and compared the two. The Matlab implementation ran roughly 1000x faster (measured by timing the function calls — no profiling). Does anyone know off-hand why the Python version is so slow?
Okay, thanks to @Yves Daust's comments my problem is solved:
scipy.ndimage.filters.gaussian_filter
exploits the separability of the kernel and brings the runtime to within a single order of magnitude of the Matlab implementation.
import numpy as np
from scipy.ndimage.filters import gaussian_filter as gaussian
# Test data parameters
w = 800
h = 1200
npts = 250
# generate data
xvals = np.random.randint(w, size=npts)
yvals = np.random.randint(h, size=npts)
# Heatmap parameters
gaussianSize = 250
nbreaks = 25
# Preliminary function definitions
def populateMat(w, h, xvals, yvals):
    container = np.zeros((w, h))
    for idx in range(0, xvals.size):
        x = xvals[idx]
        y = yvals[idx]
        container[x, y] += 1
    return container
# Create the data matrix
dmat = populateMat(w,h,xvals,yvals)
# Convolve
dmat2 = gaussian(dmat, gaussianSize/7)
# Scaling etc
dmat2 = dmat2 / dmat2.max()
dmat2 = np.round(nbreaks*dmat2)/nbreaks
# Show (imshow comes from matplotlib, not numpy/scipy)
import matplotlib.pyplot as plt
plt.imshow(dmat2)
plt.show()
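The speedup of gaussian_filter comes from separability: a 2-D Gaussian kernel is the outer product of two 1-D Gaussians, so two cheap 1-D passes replace one expensive 2-D convolution. A minimal check of this equivalence (the kernel length and test image below are illustrative, not taken from the original code):

```python
import numpy as np
from scipy.signal import convolve2d as conv2

size = 25                                      # illustrative odd kernel length
fwhm = size / 2
x = np.arange(size, dtype=float) - size // 2
g = np.exp(-4 * np.log(2) * x**2 / fwhm**2)    # 1-D Gaussian profile

# A makeGaussian-style 2-D kernel is exactly the outer product of two 1-D profiles
G = np.outer(g, g)

img = np.random.rand(64, 64)
full = conv2(img, G, mode='same')                  # one 2-D convolution
rows = conv2(img, g[np.newaxis, :], mode='same')   # 1-D pass along rows
sep = conv2(rows, g[:, np.newaxis], mode='same')   # 1-D pass along columns

print(np.allclose(full, sep))  # True: the two routes agree
```

The direct 2-D route does on the order of k² multiplications per pixel; the separable route does about 2k, which is where the order-of-magnitude gains come from for large kernels.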
Comment: Convolve using numpy — have a look. Reply: Are you saying the implementation in
scipy.signal.convolve2d
is the culprit? I read that post, and it seems to say the only reason to use numpy is to avoid a scipy dependency; I don't have that restriction. I tried ndimage.convolve, but it raises a MemoryError. Comment: I guess Matlab uses an optimized Gaussian filter that exploits the separability of the filter, possibly with a recursive approximation. Try exploiting separability in Python — it's simple: define two 1-D Gaussians, one horizontal and one vertical, and apply them in sequence. The original (slow) version, using a full 2-D convolution:
import numpy as np
from scipy.signal import convolve2d as conv2
# Test data parameters
w = 800
h = 1200
npts = 250
# generate data
xvals = np.random.randint(w, size=npts)
yvals = np.random.randint(h, size=npts)
# Heatmap parameters
gaussianSize = 250
nbreaks = 25
# Preliminary function definitions
def populateMat(w, h, xvals, yvals):
    container = np.zeros((w, h))
    for idx in range(0, xvals.size):
        x = xvals[idx]
        y = yvals[idx]
        container[x, y] += 1
    return container

def makeGaussian(size, fwhm):
    x = np.arange(0, size, 1, float)
    y = x[:, np.newaxis]
    x0 = y0 = size // 2
    return np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / fwhm**2)
# Create the data matrix
dmat = populateMat(w,h,xvals,yvals)
kernel = makeGaussian(gaussianSize, fwhm=gaussianSize/2)  # renamed from h, which already holds the height
# Convolve
dmat2 = conv2(dmat, kernel, mode='same')
# Scaling etc
dmat2 = dmat2 / dmat2.max()
dmat2 = np.round(nbreaks*dmat2)/nbreaks
# Show (imshow comes from matplotlib, not numpy/scipy)
import matplotlib.pyplot as plt
plt.imshow(dmat2)
plt.show()
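To see where a gap like the ~1000x one in the question comes from, here is a rough timing sketch comparing the direct 2-D convolution against the separable gaussian_filter on a smaller problem (array and kernel sizes are illustrative; absolute times depend on the machine):

```python
import time
import numpy as np
from scipy.signal import convolve2d as conv2
from scipy.ndimage import gaussian_filter

img = np.random.rand(200, 300)             # smaller than the 800x1200 heatmap
size = 51                                  # illustrative kernel length
fwhm = size / 2
x = np.arange(size, dtype=float) - size // 2
g2d = np.exp(-4 * np.log(2) * (x[:, None]**2 + x[None, :]**2) / fwhm**2)

t0 = time.perf_counter()
slow = conv2(img, g2d, mode='same')        # direct 2-D convolution: ~O(N * k^2)
t1 = time.perf_counter()
fast = gaussian_filter(img, size / 7)      # separable 1-D passes: ~O(N * k)
t2 = time.perf_counter()

print(f"convolve2d:      {t1 - t0:.4f} s")
print(f"gaussian_filter: {t2 - t1:.4f} s")
```

Note that the two outputs are not numerically identical: gaussian_filter takes a sigma (hence the /7 heuristic in the answer) and normalizes its kernel, while the raw 2-D kernel here is unnormalized — the comparison is about cost, not values.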