Faster normalization of a numpy array in Python?
Currently I am normalizing a numpy array in Python, created from a stitched image by sliding windows at a stride that produces about 20K patches. The current normalization implementation is the big pain point in my runtime, and I'm trying to replace it with the same functionality done in something faster, possibly a C extension. I'd like to hear what suggestions the community has to make this simple and fast.
The current runtime is about 0.34 s just for the normalization part, and I'm trying to get below 0.1 s or better. You can see that creating the patches with view_as_windows is very efficient, and I'm looking for a similarly efficient method for normalization. Note that you can simply comment/uncomment the lines marked "# ---- Normalization" to see the runtimes of the different implementations. Here is the current implementation:
import gc
import os
import cv2, time
import numpy
from libraries import GCN
from skimage.util.shape import view_as_windows
def create_imageArray(patch_list):
    returnImageArray = numpy.zeros(shape=(len(patch_list), 1, 40, 60))
    idx = 0
    for patch, name, coords in patch_list:
        imgArray = numpy.asarray(patch[:,:], dtype=numpy.float32)
        imgArray = imgArray[numpy.newaxis, ...]
        returnImageArray[idx] = imgArray
        idx += 1
    return returnImageArray
# print "normImgArray[0]:",normImgArray[0]
def NormalizeData(imageArray):
    tempImageArray = imageArray
    # Normalize the data in batches
    batchSize = 25000
    dataSize = tempImageArray.shape[0]
    imageChannels = tempImageArray.shape[1]
    imageHeight = tempImageArray.shape[2]
    imageWidth = tempImageArray.shape[3]
    for i in xrange(0, dataSize, batchSize):
        stop = i + batchSize
        print("Normalizing data [{0} to {1}]...".format(i, stop))
        dataTemp = tempImageArray[i:stop]
        dataTemp = dataTemp.reshape(dataTemp.shape[0], imageChannels * imageHeight * imageWidth)
        #print("Performing GCN [{0} to {1}]...".format(i, stop))
        dataTemp = GCN(dataTemp)
        #print("Reshaping data again [{0} to {1}]...".format(i, stop))
        dataTemp = dataTemp.reshape(dataTemp.shape[0], imageChannels, imageHeight, imageWidth)
        #print("Updating data with new values [{0} to {1}]...".format(i, stop))
        tempImageArray[i:stop] = dataTemp
        del dataTemp
        gc.collect()
    return tempImageArray
start_time = time.time()
img1_path = "777628-1032-0048.jpg"
img_list = ["images/1.jpg", "images/2.jpg", "images/3.jpg", "images/4.jpg", "images/5.jpg"]
patchWidth = 60
patchHeight = 40
channels = 1
stride = patchWidth/6
multiplier = 1.31
finalImgArray = []
vaw_time = 0
norm_time = 0
array_time = 0
for im_path in img_list:
    start = time.time()
    baseFileWithExt = os.path.basename(im_path)
    baseFile = os.path.splitext(baseFileWithExt)[0]
    img = cv2.imread(im_path, cv2.IMREAD_GRAYSCALE)
    nxtWidth = 800
    nxtHeight = 1200
    patchesList = []
    for i in xrange(7):
        img = cv2.resize(img, (nxtWidth, nxtHeight))
        nxtWidth = int(nxtWidth//multiplier)
        nxtHeight = int(nxtHeight//multiplier)
        patches = view_as_windows(img, (patchHeight, patchWidth), stride)
        cols = patches.shape[0]
        rows = patches.shape[1]
        patchCount = cols*rows
        print "patchCount:", patchCount, " patches.shape:", patches.shape
        returnImageArray = numpy.zeros(shape=(patchCount, channels, patchHeight, patchWidth))
        idx = 0
        for col in xrange(cols):
            for row in xrange(rows):
                patch = patches[col][row]
                imageName = "{0}-patch{1}-{2}.jpg".format(baseFile, i, idx)
                patchCoodrinates = (0, 1, 2, 3) # don't need these for example
                patchesList.append((patch, imageName, patchCoodrinates))
                # ---- Normalization inside 7 iterations <> Part 1
                # imgArray = numpy.asarray(patch[:,:], dtype=numpy.float32)
                # imgArray = patch.astype(numpy.float32)
                # imgArray = imgArray[numpy.newaxis, ...] # Add a new axis for channel so shape goes from [40,60] to [1,40,60]
                # returnImageArray[idx] = imgArray
                idx += 1
        # if i == 0: finalImgArray = returnImageArray
        # else: finalImgArray = numpy.concatenate((finalImgArray, returnImageArray), axis=0)
    vaw_time += time.time() - start
    # ---- Normalization inside 7 iterations <> Part 2
    # start = time.time()
    # normImageArray = NormalizeData(finalImgArray)
    # norm_time += time.time() - start
    # print "returnImageArray.shape:", finalImgArray.shape
    # ---- Normalization outside 7 iterations
    start = time.time()
    imgArray = create_imageArray(patchesList)
    array_time += time.time() - start
    start = time.time()
    normImgArray = NormalizeData(imgArray)
    norm_time += time.time() - start
    print "len(patchesList):", len(patchesList)
total_time = (time.time() - start_time)/len(img_list)
print "\npatches_time per img: {0:.3f} s".format(vaw_time/len(img_list))
print "create imgArray per img: {0:.3f} s".format(array_time/len(img_list))
print "normalization_time per img: {0:.3f} s".format(norm_time/len(img_list))
print "total time per image: {0:.3f} s \n".format(total_time)
Runtimes for creating the imageArray and normalizing outside the 7 iterations:
patches_time per img: 0.040 s
create imgArray per img: 0.146 s
normalization_time per img: 0.339 s
total time per image: 0.524 s
I hadn't noticed this before, but creating the array also seems to take some time. A few points (to investigate further):
1. Don't use arrays; try a more direct binary structure (not sure which, since I don't use numpy).
2. Try to reduce the nested loops to a single loop (loop combining), and try loop unrolling where applicable.
3. Cache values that are reused rather than recomputing them.
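The loop-reduction advice above can be sketched concretely. Assuming GCN here means per-patch global contrast normalization (subtract each patch's mean and divide by its standard deviation — an assumption, since the GCN source isn't shown), the whole batch can be normalized in one pass with NumPy broadcasting, with no Python-level loop over patches:

```python
import numpy

def gcn_vectorized(patches, eps=1e-8):
    """Per-patch contrast normalization over an (N, C, H, W) array.

    Assumption: GCN = subtract each patch's mean, divide by its std.
    """
    n = patches.shape[0]
    flat = patches.reshape(n, -1).astype(numpy.float32)
    mean = flat.mean(axis=1, keepdims=True)
    std = flat.std(axis=1, keepdims=True)
    flat = (flat - mean) / (std + eps)  # eps guards against flat patches
    return flat.reshape(patches.shape)

# Example: 20000 patches of shape (1, 40, 60), like the question's workload
batch = numpy.random.rand(20000, 1, 40, 60).astype(numpy.float32)
normed = gcn_vectorized(batch)
```

The reductions and the broadcasted subtract/divide run entirely in C inside NumPy, which is usually far cheaper than batching and reshaping in a Python loop; whether it matches the real GCN depends on what that function actually computes.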
What exactly does the NormalizeData procedure do (Gaussian normalization, perhaps)? Judging from your metrics, normalization_time is the bottleneck, and that depends both on what the function does and on the data representation chosen as its argument. — I forgot to add the NormalizeData code. I made a pastebin on my phone and will add it to the question tomorrow. — Eesh. The code is messy; you'll get more help if you isolate the code that actually performs the normalization. That said, it looks like you are normalizing based on a patch around a given pixel. In that case, you can use an integral image to efficiently get the sum of intensities in a square around a pixel.
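The integral-image idea can be sketched as follows. An integral image is a cumulative sum along both axes; once it is built, the sum over any axis-aligned rectangle costs four lookups, so per-window sums (and hence local means) are O(1) per pixel regardless of window size. This is a generic illustration, not the questioner's code:

```python
import numpy

def local_sums(img, h, w):
    """Sum of every h x w window of img, via an integral image.

    Equivalent to a "valid"-mode box filter, but each window sum
    costs only four lookups into the integral image.
    """
    img = img.astype(numpy.float64)
    # Integral image padded with a zero top row / left column
    # so the four-corner formula needs no boundary cases.
    ii = numpy.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    # sum(rect) = ii[y2, x2] - ii[y1, x2] - ii[y2, x1] + ii[y1, x1]
    return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]

img = numpy.arange(12.0).reshape(3, 4)
sums = local_sums(img, 2, 2)  # sums of every 2x2 window, shape (2, 3)
```

OpenCV's `cv2.integral` computes the same padded integral image in C if building it with `cumsum` ever shows up in a profile.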