Python - fast batch modification of PNGs


I have written a python script for an OpenGL shader that combines images in a unique way. The problem is that I have a huge number of very large maps and they take a long time to process. Is there a way to write this faster?

    from PIL import Image
    import numpy as np

    map_data = {}
    image_data = {}
    for map_postfix in names:
        file_name = inputRoot + '-' + map_postfix + resolution + '.png'
        print 'Loading ' + file_name
        image_data[map_postfix] = Image.open(file_name, 'r')
        map_data[map_postfix] = image_data[map_postfix].load()

    color = map_data['ColorOnly']
    ambient = map_data['AmbientLight']
    shine = map_data['Shininess']

    width = image_data['ColorOnly'].size[0]
    height = image_data['ColorOnly'].size[1]

    arr = np.zeros((height, width, 4), dtype=int)

    # the per-pixel nested loop is where almost all of the time goes
    for i in range(width):
        for j in range(height):
            ambient_mod = ambient[i, j][0] / 255.0
            arr[j, i, :] = [color[i, j][0] * ambient_mod,
                            color[i, j][1] * ambient_mod,
                            color[i, j][2] * ambient_mod,
                            shine[i, j][0]]

    print 'Converting Color Map to image'
    return Image.fromarray(arr.astype(np.uint8))

This is just one example from a much larger batch process, so what I am really after is whether there is a faster way to iterate over and modify image files in general. Almost all of the time is spent in the nested loop, not in loading and saving.

Vectorised code example: test the impact on your side in timeit or zmq.Stopwatch().

Reported speedup: 22.14 sec >> 0.1624 sec.

While your code seems to loop over RGBA[x,y], let me show a "vectorised" code syntax that benefits from the numpy matrix-manipulation utilities (forget the particular RGB/YUV operation below, which was originally OpenCV- rather than PIL-based, and instead re-use the vectorised-syntax approach to avoid the for loops and adapt it so it works efficiently for your calculus; a wrong order of operations can more than double your processing time).

Then use a test / optimise / re-test loop to speed things up.

For testing, use the standard python timeit if its resolution is sufficient; if you need to go down to [usec] resolution, pick zmq.Stopwatch().

# Vectorised-code example, to see the syntax & principles
#                          do not mind another order of RGB->BRG layers
#                          it has been OpenCV traditional convention
#                          it has no other meaning in this demo of VECTORISED code

import numpy

def get_YUV_U_Cb_Rec709_BRG_frame( brgFRAME ):  # For the Rec. 709 primaries used in gamma-corrected sRGB, fast, VECTORISED MUL/ADD CODE
    out =  numpy.zeros(            brgFRAME.shape[0:2] )
    out -= 0.09991 / 255 *         brgFRAME[:,:,1]  # // Red
    out -= 0.33601 / 255 *         brgFRAME[:,:,2]  # // Green
    out += 0.436   / 255 *         brgFRAME[:,:,0]  # // Blue
    return out
# normalise to <0.0 - 1.0> before vectorised MUL/ADD, saves [usec] ...
# on 480x640 [px] faster goes about 2.2 [msec] instead of 5.4 [msec]
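
A hedged sketch of the test / re-test step using the standard-library timeit, applied to the demo function above; the 480x640 frame is synthetic and only stands in for real data:

import numpy
import timeit

# synthetic 0-255 valued frame in the demo's B,R,G layer order
brgFRAME = numpy.random.randint( 0, 256, ( 480, 640, 3 ) ).astype( numpy.float64 )

# time 100 whole-frame passes of the vectorised function defined above
t = timeit.timeit( lambda: get_YUV_U_Cb_Rec709_BRG_frame( brgFRAME ), number = 100 )
print( 'one whole-frame pass took about %.1f [usec]' % ( t / 100 * 1e6 ) )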
So how would it look in your algorithm?

One need not have Peter Jackson's impressive budget and time (while producing The Lord of the Rings, he planned, spanned and executed three years of massive number-crunching in a New Zealand hangar crowded with a herd of SGI workstations running a fully digital mastering pipeline, frame by frame, pixel by pixel) to realise that milliseconds, microseconds and even nanoseconds matter in a mass-production pipeline.

So, take a deep breath, test and re-test so as to optimise your real-world image-processing performance to the level your project needs.

Hope this helps:

# OPTIONAL for performance testing -------------# ||||||||||||||||||||||||||||||||
from zmq import Stopwatch                       # _MICROSECOND_ timer
#                                               # timer-resolution step ~ 21 nsec
#                                               # Yes, NANOSECOND-s
# OPTIONAL for performance testing -------------# ||||||||||||||||||||||||||||||||
arr        = np.zeros( ( height, width, 4 ), dtype = int )
aStopWatch = Stopwatch()                        # ||||||||||||||||||||||||||||||||
# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\# <<< your original code segment          
#  aStopWatch.start()                           # |||||||||||||__.start
#  for i in range(     width  ):
#      for j in range( height ):
#          ambient_mod  = ambient[i,j][0] / 255.0
#          arr[j, i, :] = [ color[i,j][0] * ambient_mod, \
#                           color[i,j][1] * ambient_mod, \
#                           color[i,j][2] * ambient_mod, \
#                           shine[i,j][0]                \
#                           ]
#  usec_for = aStopWatch.stop()                 # |||||||||||||__.stop
#  print 'Converting Color Map to image'
#  print '           FOR processing took ', usec_for, ' [usec]'
# /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\# <<< proposed alternative
aStopWatch.start()                              # |||||||||||||__.start
# reduced numpy broadcasting one dimension less # ref. comments below
arr[:,:, 0]  = color[:,:,0] * ambient[:,:,0]    # MUL ambient[0]  * [{R}]
arr[:,:, 1]  = color[:,:,1] * ambient[:,:,0]    # MUL ambient[0]  * [{G}]
arr[:,:, 2]  = color[:,:,2] * ambient[:,:,0]    # MUL ambient[0]  * [{B}]
arr[:,:,:3] /= 255                              # DIV 255 to normalise
arr[:,:, 3]  = shine[:,:,0]                     # STO shine[  0] in [3]
usec_Vector  = aStopWatch.stop()                # |||||||||||||__.stop
print 'Converting Color Map to image'
print '           Vectorised processing took ', usec_Vector, ' [usec]'
return Image.fromarray( arr.astype( np.uint8 ) )
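
The vectorised block above assumes color, ambient and shine are numpy arrays, whereas the question's code holds PIL pixel-access objects from image.load(). A minimal, hedged sketch of loading the maps as arrays instead, reusing the inputRoot and resolution variables and map names from the question (the load_map helper name is made up for this sketch):

import numpy as np
from PIL import Image

def load_map( map_postfix ):
    # np.asarray on an RGBA image yields a ( height, width, 4 ) uint8 array
    file_name = inputRoot + '-' + map_postfix + resolution + '.png'
    return np.asarray( Image.open( file_name ).convert( 'RGBA' ) )

color   = load_map( 'ColorOnly'    )
ambient = load_map( 'AmbientLight' )
shine   = load_map( 'Shininess'    )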

From the comments:

numpy works much faster when you operate on a whole array at once (or at least a whole vector at a time) instead of looping over individual elements. Are you familiar with that idea? This question looks like a very typical case of that problem.

I forgot to show my import statements at the top. Is this the correct usage? Should I still be using numpy?

You should try to perform the multiplication, division, etc. on the color and shine arrays as a whole, rather than on individual elements of the arrays, and likewise create an ambient_mod array, something like ambient_mod_arr = ambient[:,:,0] / 255.0. This approach takes some getting used to, and it is hard to explain in a short comment, but it is fundamental to using numpy effectively.

OK, that makes sense. That is how the math works in OpenGL SL, so I just need to find the syntax for the vector math in this context.

Besides Marius' suggestion, you could also try an optimiser (BSD licence): Numba lets you pick methods and JIT-compile them.

I forgot to show my import statement at the top: import numpy as np. Should I still use it?

No problem, David, no other imports are needed. numpy has internal functionality designed to analyse and speed up the order/scale of iterated matrix operations with respect to its internal data representation (FORTRAN ordering, C ordering, sparse mapping of the actual data cells), so do not think about the internals; stay above the numpy array abstraction. Also, you are working with byte-encoded RGBA, so keeping most of the operations in numpy.int avoids re-casting the dtype to float or losing precision on rounding. [:,:,0] is enough to say "all i-s, all j-s" of [i,j][0]. Test it.

My initial tests show this is going to be a huge help! Unfortunately one of the lines is not right and I cannot work out the syntax: arr[:,:,:3] = color[:,:,:3] * ambient[:,:,0] raises ValueError: operands could not be broadcast together with shapes (1333,2000,3) (1333,2000). It does not seem to realise this should be a scalar multiplying each vector. How do I correct this?

@David ref. the updated syntax, with the numpy vectorised broadcasting reduced by one or two dimensions. Looking forward to your performance measurements.

OK, using that code I went from 22.14 seconds to 0.1624 seconds! The code at the bottom of the post did not run as posted; I used code similar to the block above (which I did not see until after I had corrected it myself). You may want to edit it so the final answer has the correct code: arr[:,:,0] = color[:,:,0] * (ambient[:,:,0] / 255.0); arr[:,:,1] = color[:,:,1] * (ambient[:,:,0] / 255.0); arr[:,:,2] = color[:,:,2] * (ambient[:,:,0] / 255.0); arr[:,:,3] = shine[:,:,0]
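
For reference, the corrected assignments reported in that last comment, laid out as a sketch that assumes color, ambient and shine are already numpy arrays of shape ( height, width, 4 ), as discussed above:

# corrected vectorised form from the comments: fold the /255.0 normalisation
# into each colour-channel multiplication via a per-pixel ambient_mod array
ambient_mod_arr = ambient[:,:,0] / 255.0
arr = np.zeros( ( height, width, 4 ), dtype = int )
arr[:,:,0] = color[:,:,0] * ambient_mod_arr    # R scaled by ambient
arr[:,:,1] = color[:,:,1] * ambient_mod_arr    # G scaled by ambient
arr[:,:,2] = color[:,:,2] * ambient_mod_arr    # B scaled by ambient
arr[:,:,3] = shine[:,:,0]                      # alpha taken from the shininess map
return Image.fromarray( arr.astype( np.uint8 ) )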