
Performance: Python operations on sub-dimensions of a numpy array


Many numpy functions provide the option to operate along a particular axis with an axis= argument. My questions are:

  • How is this "along an axis" operation implemented? Or, a more direct question:
  • How can I efficiently write my own function that provides a similar option?
  • I have noticed that numpy provides np.apply_along_axis, which serves as the answer if the base function's input is a 1-D array.

    But what if my base function requires multi-dimensional input? For example, computing the 2-D moving average B over the first two dimensions (5,6) of a numpy array A with shape (5,6,2,3,4)? Like a generic function B = f_moving_average(A, axis=(0,1)).

    My current solution is to use numpy.swapaxes and numpy.reshape to achieve this. Example code for a 1-D moving-average function is:

    import pandas as pd
    import numpy as np
    def nanmoving_mean(data,window,axis=0):
        # NaN-aware moving mean along `axis`, using pandas' rolling mean
        kw = {'center':True,'window':window,'min_periods':1}
        if len(data.shape)==1:
            # 1-D input: a Series handles it directly
            # (.as_matrix() was later deprecated; .to_numpy() is the modern equivalent)
            return pd.Series(data).rolling(**kw).mean().as_matrix()
        elif len(data.shape)>=2:
            # N-D input: bring the target axis to the front, flatten the rest
            # into columns, roll down each column, then restore the shape
            tmp = np.swapaxes(data,0,axis)
            tmpshp = tmp.shape
            tmp = np.reshape( tmp, (tmpshp[0],-1), order='C' )
            tmp = pd.DataFrame(tmp).rolling(**kw).mean().as_matrix()
            tmp = np.reshape( tmp, tmpshp, order='C' )
            return np.swapaxes(tmp,0,axis)
        else:
            print('Invalid dimension!')
            return None
    
    data = np.random.randint(10,size=(2,3,6))
    print(data)
    nanmoving_mean(data,window=3,axis=2)
    
    Is this a common/efficient implementation for question 2? Any improvements/suggestions/new approaches are welcome.

    By the way, the reason I involve pandas here is that its rolling(...).mean() method handles NaN data correctly.

    Edit:
    Another way to ask my question might be: what is the syntax for looping over a "dynamic" number of dimensions?
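
    For concreteness, a minimal sketch of how the swapaxes/reshape idea could generalize to a pair of axes, i.e. the B = f_moving_average(A, axis=(0,1)) example above (the helper name and the use of scipy.ndimage.uniform_filter as a stand-in 2-D base function are only illustrative):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def apply_func2d_along_axes(func2d, data, axes):
        # Move the two target axes to the front, fold all remaining axes
        # into a single trailing one, apply the 2-D routine slice by slice,
        # then undo the reshaping. This replaces looping over a "dynamic"
        # number of dimensions with a single loop over the folded axis.
        moved = np.moveaxis(data, axes, (0, 1))
        shp = moved.shape
        flat = moved.reshape(shp[0], shp[1], -1)
        out = np.empty_like(flat)
        for k in range(flat.shape[2]):
            out[:, :, k] = func2d(flat[:, :, k])
        return np.moveaxis(out.reshape(shp), (0, 1), axes)

    A = np.random.rand(5, 6, 2, 3, 4)
    B = apply_func2d_along_axes(lambda m: uniform_filter(m, size=3), A, axes=(0, 1))
    print(B.shape)   # (5, 6, 2, 3, 4)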

    Not to get too deep into your problem, here is the key part of the apply_along_axis function (as viewed via IPython).

    It constructs two different indexing objects, i and ind. Say we specify axis=2; then this code effectively does

    outarr[i,j,l] = func1d( arr[i,j,:,l], ...)
    
    for all possible values of i, j and l. So there is quite a lot of code for what is a very basic iterative calculation.

    ind = [0]*(nd-1)   # ind is just a nd-1 list
    
    i = zeros(nd, 'O')        # i is a 1d array with a `slice` object
    i[axis] = slice(None, None)
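
    As a minimal sketch, the iteration above amounts to something like the following (the helper name apply_along_axis_loop is illustrative; np.ndindex supplies the loop over an arbitrary number of "other" dimensions):

    import numpy as np

    def apply_along_axis_loop(func1d, axis, arr):
        # Move the target axis to the end, then loop over every index
        # combination of the remaining dimensions (the role of i, j, l above)
        # and apply func1d to each 1-D slice.
        moved = np.moveaxis(arr, axis, -1)
        out = np.empty(moved.shape[:-1])
        for idx in np.ndindex(*moved.shape[:-1]):
            out[idx] = func1d(moved[idx])
        return out

    a = np.arange(24.).reshape(2, 3, 4)
    print(np.allclose(apply_along_axis_loop(np.mean, 2, a), a.mean(axis=2)))  # True

    This sketch assumes func1d returns a scalar; apply_along_axis itself also handles func1d returning arrays, which accounts for much of the extra code.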
    
    I'm not familiar with the pandas rolling methods. But there have been many numpy moving-average questions; scipy.signal.convolve2d may be useful, and np.lib.stride_tricks.as_strided has also been used.
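
    A minimal sketch of the stride-tricks route, assuming NumPy >= 1.20 for sliding_window_view (the safer wrapper around as_strided); note it does not pad the boundaries the way a centered pandas rolling mean does:

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def moving_mean_windows(data, window, axis=-1):
        # View with an extra trailing axis of length `window`, then a
        # NaN-aware mean over it. The output is shorter by window-1 along
        # `axis`, since there is no boundary padding here.
        win = sliding_window_view(data, window, axis=axis)
        return np.nanmean(win, axis=-1)

    data = np.random.rand(2, 3, 6)
    data[0, 0, 2] = np.nan
    print(moving_mean_windows(data, window=3, axis=2).shape)   # (2, 3, 4)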

    The idea of using reshape and swapaxes (or transpose) to reduce the dimensional complexity is also a good one.

    (This isn't a solution; rather, it's throwing out a few ideas that come to mind from other "moving average" questions. It's too late tonight to do more research.)

    We can use convolution-based windowed summations here.

    The basic steps are:

    • As a pre-processing step, replace the NaNs with 0s, since we need windowed summations of the input data.
    • Get the windowed summations of the data values and also of the mask of NaNs. We will use boundary elements as zeros.
    • Subtract the windowed counts of NaNs from the window size to get the counts of valid elements responsible for each summation.
    • For the boundary elements, progressively fewer elements contribute to the summations.

    Now, these windowed summations can also be obtained with relatively more efficient methods. Another benefit is that we can specify the axis along which these summations/averagings are to be performed.
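
    To make the counting logic concrete, a minimal 1-D sketch of those steps (window W = 3; the vectorized approaches below do the same thing along the last axis of an N-D array):

    import numpy as np

    x = np.array([1., np.nan, 2., 3., np.nan, 5.])
    W = 3
    hW = (W - 1)//2                          # half-window size

    nan_mask = np.isnan(x)
    x0 = np.where(nan_mask, 0, x)            # step 1: NaNs -> 0s

    kernel = np.ones(W)
    value_sums = np.convolve(x0, kernel, mode='same')                      # step 2: windowed sums of values
    nan_counts = np.convolve(nan_mask.astype(float), kernel, mode='same')  # ...and of the NaN mask

    # steps 3-4: number of in-bounds elements in each window (fewer at the edges)
    b_sizes = hW + 1 + np.arange(hW)
    counts = np.hstack((b_sizes, W*np.ones(len(x) - 2*hW), b_sizes[::-1]))

    print(value_sums/(counts - nan_counts))  # NaN-aware moving mean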

    With a mix of Scipy's 2D convolution and 1D uniform filter, we have a few approaches, as listed below.

    Import the relevant Scipy functions -

    from scipy.signal import convolve2d as conv2
    from scipy.ndimage.filters import uniform_filter1d as uniff
    
    Approach #1:

    def nanmoving_mean_numpy(data, W): # data: input array, W: Window size
        N = data.shape[-1]
        hW = (W-1)//2

        nan_mask = np.isnan(data)
        data1 = np.where(nan_mask,0,data)

        value_sums = conv2(data1.reshape(-1,N),np.ones((1,W)),'same', boundary='fill')
        nan_sums = conv2(nan_mask.reshape(-1,N),np.ones((1,W)),'same', boundary='fill')

        value_sums.shape = data.shape
        nan_sums.shape = data.shape

        b_sizes = hW+1+np.arange(hW) # Boundary sizes
        count = np.hstack(( b_sizes , W*np.ones(N-2*hW), b_sizes[::-1] ))
        return value_sums/(count - nan_sums)

    Approach #2:

    def nanmoving_mean_numpy_v2(data, W): # data: input array, W: Window size
        N = data.shape[-1]
        hW = (W-1)//2

        nan_mask = np.isnan(data)
        data1 = np.where(nan_mask,0,data)

        value_sums = uniff(data1,size=W, axis=-1, mode='constant')*W
        nan_sums = conv2(nan_mask.reshape(-1,N),np.ones((1,W)),'same', boundary='fill')
        nan_sums.shape = data.shape

        b_sizes = hW+1+np.arange(hW) # Boundary sizes
        count = np.hstack(( b_sizes , W*np.ones(N-2*hW,dtype=int), b_sizes[::-1] ))
        out = value_sums/(count - nan_sums)
        out = np.where(np.isclose( count, nan_sums), np.nan, out)
        return out

    Approach #3:

    def nanmoving_mean_numpy_v3(data, W): # data: input array, W: Window size
        N = data.shape[-1]
        hW = (W-1)//2

        nan_mask = np.isnan(data)
        data1 = np.where(nan_mask,0,data)
        nan_avgs = uniff(nan_mask.astype(float),size=W, axis=-1, mode='constant')

        b_sizes = hW+1+np.arange(hW) # Boundary sizes
        count = np.hstack(( b_sizes , W*np.ones(N-2*hW), b_sizes[::-1] ))
        scale = ((count/float(W)) - nan_avgs)
        out = uniff(data1,size=W, axis=-1, mode='constant')/scale
        out = np.where(np.isclose( scale, 0), np.nan, out)
        return out
    
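    All three implementations work along the last axis; as a minimal usage sketch, an arbitrary axis can be handled by moving it to the end first with np.moveaxis (the wrapper name here is just illustrative):

    # Apply a last-axis implementation along an arbitrary axis
    def nanmoving_mean_along(data, W, axis, impl=nanmoving_mean_numpy_v3):
        moved = np.moveaxis(data, axis, -1)   # bring the target axis to the end
        out = impl(moved, W)                  # run the last-axis routine
        return np.moveaxis(out, -1, axis)     # restore the original layout

    data = np.random.rand(5, 6, 7)
    data[data < 0.1] = np.nan
    res = nanmoving_mean_along(data, W=3, axis=1)
    print(res.shape)   # (5, 6, 7)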
    
    Runtime test

    Dataset #1:

    Dataset #2 [a bigger dataset]:

    In [811]: # Create random input array and insert NaNs
     ...: data = np.random.randint(10,size=(120,130,160)).astype(float)
     ...: 
     ...: # Add 10% NaNs across the data randomly
     ...: idx = np.random.choice(data.size,size=int(data.size*0.1),replace=0)
     ...: data.ravel()[idx] = np.nan
     ...: 
    
    In [812]: %timeit nanmoving_mean(data,window=W,axis=2)
         ...: %timeit nanmoving_mean_numpy(data, W)
         ...: %timeit nanmoving_mean_numpy_v2(data, W)
         ...: %timeit nanmoving_mean_numpy_v3(data, W)
         ...: 
    1 loops, best of 3: 796 ms per loop
    1 loops, best of 3: 486 ms per loop
    1 loops, best of 3: 275 ms per loop
    10 loops, best of 3: 161 ms per loop
    

    I don't know where to look at the apply_along_axis code, but how does it construct i and ind? @ShichuZhu If you are looking for performance, apply_along_axis won't help. @Divakar If the base function only handles 1-D input, then looping over all the other dimensions is the only way, unless the base function is modified to include vectorized operations, I guess. @ShichuZhu No need to loop. There are many vectorized options for summing elements within windows.