
Ordered reduce with multiple functions in Python


I need to reduce some lists where, depending on the element types, the speed and implementation of the binary operation varies, i.e. large speed gains can be had by reducing certain pairs first with specific functions. For example

foo(a[0], bar(a[1], a[2]))

may be a lot slower than

bar(foo(a[0], a[1]), a[2])

but in this case gives the same result.
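As a toy illustration of how the reduction order can change the cost without changing the result (the example and names here are mine, not from the question), consider combining one long list with two short ones:

```python
big = list(range(10**6))
small = [0]

def cat(a, b):
    # list concatenation: cost is roughly len(a) + len(b)
    return a + b

# cat(big, cat(small, small)) copies `big` once,
# while cat(cat(big, small), small) copies `big` twice:
# the same result, but roughly double the work.
r1 = cat(big, cat(small, small))
r2 = cat(cat(big, small), small)
assert r1 == r2
```

The asymmetry in the question's `foo`/`bar` case is the same idea: the order of pairwise reductions matters for speed even when the final value is identical.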

I already have the code that produces an optimal ordering in the form of a list of tuples (pair_index, binary_function). I am struggling to implement an efficient function that performs the reduction, ideally one that returns a new partial function which can then be reused on lists of the same type-ordering but with different values.

Simple and slow(?) solution

Here is my naive solution, involving a for loop, deletion of elements, and a closure over the (pair_index, binary_function) list to return a 'precomputed' function.

def ordered_reduce(a, pair_indexes, binary_functions, precompute=False):
    """
    a: list to reduce, length n
    pair_indexes: order of pairs to reduce, length (n-1)
    binary_functions: functions to use for each reduction, length (n-1)
    """
    def ord_red_func(x):
        y = list(x)  # copy so as not to mutate the input
        for p, f in zip(pair_indexes, binary_functions):
            b = f(y[p], y[p+1])
            # Replace pair
            del y[p]
            y[p] = b
        return y[0]

    return ord_red_func if precompute else ord_red_func(a)

>>> foos = (lambda a, b: a - b, lambda a, b: a + b, lambda a, b: a * b)
>>> ordered_reduce([1, 2, 3, 4], (2, 1, 0), foos)
1
>>> 1 * (2 + (3-4))
1
And how the precompute works:

>>> foo = ordered_reduce(None, (0, 1, 0), foos, precompute=True)
>>> foo([1, 2, 3, 4])
-7
>>> (1 - 2) * (3 + 4)
-7
However, it needs to copy the entire list, and is (therefore?) also slow. Is there a better / standard way to do this?

(Edit:) Some timings. This is something of a worst case and slightly cheating, as reduce does not take an iterable of functions, but a function which does (but without the ordering) is still pretty fast:

def multi_reduce(fs, xs):
    xs = iter(xs)
    x = next(xs)
    for f, nx in zip(fs, xs):
        x = f(x, nx)
    return x

>>> %timeit multi_reduce(fs, xs)
100 loops, best of 3: 8.71 ms per loop
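For concreteness, a small usage sketch of multi_reduce (repeating the definition above; the example values are mine):

```python
from operator import add, sub

def multi_reduce(fs, xs):
    # fold xs from the left, taking the next function from fs at each step
    xs = iter(xs)
    x = next(xs)
    for f, nx in zip(fs, xs):
        x = f(x, nx)
    return x

# alternately add and subtract, always pairing from the left:
# ((1 + 2) - 3) + 4
multi_reduce([add, sub, add], [1, 2, 3, 4])  # -> 4
```

This fixes the pairing order to a left fold; the question is precisely about allowing an arbitrary pairing order.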
(EDIT2:) and, for interest, the performance of a massively cheating 'compiled' version, which gives some idea of the total overhead involved.

from numba import jit

@jit(nopython=True)
def numba_sum(xs):
    y = 0
    for x in xs:
        y += x
    return y

>>> %timeit numba_sum(xs)
1000 loops, best of 3: 1.46 ms per loop

When I read this question, I immediately thought of Reverse Polish Notation (RPN). While it may not be the best approach, it still gives a substantial speedup in this case.

My second thought was that you might get an equivalent result by simply reordering the sequence xs appropriately, so as to get rid of del y[p]. (Arguably the best performance would be achieved if the whole reduce procedure were written in C. But that is a different kettle of fish.)

Reverse Polish Notation

If you are not familiar with RPN, please read the short explanation in the Wikipedia article. Basically, all operations can be written down without parentheses. For example,

(1 - 2) * (3 + 4)

in RPN is

1 2 - 3 4 + *

while

1 - (2 * (3 + 4))

becomes

1 2 3 4 + * -

Here is a simple implementation of an RPN parser. I separated the list of objects from the RPN sequence, so that the same sequence can be used directly for different lists.

def rpn(arr, seq):
    '''
    Reverse Polish Notation algorithm
    (this version works only for binary operators)
    arr: array of objects
    seq: rpn sequence containing indices of objects from arr and functions
    '''
    stack = []
    for x in seq:
        if isinstance(x, int):
            # it's an object: push it to stack
            stack.append(arr[x])
        else:
            # it's a function: pop two objects, apply the function,
            # push the result to stack
            b = stack.pop()
            #a = stack.pop()
            #stack.append(x(a,b))
            ## shortcut:
            stack[-1] = x(stack[-1], b)
    return stack.pop()
Example of usage:

# Say we have an array
arr = [100, 210, 42, 13]
# and want to calculate
#   (100 - 210) * (42 + 13)
# It translates to RPN:
#   100 210 - 42 13 + *
# or, in terms of indices:
#   arr[0] arr[1] - arr[2] arr[3] + *
# So we apply:
rpn(arr, [0, 1, subtract, 2, 3, add, multiply])
To apply RPN to your case, you would need either to generate the RPN sequences from scratch, or to convert your (pair_indexes, binary_functions) into them. I have not thought about a converter, but it surely can be done.
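For what it's worth, here is one possible sketch of such a converter (the name `to_rpn` and its exact interface are my own, not from the answer). The idea: represent each current element by the RPN subsequence that produces it; combining a pair then just concatenates the two subsequences and appends the function.

```python
def to_rpn(n, pair_indexes, binary_functions):
    """Convert ordered_reduce's (pair_indexes, binary_functions) into an
    RPN sequence accepted by rpn(). Hypothetical helper, for illustration."""
    # seqs[i] is the RPN subsequence producing the current i-th element;
    # initially element i is just its own index.
    seqs = [[i] for i in range(n)]
    for p, f in zip(pair_indexes, binary_functions):
        # f(y[p], y[p+1]) in ordered_reduce corresponds in RPN to
        # "left subsequence, right subsequence, f".
        seqs[p] = seqs[p] + seqs[p + 1] + [f]
        del seqs[p + 1]
    return seqs[0]
```

With the question's example, `to_rpn(4, (2, 1, 0), foos)` yields `[0, 1, 2, 3, foos[0], foos[1], foos[2]]`, which rpn evaluates to the same result as ordered_reduce. Note that the del here runs only once, at precompute time, not on every reduction.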

Tests

First, your original test:

from functools import reduce
from itertools import repeat
from operator import add
from random import random

r = 100000
xs = [random() for _ in range(r)]
ps = [0]*(r-1)
fs = repeat(add)
foo = ordered_reduce(None, ps, fs, precompute=True)
rpn_seq = [0] + [x for i, f in zip(range(1, r), repeat(add)) for x in (i, f)]
rpn_seq2 = list(range(r)) + list(repeat(add, r-1))
# Here rpn_seq denotes (((_ + _) + _) + ...)
# and rpn_seq2 denotes (_ + (_ + (_ + ...))).
# Obviously, they are not equivalent, but with 'add' they yield the same result.

%timeit reduce(add, xs)
100 loops, best of 3: 7.37 ms per loop
%timeit foo(xs)
1 loops, best of 3: 1.71 s per loop
%timeit rpn(xs, rpn_seq)
10 loops, best of 3: 79.5 ms per loop
%timeit rpn(xs, rpn_seq2)
10 loops, best of 3: 73 ms per loop

# Pure numpy just out of curiosity:
%timeit np.sum(np.asarray(xs))
100 loops, best of 3: 3.84 ms per loop
xs_np = np.asarray(xs)
%timeit np.sum(xs_np)
The slowest run took 4.52 times longer than the fastest. This could mean that an intermediate result is being cached 
10000 loops, best of 3: 48.5 µs per loop
So, rpn is about 10 times slower than reduce, but some 20 times faster than ordered_reduce.

Now, let's try something more complicated: alternately adding and multiplying matrices. I need a special function for that, to test against reduce:
import numpy as np

add_or_dot_b = 1
def add_or_dot(x, y):
    '''calls 'add' and 'np.dot' alternately'''
    global add_or_dot_b
    if add_or_dot_b:
        out = x+y
    else:
        out = np.dot(x,y)
    add_or_dot_b = 1 - add_or_dot_b
    # normalizing out to avoid `inf` in results
    return out/np.max(out)

r = 100001      # +1 for convenience
                # (we apply an even number of functions) 
xs = [np.random.rand(2,2) for _ in range(r)]
ps = [0]*(r-1)
fs = repeat(add_or_dot)
foo = ordered_reduce(None, ps, fs, precompute=True)
rpn_seq = [0] + [x for i, f in zip(range(1,r), repeat(add_or_dot)) for x in (i,f)]

%timeit reduce(add_or_dot, xs)
1 loops, best of 3: 894 ms per loop
%timeit foo(xs)
1 loops, best of 3: 2.72 s per loop
%timeit rpn(xs, rpn_seq)
1 loops, best of 3: 1.17 s per loop

Here, rpn is about 25% slower than reduce, and more than 2x faster than ordered_reduce.

Comments:

"Can you give some examples showing when ordered_reduce is slower than the direct computation (e.g. foo([1, 2, 3, 4]) vs (1 - 2) * (3 + 4))?" — "Added some timings."

"@PTRJ Only after posting my answer did I run a profiler on your ord_red_func. As I suspected, del y[p] is the culprit: in this example the program spends almost 90% of its time on that line, and about 50% in my matrix example. So I think any good solution has to get rid of that del. I had heard of RPN before but never looked into it; it seems a natural way to minimise the number of stored intermediate results while 'contracting' the binary tree (possibly even optimal?). Also, out of interest, your rpn function looked like a perfect fit for a deque (append/pop on the right only?), but it was slower than a plain list, at least for me."

"@jawknee I don't know whether RPN is optimal for binary trees. As for deque, I hadn't thought of it. If we only pop and push at the end, a regular list is quite efficient."
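For reference, the deque variant discussed in the comments might look like this (a sketch of my own; as noted above, a plain list tends to be at least as fast when you only append and pop on the right, since both are amortised O(1) for lists too):

```python
from collections import deque

def rpn_deque(arr, seq):
    """Same algorithm as rpn() above, with a deque as the stack."""
    stack = deque()
    for x in seq:
        if isinstance(x, int):
            # an object: push it
            stack.append(arr[x])
        else:
            # a function: pop the right operand, combine with the left
            b = stack.pop()
            stack.append(x(stack.pop(), b))
    return stack.pop()
```

For example, `rpn_deque([100, 210, 42, 13], [0, 1, subtract, 2, 3, add, multiply])` gives (100 - 210) * (42 + 13) = -6050, the same as the list-based rpn.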