
Python: How can I speed up creating a transition matrix in NumPy?


Here is the most basic way I know of to count the transitions in a Markov chain and use them to populate a transition matrix:

def increment_counts_in_matrix_from_chain(markov_chain, transition_counts_matrix):
    for i in xrange(1, len(markov_chain)):
        old_state = markov_chain[i - 1]
        new_state = markov_chain[i]
        transition_counts_matrix[old_state, new_state] += 1
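
For concreteness, here is a minimal usage sketch (the chain and matrix below are illustrative; like the rest of this post, it assumes Python 2, where xrange exists):

import numpy as np

chain = np.array([0, 2, 1, 2, 2, 0])   # integer state indices
counts = np.zeros((3, 3))              # one row/column per state

increment_counts_in_matrix_from_chain(chain, counts)
# counts[i, j] now holds the number of observed i -> j transitions:
# 0->2, 2->1, 1->2, 2->2 and 2->0 each occur once in this chain.
print(counts)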
I've tried speeding it up in three different ways:

1) A sparse-matrix one-liner, based on this Matlab code:

transition_matrix = full(sparse(markov_chain(1:end-1), markov_chain(2:end), 1))
In NumPy/SciPy, that looks like this:

from scipy.sparse import coo_matrix

def get_sparse_counts_matrix(markov_chain, number_of_states):
    return coo_matrix(([1]*(len(markov_chain) - 1), (markov_chain[0:-1], markov_chain[1:])), shape=(number_of_states, number_of_states))
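
Two points about the COO approach that are easy to miss (my notes, not part of the original): duplicate (old_state, new_state) pairs are summed when the COO matrix is materialized, which is exactly what turns it into a transition counter, and you need .toarray() to get a dense matrix back. A minimal sketch:

import numpy as np
from scipy.sparse import coo_matrix

chain = np.array([0, 2, 0, 2, 2, 0])
counts = coo_matrix(
    (np.ones(len(chain) - 1), (chain[:-1], chain[1:])),
    shape=(3, 3),
)
# Duplicate coordinate pairs are summed on conversion, so 0->2 and 2->0
# (each seen twice in the chain) end up as 2 in the dense matrix.
print(counts.toarray())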
2) Some Python tweaks, such as iterating over consecutive pairs with zip() (that snippet didn't survive in this copy; see the sketch below).
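
A plausible reconstruction of the missing zip() variant (my guess, not the original author's code):

for old_state, new_state in zip(markov_chain[:-1], markov_chain[1:]):
    transition_counts_matrix[old_state, new_state] += 1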

3) And a Queue:

from Queue import Queue  # Python 2 module; called 'queue' in Python 3

old_and_new_states_holder = Queue(maxsize=2)
old_and_new_states_holder.put(markov_chain[0])
for new_state in markov_chain[1:]:
    old_and_new_states_holder.put(new_state)
    old_state = old_and_new_states_holder.get()
    transition_counts_matrix[old_state, new_state] += 1
But none of these three approaches sped things up. In fact, everything but the zip() solution ended up at least 10x slower than my original solution.

Are there any other solutions worth looking into?



Improved solution for building the transition matrix from lots of chains
The best answer to the question above is DSM's. However, for anyone who wants to populate a transition matrix from a list of millions of Markov chains, the fastest way is this:

import itertools
import numpy

def fast_increment_transition_counts_from_chain(markov_chain, transition_counts_matrix):
    flat_coords = numpy.ravel_multi_index((markov_chain[:-1], markov_chain[1:]), transition_counts_matrix.shape)
    transition_counts_matrix.flat += numpy.bincount(flat_coords, minlength=transition_counts_matrix.size)

def get_fake_transitions(markov_chains):
    fake_transitions = []
    for i in xrange(1,len(markov_chains)):
        old_chain = markov_chains[i - 1]
        new_chain = markov_chains[i]
        end_of_old = old_chain[-1]
        beginning_of_new = new_chain[0]
        fake_transitions.append((end_of_old, beginning_of_new))
    return fake_transitions

def decrement_fake_transitions(fake_transitions, counts_matrix):
    for old_state, new_state in fake_transitions:
        counts_matrix[old_state, new_state] -= 1

def fast_get_transition_counts_matrix(markov_chains, number_of_states):
    """50% faster than original, but must store 2 additional slice copies of all markov chains in memory at once.
    You might need to break up the chains into manageable chunks that don't exceed your memory.
    """
    transition_counts_matrix = numpy.zeros([number_of_states, number_of_states])
    fake_transitions = get_fake_transitions(markov_chains)
    markov_chains = list(itertools.chain(*markov_chains))
    fast_increment_transition_counts_from_chain(markov_chains, transition_counts_matrix)
    decrement_fake_transitions(fake_transitions, transition_counts_matrix)
    return transition_counts_matrix
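
A minimal usage sketch for the functions above (the chain contents are made up; as elsewhere, this assumes Python 2, since get_fake_transitions uses xrange):

# Three short chains over 3 states. Only within-chain transitions are counted,
# because the spurious end-of-one-chain -> start-of-next-chain transitions
# introduced by concatenation are subtracted again.
chains = [[0, 1, 2, 2], [1, 0, 0], [2, 1, 1, 0]]
counts = fast_get_transition_counts_matrix(chains, number_of_states=3)
print(counts)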

Here's a faster way. The idea is to count the number of occurrences of each transition and use those counts in a vectorized update of the matrix. (I'm assuming the same transition can occur multiple times in markov_chain.) The Counter class from the collections library is used to count the occurrences of each transition.

from collections import Counter

def update_matrix(chain, counts_matrix):
    counts = Counter(zip(chain[:-1], chain[1:]))
    from_, to = zip(*counts.keys())
    counts_matrix[from_, to] += counts.values()
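
A quick correctness check of update_matrix against the loop from the question (my own sketch; again Python 2, and note that under Python 3 the last line of update_matrix would likely need list(counts.values()), since a dict view can't be added to the indexed array directly):

import numpy as np

chain = np.random.randint(0, 50, 500)
m_loop = np.zeros((50, 50))
m_counter = np.zeros((50, 50))

increment_counts_in_matrix_from_chain(chain, m_loop)
update_matrix(chain, m_counter)
print(np.array_equal(m_loop, m_counter))   # expected: True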
A timing example, in IPython:

In [64]: t = np.random.randint(0,50, 500)

In [65]: m1 = zeros((50,50))

In [66]: m2 = zeros((50,50))

In [67]: %timeit increment_counts_in_matrix_from_chain(t, m1)
1000 loops, best of 3: 895 us per loop

In [68]: %timeit update_matrix(t, m2)
1000 loops, best of 3: 504 us per loop

It's faster, but not an order of magnitude faster. For a real speedup, you might consider implementing this in Cython.

OK, here are a few ideas to tinker with, which bring some improvement (at the cost of human readability).

Let's start with a random vector of integers between 0 and 9, of length 3000:

import numpy as np

L = 3000
N = 10
states = np.random.randint(N, size=L)
transitions = np.zeros((N, N))
On my machine, the timeit performance of your method is 11.4 ms.

The first improvement is to avoid reading the data twice, by storing it in a temporary variable:

old = states[0]
for i in range(1,len(states)):
    new = states[i]
    transitions[new,old]+=1
    old=new
That gains you about 10% and brings the time down to 10.9 ms.

A more sophisticated approach uses strides:

def rolling(a, window):
    shape = (a.size - window + 1, window)
    strides = (a.itemsize, a.itemsize)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

state_2 = rolling(states, 2)
for i in range(len(state_2)):
    l,m = state_2[i,0],state_2[i,1]
    transitions[m,l]+=1
The strides let you read consecutive numbers of the array while tricking it into thinking the rows start at different offsets (OK, that's not well described, but if you spend some time reading about strides you will get it). This approach actually loses performance, going up to 12.2 ms, but it's the gateway to tricking the system even more. Flattening both the transition matrix and the strided array into one-dimensional arrays squeezes out a bit more performance:

transitions = np.zeros(N*N)
state_2 = rolling(states, 2)
state_flat = np.sum(state_2 * np.array([1, 10]), axis=1)
for i in state_flat:
    transitions[i] += 1
transitions = transitions.reshape((N, N))

That brings it down to 7.75 ms. Not an order of magnitude, but it's about 30% better than before anyway :)
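
To make the stride trick more concrete, here is a small sketch (my addition) of what rolling produces and how the flattening step maps each (old, new) pair to a single index:

import numpy as np

a = np.arange(6)          # [0 1 2 3 4 5]
pairs = rolling(a, 2)     # overlapping length-2 windows, built without copying:
print(pairs)              # [[0 1] [1 2] [2 3] [3 4] [4 5]]

# Each row [old, new] maps to the flat index old + 10*new (with N = 10 states),
# which after reshaping to (N, N) corresponds to transitions[new, old],
# matching the indexing convention used earlier in this answer.
flat = np.sum(pairs * np.array([1, 10]), axis=1)
print(flat)               # [10 21 32 43 54]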

How about something like this, taking advantage of np.bincount? Not super-robust, but functional. [Thanks to @Warren Weckesser for the setup.]

import numpy as np
from collections import Counter

def increment_counts_in_matrix_from_chain(markov_chain, transition_counts_matrix):
    for i in xrange(1, len(markov_chain)):
        old_state = markov_chain[i - 1]
        new_state = markov_chain[i]
        transition_counts_matrix[old_state, new_state] += 1

def using_counter(chain, counts_matrix):
    counts = Counter(zip(chain[:-1], chain[1:]))
    from_, to = zip(*counts.keys())
    counts_matrix[from_, to] = counts.values()

def using_bincount(chain, counts_matrix):
    flat_coords = np.ravel_multi_index((chain[:-1], chain[1:]), counts_matrix.shape)
    counts_matrix.flat = np.bincount(flat_coords, minlength=counts_matrix.size)

def using_bincount_reshape(chain, counts_matrix):
    flat_coords = np.ravel_multi_index((chain[:-1], chain[1:]), counts_matrix.shape)
    return np.bincount(flat_coords, minlength=counts_matrix.size).reshape(counts_matrix.shape)
where:

In [373]: t = np.random.randint(0,50, 500)
In [374]: m1 = np.zeros((50,50))
In [375]: m2 = m1.copy()
In [376]: m3 = m1.copy()

In [377]: timeit increment_counts_in_matrix_from_chain(t, m1)
100 loops, best of 3: 2.79 ms per loop

In [378]: timeit using_counter(t, m2)
1000 loops, best of 3: 924 us per loop

In [379]: timeit using_bincount(t, m3)
10000 loops, best of 3: 57.1 us per loop
[edit]

Avoiding .flat (at the cost of not working in-place) can save some time for small matrices:

In [80]: timeit using_bincount_reshape(t, m3)
10000 loops, best of 3: 22.3 us per loop
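
As a sanity check (my addition, Python 2 as above): starting from a zeroed matrix, the bincount versions reproduce the loop's counts exactly; note that using_counter and using_bincount overwrite counts_matrix (they use =, not +=), so they don't accumulate across calls.

import numpy as np

t = np.random.randint(0, 50, 500)
m_loop = np.zeros((50, 50))
m_bc = np.zeros((50, 50))

increment_counts_in_matrix_from_chain(t, m_loop)
using_bincount(t, m_bc)

print(np.array_equal(m_loop, m_bc))                              # expected: True
print(np.array_equal(m_loop, using_bincount_reshape(t, m_bc)))   # expected: True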

Just for fun, and because I've been wanting to try it out, I applied numba to your problem. In code, that only requires adding a decorator (although I've called it directly here, so that I could test both of the jit variants numba provides). The full setup is reproduced in the code block at the end of this answer.

And then the timings:

In [10]: %timeit increment_counts_in_matrix_from_chain(t,m1)
100 loops, best of 3: 2.38 ms per loop

In [11]: %timeit autojit_func(t,m2)                         

10000 loops, best of 3: 67.5 us per loop

In [12]: %timeit jit_func(t,m3)
100000 loops, best of 3: 4.93 us per loop
The autojit method does some guessing based on the runtime inputs, while the jit function has the types specified. You have to be a little careful, because at this early stage numba won't tell you there was an error with jit if you pass in the wrong type for an input; it will just spit out an incorrect answer.

Still, getting a 35x and a 485x speed-up without any code changes, just by adding a call to numba (which can also be used as a decorator), is pretty impressive in my book. You could probably get similar results with Cython, but it would require a bit more boilerplate and writing a setup.py file.


I also like this solution because the code stays readable, and you can write it the way you originally thought about implementing the algorithm.

Comments:

I'm accepting this answer, but I have a follow-up. When I repeatedly used using_bincount to fill a transition counts matrix from thousands of Markov chains, my original code was faster. I assume this is because counts_matrix.flat += numpy.bincount(flat_coords, minlength=counts_matrix.size) is slower than my original code at updating counts_matrix. Any thoughts on that? Update: the fastest solution I've found for populating the transition matrix from lots of Markov chains is to join the chains together one after another, use bincount, then find the fake transitions (from the end of one chain to the beginning of the next) and decrement the count for each of them. That solution is about 25% faster than my original one.

@someguy: feel free to post the best solution you found for your use case as an answer and accept it.

Awesome! What's the startup cost?

@DSM: not sure whether this is the best way to time it, but %timeit autojit_func = numba.autojit(); autojit_func(t, m2) gives about 81 µs. When I do something similar for the pure jit version, I get a bunch of garbage-collection warnings, which I think throw off the timing.

For reference, the full setup used for the numba timings above:
import numpy as np
import numba

def increment_counts_in_matrix_from_chain(markov_chain, transition_counts_matrix):
    for i in xrange(1, len(markov_chain)):
        old_state = markov_chain[i - 1]
        new_state = markov_chain[i]
        transition_counts_matrix[old_state, new_state] += 1

autojit_func = numba.autojit()(increment_counts_in_matrix_from_chain)
jit_func = numba.jit(argtypes=[numba.int64[:,::1],numba.double[:,::1]])(increment_counts_in_matrix_from_chain)

t = np.random.randint(0,50, 500)
m1 = np.zeros((50,50))
m2 = np.zeros((50,50))
m3 = np.zeros((50,50))