Summing a sparse matrix in Python

I have a three-dimensional array (np.ndarray) that is mostly zeros. I want to sum over its first dimension, but this is rather slow. I have looked at csr_matrix, but csr does not support three-dimensional arrays. Is there a faster way to sum an almost-sparse ndarray? Below is an excerpt of my current code.

Related question: (would creating a homemade sparse ndarray class be overkill?)

Edit

After hpaulj's answer below, I did some more timing tests, see below. It seems that reshaping does not help the sum much, while converting to a csr_matrix and back to numpy actually hurts performance. I am still thinking about using the indices directly (called rand_persons, rand_articles and rand_days below), since in my original problem I work with those indices a lot.

from timeit import timeit
from scipy.sparse import csr_matrix
import numpy as np

def create_test_data():
    '''
    dtype = int64
    1% nonzero, 1000x1000x100: 1.3 s, 
    1% nonzero, 10000x1000x100: 13.3 s
    0.1% nonzero, 10000x1000x100: 2.7 s
    1ppm nonzero, 10000x1000x100: 0.007 s
    '''
    global purchases
    N_persons = 10000
    N_articles = 1000
    N_days = 100
    purchases = np.zeros(shape=(N_days, N_persons, N_articles), dtype=int)
    N_elements = N_persons * N_articles * N_days
    # number of purchases to draw (here ~1 ppm of all elements); size must be an integer
    rand_persons = np.random.choice(a=range(N_persons), size=N_elements // 10**6, replace=True)
    rand_articles = np.random.choice(a=range(N_articles), size=N_elements // 10**6, replace=True)
    rand_days = np.random.choice(a=range(N_days), size=N_elements // 10**6, replace=True)
    for (i, j, k) in zip(rand_persons, rand_articles, rand_days):
        purchases[k, i, j] += 1

def sum_over_first_dim_A():
    '''
    0.1% nonzero, 10000x1000x99: 1.57s (average over 10)
    1ppm nonzero, 10000x1000x99: 1.70s (average over 10)
    '''
    global purchases
    d = purchases[:99, :, :]
    np.sum(d, axis=0)

def sum_over_first_dim_B():
    '''
    0.1% nonzero, 10000x1000x99: 1.55s (average over 10)
    1ppm nonzero, 10000x1000x99: 1.37s (average over 10)
    '''
    global purchases
    d = purchases[:99, :, :]
    (N_days, N_persons, N_articles) = d.shape 
    d.reshape(N_days, -1).sum(0).reshape(N_persons, N_articles) 

def sum_over_first_dim_C():
    '''
    0.1% nonzero, 10000x1000x99: 7.54s (average over 10)
    1ppm nonzero, 10000x1000x99: 7.44s (average over 10)
    '''
    global purchases
    d = purchases[:99, :, :]
    (N_days, N_persons, N_articles) = d.shape 
    r = csr_matrix(d.reshape(N_days, -1))
    t = r.sum(axis=0)
    np.reshape(t, newshape=(N_persons, N_articles))

if __name__ == '__main__':
    print (timeit(create_test_data, number=10))
    print (timeit(sum_over_first_dim_A, number=10))
    print (timeit(sum_over_first_dim_B, number=10))
    print (timeit(sum_over_first_dim_C, number=10))
Edit 2

I have now found a faster way to do the sum: I create a numpy array of sparse matrices. However, the initial creation of those matrices still takes some time; I currently do it with a loop. Is there a way to speed this up?

def create_test_data():
    [ ... ]
    '''
    0.1% nonzero, 10000x1000x100: 2.1 s
    1ppm nonzero, 10000x1000x100: 0.45 s
    '''
    global sp_purchases
    # lil_matrix comes from scipy.sparse (not imported in the excerpt above)
    sp_purchases = np.empty(N_days, dtype=lil_matrix)
    for i in range(N_days):
        sp_purchases[i] = lil_matrix((N_persons, N_articles))
    for (i, j, k) in zip(rand_persons, rand_articles, rand_days):
        sp_purchases[k][i, j] += 1

def sum_over_first_dim_D():
    '''
    0.1% nonzero, 10000x1000x99: 0.47s (average over 10)
    1ppm nonzero, 10000x1000x99: 0.41s (average over 10)
    '''
    global sp_purchases
    d = sp_purchases[:99]
    np.sum(d)
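One way the per-day construction might be sped up (just a sketch, not part of the original post): build each day's matrix in one go with scipy's coo_matrix instead of incrementing lil_matrix cells in a Python loop, since coo_matrix sums repeated (row, col) pairs for you. The function name create_test_data_coo is made up here, and it assumes the rand_persons, rand_articles, rand_days arrays and the N_* sizes from create_test_data above; whether it is actually faster would have to be timed.

from scipy.sparse import coo_matrix
import numpy as np

def create_test_data_coo(rand_persons, rand_articles, rand_days,
                         N_persons, N_articles, N_days):
    # One coo_matrix per day, built directly from the index arrays.
    # Duplicate (person, article) pairs on the same day are summed by scipy.
    sp_purchases = np.empty(N_days, dtype=object)
    counts = np.ones(len(rand_persons), dtype=int)
    for day in range(N_days):
        on_day = (rand_days == day)
        sp_purchases[day] = coo_matrix(
            (counts[on_day], (rand_persons[on_day], rand_articles[on_day])),
            shape=(N_persons, N_articles))
    return sp_purchases

The resulting object array can be sliced and summed with np.sum just like the lil_matrix version in sum_over_first_dim_D.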

You can reshape the array so that it is 2-D, do the sum, and then reshape the result back:

r.reshape(4,-1).sum(0).reshape(3,4)   # == r.sum(0)

The reshaping does not add much processing time. You could convert the 2-D version to a sparse format and see whether that saves any time. My guess is that your array would have to be very large and very sparse to beat a straightforward numpy sum. If you have other reasons for using a sparse format it may be worth it, but for this sum alone it isn't needed. Test it yourself, though.
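For reference, a minimal self-contained illustration of that reshape-sum-reshape equivalence (the array name and sizes below are made up for the example):

import numpy as np

d = np.random.randint(0, 2, size=(99, 50, 40))   # (N_days, N_persons, N_articles)
direct = d.sum(axis=0)                           # plain sum over the first axis
via_2d = d.reshape(d.shape[0], -1).sum(0).reshape(d.shape[1:])
assert np.array_equal(direct, via_2d)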

Since your data is already in a sparse format (indices and values), you can do the sum yourself. Just create an array with the shape of the final summed array, then loop over the indices and accumulate each value into the right slot. The sum2d function below shows how to do this when summing over the first dimension:

import timeit
import numpy as np

n = 1000
s = 1000
inds = np.random.randint(0, n, size=(s, 3))
vals = np.random.normal(size=s)


def sum3d():
    a = np.zeros((n, n, n))
    for [i, j, k], v in zip(inds, vals):
        a[i, j, k] = v

    return a.sum(axis=0)


def sum2d():
    b = np.zeros((n, n))
    for [i, j, k], v in zip(inds, vals):
        b[j, k] += v

    return b


kwargs = dict(repeat=3, number=1)
print(min(timeit.repeat('sum3d()', 'from __main__ import sum3d', **kwargs)))
print(min(timeit.repeat('sum2d()', 'from __main__ import sum2d', **kwargs)))
assert np.allclose(sum3d(), sum2d())
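As a side note that is not part of the answer above: the Python loop in sum2d can usually be replaced by np.add.at, which accumulates in place without buffering and therefore handles repeated (j, k) pairs correctly. A sketch, reusing the n, inds and vals defined just above:

def sum2d_add_at():
    # Vectorized counterpart of sum2d: add each value at its (j, k) position;
    # np.add.at sums values that land on the same position.
    b = np.zeros((n, n))
    np.add.at(b, (inds[:, 1], inds[:, 2]), vals)
    return b

assert np.allclose(sum2d(), sum2d_add_at())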

Numpy arrays might work, although they are not an efficient way to store a sparse matrix, so depending on the size and sparsity of your array they may not be useful. What is the starting format of the data? If the data is already in an array, then I don't think converting it to a sparse format and summing would be faster.

The original starting data structure really is three (i, j, k) index arrays and an equally long array of values. I built the numpy array from those, and that was fast because the data is so sparse (roughly 1 in 1,000,000 elements is nonzero).
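As a sketch of what that can look like in practice (not from the discussion above): given the three index arrays and the value array, the sum over the first index can be computed without ever building the dense 3-D array, because scipy's coo_matrix sums duplicate (row, col) entries on conversion. All names and sizes below are made up for the illustration.

import numpy as np
from scipy.sparse import coo_matrix

n = 1000
s = 1000
i_idx, j_idx, k_idx = np.random.randint(0, n, size=(3, s))   # the three index arrays
vals = np.random.normal(size=s)                              # the matching values

# Dropping the first index and feeding (j, k) as coordinates makes coo_matrix
# sum all entries that share a (j, k) pair, which is exactly the sum over i.
summed = coo_matrix((vals, (j_idx, k_idx)), shape=(n, n)).tocsr().toarray()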