Python: applying a convolution to a scipy.sparse matrix


I am trying to compute a convolution on a sparse matrix. Here is the code:

import numpy as np
import scipy.sparse, scipy.signal

M = scipy.sparse.csr_matrix([[0,1,0,0],[1,0,0,1],[1,0,1,0],[0,0,0,0]])
kernel = np.ones((3,3))
kernel[1,1]=0
X = scipy.signal.convolve(M, kernel, mode='same')
This produces the following error:

ValueError: volume and kernel should have the same dimensionality
Computing scipy.signal.convolve(M.todense(), kernel, mode='same') gives the expected result. However, I would like to keep the computation sparse.
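For reference, here is a minimal, self-contained sketch of that dense workaround (switching to M.toarray() to get a plain ndarray); the printed values are the neighbor sums I expect:

import numpy as np
import scipy.sparse, scipy.signal

M = scipy.sparse.csr_matrix([[0,1,0,0],[1,0,0,1],[1,0,1,0],[0,0,0,0]])
kernel = np.ones((3,3))
kernel[1,1] = 0  # zero out the center so each output cell is the sum of its 8 neighbors

# densify first, then convolve; correct, but gives up sparsity
X_dense = scipy.signal.convolve(M.toarray(), kernel, mode='same')
print(X_dense)
# [[2. 1. 2. 1.]
#  [2. 4. 3. 1.]
#  [1. 3. 1. 2.]
#  [1. 2. 1. 1.]]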

More generally, my goal is to compute the 1-hop neighborhood sum for the sparse matrix M. If you have any good ideas on how to compute this on a sparse matrix, I would be happy to hear them.

Edit:

I just tried a solution for this specific kernel (sum of the neighbors), and it is not faster than the dense version (although I have not tried it at very large dimensions). Here is the code:

import numpy as np
import scipy.sparse, scipy.signal

M = scipy.sparse.csr_matrix([[0,1,0,0],[1,0,0,1],[1,0,1,0],[0,0,0,0]])
kernel = np.ones((3,3))
kernel[1,1]=0
row_ind, col_ind = M.nonzero() 
X = scipy.sparse.csr_matrix((M.shape[0]+2, M.shape[1]+2))
for i in [0, 1, 2]:
    for j in [0, 1, 2]:
        if i!= 1 or j !=1:
            X += scipy.sparse.csr_matrix( (M.data, (row_ind+i, col_ind+j)), (M.shape[0]+2, M.shape[1]+2))
X = X[1:-1, 1:-1]
===

Why do posters show runnable code, but not the results? Most of us cannot run code like this in our heads.

In [5]: M.A
Out[5]: 
array([[0, 1, 0, 0],
       [1, 0, 0, 1],
       [1, 0, 1, 0],
       [0, 0, 0, 0]])
Your alternative: while the result is a sparse matrix, it ends up with values filled in everywhere. Even if M were larger and sparser, X would be denser.

In [7]: row_ind, col_ind = M.nonzero()
   ...: X = sparse.csr_matrix((M.shape[0]+2, M.shape[1]+2))
   ...: for i in [0, 1, 2]:
   ...:     for j in [0, 1, 2]:
   ...:         if i!= 1 or j !=1:
   ...:             X += sparse.csr_matrix( (M.data, (row_ind+i, col_ind+j)), (M
   ...: .shape[0]+2, M.shape[1]+2))
   ...: X = X[1:-1, 1:-1]
In [8]: X
Out[8]: 
<4x4 sparse matrix of type '<class 'numpy.float64'>'
    with 16 stored elements in Compressed Sparse Row format>
In [9]: X.A
Out[9]: 
array([[2., 1., 2., 1.],
       [2., 4., 3., 1.],
       [1., 3., 1., 2.],
       [1., 2., 1., 1.]])
===

My approach is noticeably faster (though still well behind the dense convolution). sparse.csr_matrix(...) is quite slow, so calling it repeatedly is not a good idea. Sparse addition is not great either.

In [13]: %%timeit
    ...: row_ind, col_ind = M.nonzero()
    ...: data, row, col = [],[],[]
    ...: for i in [0, 1, 2]:
    ...:     for j in [0, 1, 2]:
    ...:         if i!= 1 or j !=1:
    ...:             data.extend(M.data)
    ...:             row.extend(row_ind+i)
    ...:             col.extend(col_ind+j)
    ...: X = sparse.csr_matrix( (data, (row, col)), (M.shape[0]+2, M.shape[1]+2)
    ...: )
    ...: X = X[1:-1, 1:-1]
    ...: 
    ...: 
793 µs ± 20 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [14]: %%timeit
    ...: row_ind, col_ind = M.nonzero()
    ...: X = sparse.csr_matrix((M.shape[0]+2, M.shape[1]+2))
    ...: for i in [0, 1, 2]:
    ...:     for j in [0, 1, 2]:
    ...:         if i!= 1 or j !=1:
    ...:             X += sparse.csr_matrix( (M.data, (row_ind+i, col_ind+j)), (
    ...: M.shape[0]+2, M.shape[1]+2))
    ...: X = X[1:-1, 1:-1]
    ...: 
    ...: 
4.72 ms ± 92.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [15]: timeit X = signal.convolve(M.A, kernel, mode='same')

85.9 µs ± 339 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

sparse matrices are not ndarray subclasses, so numpy and the other scipy modules generally do not handle them correctly. That is why the todense() conversion is needed. Sparse matrices are best suited for matrix multiplication and for math that does not change sparsity. Adding matrices is relatively slow, and so is repeatedly constructing them, as in the loop. But you can collect all of the M.data, row_ind+i, etc. values in coo-style arrays and do a single matrix construction at the end; duplicate entries are summed.

Right, calling the sparse.csr_matrix constructor only once is much better than my naive solution! I think this is the best solution for this specific kernel. And if M is large (and sparse), this solution is also much faster than the dense version (using convolve).
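For reference, a minimal sketch of that single-construction approach wrapped as a reusable function (the name neighbor_sum is just for illustration):

import scipy.sparse

def neighbor_sum(M):
    # Sum over the 8 neighbors of each cell (a 3x3 ones kernel with a zero center),
    # built with a single coo-style csr_matrix construction; duplicate
    # (row, col) entries are summed by the constructor.
    row_ind, col_ind = M.nonzero()
    data, row, col = [], [], []
    for i in [0, 1, 2]:
        for j in [0, 1, 2]:
            if i != 1 or j != 1:
                data.extend(M.data)
                row.extend(row_ind + i)
                col.extend(col_ind + j)
    X = scipy.sparse.csr_matrix((data, (row, col)),
                                (M.shape[0] + 2, M.shape[1] + 2))
    return X[1:-1, 1:-1]          # trim the zero padding back to M's shape

M = scipy.sparse.csr_matrix([[0,1,0,0],[1,0,0,1],[1,0,1,0],[0,0,0,0]])
print(neighbor_sum(M).toarray())  # same values as the dense convolution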