Splitting a multi-relation graph stored with lil_matrix in Python


I am using the sparse lil_matrix format to store a graph with two types of relations. This is how I do it:

import random
import numpy as np
from scipy.sparse import lil_matrix

e = 15
k = 2
X = [lil_matrix((e, e)) for i in range(k)]
# storing type 0 relation
X[0][0,14] =1
X[0][0,8] =1
X[0][0,9] =1
X[0][0,10] =1
X[0][1,14] =1
X[0][1,6] =1
X[0][1,7] =1
X[0][2,8] =1
X[0][2,9] =1
X[0][2,10] =1
X[0][2,12] =1
X[0][3,6] =1
X[0][3,12] =1
X[0][3,11] =1
X[0][3,13] =1
X[0][4,11] =1
X[0][4,13] =1
X[0][5,13] =1
X[0][5,11] =1
X[0][5,10] =1
X[0][5,12] =1
# storing type 1 relation
X[1][14,7] =1
X[1][14,6] =1
X[1][6,7] =1
X[1][6,8] =1
X[1][6,9] =1
X[1][10,9] =1
X[1][10,8] =1
X[1][10,11] =1
X[1][12,8] =1
X[1][12,10] =1
X[1][12,11] =1
X[1][12,13] =1
X[1][14,12] =1
X[1][11,9] =1
X[1][8,7] =1
X[1][8,9] =1
I want to prune the network so that it keeps only 50% of the nodes. My approach is:

nodes_list = range(e)
total_nodes = len(nodes_list)
get_percentage_of_prune_nodes = int(total_nodes * 0.5)
new_nodes = sorted(random.sample(nodes_list, get_percentage_of_prune_nodes))
e_new = get_percentage_of_prune_nodes
k_new = 2
# Y is the pruned matrix
Y = [lil_matrix((e_new, e_new)) for i in range(k_new)]
for i in range(e):
    for j in range(e):
        for rel in range(k_new):
            if i in new_nodes and j in new_nodes:
                if X[rel][i, j] == 1:
                    Y[rel][new_nodes.index(i), new_nodes.index(j)] = 1

This is not a very efficient approach if the original matrices (X) are huge. Is there a faster or smarter way to do this pruning?

Focusing just on the matrices:

In [318]: X=X[0].astype(int)
In [327]: X.A
Out[327]: 
array([[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])

In [331]: new_nodes=sorted(random.sample(np.arange(e).tolist(),7))
In [332]: new_nodes
Out[332]: [0, 1, 2, 5, 8, 12, 13]

In [333]: Y=sparse.lil_matrix((7,7),dtype=int)
In [334]: for i in range(15):
     ...:     for j in range(e):
     ...:         if i in new_nodes and j in new_nodes:
     ...:             if X[i,j]:
     ...:                 Y[new_nodes.index(i),new_nodes.index(j)]=1
     ...:                 
In [335]: Y
Out[335]: 
<7x7 sparse matrix of type '<class 'numpy.int32'>'
    with 5 stored elements in LInked List format>
In [336]: Y.A
Out[336]: 
array([[0, 0, 0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 1, 0],
       [0, 0, 0, 0, 0, 1, 1],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]])
Indexing the dense array is faster:

In [341]: timeit X[np.ix_(new_nodes,new_nodes)]
188 µs ± 1.3 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [342]: timeit X[np.ix_(new_nodes,new_nodes)].A
222 µs ± 6.77 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [343]: timeit X.A[np.ix_(new_nodes,new_nodes)]
62 µs ± 654 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
The dense-array approach may run into memory errors. But sparse indexing has memory problems of its own.
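
Applying the same np.ix_ indexing to the question's list of k relation matrices would replace the triple loop entirely; a minimal sketch reusing the names from the question:

idx = np.ix_(new_nodes, new_nodes)
# keep only the selected rows/columns of every relation matrix (result stays sparse)
Y = [X[rel][idx] for rel in range(k)]
# or, if the matrices fit in memory as dense arrays:
Y_dense = [X[rel].A[idx] for rel in range(k)]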


From a quick read it is hard to picture exactly what the pruning is supposed to achieve, but I can imagine two ways to improve this. 1) Work out how to do it with dense arrays and whole-array operations. 2) Explore structures with less overhead, for example a dok matrix, or even a plain dictionary with (i,j) tuple keys, since you are not using any special sparse-matrix functionality (see the sketch after these comments).

This is exactly what I was looking for, and you explained it very well; the computation is faster too. I will accept the answer.
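
As a rough illustration of the second suggestion (a hypothetical sketch, not code from the question or the answer), each relation could be held as a plain dict keyed by (i, j) tuples and pruned in a single pass over its edges:

# build one {(i, j): 1} dict per relation from the existing lil matrices
edges = [dict.fromkeys(zip(*X[rel].nonzero()), 1) for rel in range(k)]

# keep only edges whose endpoints survive, remapping old node indices to new ones
keep = set(new_nodes)
remap = {old: new for new, old in enumerate(new_nodes)}
pruned = [{(remap[i], remap[j]): v
           for (i, j), v in rel_edges.items()
           if i in keep and j in keep}
          for rel_edges in edges]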