Efficient Construction of a Sparse Biadjacency Matrix in Python/NumPy

I'm trying to load this CSV file into a sparse numpy matrix that represents the biadjacency matrix of a user-to-subreddit bipartite graph.

Here's a sample:

603,politics,trees,pics
604,Metal,AskReddit,tattoos,redditguild,WTF,cocktails,pics,funny,gaming,Fitness,mcservers,TeraOnline,GetMotivated,itookapicture,Paleo,trackers,Minecraft,gainit
605,politics,IAmA,AdviceAnimals,movies,smallbusiness,Republican,todayilearned,AskReddit,WTF,IWantOut,pics,funny,DIY,Frugal,relationships,atheism,Jeep,Music,grandrapids,reddit.com,videos,yoga,GetMotivated,bestof,ShitRedditSays,gifs,technology,aww
There are 876,961 rows in total (one per user), 15,122 subreddits, and 8,495,597 user-to-subreddit associations.

Here's the code I have right now, which takes 20 minutes to run on my MacBook Pro:

import numpy as np
from scipy.sparse import csr_matrix 

row_list = []
entry_count = 0
all_reddits = set()
with open("reddit_user_posting_behavior.csv", 'r') as f:
    for x in f:
        pieces = x.rstrip().split(",")
        user = pieces[0]
        reddits = pieces[1:]
        entry_count += len(reddits)
        for r in reddits: all_reddits.add(r)
        row_list.append(np.array(reddits))

reddits_list = np.array(list(all_reddits))

# 5s to get this far

rows = np.zeros((entry_count,))
cols = np.zeros((entry_count,))
data = np.ones((entry_count,))
i=0
user_idx = 0
for row in row_list:
    for reddit_idx in np.nonzero(np.in1d(reddits_list,row))[0]:
        cols[i] = user_idx
        rows[i] = reddit_idx
        i+=1
    user_idx+=1
adj = csr_matrix( (data,(rows,cols)), shape=(len(reddits_list), len(row_list)) )

It's hard to believe this is as fast as it can go... Loading the 82MB file into a list of lists takes 5 seconds, but building the sparse matrix takes 200 times that. What can I do to speed this up? Is there some file format I could convert this CSV into in under 20 minutes that would import more quickly? Is there some obviously expensive operation I'm doing here that's bad? I've tried building a dense matrix, and I've tried creating a lil_matrix and a dok_matrix and assigning the 1's one at a time, and neither is any faster.
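
For reference, here is a minimal sketch (not the original author's code) of the one-entry-at-a-time dok_matrix variant described above, reusing row_list and reddits_list from the code block above; reddit_to_idx is an added helper. Every assignment is a separate Python-level operation, which is why this approach isn't any faster:

from scipy.sparse import dok_matrix

# map each subreddit name to a row index (helper, not in the original code)
reddit_to_idx = {r: i for i, r in enumerate(reddits_list)}
adj_dok = dok_matrix((len(reddits_list), len(row_list)), dtype=int)
for user_idx, row in enumerate(row_list):
    for r in row:
        # each assignment is a separate Python-level dict update
        adj_dok[reddit_to_idx[r], user_idx] = 1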

To start, you can replace the inner for loop with:

reddit_idx = np.nonzero(np.in1d(reddits_list,row))[0]
sl = slice(i,i+len(reddit_idx))
cols[sl] = user_idx
rows[sl] = reddit_idx
i = sl.stop
Using nonzero(in1d()) to find the matches looks good, though I haven't explored alternatives. An alternative to assignment via slicing is to extend lists, but that is probably slower, especially with many rows (see the sketch below).

Building rows and cols this way is by far the slowest part; the call to csr_matrix itself is minor.

Since there are many more rows (users) than subreddits, it may be worth collecting, for each subreddit, a list of user ids. You are already collecting the subreddits in a set. Instead, you could collect them in a defaultdict and build the matrix from that. When tested on your 3 sample lines replicated 100,000 times, it is noticeably faster:

from collections import defaultdict
from scipy import sparse

red_dict = defaultdict(list)
user_idx = 0
with open("reddit_user_posting_behavior.csv", 'r') as f:
    for x in f:
        pieces = x.rstrip().split(",")
        user = pieces[0]
        reddits = pieces[1:]
        for r in reddits:
            red_dict[r].append(user_idx)   # collect user ids per subreddit
        user_idx += 1

print('done 2nd')
x = list(red_dict.values())
adj1 = sparse.lil_matrix((len(x), user_idx), dtype=int)
for i, j in enumerate(x):
    adj1[i, j] = 1   # set a whole row of user columns at once
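
If downstream code expects the same CSR format as the original adj, the LIL matrix can be converted once construction is done; LIL is efficient to build incrementally, while CSR is better for arithmetic and row slicing:

adj1 = adj1.tocsr()  # one-time conversion after incremental construction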

Couldn't sleep, tried one last thing... Got it down to 10 seconds this way, in the end:

import numpy as np
from scipy.sparse import csr_matrix 

user_ids = []
subreddit_ids = []
subreddits = {}
i=0
with open("reddit_user_posting_behavior.csv", 'r') as f:
    for line in f:
        for sr in line.rstrip().split(",")[1:]: 
            if sr not in subreddits: 
                subreddits[sr] = len(subreddits)
            user_ids.append(i)
            subreddit_ids.append(subreddits[sr])
        i+=1

adj = csr_matrix( 
    ( np.ones((len(user_ids),)), (np.array(subreddit_ids),np.array(user_ids)) ), 
    shape=(len(subreddits), i) )
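
As a quick sanity check, the result can be compared against the counts quoted earlier in the question (a sketch; the expected numbers come from the question itself):

print(adj.shape)  # expected (15122, 876961): subreddits x users
print(adj.nnz)    # expected 8495597 stored user-to-subreddit associations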

Comments:

- Where does the time go, the double loop? The csr_matrix call? I'd first try to vectorize the inner for loop, i.e. assign multiple values at once.
- Thanks! It runs in about 14 seconds on my machine, but the final output looks incorrect: the first row's sum should be in the 10,000 range, but for this output it is 1.
- If you look at my answer, I was able to use a somewhat similar one-pass dictionary approach to get the time down to 10 seconds.
- My adj1 matches your original adj. Your second script produces a different adj; I think they have the same rows, just in a different order - dictionary order vs. occurrence order? Which subreddits correspond to each row of adj? It looks like you'd have to sort the subreddits keys by value?