Python: optimizing a document-frequency computation
This takes too long:
# Document-frequency
phrases_final["doc_freq"] = len(phrases_final) * [0]

# for each phrase, compute the number of clusters that phrase occurs in
for phrase in phrases_final["extracted_phrases"]:
    for i in cluster_name:
        all_tweets = ""
        for tweet in df["tweets_to_consider"][df.cl_num == i]:
            all_tweets = all_tweets + tweet + ". "
        if phrase in all_tweets:
            phrases_final["doc_freq"][
                (phrases_final.extracted_phrases == phrase)
                & (phrases_final.cluster_num == i)
            ] = (
                phrases_final["doc_freq"][
                    (phrases_final.extracted_phrases == phrase)
                    & (phrases_final.cluster_num == i)
                ]
                + 1
            )
- You should probably precompute `all_tweets` for each cluster once, instead of computing it again for every phrase.
- Or you may not want to build `all_tweets` at all, since `phrase in (long string here)` will be slow; consider a set of sets, maybe?
- Instead of indexing the results straight into the dataframe (at least I think it's a dataframe you're indexing into, though honestly you're initializing `doc_freq` as a plain list of ints, so the indexing is suspect anyway), consider a `collections.Counter` (or a `collections.defaultdict(collections.Counter)` indexed by `cluster_num`, then by `phrase`).
- Consider a `multiprocessing.Pool()` to do this in parallel over phrases or clusters.
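A minimal sketch of the first and third suggestions combined, on toy stand-in data (the `df`, `cluster_name`, and column names are assumed from the question): build each cluster's text once outside the phrase loop, then tally into a `defaultdict(Counter)` instead of boolean-indexing the dataframe inside the inner loop.

```python
import collections

import pandas as pd

# Toy stand-ins for the question's data (all names assumed).
df = pd.DataFrame({
    "tweets_to_consider": ["good morning", "good night", "hello world"],
    "cl_num": [0, 0, 1],
})
cluster_name = [0, 1]
phrases = ["good", "hello", "sunset"]

# 1) Precompute each cluster's concatenated text once, not once per phrase.
cluster_text = {
    i: ". ".join(df["tweets_to_consider"][df.cl_num == i]) + ". "
    for i in cluster_name
}

# 2) Tally into a defaultdict(Counter) keyed by cluster_num, then phrase,
#    and only touch the dataframe afterwards, in a single write-back pass.
doc_freq = collections.defaultdict(collections.Counter)
for i, text in cluster_text.items():
    for phrase in phrases:
        if phrase in text:
            doc_freq[i][phrase] += 1
```

This turns the cluster-text construction from O(phrases × clusters) into O(clusters), and moving the counts out of the dataframe avoids repeated boolean-mask lookups; the write-back then becomes one vectorized operation.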