Python 2.7: filter out tokens that appear only once in a gensim dictionary


The gensim Dictionary object has a very nice filter function that removes tokens appearing in fewer than a certain number of documents. However, I want to remove tokens that appear only once in the corpus. Does anyone know a quick and easy way to do this?

Found this in gensim:

Basically, iterate over the list containing the whole corpus and, whenever a word occurs only once, add it to a list of tokens. Then iterate over every word in every document and, if the word is in that list of once-occurring tokens, remove it.


I assume this is the best way to do it, otherwise the tutorial would have mentioned something else. But I could be wrong.
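As a quick illustration of the count-then-filter approach described above (plain Python with `collections.Counter`, no gensim; the toy corpus here is made up):

```python
from collections import Counter

# toy corpus: each document is already a list of tokens
documents = [
    "human interface computer".split(),
    "survey user computer system".split(),
    "human system".split(),
]

# count every token across the whole corpus
freq = Counter(token for doc in documents for token in doc)

# drop tokens that occur only once in the corpus
texts = [[token for token in doc if freq[token] > 1] for doc in documents]
print(texts)  # [['human', 'computer'], ['computer', 'system'], ['human', 'system']]
```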

You should probably include some reproducible code in your question; however, I'll use the documents from the previous post. We can achieve your goal without gensim:

from collections import defaultdict

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# count word frequency across the whole corpus
d = defaultdict(int)
for text in texts:
    for token in text:
        d[token] += 1

# keep only words that appear more than once (a set makes the membership test O(1))
tokens = set(key for key, value in d.items() if value > 1)
texts = [[word for word in text if word in tokens] for text in texts]


To add some information: the gensim tutorial also has a more memory-efficient technique than the approach mentioned above. I've added some print statements so you can see what happens at each step. Your specific question is answered at the DICTERATOR step; I realize the answer below may be overkill for your question, but if you need to do any kind of topic modeling, this information is a step in the right direction.

$ cat mycorpus.txt

Human machine interface for lab abc computer applications
A survey of user opinion of computer system response time
The EPS user interface management system
System and human system engineering testing of EPS
Relation of user perceived response time to error measurement
The generation of random binary unordered trees
The intersection graph of paths in trees
Graph minors IV Widths of trees and well quasi ordering
Graph minors A survey

Then run the following create_corpus.py script:

#!/usr/bin/env python
from gensim import corpora

stoplist = set('for a of the and to in'.split())

class MyCorpus(object):
    def __iter__(self):
        for line in open('mycorpus.txt'):
            # assume there's one document per line, tokens separated by whitespace
            yield dictionary.doc2bow(line.lower().split())

# TOKENIZERATOR: collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
print(dictionary)
print(dictionary.token2id)

# DICTERATOR: remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
            if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in dictionary.dfs.iteritems() if docfreq == 1]
dictionary.filter_tokens(stop_ids + once_ids)
print(dictionary)
print(dictionary.token2id)

dictionary.compactify()  # remove gaps in the id sequence after words were removed
print(dictionary)
print(dictionary.token2id)

# VECTORERATOR: map token frequencies per doc to sparse vectors
corpus_memory_friendly = MyCorpus()  # doesn't load the corpus into memory!
for item in corpus_memory_friendly:
    print(item)
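For reference, `doc2bow` in the VECTORERATOR step maps a token list to sparse `(token_id, count)` pairs. A minimal pure-Python sketch of that mapping (the standalone `doc2bow` helper and the toy `token2id` dict here are just for illustration, not gensim's implementation):

```python
from collections import Counter

def doc2bow(tokens, token2id):
    # count tokens, skip those not in the dictionary, and return
    # sorted (token_id, count) pairs -- the sparse bag-of-words vector
    counts = Counter(t for t in tokens if t in token2id)
    return sorted((token2id[t], n) for t, n in counts.items())

token2id = {'computer': 0, 'human': 1, 'interface': 2}
doc = 'Human machine interface for lab abc computer applications'
print(doc2bow(doc.lower().split(), token2id))  # [(0, 1), (1, 1), (2, 1)]
```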

Good luck!

You may want to look up the gensim Dictionary methods:

def get_term_frequency(dictionary, cutoff_freq):
    """Return a list of (term, frequency) tuples, dropping all tuples whose
       frequency is smaller than cutoff_freq.
       dictionary (gensim.corpora.Dictionary): corpus dictionary
       cutoff_freq (int): terms whose frequency is smaller than this will be dropped
    """
    tf = []
    for k, v in dictionary.dfs.iteritems():
        tf.append((str(dictionary.get(k)), v))
    return [t for t in tf if t[1] > cutoff_freq]
See also `dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=100000)`.

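`filter_extremes` keeps tokens whose document frequency is at least `no_below` documents and at most `no_above` (a fraction of the corpus), then caps the vocabulary at the `keep_n` most frequent survivors. A rough pure-Python sketch of that logic, for illustration only (not gensim's actual implementation):

```python
from collections import Counter

def filter_extremes(docs, no_below=5, no_above=0.5, keep_n=100000):
    # document frequency: in how many documents each token appears
    n_docs = len(docs)
    dfs = Counter(token for doc in docs for token in set(doc))
    # keep tokens whose document frequency falls in [no_below, no_above * n_docs]
    kept = [t for t, df in dfs.items()
            if df >= no_below and df <= no_above * n_docs]
    # cap at the keep_n most frequent survivors
    kept.sort(key=lambda t: (-dfs[t], t))
    return set(kept[:keep_n])

docs = [['graph', 'trees'], ['graph', 'minors'], ['trees', 'survey'], ['human']]
print(filter_extremes(docs, no_below=2, no_above=0.5))
```

Here 'graph' and 'trees' each appear in 2 of the 4 documents, which satisfies both bounds, while the once-occurring tokens are dropped.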