Python: trying to mimic Scikit's n-gram with gensim

I am trying to mimic the n_gram parameter of CountVectorizer with gensim. My goal is to be able to use LDA with either Scikit or Gensim and to find very similar bigrams.

For example, with scikit we can find bigrams such as "abc computer" and "binary unordered", and with gensim "A survey" and "Graph minors".

I have attached my code below to compare Gensim and Scikit in terms of bigrams/unigrams.

Thanks for your help.

documents = [["Human" ,"machine" ,"interface" ,"for" ,"lab", "abc" ,"computer" ,"applications"],
      ["A", "survey", "of", "user", "opinion", "of", "computer", "system", "response", "time"],
      ["The", "EPS", "user", "interface", "management", "system"],
      ["System", "and", "human", "system", "engineering", "testing", "of", "EPS"],
      ["Relation", "of", "user", "perceived", "response", "time", "to", "error", "measurement"],
      ["The", "generation", "of", "random", "binary", "unordered", "trees"],
      ["The", "intersection", "graph", "of", "paths", "in", "trees"],
      ["Graph", "minors", "IV", "Widths", "of", "trees", "and", "well", "quasi", "ordering"],
      ["Graph", "minors", "A", "survey"]]
With the gensim model we find 48 unique tokens, and we can print the unigrams/bigrams with print(dictionary.token2id):

# 1. Gensim
from gensim import corpora
from gensim.models import Phrases

# Add bigrams to docs (min_count=1 keeps every bigram that occurs at least once).
bigram = Phrases(documents, min_count=1)
for idx in range(len(documents)):
    for token in bigram[documents[idx]]:
        if '_' in token:
            # Token is a bigram, add to document.
            documents[idx].append(token)

# Replace the '_' delimiter with a space so gensim bigrams look like scikit's "w1 w2"
documents = [[token.replace("_", " ") for token in doc] for doc in documents]
print(documents)

dictionary = corpora.Dictionary(documents)
print(dictionary.token2id)
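
As a quick check (a sketch reusing dictionary from the snippet above), we can list only the multi-word tokens that the Phrases step added:

# Sketch: show which of the 48 tokens are bigrams (they contain a space after the replace step)
bigram_tokens = [t for t in dictionary.token2id if " " in t]
print(len(dictionary.token2id), bigram_tokens)
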
With scikit we get 96 unique tokens, and we can print scikit's vocabulary with print(vocab):

# 2. Scikit
import re
token_pattern = re.compile(r"\b\w\w+\b", re.U)

def custom_tokenizer(s, min_term_length=1):
    """
    Tokenize text with the word-character pattern above, keeping only terms of at
    least a certain length which start with an alphabetic character.
    """
    return [x.lower() for x in token_pattern.findall(s)
            if len(x) >= min_term_length and x[0].isalpha()]

from sklearn.feature_extraction.text import CountVectorizer

def preprocess(docs, min_df=1, min_term_length=1, ngram_range=(1, 1), tokenizer=custom_tokenizer):
    """
    Preprocess a list of text documents stored as (untokenized) strings.
    """
    # Build the Vector Space Model from raw term counts (no TF-IDF weighting or normalization here)
    vec = CountVectorizer(lowercase=True,
                          strip_accents="unicode",
                          tokenizer=tokenizer,
                          min_df=min_df,
                          ngram_range=ngram_range,
                          stop_words=None)
    X = vec.fit_transform(docs)
    vocab = vec.get_feature_names()  # on scikit-learn >= 1.2, use get_feature_names_out()

    return (X,vocab)

docs_join = [" ".join(doc) for doc in documents]

(X, vocab) = preprocess(docs_join, ngram_range = (1,2))

print(vocab)
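
To compare the two pipelines directly, one option (a sketch reusing dictionary and vocab from above; note that gensim kept the original casing while scikit lowercases) is a simple set comparison:

# Sketch: intersect gensim's and scikit's vocabularies (lowercasing gensim tokens first)
gensim_vocab = {t.lower() for t in dictionary.token2id}
sklearn_vocab = set(vocab)
print(sorted(gensim_vocab & sklearn_vocab))   # terms both pipelines produced
print(sorted(sklearn_vocab - gensim_vocab))   # n-grams only scikit produced
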
The gensim Phrases class is designed to "automatically detect common phrases (multi-word expressions) from a stream of sentences". It therefore only gives you bigrams that appear more frequently than expected. That is why with the gensim package you only get a few bigrams such as "response time", "Graph minors" and "A survey".
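
If the goal is to get closer to CountVectorizer's exhaustive bigrams, Phrases exposes min_count, threshold and scoring parameters that control this filter. A hedged sketch (assuming the original tokenized documents, before the in-place mutation above; with scoring='npmi' the threshold ranges over [-1, 1], so -1 accepts every adjacent pair seen at least min_count times):

from gensim.models import Phrases

# Sketch: loosen the statistical filter so Phrases emits (almost) every adjacent bigram
bigram_all = Phrases(documents, min_count=1, threshold=-1, scoring='npmi')
print([t for t in bigram_all[documents[1]] if '_' in t])

Even then the match is not exact: Phrases merges pairs greedily from left to right, so of two overlapping bigrams in a sentence only one can survive, whereas CountVectorizer counts both.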

If you look at bigram.vocab, you will see that these bigrams appear 2 times, whereas all the other bigrams appear only once.
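
You can inspect those counts yourself (a sketch; in older gensim versions the keys of bigram.vocab are bytes rather than str):

# Sketch: print the raw co-occurrence counts Phrases collected for candidate bigrams
for key, count in bigram.vocab.items():
    name = key.decode() if isinstance(key, bytes) else key
    if "_" in name and count > 1:
        print(name, count)  # e.g. response_time, A_survey, Graph_minors occur twice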

scikit-learn's CountVectorizer class, on the other hand, gives you all bigrams.
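
In other words, for ngram_range=(1, 2) CountVectorizer simply enumerates every adjacent token pair regardless of frequency; a minimal sketch of that enumeration for one document:

# Sketch: what CountVectorizer's bigram enumeration amounts to for a single document
tokens = ["graph", "minors", "a", "survey"]
bigrams = [" ".join(pair) for pair in zip(tokens, tokens[1:])]
print(bigrams)  # ['graph minors', 'minors a', 'a survey']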