Python: how to get word frequency in a corpus using Scikit-Learn's CountVectorizer?


I'm trying to compute a simple word frequency using scikit-learn's CountVectorizer.

import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

texts=["dog cat fish","dog cat cat","fish bird","bird"]
cv = CountVectorizer()
cv_fit=cv.fit_transform(texts)

print(cv.vocabulary_)
# {'bird': 0, 'cat': 1, 'dog': 2, 'fish': 3}

I expected it to return:

{'bird': 2, 'cat': 3, 'dog': 2, 'fish': 2}
cv.vocabulary_ in this example is a dict where the keys are the words (features) you found and the values are their indices, which is why they are 0, 1, 2, 3. It's just bad luck that they happened to look like your counts :)
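
As a quick sanity check, here is a small sketch reusing the cv and cv_fit objects from the snippet above:

# The values in cv.vocabulary_ are column indices into cv_fit, not counts
col = cv.vocabulary_['cat']     # 1, i.e. 'cat' lives in column 1 of cv_fit
print(cv_fit[:, col].sum())     # summing that column gives the real count: 3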

You need to work with the cv_fit object to get the counts:

from sklearn.feature_extraction.text import CountVectorizer

texts=["dog cat fish","dog cat cat","fish bird", 'bird']
cv = CountVectorizer()
cv_fit=cv.fit_transform(texts)

print(cv.get_feature_names())  # on scikit-learn >= 1.0, use cv.get_feature_names_out()
print(cv_fit.toarray())
#['bird', 'cat', 'dog', 'fish']
#[[0 1 1 1]
# [0 2 1 0]
# [1 0 0 1]
# [1 0 0 0]]
Each row in the array is one of your original documents (strings), each column is a feature (word), and each element is the count of that particular word in that document. You can see that if you sum each column you get the correct numbers:

print(cv_fit.toarray().sum(axis=0))
#[2 3 2 2]
Honestly, I'd suggest using collections.Counter or something from NLTK unless you have a specific reason to use scikit-learn, since it will be simpler.
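
For example, a minimal sketch of the collections.Counter route on the same toy corpus (the bare whitespace split is an assumption; real text would need a proper tokenizer):

from collections import Counter

texts = ["dog cat fish", "dog cat cat", "fish bird", "bird"]

# Tally every whitespace-separated token across all documents
counts = Counter()
for doc in texts:
    counts.update(doc.split())

print(counts)
# Counter({'cat': 3, 'dog': 2, 'fish': 2, 'bird': 2})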

cv_fit.toarray().sum(axis=0) definitely gives the correct result, but it is much faster to perform the sum on the sparse matrix and then convert that to an array:

np.asarray(cv_fit.sum(axis=0))
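
Note that sum(axis=0) on a sparse matrix returns a 1 x n_features matrix, so the result above is still 2-D. A small sketch of flattening it, reusing cv_fit from the earlier snippets:

import numpy as np

summed = np.asarray(cv_fit.sum(axis=0))   # shape (1, 4): still two-dimensional
print(summed[0])                          # [2 3 2 2] -> the flat count vector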

We'll use the zip function to build a dict from the list of words and the list of their counts:

from sklearn.feature_extraction.text import CountVectorizer

texts = ["dog cat fish", "dog cat cat", "fish bird", "bird"]

cv = CountVectorizer()
cv_fit = cv.fit_transform(texts)
word_list = cv.get_feature_names()
count_list = cv_fit.toarray().sum(axis=0)

print(word_list)
# ['bird', 'cat', 'dog', 'fish']

print(count_list)
# [2 3 2 2]

print(dict(zip(word_list, count_list)))
# {'bird': 2, 'cat': 3, 'dog': 2, 'fish': 2}

Combining everyone else's views and some of my own :) Here is what I have for you:

from collections import Counter
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

text='''Note that if you use RegexpTokenizer option, you lose 
natural language features special to word_tokenize 
like splitting apart contractions. You can naively 
split on the regex \w+ without any need for the NLTK.
'''

# tokenize
raw = ' '.join(word_tokenize(text.lower()))

tokenizer = RegexpTokenizer(r'[A-Za-z]{2,}')
words = tokenizer.tokenize(raw)

# remove stopwords
stop_words = set(stopwords.words('english'))
words = [word for word in words if word not in stop_words]

# count word frequency, sort and return just 20
counter = Counter()
counter.update(words)
most_common = counter.most_common(20)
print(most_common)

# Output (in full):
# [('note', 1), ('use', 1), ('regexptokenizer', 1), ('option', 1),
#  ('lose', 1), ('natural', 1), ('language', 1), ('features', 1),
#  ('special', 1), ('word', 1), ('tokenize', 1), ('like', 1),
#  ('splitting', 1), ('apart', 1), ('contractions', 1), ('naively', 1),
#  ('split', 1), ('regex', 1), ('without', 1), ('need', 1)]
You could do better in terms of efficiency, but this code is fine if you're not too worried about that.
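
For instance, the naive \w+ split that the sample text itself mentions would be a one-regex sketch like this (it reuses the text variable from the snippet above; results differ slightly, e.g. "word_tokenize" stays a single token because \w matches underscores):

import re
from collections import Counter
from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))

# One regex pass instead of word_tokenize + RegexpTokenizer
words = [w for w in re.findall(r'\w+', text.lower()) if w not in stop_words]
print(Counter(words).most_common(20))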

Combining @YASH-GUPTA's answer (for readable results) and @pieterbons' answer (for RAM efficiency), with a tweak: it needed a couple of added brackets. Working code:

import numpy as np    
from sklearn.feature_extraction.text import CountVectorizer

texts = ["dog cat fish", "dog cat cat", "fish bird", "bird"]    

cv = CountVectorizer()   
cv_fit = cv.fit_transform(texts)    
word_list = cv.get_feature_names()

# Added [0] here to get a 1d-array for iteration by the zip function. 
count_list = np.asarray(cv_fit.sum(axis=0))[0]

print(dict(zip(word_list, count_list)))
# Output: {'bird': 2, 'cat': 3, 'dog': 2, 'fish': 2}

Comments:

- vocabulary_ creates "a mapping of terms to feature indices" - why use it if you just need the frequencies?
- cv_fit.toarray().sum(axis=0) blows up the RAM because it needs to densify the sparse matrix. Check out @pieterbons' answer for a better approach.
- The older version of the code threw "NameError: name 'word_tokenize' is not defined"; just added the import on line 4. Nice work, good solution @Pradeep-Singh.
- Thanks, approved @MJM.