Python tf-idf algorithm


I have this code for computing text similarity with tf-idf:

from sklearn.feature_extraction.text import TfidfVectorizer

documents = [doc1,doc2]
tfidf = TfidfVectorizer().fit_transform(documents)
pairwise_similarity = tfidf * tfidf.T
print pairwise_similarity.A
The problem is that this code takes plain strings as input, and I want to prepare the documents by removing stop words, stemming, and tokenizing them, so the input would be a list rather than a plain string. If I call it with the tokenized documents,
documents = [doc1, doc2]
the error is:

    Traceback (most recent call last):
  File "C:\Users\tasos\Desktop\my thesis\beta\similarity.py", line 18, in <module>
    tfidf = TfidfVectorizer().fit_transform(documents)
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 1219, in fit_transform
    X = super(TfidfVectorizer, self).fit_transform(raw_documents)
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 780, in fit_transform
    vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 715, in _count_vocab
    for feature in analyze(doc):
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 229, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)
  File "C:\Python27\lib\site-packages\scikit_learn-0.14.1-py2.7-win32.egg\sklearn\feature_extraction\text.py", line 195, in <lambda>
    return lambda x: strip_accents(x.lower())
AttributeError: 'unicode' object has no attribute 'apply_freq_filter'

Is there any way to change the code so that it accepts a list, or should I convert the tokenized documents back into strings?
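
For context, the preprocessing described above might look roughly like the sketch below. This is only an illustration, not code from the question: the NLTK tokenizer, stop-word list, and Porter stemmer are assumptions, and the sample sentences are made up.

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

# Requires the NLTK 'punkt' and 'stopwords' data
# (nltk.download('punkt') and nltk.download('stopwords') once).
stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

def prepare(text):
    # Tokenize, drop stop words and punctuation, stem what is left.
    tokens = word_tokenize(text.lower())
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words]

doc1 = prepare(u"The quick brown fox jumps over the lazy dog.")
doc2 = prepare(u"A quick brown dog outpaces a lazy fox.")
documents = [doc1, doc2]  # each document is now a list of tokens, not a string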

Try skipping the lowercasing preprocessing and provide your own "nop" tokenizer:

tfidf = TfidfVectorizer(tokenizer=lambda doc: doc, lowercase=False).fit_transform(documents)

You should also check the other parameters, such as stop_words, to avoid duplicating the preprocessing you have already done.
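
Putting it together, a minimal sketch of the whole pipeline with pre-tokenized documents might look like this. The token lists for doc1 and doc2 are hypothetical; the point is that the identity tokenizer hands each list straight to the vectorizer, and lowercase=False keeps the default preprocessor from calling .lower() on a list.

from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical pre-tokenized documents (lists of tokens, not strings).
doc1 = ['quick', 'brown', 'fox', 'jump', 'lazi', 'dog']
doc2 = ['quick', 'brown', 'dog', 'outpac', 'lazi', 'fox']
documents = [doc1, doc2]

# The "nop" tokenizer passes each token list through unchanged;
# lowercase=False disables the default .lower() preprocessing step.
vectorizer = TfidfVectorizer(tokenizer=lambda doc: doc, lowercase=False)
tfidf = vectorizer.fit_transform(documents)

pairwise_similarity = tfidf * tfidf.T
print(pairwise_similarity.A)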

It looks like you're missing the actual error message (you've included the traceback, but not the error that was raised).
@Tasos did my answer do it, or do you still have a problem? If my solution doesn't work, could you give a minimal example of doc1/doc2?