Python UnicodeDecodeError: 'ascii' codec can't decode byte in text ranking code


When I execute the code below

import networkx as nx
import numpy as np
from nltk.tokenize.punkt import PunktSentenceTokenizer
from sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer

def textrank(document):
    sentence_tokenizer = PunktSentenceTokenizer()
    sentences = sentence_tokenizer.tokenize(document)

    bow_matrix = CountVectorizer().fit_transform(sentences)
    normalized = TfidfTransformer().fit_transform(bow_matrix)

    similarity_graph = normalized * normalized.T

    nx_graph = nx.from_scipy_sparse_matrix(similarity_graph)
    scores = nx.pagerank(nx_graph)
    return sorted(((scores[i],s) for i,s in enumerate(sentences)), reverse=True)

fp = open("QC")    
txt = fp.read()
sents = textrank(txt)
print sents
I get the following error:

Traceback (most recent call last):
  File "Textrank.py", line 44, in <module>
    sents = textrank(txt)
  File "Textrank.py", line 10, in textrank
    sentences = sentence_tokenizer.tokenize(document)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1237, in tokenize
    return list(self.sentences_from_text(text, realign_boundaries))
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1285, in sentences_from_text
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1276, in span_tokenize
    return [(sl.start, sl.stop) for sl in slices]
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1316, in _realign_boundaries
    for sl1, sl2 in _pair_iter(slices):
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 311, in _pair_iter
    for el in it:
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1291, in _slices_from_text
    if self.text_contains_sentbreak(context):
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1337, in text_contains_sentbreak
    for t in self._annotate_tokens(self._tokenize_words(text)):
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1472, in _annotate_second_pass
    for t1, t2 in _pair_iter(tokens):
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 310, in _pair_iter
    prev = next(it)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 577, in _annotate_first_pass
    for aug_tok in tokens:
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 542, in _tokenize_words
    for line in plaintext.split('\n'):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 9: ordinal not in range(128)
I am executing the code on Ubuntu. To get the text, I referred to this website. I created a file named QC (not QC.txt) and copy-pasted the data into it paragraph by paragraph. Please help me resolve this error.
Thank you.
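For context: the failure comes from Python 2 implicitly decoding a byte string with the default 'ascii' codec; byte 0xe2 is typically the first byte of a UTF-8 punctuation character such as a curly quote. A minimal sketch of the same failure (the sample bytes below are assumed, not taken from the QC file):

# -*- coding: utf-8 -*-
# Python 2 sketch: a byte string holding UTF-8 text, like open("QC").read() returns
raw = '\xe2\x80\x9cquoted text\xe2\x80\x9d'   # 0xe2 starts a UTF-8 curly quote

try:
    raw + u''          # forces an implicit decode with the default 'ascii' codec
except UnicodeDecodeError as err:
    print err          # 'ascii' codec can't decode byte 0xe2 in position 0 ...

text = raw.decode('utf-8')   # decoding explicitly yields a unicode object

The Punkt tokenizer mixes the input with unicode patterns internally, which triggers the same implicit decode.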

Please try whether the following works for you:

import networkx as nx
import numpy as np
import sys

# Reset the interpreter-wide default encoding from 'ascii' to 'utf8' so that
# implicit str/unicode conversions inside NLTK stop raising UnicodeDecodeError.
reload(sys)
sys.setdefaultencoding('utf8')

from nltk.tokenize.punkt import PunktSentenceTokenizer
from sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer

def textrank(document):
    sentence_tokenizer = PunktSentenceTokenizer()
    sentences = sentence_tokenizer.tokenize(document)

    bow_matrix = CountVectorizer().fit_transform(sentences)
    normalized = TfidfTransformer().fit_transform(bow_matrix)

    similarity_graph = normalized * normalized.T

    nx_graph = nx.from_scipy_sparse_matrix(similarity_graph)
    scores = nx.pagerank(nx_graph)
    return sorted(((scores[i],s) for i,s in enumerate(sentences)), reverse=True)

fp = open("QC")    
txt = fp.read()
sents = textrank(txt.encode('utf-8'))
print sents
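The setdefaultencoding reload is a well-known workaround, but it changes behaviour for the whole interpreter. An alternative sketch (not from the original answer) is to decode the file to unicode while reading it and pass unicode text to textrank:

import io

# io.open decodes the file as it is read (assuming the QC file is UTF-8),
# so the tokenizer never has to guess an encoding.
with io.open("QC", encoding="utf-8") as fp:
    txt = fp.read()        # txt is a unicode object

sents = textrank(txt)      # textrank() as defined above
print sents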

Welcome to Stack Overflow! Please see. Also, please search Google or elsewhere before posting a question. Sorry, I could not make sense of the existing solutions. Edit: I just re-checked the link and it makes a little more sense now, though I still can't see how that solution applies here. I am new to Python and have only dabbled in NLP, so I get overwhelmed easily. Please bear with me, and thank you very much. Once I get the sentences, I print them with

for s in sents:
    st = str(s[1])
    print st

When I print sents, a lot of unicode markup shows up, but once I convert the entries to strings it disappears. Why does that happen?
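That difference is just list repr versus printing the string itself; a small sketch (the sample data is hypothetical) of what is most likely happening:

# -*- coding: utf-8 -*-
# Hypothetical result list; each entry is a (pagerank score, sentence) pair.
sents = [(0.42, u'a sentence with a caf\xe9')]

# Printing a list shows the repr() of each element, so unicode strings appear
# with the u'' prefix and \x.. escape sequences.
print sents            # [(0.42, u'a sentence with a caf\xe9')]

# Printing the string itself (or its str() conversion once the default
# encoding is utf8) shows the actual characters instead of their escapes.
print sents[0][1]      # a sentence with a café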