Python UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 6: ordinal not in range(128)


Help me find the error in my Python code.

Here is the code:

import nltk
import re
import pickle


raw = open('tom_sawyer_shrt.txt').read()

### this is how the basic Punkt sentence tokenizer works
#sent_tokenizer=nltk.data.load('tokenizers/punkt/english.pickle')
#sents = sent_tokenizer.tokenize(raw)

### train & tokenize text using text
sent_trainer = nltk.tokenize.punkt.PunktSentenceTokenizer().train(raw)
sent_tokenizer = nltk.tokenize.punkt.PunktSentenceTokenizer(sent_trainer)
# break into sentences
sents = sent_tokenizer.tokenize(raw)
# get sentence start/stop indexes
sentspan = sent_tokenizer.span_tokenize(raw)



###  Remove \n in the middle of sentences, due to fixed-width formatting
for i in range(0,len(sents)-1):
    sents[i] = re.sub('(?<!\n)\n(?!\n)',' ',raw[sentspan[i][0]:sentspan[i+1][0]])

for i in range(1,len(sents)):
    if (sents[i][0:3] == '"\n\n'):
        sents[i-1] = sents[i-1]+'"\n\n'
        sents[i] = sents[i][3:]


### Loop thru each sentence, fix to 140char
i=0
tweet=[]
while (i<len(sents)):
    if (len(sents[i]) > 140):
        ntwt = int(len(sents[i])/140) + 1
        words = sents[i].split(' ')
        nwords = len(words)
        for k in range(0,ntwt):
            tweet = tweet + [
                re.sub('\A\s|\s\Z', '', ' '.join(
                words[int(k*nwords/float(ntwt)):
                      int((k+1)*nwords/float(ntwt))]
                ))]
        i=i+1
    else:
        if (i<len(sents)-1):
            if (len(sents[i])+len(sents[i+1]) <140):
                nextra = 1
                while (len(''.join(sents[i:i+nextra+1]))<140):
                    nextra=nextra+1
                tweet = tweet+[
                    re.sub('\A\s|\s\Z', '',''.join(sents[i:i+nextra]))
                    ]        
                i = i+nextra
            else:
                tweet = tweet+[re.sub('\A\s|\s\Z', '',sents[i])]
                i=i+1
        else:
            tweet = tweet+[re.sub('\A\s|\s\Z', '',sents[i])]
            i=i+1


### A last pass to clean up leading/trailing newlines/spaces.
for i in range(0,len(tweet)):
    tweet[i] = re.sub('\A\s|\s\Z','',tweet[i])

for i in range(0,len(tweet)):
    tweet[i] = re.sub('\A"\n\n','',tweet[i])


###  Save tweets to pickle file for easy reading later
output = open('tweet_list.pkl','wb')
pickle.dump(tweet,output,-1)
output.close()


listout = open('tweet_lis.txt','w')
for i in range(0,len(tweet)):
    listout.write(tweet[i])
    listout.write('\n-----------------\n')

listout.close()
A UnicodeDecodeError occurs when a string contains some Unicode. Basically, a plain Python string only handles ASCII values, so if the error shows up when you send the text to the tokenizer, the text must contain some characters that are outside the ASCII range.
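For illustration only, here is a minimal way to reproduce the same failure with a hard-coded byte string (the byte 0xe2 is the lead byte of UTF-8 punctuation such as curly quotes and dashes, a likely culprit in a Project Gutenberg text):

data = b'quote \xe2\x80\x9chi\xe2\x80\x9d'   # UTF-8 bytes for: quote "hi"
data.decode('ascii')
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 6: ordinal not in range(128)

On Python 2, the same implicit ASCII decoding happens behind the scenes when a plain str containing such bytes gets mixed with unicode, which is likely what happens when the raw byte string is passed to the NLTK tokenizer.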

So how do you fix it?

You can convert the text to ASCII characters and simply ignore the Unicode ones:

raw = raw.encode('ascii', 'ignore')
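Note, though, that on Python 2 the line above only helps if raw is already a unicode object; calling .encode() on the byte string returned by open(...).read() makes Python implicitly decode it with the strict ascii codec first, which typically raises the very same error. A minimal sketch of two safer variants, assuming the file is UTF-8 encoded (Project Gutenberg texts usually are):

import io

# Option 1: read the file as Unicode with an explicit encoding (Python 2 and 3)
raw = io.open('tom_sawyer_shrt.txt', encoding='utf-8').read()

# Option 2: decode first, then drop everything outside ASCII
with open('tom_sawyer_shrt.txt', 'rb') as f:
    raw = f.read().decode('utf-8', 'ignore').encode('ascii', 'ignore')
# on Python 3, add .decode('ascii') at the end so raw is a str again

Option 1 is usually preferable, because curly quotes and dashes keep their meaning instead of silently disappearing from the tweets.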
Besides that, you can also read the documentation on handling Unicode errors.