Python: how to tokenize a large text into sentences and words


I am working with NLTK in Portuguese.

This is my code:

import numpy as np
import nltk
from nltk.corpus import machado, mac_morpho, floresta, genesis
from nltk.text import Text

ptext1 = Text(machado.words('romance/marm05.txt'), name="Memórias Póstumas de Brás Cubas (1881)")
ptext2 = Text(machado.words('romance/marm08.txt'), name="Dom Casmurro (1899)")
ptext3 = Text(genesis.words('portuguese.txt'), name="Gênesis")
ptext4 = Text(mac_morpho.words('mu94se01.txt'), name="Folha de Sao Paulo (1994)")
Following the examples, I want to split ptext4 into sentences and then into words:

sentencas = nltk.sent_tokenize(ptext4)
palavras = nltk.word_tokenize(ptext4)
But it does not work: TypeError: expected string or bytes-like object
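The error occurs because a Text object is a sequence of already-split tokens, not a string, while sent_tokenize and word_tokenize expect plain strings. A minimal NLTK-free sketch of the mismatch (the token list here is a made-up stand-in for the contents of ptext4):

```python
# Stand-in for the contents of ptext4: a Text behaves like a list of
# tokens, not like a string.
tokens = ['O', 'menino', 'caiu', '.', 'Ele', 'se', 'levantou', '.']

# sent_tokenize(tokens) fails because a list is not a string:
# TypeError: expected string or bytes-like object
assert not isinstance(tokens, str)

# One (lossy) workaround is to rejoin the tokens into a single string
# before tokenizing; the cleaner fix, shown in the answers below, is to
# load the raw text with machado.raw() instead.
text = ' '.join(tokens)
assert isinstance(text, str)
print(text)
```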

I tried this:

sentencas = [row for row in nltk.sent_tokenize(row)]
But the result is not what I expected:

[In]: sentencas
[Out]: ['Fujimori']

What can I do? I am new to this.

If you just want the list of word tokens from the machado corpus, use the .words() function:

word_token = list(ptext1)  # if you only want the word tokens from ptext1
print(word_token[0:10])    # print the first 10 tokens

# output:
['Romance', ',', 'Memórias', 'Póstumas', 'de', 'Brás', 'Cubas', ',', '1880', 'Memórias']

# If you want sentence tokens using sent_tokenize, read the text file in raw form:
raw_text = machado.raw('romance/marm05.txt')

print(raw_text[0:100]) # print the first 100 characters of the raw text
# output:
'Romance, Memórias Póstumas de Brás Cubas, 1880\n\nMemórias Póstumas de\nBrás Cubas\n\nTexto-fonte:\nObra C'

sent_token = nltk.sent_tokenize(raw_text)
print(sent_token[0:2]) # print the first 2 sentences tokenized from the text

['Romance, Memórias Póstumas de Brás Cubas, 1880\n\nMemórias Póstumas de\nBrás 
Cubas\n\nTexto-fonte:\nObra Completa, Machado de\nAssis,\nRio\nde Janeiro: Editora 
Nova Aguilar, 1994.',
'Publicado originalmente em\nfolhetins, a partir de março de 1880, na Revista Brasileira.']
>>> from nltk.corpus import machado
>>> machado.words()
But if you want to process the raw text, e.g.

>>> text = machado.raw('romance/marm08.txt')
>>> print(text)
use this idiom:

>>> from nltk import word_tokenize, sent_tokenize
>>> text = machado.raw('romance/marm08.txt')
>>> tokenized_text = [word_tokenize(sent) for sent in sent_tokenize(text)]
To iterate through tokenized_text, which is a list(list(str)), do the following:

>>> for sent in tokenized_text:
...     for word in sent:
...         print(word)
...     break
... 


Then, following @qaiser and @alvas, there are two ways to answer the question. The two answers solve the problem in different ways; the second one takes fewer lines of code:

import numpy as np 
from nltk.corpus import machado
import nltk

# If you want sentence tokens using sent_tokenize, read the text file in raw form:
raw_text = machado.raw('romance/marm05.txt')


word_token = nltk.word_tokenize(raw_text)
sent_token = nltk.sent_tokenize(raw_text)

[In]: print(sent_token[0:2]) # print the first 2 sentences tokenized from the text
[Out]: ['Romance, Memórias Póstumas de Brás Cubas, 1880\n\nMemórias Póstumas de\nBrás Cubas\n\nTexto-fonte:\nObra Completa, Machado de\nAssis,\nRio\nde Janeiro: Editora Nova Aguilar, 1994.', 'Publicado originalmente em\nfolhetins, a partir de março de 1880, na Revista Brasileira.']

[In]: print(word_token[0:20]) # print the first 20 words tokenized from the text
[Out]:['Romance', ',', 'Memórias', 'Póstumas', 'de', 'Brás', 'Cubas', ',', '1880', 'Memórias', 'Póstumas', 'de', 'Brás', 'Cubas', 'Texto-fonte', ':', 'Obra', 'Completa', ',', 'Machado']


Also, take a look at the other answer. Thanks, your answer helped me too. Following it, I was able to get both the word tokenization and the sentence tokenization of the text: from nltk import word_tokenize, sent_tokenize; text = machado.raw('romance/marm08.txt'); tokenized_text = [word_tokenize(sent) for sent in sent_tokenize(text)].

I needed to tokenize both the words and the sentences. Following your answer I did: word_token = nltk.word_tokenize(raw_text); sent_token = nltk.sent_tokenize(raw_text); print(sent_token[0:2]) to print 2 sentences tokenized from the text, and print(word_token[0:20]). With that, I think I understand how to tokenize the sentences and words of a text. Thanks for your help.