Python: Creating a New Corpus with NLTK


I guess the answer to my title question is usually to go and read the documentation, but I went through it and there is no answer there. I am somewhat new to Python.

I have a bunch of .txt files, and I want to be able to use the corpus functions that NLTK provides for its own corpora in nltk_data.

I have tried PlaintextCorpusReader, but I cannot get further than:

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.words()
How do I segment the sentences of newcorpus with punkt? I tried the punkt functions, but they could not read the PlaintextCorpusReader class.

Can you also show me how I can write the segmented data into text files?

I think the PlaintextCorpusReader already segments the input with a punkt tokenizer, at least if your input language is English.

You can pass word and sentence tokenizers to the reader, but for the latter the default is already nltk.data.LazyLoader('tokenizers/punkt/english.pickle').
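
For illustration (this sketch is mine, not part of the original answer), passing the tokenizers explicitly looks like this; the sentence tokenizer shown is the default quoted above, and WordPunctTokenizer is, as far as I know, the reader's default word tokenizer:

>>> import nltk.data
>>> from nltk.tokenize import WordPunctTokenizer
>>> from nltk.corpus import PlaintextCorpusReader
>>> newcorpus = PlaintextCorpusReader(
...     './', '.*',
...     word_tokenizer=WordPunctTokenizer(),
...     sent_tokenizer=nltk.data.LazyLoader('tokenizers/punkt/english.pickle'))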

For a single string, a tokenizer would be used directly (explained in section 5 of the punkt tokenizer documentation); a sketch of that follows the corpus example below. Reading the corpus itself looks like this:

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
"""
If the ./ directory contains the file my_corpus.txt, then
you can view all of its words like this:
"""
>>> newcorpus.words('my_corpus.txt')
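
And here is a minimal sketch of the single-string case (my own example, not from the original answer): load the pickled punkt model and apply it directly to a string:

>>> import nltk.data
>>> sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>> sent_tokenizer.tokenize("This is one sentence. This is another sentence.")
['This is one sentence.', 'This is another sentence.']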

After a few years of figuring out how it works, here is an updated tutorial on:

How to create an NLTK corpus from a directory of text files?

The main idea is to make use of the nltk.corpus.reader package. If you have a directory of text files in English, it is best to use the PlaintextCorpusReader.

If you have a directory that looks like this:

newcorpus/
         file1.txt
         file2.txt
         ...
Just use these lines of code and you get a corpus:

import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

corpusdir = 'newcorpus/' # Directory of corpus.

newcorpus = PlaintextCorpusReader(corpusdir, '.*')
Note: the PlaintextCorpusReader will use the default nltk.tokenize.sent_tokenize() and nltk.tokenize.word_tokenize() to split your texts into sentences and words; these functions are built for English, so they may NOT work for all languages.

Here is the full code, which creates the test text files, builds an NLTK corpus from them, and shows how to access the corpus at different levels:

import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

# Let's create a corpus with 2 texts in different textfile.
txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
txt2 = """Are you a foo bar? Yes I am. Possibly, everyone is.\n"""
corpus = [txt1,txt2]

# Make new dir for the corpus.
corpusdir = 'newcorpus/'
if not os.path.isdir(corpusdir):
    os.mkdir(corpusdir)

# Output the files into the directory.
filename = 0
for text in corpus:
    filename+=1
    with open(corpusdir+str(filename)+'.txt','w') as fout:
        print>>fout, text

# Check that our corpus does exist and the files are correct.
assert os.path.isdir(corpusdir)
for infile, text in zip(sorted(os.listdir(corpusdir)),corpus):
    assert open(corpusdir+infile,'r').read().strip() == text.strip()


# Create a new corpus by specifying the parameters
# (1) directory of the new corpus
# (2) the fileids of the corpus
# NOTE: in this case the fileids are simply the filenames.
newcorpus = PlaintextCorpusReader('newcorpus/', '.*')

# Access each file in the corpus.
for infile in sorted(newcorpus.fileids()):
    print infile # The fileids of each file.
    with newcorpus.open(infile) as fin: # Opens the file.
        print fin.read().strip() # Prints the content of the file
print

# Access the plaintext; outputs pure string/basestring.
print newcorpus.raw().strip()
print 

# Access paragraphs in the corpus. (list of list of list of strings)
# NOTE: NLTK automatically calls nltk.tokenize.sent_tokenize and 
#       nltk.tokenize.word_tokenize.
#
# Each element in the outermost list is a paragraph, and
# Each paragraph contains sentence(s), and
# Each sentence contains token(s)
print newcorpus.paras()
print

# To access paragraphs of a specific fileid.
print newcorpus.paras(newcorpus.fileids()[0])

# Access sentences in the corpus. (list of list of strings)
# NOTE: the texts are flattened into sentences that contain tokens.
print newcorpus.sents()
print

# To access sentences of a specific fileid.
print newcorpus.sents(newcorpus.fileids()[0])

# Access just tokens/words in the corpus. (list of strings)
print newcorpus.words()

# To access tokens of a specific fileid.
print newcorpus.words(newcorpus.fileids()[0])
Finally, to read a directory of texts and create an NLTK corpus in another language, you must first make sure you have Python-callable word tokenization and sentence tokenization modules that take string/basestring input and produce output like this (a sketch of plugging such tokenizers into the reader follows the example):

>>> from nltk.tokenize import sent_tokenize, word_tokenize
>>> txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
>>> sent_tokenize(txt1)
['This is a foo bar sentence.', 'And this is the first txtfile in the corpus.']
>>> word_tokenize(sent_tokenize(txt1)[0])
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.']
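
As a minimal sketch (my own illustration, assuming NLTK's standard reader parameters), such tokenizers can then be passed to PlaintextCorpusReader through its word_tokenizer and sent_tokenizer arguments; the Spanish punkt model below is just a hypothetical stand-in for whatever language you need:

import nltk.data
from nltk.tokenize import RegexpTokenizer
from nltk.corpus.reader.plaintext import PlaintextCorpusReader

# A simple regexp-based word tokenizer and a pickled punkt model
# for a non-English language (Spanish here, as an example).
word_tok = RegexpTokenizer(r'\w+|[^\w\s]+')
sent_tok = nltk.data.LazyLoader('tokenizers/punkt/spanish.pickle')

newcorpus = PlaintextCorpusReader('newcorpus/', '.*',
                                  word_tokenizer=word_tok,
                                  sent_tokenizer=sent_tok)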

Thanks for the explanation, got it. But how do I output the segmented sentences into a separate txt file? Thanks for the clarification. Many languages are supported by default. If anyone runs into an AttributeError: __exit__ error, use the built-in open() instead.
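
On the question above about writing the segmented sentences into a separate txt file: a minimal sketch (my own, reusing the newcorpus reader from the code above and a hypothetical output filename) is to iterate over sents() and write one sentence per line:

# Write each tokenized sentence of the corpus to its own line,
# joining the tokens back together with spaces.
with open('newcorpus_sents.txt', 'w') as fout:
    for sent in newcorpus.sents():
        fout.write(' '.join(sent) + '\n')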
import os
from nltk.corpus.reader.plaintext import PlaintextCorpusReader


filecontent1 = "This is a cow"
filecontent2 = "This is a Dog"

corpusdir = 'nltk_data/'
# Make sure the corpus directory exists before writing into it.
os.makedirs(corpusdir, exist_ok=True)

with open(corpusdir + 'content1.txt', 'w') as text_file:
    text_file.write(filecontent1)
with open(corpusdir + 'content2.txt', 'w') as text_file:
    text_file.write(filecontent2)

text_corpus = PlaintextCorpusReader(corpusdir, ["content1.txt", "content2.txt"])

no_of_words_corpus1 = len(text_corpus.words("content1.txt"))
print(no_of_words_corpus1)
no_of_unique_words_corpus1 = len(set(text_corpus.words("content1.txt")))

no_of_words_corpus2 = len(text_corpus.words("content2.txt"))
no_of_unique_words_corpus2 = len(set(text_corpus.words("content2.txt")))
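
As a quick check (my own addition), the remaining counts can be printed the same way; with the reader's default word tokenizer each of these one-sentence files should yield 4 tokens, all of them unique:

print(no_of_unique_words_corpus1)   # expected: 4
print(no_of_words_corpus2)          # expected: 4
print(no_of_unique_words_corpus2)   # expected: 4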
