Python: NLTK-based sentence tokenization of a large text file


I need to extract sentences from a large text file using nltk.sent_tokenize(). The files range from 1 MB to 400 MB, so I can't load a file completely because of limited memory, and I don't think nltk.sent_tokenize() can be used while reading the file line by line.

How would you suggest performing this task?

Stream the file and process it as you read it, line by line.

If storing the tokens in memory is an issue, then write the processed tokens out line by line or in batches.

Line by line:

from __future__ import print_function
from nltk import word_tokenize
with open('input.txt', 'r') as fin, open('output.txt', 'w') as fout:
    for line in fin:
        tokenized_line = ' '.join(word_tokenize(line.strip()))
        print(tokenized_line, end='\n', file=fout)
In batches (of 1000 lines):

from __future__ import print_function
from nltk import word_tokenize

with open('input.txt', 'r') as fin, open('output.txt', 'w') as fout:
    processed_lines = []
    for i, line in enumerate(fin):
        tokenized_line = ' '.join(word_tokenize(line.strip()))
        processed_lines.append(tokenized_line)
        if (i + 1) % 1000 == 0:
            # Flush a batch of 1000 tokenized lines to disk.
            print('\n'.join(processed_lines), end='\n', file=fout)
            processed_lines = []
    if processed_lines:
        # Write out whatever is left in the final, partial batch.
        print('\n'.join(processed_lines), end='\n', file=fout)
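The question asks for sentences rather than word tokens. A minimal sketch of the same streaming pattern with sent_tokenize, assuming that no sentence spans a line break in the input and using a hypothetical sentences.txt output file:

from __future__ import print_function
from nltk import sent_tokenize

with open('input.txt', 'r') as fin, open('sentences.txt', 'w') as fout:
    for line in fin:
        line = line.strip()
        if not line:
            continue
        # One sentence per output line; assumes no sentence crosses a line break.
        for sentence in sent_tokenize(line):
            print(sentence, end='\n', file=fout)

If sentences do cross line breaks, the corpus-reader or chunking approaches below are a better fit.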


Have you tried just using a corpus reader? The nltk corpus readers are designed to deliver text incrementally, reading large blocks from disk behind the scenes rather than the whole file. So just open a PlaintextCorpusReader over your whole corpus, and it should deliver the entire corpus sentence by sentence without any shenanigans. For example:

import nltk

reader = nltk.corpus.reader.PlaintextCorpusReader("path/to/corpus", r".*\.txt")
for sent in reader.sents():
    if "shenanigans" in sent:
        print(" ".join(sent))


You could also split the file into smaller chunks and process it at that granularity, as sketched below.
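A minimal sketch of that idea, assuming a chunk size of roughly 1 MB; the function name and chunk size are illustrative, and the tail of each chunk is carried over to the next read so that sentences cut at a chunk boundary are not lost:

from nltk import sent_tokenize

CHUNK_SIZE = 1024 * 1024  # read roughly 1 MB of text at a time (assumed size)

def iter_sentences(path):
    """Yield sentences from a large file without loading it all into memory."""
    leftover = ''
    with open(path, 'r') as fin:
        while True:
            chunk = fin.read(CHUNK_SIZE)
            if not chunk:
                break
            sentences = sent_tokenize(leftover + chunk)
            # Keep the last "sentence" as leftover: it may be cut off at the
            # chunk boundary and continue in the next chunk.
            leftover = sentences.pop() if sentences else ''
            for sentence in sentences:
                yield sentence
    if leftover:
        yield leftover

for sentence in iter_sentences('input.txt'):
    print(sentence)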