Python NLTK: extracting entity names from a string


Python and NLTK noob here, so bear with me.

I have a string containing text from a PDF document, and I'm trying to extract entity names from it using the nltk library:

with open(filename, 'r') as f:
    str_output = f.readlines()   

str_output = clean_str(str(str_output))

sentences = nltk.sent_tokenize(str_output)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
chunked_sentences = nltk.ne_chunk_sents(tagged_sentences, binary=True)
I've gotten as far as loading the data, cleaning the string, and preprocessing it. How do I get the distinct entity names out of the string?

This should work:

import nltk

with open('sample.txt', 'r') as f:
    sample = f.read()

sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
chunked_sentences = nltk.ne_chunk_sents(tagged_sentences, binary=True)

def extract_entity_names(t):
    entity_names = []

    # NLTK 3.x reads chunk labels with label(); older versions used the .node attribute
    if hasattr(t, 'label') and t.label():
        if t.label() == 'NE':
            entity_names.append(' '.join([child[0] for child in t]))
        else:
            for child in t:
                entity_names.extend(extract_entity_names(child))

    return entity_names

entity_names = []
for tree in chunked_sentences:
    # Print results per sentence
    # print(extract_entity_names(tree))

    entity_names.extend(extract_entity_names(tree))

# Print all entity names
# print(entity_names)

# Print unique entity names
print(set(entity_names))
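
As a side note, the recursive helper isn't strictly required: NLTK's Tree.subtrees() accepts a filter function, so the extraction can be written as a short loop. This is only a sketch, assuming NLTK 3.x (chunk labels read with label()) and re-chunking from tagged_sentences so it does not depend on iterating chunked_sentences a second time:

# Alternative extraction using Tree.subtrees() with a filter (NLTK 3.x).
# Re-chunk from tagged_sentences rather than reusing chunked_sentences,
# which may already have been consumed by the loop above.
entities = set()
for tree in nltk.ne_chunk_sents(tagged_sentences, binary=True):
    # The filter keeps only subtrees labeled 'NE' (binary chunking)
    for subtree in tree.subtrees(filter=lambda t: t.label() == 'NE'):
        entities.add(' '.join(word for word, tag in subtree.leaves()))

print(entities)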

You're going over the sentences several times over. Don't do that. (See the linked discussion.) Don't encourage askers by reusing code that makes multiple passes over the data.
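
To illustrate the comment above, here is a rough sketch of a single-pass version: each sentence is tokenized, tagged, chunked, and scanned for entities inside one loop, so the text is only traversed once. This assumes Python 3, NLTK 3.x, and the same sample.txt as in the answer:

import nltk

with open('sample.txt', 'r') as f:
    sample = f.read()

entity_names = set()
for sentence in nltk.sent_tokenize(sample):
    tokens = nltk.word_tokenize(sentence)        # tokenize once per sentence
    tagged = nltk.pos_tag(tokens)                # POS-tag the tokens
    tree = nltk.ne_chunk(tagged, binary=True)    # chunk named entities
    for subtree in tree.subtrees(filter=lambda t: t.label() == 'NE'):
        entity_names.add(' '.join(word for word, tag in subtree.leaves()))

print(entity_names)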