Python: show entities such as PERSON and GPE from nltk ne_chunk

Tags: python, nltk

The output is:

hello my name is Shefali and I live in Nebraska.
[Tree('S', [('hello', 'NN'), ('my', 'PRP$'), ('name', 'NN'), ('is', 'VBZ'), Tree('PERSON', [('Shefali', 'NNP')]), ('and', 'CC'), ('I', 'PRP'), ('live', 'VBP'), ('in', 'IN'), Tree('GPE', [('Nebraska', 'NNP')]), ('.', '.')])]

When I write print(list(chunked_sentences)), it gives me the output above. I only want to extract the PERSON and GPE entities and print them. How can I do that? And what is a generator object?

Could you try filtering the result? A generator is simply a function that returns an iterable sequence of items one at a time, in a special way: it computes each value on the fly and then forgets it. You have to iterate over the generator to get its results, which is what happens when you call list(result). As for how to filter, some code would help!
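To illustrate the comment above, here is a minimal sketch of how a generator behaves: values are produced lazily as you iterate, and once consumed the generator is exhausted. The `squares` function is a hypothetical example, not part of the question's code.

```python
def squares(n):
    # A generator yields items one at a time instead of building a full list.
    for i in range(n):
        yield i * i

gen = squares(3)
print(list(gen))  # list() iterates the generator and collects its values: [0, 1, 4]
print(list(gen))  # the generator is now exhausted, so this prints []
```

This is why `nltk.ne_chunk_sents(...)` prints as a generator object until you iterate over it (for example with `list()` or a `for` loop), and why iterating it a second time yields nothing.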
import nltk

def test():
    sample = "hello my name is Shefali and I live in Nebraska."
    print(sample)
    sentences = nltk.sent_tokenize(sample)
    tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
    tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
    # ne_chunk_sents returns a generator of chunk trees, one per sentence
    chunked_sentences = nltk.ne_chunk_sents(tagged_sentences)

    print(list(chunked_sentences))
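To pull out only the PERSON and GPE entities, you can walk each chunk tree and keep the subtrees whose label matches. Below is a minimal sketch that filters a tree of the shape shown in the question; `extract_entities` is a hypothetical helper name, and the tree is built by hand here so the example does not depend on the NLTK model downloads that `ne_chunk` itself requires.

```python
from nltk.tree import Tree

def extract_entities(tree, labels=("PERSON", "GPE")):
    """Return (label, entity_text) pairs for subtrees whose label is in `labels`."""
    return [(subtree.label(), " ".join(word for word, tag in subtree.leaves()))
            for subtree in tree.subtrees()
            if subtree.label() in labels]

# A chunk tree matching the output in the question.
chunked = Tree('S', [('hello', 'NN'), ('my', 'PRP$'), ('name', 'NN'),
                     ('is', 'VBZ'), Tree('PERSON', [('Shefali', 'NNP')]),
                     ('and', 'CC'), ('I', 'PRP'), ('live', 'VBP'),
                     ('in', 'IN'), Tree('GPE', [('Nebraska', 'NNP')]),
                     ('.', '.')])

print(extract_entities(chunked))  # [('PERSON', 'Shefali'), ('GPE', 'Nebraska')]
```

In the question's code you would apply the same filter inside a loop, e.g. `for tree in chunked_sentences: print(extract_entities(tree))`, since `ne_chunk_sents` yields one tree per sentence.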