
Python 3.x: Stanford Named Entity Recognizer (NER) via PyNER not working

Tags: python-3.x, nlp, nltk, stanford-nlp, spacy

I am trying to use Stanford's Named Entity Recognizer (NER).

I downloaded the zip file from:

Installed it with: python setup.py install

Now when I run the commands below, I get blank output:

import ner
tagger = ner.SocketNER(host='localhost', port=31752, output_format='slashTags')
tagger.get_entities("University of California is located in California, United States")

Output:
{}
Am I missing something?
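An empty `{}` result from PyNER often just means the client could not reach a Stanford NER server on the given port: `SocketNER` only talks to an already-running server, it does not start one. A minimal sketch to check whether anything is listening (the host and port mirror the question; the helper name is ours):

```python
import socket

def server_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP server accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, unreachable, ...
        return False

# The question uses port 31752; if this prints False, no NER server is
# running there, which would explain the empty output.
print(server_listening('localhost', 31752))
```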

This tool is badly outdated.

If you are using NLTK, first update your NLTK version:

pip3 install -U nltk

Then, still in the terminal:

wget http://nlp.stanford.edu/software/stanford-corenlp-full-2018-02-27.zip
unzip stanford-corenlp-full-2018-02-27.zip
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos,lemma,ner,parse,depparse -status_port 9000 -port 9000 -timeout 15000 &
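Once the server above is up, you can also query it over plain HTTP without NLTK: the CoreNLP server accepts the raw text as a POST body, with the annotator settings passed as a JSON-encoded `properties` query parameter. A sketch that builds such a request (the helper name is ours; the actual POST is shown commented out since it needs the running server):

```python
import json
import urllib.parse
import urllib.request

def build_annotate_request(text: str, url: str = 'http://localhost:9000'):
    """Build (url, body) for a CoreNLP server annotate request with the ner annotator."""
    props = {'annotators': 'ner', 'outputFormat': 'json'}
    query = urllib.parse.urlencode({'properties': json.dumps(props)})
    return f'{url}/?{query}', text.encode('utf-8')

url, data = build_annotate_request('Rami Eid is studying at Stony Brook University in NY')
# With the server running you would then POST it:
#   resp = json.load(urllib.request.urlopen(urllib.request.Request(url, data=data)))
#   for tok in resp['sentences'][0]['tokens']:
#       print(tok['word'], tok['ner'])
```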
Then in Python 3:

>>> from nltk.parse import CoreNLPParser
>>> tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
>>> tokens = 'Rami Eid is studying at Stony Brook University in NY'.split()
>>> 
>>> tagger.tag(tokens)
[('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'STATE_OR_PROVINCE')]
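The flat (token, tag) list that `tagger.tag` returns can be collapsed into entity spans by grouping consecutive tokens that share a non-`'O'` tag. A small sketch (the helper name is ours; the input is the output shown above):

```python
from itertools import groupby

def group_entities(tagged):
    """Collapse consecutive (token, tag) pairs with the same non-'O' tag into spans."""
    spans = []
    for tag, group in groupby(tagged, key=lambda pair: pair[1]):
        if tag != 'O':
            spans.append((' '.join(tok for tok, _ in group), tag))
    return spans

tagged = [('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'),
          ('at', 'O'), ('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'),
          ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'STATE_OR_PROVINCE')]
print(group_entities(tagged))
# [('Rami Eid', 'PERSON'), ('Stony Brook University', 'ORGANIZATION'), ('NY', 'STATE_OR_PROVINCE')]
```

Note this naive grouping would also merge two adjacent but distinct entities of the same type; that is usually acceptable for quick inspection.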

Windows: you can use powershell (you really should) to run the commands above, but if you prefer clicking the mouse:

Step 1: Download the zip file from

Step 2: Unzip it

Step 3: Open a command prompt and go to the folder where the files were extracted

Step 4: Run the command:
pip3 install -U nltk

Step 5: Now run the command:

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -preload tokenize,ssplit,pos,lemma,ner,parse,depparse -status_port 9000 -port 9000 -timeout 15000 &
Then in Python 3:

>>> from nltk.parse import CoreNLPParser
>>> tagger = CoreNLPParser(url='http://localhost:9000', tagtype='ner')
>>> tokens = 'Rami Eid is studying at Stony Brook University in NY'.split()
>>> 
>>> tagger.tag(tokens)
[('Rami', 'PERSON'), ('Eid', 'PERSON'), ('is', 'O'), ('studying', 'O'), ('at', 'O'), ('Stony', 'ORGANIZATION'), ('Brook', 'ORGANIZATION'), ('University', 'ORGANIZATION'), ('in', 'O'), ('NY', 'STATE_OR_PROVINCE')]

Thank you, that was very helpful. Thanks, alvas.