Stanford NLP with option openie.resolve_coref doesn't work

I am working with the OpenIE annotator of Stanford CoreNLP. However, the option openie.resolve_coref has no effect on my input text. I want to use OpenIE to generate triples and to resolve coreferences. How can I achieve this? The code below is copied from the Stanford site; I only added the line: props.setProperty("openie.resolve_coref", "true")

import java.util.Collection;
import java.util.List;
import java.util.Properties;
import edu.stanford.nlp.ie.util.RelationTriple;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.naturalli.NaturalLogicAnnotations;
import edu.stanford.nlp.naturalli.OpenIE;
import edu.stanford.nlp.naturalli.SentenceFragment;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;

Properties props = new Properties();
props.setProperty("openie.resolve_coref", "true");
props.setProperty("annotators", "tokenize,ssplit,pos,lemma,depparse,parse,natlog,ner,coref,openie");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

// Annotate an example document.
String text = "Obama was born in Hawaii. He is our president.";
Annotation doc = new Annotation(text);
pipeline.annotate(doc);

// Loop over the sentences in the document
int sentNo = 0;
for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
  System.out.println("Sentence #" + ++sentNo + ": " + sentence.get(CoreAnnotations.TextAnnotation.class));

  // Print the semantic graph
  System.out.println(sentence.get(SemanticGraphCoreAnnotations.CollapsedDependenciesAnnotation.class)
      .toString(SemanticGraph.OutputFormat.LIST));

  // Get the OpenIE triples for the sentence
  Collection<RelationTriple> triples = sentence.get(NaturalLogicAnnotations.RelationTriplesAnnotation.class);

  // Print the triples
  for (RelationTriple triple : triples) {
    System.out.println(triple.confidence + "\t CON=" +
        triple.subjectLemmaGloss() + "\t REL=" +
        triple.relationLemmaGloss() + "\t CON=" +
        triple.objectLemmaGloss());
  }

  // Alternately, to only run the clause splitter:
  List<SentenceFragment> clauses = new OpenIE(props).clausesInSentence(sentence);
  for (SentenceFragment clause : clauses) {
    System.out.println(clause.parseTree.toString(SemanticGraph.OutputFormat.LIST));
  }
  System.out.println();
}
Running this produces the following triples:

  • 1.0 Obama be bear in Hawaii

  • 1.0 Obama be bear

  • 1.0 he be we president -> should be -> Obama be we president
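Since the third triple still contains the pronoun rather than "Obama", a useful first check is whether the coref annotator itself links "He" back to "Obama", independently of OpenIE. Below is a minimal diagnostic sketch, assuming doc is the annotated document from the code above and the 3.7.0+ package layout (edu.stanford.nlp.coref; in 3.6.0 the equivalent annotation keys live in the hcoref package):

import java.util.Map;
import edu.stanford.nlp.coref.CorefCoreAnnotations;
import edu.stanford.nlp.coref.data.CorefChain;

// Print every coreference chain: each mention alongside the chain's representative mention.
Map<Integer, CorefChain> chains = doc.get(CorefCoreAnnotations.CorefChainAnnotation.class);
if (chains != null) {
  for (CorefChain chain : chains.values()) {
    CorefChain.CorefMention rep = chain.getRepresentativeMention();
    for (CorefChain.CorefMention mention : chain.getMentionsInTextualOrder()) {
      System.out.println("\"" + mention.mentionSpan + "\" (sentence " + mention.sentNum + ")"
          + " -> \"" + rep.mentionSpan + "\"");
    }
  }
}

If "He" -> "Obama" shows up here but the triples still keep the pronoun, the coref chains themselves are fine and the problem lies in how OpenIE consumes them.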


EDIT: this is also fixed in version 3.7.0+

This is a bug in version 3.6.0 that is fixed in the GitHub version. It will be fixed in the next release, or you can manually update the code and model jars from GitHub: download the latest models, and run
ant jar
to build the code jar.

My output is:

Sentence #1: Obama was born in Hawaii.
root(ROOT-0, born-3)
nsubjpass(born-3, Obama-1)
auxpass(born-3, was-2)
case(Hawaii-5, in-4)
nmod:in(born-3, Hawaii-5)
punct(born-3, .-6)

1.0  CON=Obama   REL=be bear in  CON=Hawaii
1.0  CON=Obama   REL=be  CON=bear
[main] INFO edu.stanford.nlp.naturalli.ClauseSplitter - Loading clause splitter from edu/stanford/nlp/models/naturalli/clauseSearcherModel.ser.gz ... done [0.43 seconds]
root(ROOT-0, born-3)
nsubjpass(born-3, Obama-1)
auxpass(born-3, was-2)
case(Hawaii-5, in-4)
nmod:in(born-3, Hawaii-5)


Sentence #2: He is our president.
root(ROOT-0, president-4)
nsubj(president-4, He-1)
cop(president-4, is-2)
nmod:poss(president-4, our-3)
punct(president-4, .-5)

1.0  CON=Obama   REL=be  CON=we president
[main] INFO edu.stanford.nlp.naturalli.ClauseSplitter - Loading clause splitter from edu/stanford/nlp/models/naturalli/clauseSearcherModel.ser.gz ... done [0.45 seconds]
root(ROOT-0, president-4)
nsubj(president-4, He-1)
cop(president-4, is-2)
nmod:poss(president-4, our-3)

Is this using the GitHub head of the repo, or the official release (3.6.0)? I am using the official release (3.6.0) from Maven.
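
For reference, once 3.7.0 is published to Maven Central, picking up the fix should amount to a version bump. A sketch of the dependency entries, assuming the usual edu.stanford.nlp coordinates with the models pulled in via the models classifier:

<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.7.0</version>
</dependency>
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.7.0</version>
  <classifier>models</classifier>
</dependency>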