Python: parsing trees with StanfordCoreNLP and Stanza gives different results (representation structure)


I am using the code below to do dependency parsing with StanfordCoreNLP:

from stanfordcorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP('stanford-corenlp-full-2018-10-05', lang='en')

sentence = 'The clothes in the dressing room are gorgeous. Can I have one?'
tree_str = nlp.parse(sentence)
print(tree_str)
and I get this output:

  (ROOT
    (S
      (NP
        (NP (DT The) (NNS clothes))
        (PP (IN in)
          (NP (DT the) (VBG dressing) (NN room))))
      (VP (VBP are)
        (ADJP (JJ gorgeous)))
      (. .)))
How can I get the same output with Stanza?

import stanza
from stanza.server import CoreNLPClient
classpath='/stanford-corenlp-full-2020-04-20/*'
client = CoreNLPClient(be_quiet=False, classpath=classpath, annotators=['parse'], memory='4G', endpoint='http://localhost:8900')
client.start()
text = 'The clothes in the dressing room are gorgeous. Can I have one?'
ann = client.annotate(text)
sentence = ann.sentence[0]
dependency_parse = sentence.basicDependencies
print(dependency_parse)

With Stanza, it seems that I have to split apart the sentences that make up the text myself. Am I doing something wrong?


Note that my goal is to extract noun phrases.

There is some documentation on usage here:

That shows how to get a constituency parse (which is what your example output is). A dependency parse is a list of edges between words.
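For example, the basicDependencies object you already printed is exactly such an edge list. Here is a minimal sketch of reading it, assuming the annotation ann from your snippet and the standard CoreNLP protobuf fields (edge, source, target, dep, token.word), with 1-based token indices:

# rough sketch: list the dependency edges of the first sentence
sentence = ann.sentence[0]
words = [tok.word for tok in sentence.token]   # surface words, in order
for edge in sentence.basicDependencies.edge:
    governor = words[edge.source - 1]          # head word (protobuf indices are 1-based)
    dependent = words[edge.target - 1]         # dependent word
    print('{} --{}--> {}'.format(governor, edge.dep, dependent))

The constituency parse, by contrast, is stored in sentence.parseTree, as the example below shows: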

# set up the client
with CoreNLPClient(annotators=['tokenize','ssplit','pos','lemma','ner', 'parse'], timeout=30000, memory='16G') as client:
    # submit the request to the server
    ann = client.annotate(text)

    # get the first sentence
    sentence = ann.sentence[0]

    # get the constituency parse of the first sentence
    print('---')
    print('constituency parse of first sentence')
    constituency_parse = sentence.parseTree
    print(constituency_parse)

    # get the first subtree of the constituency parse
    print('---')
    print('first subtree of constituency parse')
    print(constituency_parse.child[0])
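
Since your goal is to extract noun phrases, one way to go from here is to walk sentence.parseTree recursively and collect every subtree labelled NP. The sketch below reuses the sentence object from the snippet above and assumes only the standard protobuf tree fields value and child (the same child field used above); leaf_words and np_subtrees are just illustrative helper names, not part of the API:

def leaf_words(tree):
    # a leaf node has no children and stores the word itself in .value
    if len(tree.child) == 0:
        return [tree.value]
    words = []
    for child in tree.child:
        words.extend(leaf_words(child))
    return words

def np_subtrees(tree):
    # recursively collect every subtree whose label (.value) is 'NP'
    found = []
    if tree.value == 'NP':
        found.append(tree)
    for child in tree.child:
        found.extend(np_subtrees(child))
    return found

for np in np_subtrees(sentence.parseTree):
    print(' '.join(leaf_words(np)))

For the first sentence of your example this should print the three NPs visible in the bracketed tree: 'The clothes in the dressing room', 'The clothes' and 'the dressing room'.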


I am having connection problems with French.