Python: how to extract the subject of a sentence and its related phrases?
Tags: python, nlp, nltk, spacy

I am trying to extract the subject of a sentence so that I can get the sentiment associated with that subject. For this I am using nltk on Python 2.7. Take the following sentence as an example:

Donald Trump is the worst president of USA, but Hillary is better than him

We can see that Donald Trump and Hillary are the two subjects, and that the sentiment related to Donald Trump is negative while the sentiment related to Hillary is positive. So far I have been able to break this sentence into chunks of noun phrases, and I am able to get the following:
(S
(NP Donald/NNP Trump/NNP)
is/VBZ
(NP the/DT worst/JJS president/NN)
in/IN
(NP USA,/NNP)
but/CC
(NP Hillary/NNP)
is/VBZ
better/JJR
than/IN
(NP him/PRP))
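For reference, a minimal sketch of how NP chunks like the above can be produced with nltk's RegexpParser (the chunk grammar here is illustrative, not necessarily the exact one used for the output above):

import nltk

sentence = "Donald Trump is the worst president of USA, but Hillary is better than him"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# illustrative grammar: an optional determiner, any adjectives, then one or more nouns
grammar = "NP: {<DT>?<JJ.*>*<NN.*>+}"
chunker = nltk.RegexpParser(grammar)
print(chunker.parse(tagged))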
Now, how do I find the subjects from these noun phrases? And how do I club together the phrases that belong to each subject? Once I have the phrases for the two subjects separately, I can perform sentiment analysis on each of them individually.
EDIT
I looked into the library @Krzysiek mentioned (spacy), and it also gives me the dependency trees of sentences. Here is the code:
from spacy.en import English
parser = English()
example = u"Donald Trump is the worst president of USA, but Hillary is better than him"
parsedEx = parser(example)
# shown as: original token, dependency tag, head word, left dependents, right dependents
for token in parsedEx:
    print(token.orth_, token.dep_, token.head.orth_, [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights])
Here is the dependency tree:
(u'Donald', u'compound', u'Trump', [], [])
(u'Trump', u'nsubj', u'is', [u'Donald'], [])
(u'is', u'ROOT', u'is', [u'Trump'], [u'president', u',', u'but', u'is'])
(u'the', u'det', u'president', [], [])
(u'worst', u'amod', u'president', [], [])
(u'president', u'attr', u'is', [u'the', u'worst'], [u'of'])
(u'of', u'prep', u'president', [], [u'USA'])
(u'USA', u'pobj', u'of', [], [])
(u',', u'punct', u'is', [], [])
(u'but', u'cc', u'is', [], [])
(u'Hillary', u'nsubj', u'is', [], [])
(u'is', u'conj', u'is', [u'Hillary'], [u'better'])
(u'better', u'acomp', u'is', [], [u'than'])
(u'than', u'prep', u'better', [], [u'him'])
(u'him', u'pobj', u'than', [], [])
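As a rough first cut (a sketch only; it breaks down beyond simple clauses, which is part of why I am asking), the tree can be walked from each nsubj token, collecting its subtree and its verb's remaining children:

for token in parsedEx:
    if token.dep_ == "nsubj":
        verb = token.head
        # the subject phrase itself, e.g. "Donald Trump"
        subject_phrase = " ".join(t.orth_ for t in token.subtree)
        # context words attached to the subject's verb, skipping the other clause
        context = [t.orth_ for t in verb.children
                   if t.dep_ not in ("nsubj", "cc", "conj", "punct")]
        print(subject_phrase, "->", context)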
The tree gives insight into how the different tokens of the sentence are related to each other, and there is also a paper that describes the dependency relations between the different pairs. How can I use this tree to attach the contextual words for the different subjects to those subjects?

I was recently solving a very similar problem - I needed to extract subjects, actions and objects. I open-sourced my work, so you can check out the library (textpipeliner, used below): it is based on spacy (a counterpart of nltk) and it also relies on the sentence tree. So, for example, let's embed this document in spacy:
import spacy
nlp = spacy.load("en")
doc = nlp(u"The Empire of Japan aimed to dominate Asia and the " \
"Pacific and was already at war with the Republic of China " \
"in 1937, but the world war is generally said to have begun on " \
"1 September 1939 with the invasion of Poland by Germany and " \
"subsequent declarations of war on Germany by France and the United Kingdom. " \
"From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered " \
"or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. " \
"Under the Molotov-Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and " \
"annexed territories of their European neighbours, Poland, Finland, Romania and the Baltic states. " \
"The war continued primarily between the European Axis powers and the coalition of the United Kingdom " \
"and the British Commonwealth, with campaigns including the North Africa and East Africa campaigns, " \
"the aerial Battle of Britain, the Blitz bombing campaign, the Balkan Campaign as well as the " \
"long-running Battle of the Atlantic. In June 1941, the European Axis powers launched an invasion " \
"of the Soviet Union, opening the largest land theatre of war in history, which trapped the major part " \
"of the Axis' military forces into a war of attrition. In December 1941, Japan attacked " \
"the United States and European territories in the Pacific Ocean, and quickly conquered much of " \
"the Western Pacific.")
Now you can create a simple pipes structure (more about pipes in the project README):
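A structure along the following lines (a sketch only; the exact patterns are approximations built from the same pipe classes used later in this answer, not the README's verbatim example) pulls out subject/verb/object triples built from named entities:

import spacy
from textpipeliner import PipelineEngine, Context
from textpipeliner.pipes import *

# sketch: a named-entity subject of the root verb, the verb itself,
# and a named-entity object, reached either directly or via a preposition
pipes_structure = [SequencePipe([FindTokensPipe("VERB/nsubj/*"),
                                 NamedEntityFilterPipe(),
                                 NamedEntityExtractorPipe()]),
                   FindTokensPipe("VERB"),
                   AnyPipe([SequencePipe([FindTokensPipe("VERB/dobj/NOUN"),
                                          NamedEntityFilterPipe(),
                                          NamedEntityExtractorPipe()]),
                            SequencePipe([FindTokensPipe("VERB/prep/ADP/pobj/NOUN"),
                                          NamedEntityFilterPipe(),
                                          NamedEntityExtractorPipe()])])]

engine = PipelineEngine(pipes_structure, Context(doc), [0, 1, 2])
engine.process()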
As a result you will get:
>>>[([Germany], [conquered], [Europe]),
([Japan], [attacked], [the, United, States])]
It is actually strongly based on another of my libraries - grammaregex (which powers the token-finding pipes). You can read more about it in a separate post.
EDITED
Actually, the example I presented in the readme discards adjectives, but all you need is to adjust the pipes structure passed to the engine to your needs. For example, for your sample sentence I can propose a structure/solution that gives you a tuple of three elements (subject, verb, adjective) per sentence:
import spacy
from textpipeliner import PipelineEngine, Context
from textpipeliner.pipes import *

pipes_structure = [SequencePipe([FindTokensPipe("VERB/nsubj/NNP"),
                                 NamedEntityFilterPipe(),
                                 NamedEntityExtractorPipe()]),
                   AggregatePipe([FindTokensPipe("VERB"),
                                  FindTokensPipe("VERB/xcomp/VERB/aux/*"),
                                  FindTokensPipe("VERB/xcomp/VERB")]),
                   AnyPipe([FindTokensPipe("VERB/[acomp,amod]/ADJ"),
                            AggregatePipe([FindTokensPipe("VERB/[dobj,attr]/NOUN/det/DET"),
                                           FindTokensPipe("VERB/[dobj,attr]/NOUN/[acomp,amod]/ADJ")])])
                  ]

engine = PipelineEngine(pipes_structure, Context(doc), [0,1,2])
engine.process()
This will give you the result:

[([Donald, Trump], [is], [the, worst])]

Things are complicated a little by the fact that you have a compound sentence, and the lib produces one tuple per sentence. I will soon add the possibility (I need it for my project as well) of passing a list of pipes structures to the engine, to allow producing more tuples per sentence. For now you can solve it by creating a second engine for the compound sentence, whose structure will differ only in using VERB/conj/VERB instead of VERB (those regex patterns always start from the ROOT, so VERB/conj/VERB leads you to exactly the second verb in a compound sentence):

pipes_structure_comp = [SequencePipe([FindTokensPipe("VERB/conj/VERB/nsubj/NNP"),
                                      NamedEntityFilterPipe(),
                                      NamedEntityExtractorPipe()]),
                        AggregatePipe([FindTokensPipe("VERB/conj/VERB"),
                                       FindTokensPipe("VERB/conj/VERB/xcomp/VERB/aux/*"),
                                       FindTokensPipe("VERB/conj/VERB/xcomp/VERB")]),
                        AnyPipe([FindTokensPipe("VERB/conj/VERB/[acomp,amod]/ADJ"),
                                 AggregatePipe([FindTokensPipe("VERB/conj/VERB/[dobj,attr]/NOUN/det/DET"),
                                                FindTokensPipe("VERB/conj/VERB/[dobj,attr]/NOUN/[acomp,amod]/ADJ")])])
                       ]

engine2 = PipelineEngine(pipes_structure_comp, Context(doc), [0,1,2])
engine2.process()
Now, after running both engines, you will get the expected results :)
I think this is what you need. Of course I just quickly created a pipes structure for the given example sentence, and it won't work for every case, but I have seen a lot of sentence structures and it should already cover quite a good percentage; for cases that don't work yet you can simply add more FindTokensPipe etc., and I'm sure that after a few adjustments you will cover a really good number of possible sentences (English is not too complex, so... :)

I was going through the spacy library some more, and I finally figured out a solution through dependency management. Thanks to that repo, I figured out how to include adjectives as well in my subject-verb-object extraction (making it SVAO's), as well as dropping compound subjects from the query. Here's my solution:
from nltk.stem.wordnet import WordNetLemmatizer
from spacy.lang.en import English

SUBJECTS = ["nsubj", "nsubjpass", "csubj", "csubjpass", "agent", "expl"]
OBJECTS = ["dobj", "dative", "attr", "oprd"]
ADJECTIVES = ["acomp", "advcl", "advmod", "amod", "appos", "nn", "nmod", "ccomp", "complm",
              "hmod", "infmod", "xcomp", "rcmod", "poss", "possessive"]
COMPOUNDS = ["compound"]
PREPOSITIONS = ["prep"]
def getSubsFromConjunctions(subs):
    moreSubs = []
    for sub in subs:
        # rights is a generator
        rights = list(sub.rights)
        rightDeps = {tok.lower_ for tok in rights}
        if "and" in rightDeps:
            moreSubs.extend([tok for tok in rights if tok.dep_ in SUBJECTS or tok.pos_ == "NOUN"])
            if len(moreSubs) > 0:
                moreSubs.extend(getSubsFromConjunctions(moreSubs))
    return moreSubs

def getObjsFromConjunctions(objs):
    moreObjs = []
    for obj in objs:
        # rights is a generator
        rights = list(obj.rights)
        rightDeps = {tok.lower_ for tok in rights}
        if "and" in rightDeps:
            moreObjs.extend([tok for tok in rights if tok.dep_ in OBJECTS or tok.pos_ == "NOUN"])
            if len(moreObjs) > 0:
                moreObjs.extend(getObjsFromConjunctions(moreObjs))
    return moreObjs

def getVerbsFromConjunctions(verbs):
    moreVerbs = []
    for verb in verbs:
        rightDeps = {tok.lower_ for tok in verb.rights}
        if "and" in rightDeps:
            moreVerbs.extend([tok for tok in verb.rights if tok.pos_ == "VERB"])
            if len(moreVerbs) > 0:
                moreVerbs.extend(getVerbsFromConjunctions(moreVerbs))
    return moreVerbs
def findSubs(tok):
    head = tok.head
    while head.pos_ != "VERB" and head.pos_ != "NOUN" and head.head != head:
        head = head.head
    if head.pos_ == "VERB":
        # note: "SUB" is not a label current spaCy English models emit,
        # so this branch acts only as a rarely-used fallback
        subs = [tok for tok in head.lefts if tok.dep_ == "SUB"]
        if len(subs) > 0:
            verbNegated = isNegated(head)
            subs.extend(getSubsFromConjunctions(subs))
            return subs, verbNegated
        elif head.head != head:
            return findSubs(head)
    elif head.pos_ == "NOUN":
        return [head], isNegated(tok)
    return [], False

def isNegated(tok):
    negations = {"no", "not", "n't", "never", "none"}
    for dep in list(tok.lefts) + list(tok.rights):
        if dep.lower_ in negations:
            return True
    return False

def findSVs(tokens):
    svs = []
    verbs = [tok for tok in tokens if tok.pos_ == "VERB"]
    for v in verbs:
        subs, verbNegated = getAllSubs(v)
        if len(subs) > 0:
            for sub in subs:
                svs.append((sub.orth_, "!" + v.orth_ if verbNegated else v.orth_))
    return svs
def getObjsFromPrepositions(deps):
    objs = []
    for dep in deps:
        if dep.pos_ == "ADP" and dep.dep_ == "prep":
            objs.extend([tok for tok in dep.rights if tok.dep_ in OBJECTS or (tok.pos_ == "PRON" and tok.lower_ == "me")])
    return objs
def getAdjectives(toks):
    toks_with_adjectives = []
    for tok in toks:
        # collect adjective-like dependents on both sides of the token
        adjs = [left for left in tok.lefts if left.dep_ in ADJECTIVES]
        adjs.append(tok)
        adjs.extend([right for right in tok.rights if right.dep_ in ADJECTIVES])
        tok_with_adj = " ".join([adj.lower_ for adj in adjs])
        toks_with_adjectives.extend(adjs)
    return toks_with_adjectives
def getObjsFromAttrs(deps):
    for dep in deps:
        if dep.pos_ == "NOUN" and dep.dep_ == "attr":
            verbs = [tok for tok in dep.rights if tok.pos_ == "VERB"]
            if len(verbs) > 0:
                for v in verbs:
                    rights = list(v.rights)
                    objs = [tok for tok in rights if tok.dep_ in OBJECTS]
                    objs.extend(getObjsFromPrepositions(rights))
                    if len(objs) > 0:
                        return v, objs
    return None, None

def getObjFromXComp(deps):
    for dep in deps:
        if dep.pos_ == "VERB" and dep.dep_ == "xcomp":
            v = dep
            rights = list(v.rights)
            objs = [tok for tok in rights if tok.dep_ in OBJECTS]
            objs.extend(getObjsFromPrepositions(rights))
            if len(objs) > 0:
                return v, objs
    return None, None
def getAllSubs(v):
    verbNegated = isNegated(v)
    subs = [tok for tok in v.lefts if tok.dep_ in SUBJECTS and tok.pos_ != "DET"]
    if len(subs) > 0:
        subs.extend(getSubsFromConjunctions(subs))
    else:
        foundSubs, verbNegated = findSubs(v)
        subs.extend(foundSubs)
    return subs, verbNegated

def getAllObjs(v):
    # rights is a generator
    rights = list(v.rights)
    objs = [tok for tok in rights if tok.dep_ in OBJECTS]
    objs.extend(getObjsFromPrepositions(rights))

    potentialNewVerb, potentialNewObjs = getObjFromXComp(rights)
    if potentialNewVerb is not None and potentialNewObjs is not None and len(potentialNewObjs) > 0:
        objs.extend(potentialNewObjs)
        v = potentialNewVerb
    if len(objs) > 0:
        objs.extend(getObjsFromConjunctions(objs))
    return v, objs

def getAllObjsWithAdjectives(v):
    # rights is a generator
    rights = list(v.rights)
    objs = [tok for tok in rights if tok.dep_ in OBJECTS]

    if len(objs) == 0:
        objs = [tok for tok in rights if tok.dep_ in ADJECTIVES]

    objs.extend(getObjsFromPrepositions(rights))

    potentialNewVerb, potentialNewObjs = getObjFromXComp(rights)
    if potentialNewVerb is not None and potentialNewObjs is not None and len(potentialNewObjs) > 0:
        objs.extend(potentialNewObjs)
        v = potentialNewVerb
    if len(objs) > 0:
        objs.extend(getObjsFromConjunctions(objs))
    return v, objs
def findSVOs(tokens):
    svos = []
    verbs = [tok for tok in tokens if tok.pos_ == "VERB" and tok.dep_ != "aux"]
    for v in verbs:
        subs, verbNegated = getAllSubs(v)
        # hopefully there are subs, if not, don't examine this verb any longer
        if len(subs) > 0:
            v, objs = getAllObjs(v)
            for sub in subs:
                for obj in objs:
                    objNegated = isNegated(obj)
                    svos.append((sub.lower_, "!" + v.lower_ if verbNegated or objNegated else v.lower_, obj.lower_))
    return svos

def findSVAOs(tokens):
    svos = []
    verbs = [tok for tok in tokens if tok.pos_ == "VERB" and tok.dep_ != "aux"]
    for v in verbs:
        subs, verbNegated = getAllSubs(v)
        # hopefully there are subs, if not, don't examine this verb any longer
        if len(subs) > 0:
            v, objs = getAllObjsWithAdjectives(v)
            for sub in subs:
                for obj in objs:
                    objNegated = isNegated(obj)
                    obj_desc_tokens = generate_left_right_adjectives(obj)
                    sub_compound = generate_sub_compound(sub)
                    svos.append((" ".join(tok.lower_ for tok in sub_compound), "!" + v.lower_ if verbNegated or objNegated else v.lower_, " ".join(tok.lower_ for tok in obj_desc_tokens)))
    return svos

def generate_sub_compound(sub):
    sub_compunds = []
    for tok in sub.lefts:
        if tok.dep_ in COMPOUNDS:
            sub_compunds.extend(generate_sub_compound(tok))
    sub_compunds.append(sub)
    for tok in sub.rights:
        if tok.dep_ in COMPOUNDS:
            sub_compunds.extend(generate_sub_compound(tok))
    return sub_compunds

def generate_left_right_adjectives(obj):
    obj_desc_tokens = []
    for tok in obj.lefts:
        if tok.dep_ in ADJECTIVES:
            obj_desc_tokens.extend(generate_left_right_adjectives(tok))
    obj_desc_tokens.append(obj)
    for tok in obj.rights:
        if tok.dep_ in ADJECTIVES:
            obj_desc_tokens.extend(generate_left_right_adjectives(tok))
    return obj_desc_tokens
Now when you pass a query such as:
from spacy.lang.en import English
parser = English()
sentence = u"""
Donald Trump is the worst president of USA, but Hillary is better than him
"""
parse = parser(sentence)
print(findSVAOs(parse))
you will get the following:
[(u'donald trump', u'is', u'worst president'), (u'hillary', u'is', u'better')]
Thank you @Krzysiek for your solution. I actually couldn't dig deep enough into your library to modify it, so I tried modifying the link mentioned above to solve my problem. I tried your solution, but it was discarding the adjectives/adverbs used for the different subjects, and since I have to do sentiment analysis, that would leave all the statements/relations neutral.

OK, so the answer to your comment has been added after the EDITED section of the original answer, as there was not enough space in the comments.

@Krzysiek getting the error: AttributeError: 'spacy.tokens.doc.Doc' object has no attribute 'next_sent' after typing engine2.process()

@JafferWilson thanks - it should be Context, which is a wrapper over the spacy doc. I corrected it.

@Krzysiek when I try your code, I get [] as the output for your example.

Nice, spacy is really powerful ;) BTW I just implemented in my lib the possibility of passing a list of pipes structures, so that you get more tuples per sentence ;) I wonder whether your solution works for very different examples of statements?

Yes, it does. It tries to parse many other dependencies apart from subject-verb-object. I will try implementing your library tonight and explore it.

I got an empty list as a result. Did this work for you? The import of English above doesn't seem to include the syntactic parser any more. In any case, the fix is:

import spacy
parser = spacy.load('en', disable=['ner', 'textcat'])

I have been trying to run the code using the described approach, with the model en_core_web_lg specified instead of parser = English(). In both cases the code returns nothing for the same example sentence. What could I be doing wrong? I'm running the latest version of spacy and its models.
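A hedged note on the last two comments: in spaCy 2+, English() creates a blank pipeline without a dependency parser, so the extractor finds nothing. Loading a full model first should fix it; a minimal sketch, assuming the en_core_web_sm model is installed:

import spacy

nlp = spacy.load("en_core_web_sm")  # any model that ships a dependency parser
parse = nlp(u"Donald Trump is the worst president of USA, but Hillary is better than him")
print(findSVAOs(parse))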