How to get the domain of a word using WordNet in Python?


How do I look up the domain of a word using the nltk Python module?

Suppose I have words like (deposit, demand draft, cheque, passbook), and the domain for all of these is "BANK". How can we get this in Python using nltk and WordNet?

I am trying to go through the hypernym/hyponym relations:

For example:

from nltk.corpus import wordnet as wn
sports = wn.synset('sport.n.01')
sports.hyponyms()
[Synset('judo.n.01'), Synset('athletic_game.n.01'), Synset('spectator_sport.n.01'),    Synset('contact_sport.n.01'), Synset('cycling.n.01'), Synset('funambulism.n.01'), Synset('water_sport.n.01'), Synset('riding.n.01'), Synset('gymnastics.n.01'), Synset('sledding.n.01'), Synset('skating.n.01'), Synset('skiing.n.01'), Synset('outdoor_sport.n.01'), Synset('rowing.n.01'), Synset('track_and_field.n.01'), Synset('archery.n.01'), Synset('team_sport.n.01'), Synset('rock_climbing.n.01'), Synset('racing.n.01'), Synset('blood_sport.n.01')]


There is no explicit domain information in the Princeton WordNet, nor in NLTK's WN API.

I would recommend that you get a copy of the WordNet Domains resource and then link your synsets using the domains; see the WordNet Domains website.

After registering and completing the download, you will see a
wn-domains-3.2-20070223
text file, which is tab-delimited: the first column is an offset-PoS (part of speech) identifier, and the second column contains the domain tags separated by spaces, e.g.

00584282-v  military pedagogy
00584395-v  military school university
00584526-v  animals pedagogy
00584634-v  pedagogy
00584743-v  school university
00585097-v  school university
00585271-v  pedagogy
00585495-v  pedagogy
00585683-v  psychological_features
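Each line splits on the tab; the first field is the 8-digit, zero-padded synset offset plus a one-letter part-of-speech tag, the same key you can build from NLTK's `synset.offset()` and `synset.pos()`. A minimal sketch of the parsing and key construction (`make_key` is a hypothetical helper; the sample line is taken from the excerpt above):

```python
# Parse one tab-delimited line of wn-domains-3.2-20070223
line = "00584282-v\tmilitary pedagogy"
ssid, doms = line.strip().split('\t')
assert ssid == "00584282-v"
assert doms.split() == ["military", "pedagogy"]

# Build the same offset-PoS key from a (numeric offset, pos letter) pair
def make_key(offset, pos):
    return str(offset).zfill(8) + "-" + pos

assert make_key(584282, 'v') == "00584282-v"
```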
Then, to access the domains of a synset, use the script shown below:

Also look out for
wn-affect
in the WordNet Domains resources; it is very useful for disambiguating words that express emotion.


In the newer NLTK v3.0, which ships with the Open Multilingual WordNet (OMW), the French synsets share the same offset IDs, so you can simply use WND as a cross-lingual resource. The French lemma names can be accessed like this:

# Gets domains given synset.
for ss in wn.all_synsets():
    ssid = str(ss.offset()).zfill(8) + "-" + ss.pos()
    if synset2domains[ssid]: # not all synsets are in WordNet Domains.
        print(ss, ss.lemma_names('fre'), ssid, synset2domains[ssid])

Note that the latest versions of NLTK changed synset properties into "getter" functions:
synset.offset
->
synset.offset()
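If a script has to run against both old and new NLTK versions, a small compatibility shim (my own sketch, not part of NLTK) can paper over this difference:

```python
def synset_offset(ss):
    """Return the numeric offset whether `ss.offset` is the old
    attribute (pre-3.0 NLTK) or the newer zero-argument method."""
    off = ss.offset
    return off() if callable(off) else off

# Works against stand-ins for both API styles:
class OldStyle:
    offset = 584282

class NewStyle:
    def offset(self):
        return 584282

assert synset_offset(OldStyle()) == 584282
assert synset_offset(NewStyle()) == 584282
```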

As @alvas suggested, you can use WordNetDomains. You have to download both WordNet 2.0 (in its current state, WordNetDomains does not support the sense inventory of WordNet 3.0, which is the default WordNet version used by NLTK) and WordNetDomains.

  • WordNet 2.0 can be downloaded from its official site

  • WordNetDomains can be downloaded (upon licence agreement) from its official site

I have created a very simple Python 3.x helper that loads both resources and exposes some common routines you might need (e.g., getting the set of domains linked to a given term or to a given synset, etc.). The data loading of WordNetDomains is taken from @alvas's answer.

This is what it looks like (with most comments omitted):


I think you can also use the spacy library; see the code below:

The code is taken from the official spacy-wordnet page:


Please show some research effort; what have you tried so far? @Torxed Hello, I have added my attempt. Great, I hadn't heard of this database before! By the way, I don't seem to be the only one confused as to why someone would release something under a free licence and then not make it easy to download :( (and since there hasn't been a release since 2007(?), it might be good to publish updates to the GitHub project too?) People haven't used WND for WSD in a long time, though; the hottest thing now is heaps of unsupervised stuff, which is fun but doesn't gain much more knowledge than, say, WND =) This is awesome, @alvas! Do you know whether the French WordNet has WordNet Domains? Also, do you have a website? I would love to see more of your work… @duhaime, thanks for the interest, but I am not the creator or a developer of WND; I am just another NLP researcher lurking around the pool. I do have a webpage, but I don't do terribly exciting things; to dodge URL bots, try googling
alvations
. I have updated my answer with the OMW info; I hope it helps.
from collections import defaultdict
from nltk.corpus import wordnet as wn

# Loading the Wordnet domains.
domain2synsets = defaultdict(list)
synset2domains = defaultdict(list)
for i in open('wn-domains-3.2-20070223', 'r'):
    ssid, doms = i.strip().split('\t')
    doms = doms.split()
    synset2domains[ssid] = doms
    for d in doms:
        domain2synsets[d].append(ssid)

# Gets domains given synset.
for ss in wn.all_synsets():
    ssid = str(ss.offset()).zfill(8) + "-" + ss.pos()
    if synset2domains[ssid]: # not all synsets are in WordNet Domains.
        print(ss, ssid, synset2domains[ssid])

# Gets synsets given domain.
for dom in sorted(domain2synsets):
    print(dom, domain2synsets[dom][:3])
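To sanity-check the loading logic without the full file, the same parsing can be run over a few of the sample lines shown earlier (the lines are taken from the excerpt above; `io.StringIO` stands in for the open file):

```python
from collections import defaultdict
import io

sample = ("00584282-v\tmilitary pedagogy\n"
          "00584395-v\tmilitary school university\n"
          "00584634-v\tpedagogy\n")

domain2synsets = defaultdict(list)
synset2domains = defaultdict(list)
for line in io.StringIO(sample):
    ssid, doms = line.strip().split('\t')
    doms = doms.split()
    synset2domains[ssid] = doms
    for d in doms:
        domain2synsets[d].append(ssid)

assert synset2domains['00584282-v'] == ['military', 'pedagogy']
assert domain2synsets['military'] == ['00584282-v', '00584395-v']
assert domain2synsets['pedagogy'] == ['00584282-v', '00584634-v']
```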
from collections import defaultdict
from nltk.corpus import WordNetCorpusReader
from os.path import exists


class WordNetDomains:
    def __init__(self, wordnet_home):
        #This class assumes you have downloaded WordNet2.0 and WordNetDomains and that they are on the same data home.
        assert exists(f'{wordnet_home}/WordNet-2.0'), f'error: missing WordNet-2.0 in {wordnet_home}'
        assert exists(f'{wordnet_home}/wn-domains-3.2'), f'error: missing WordNetDomains in {wordnet_home}'

        # load WordNet2.0
        self.wn = WordNetCorpusReader(f'{wordnet_home}/WordNet-2.0/dict', 'WordNet-2.0/dict')

        # load WordNetDomains (based on https://stackoverflow.com/a/21904027/8759307)
        self.domain2synsets = defaultdict(list)
        self.synset2domains = defaultdict(list)
        for i in open(f'{wordnet_home}/wn-domains-3.2/wn-domains-3.2-20070223', 'r'):
            ssid, doms = i.strip().split('\t')
            doms = doms.split()
            self.synset2domains[ssid] = doms
            for d in doms:
                self.domain2synsets[d].append(ssid)

    def get_domains(self, word, pos=None):
        word_synsets = self.wn.synsets(word, pos=pos)
        domains = []
        for synset in word_synsets:
            domains.extend(self.get_domains_from_synset(synset))
        return set(domains)

    def get_domains_from_synset(self, synset):
        return self.synset2domains.get(self._askey_from_synset(synset), set())

    def get_synsets(self, domain):
        return [self._synset_from_key(key) for key in self.domain2synsets.get(domain, [])]

    def get_all_domains(self):
        return set(self.domain2synsets.keys())

    def _synset_from_key(self, key):
        offset, pos = key.split('-')
        return self.wn.synset_from_pos_and_offset(pos, int(offset))

    def _askey_from_synset(self, synset):
        return self._askey_from_offset_pos(synset.offset(), synset.pos())

    def _askey_from_offset_pos(self, offset, pos):
        return str(offset).zfill(8) + "-" + pos
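The union step in get_domains (collecting the domains of every sense of a word) can be illustrated standalone with mock data; the offsets, domain labels, and sense keys below are made up for the example:

```python
# Mock synset2domains entries (offsets and labels are illustrative only)
synset2domains = {
    '00000001-n': ['economy'],
    '00000002-n': ['geography', 'geology'],
}

# Pretend these are the offset-PoS keys of the senses of some word;
# the third one is deliberately absent from the mapping
sense_keys = ['00000001-n', '00000002-n', '00000003-n']

domains = set()
for key in sense_keys:
    # Missing synsets simply contribute no domains
    domains.update(synset2domains.get(key, []))

assert domains == {'economy', 'geography', 'geology'}
```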
import spacy

from spacy_wordnet.wordnet_annotator import WordnetAnnotator

# Load a spacy model (supported models are "es" and "en")
nlp = spacy.load('en')
nlp.add_pipe(WordnetAnnotator(nlp.lang), after='tagger')
token = nlp('prices')[0]

# The wordnet object links the spacy token with the nltk wordnet
# interface, giving access to synsets and lemmas
token._.wordnet.synsets()
token._.wordnet.lemmas()

# And automatically tags with wordnet domains
token._.wordnet.wordnet_domains()

# Imagine we want to enrich the following sentence with synonyms
sentence = nlp('I want to withdraw 5,000 euros')

# spaCy WordNet lets you find synonyms by domain of interest,
# for example economy
economy_domains = ['finance', 'banking']
enriched_sentence = []

# For each token in the sentence
for token in sentence:
    # We get those synsets within the desired domains
    synsets = token._.wordnet.wordnet_synsets_for_domain(economy_domains)
    if synsets:
        # If we found synsets in the economy domains,
        # we get the variants and add them to the enriched sentence
        lemmas_for_synset = []
        for s in synsets:
            lemmas_for_synset.extend(s.lemma_names())
        enriched_sentence.append('({})'.format('|'.join(set(lemmas_for_synset))))
    else:
        enriched_sentence.append(token.text)

# Let's see our enriched sentence
print(' '.join(enriched_sentence))
# >> I (need|want|require) to (draw|withdraw|draw_off|take_out) 5,000 euros
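The final formatting step above (joining the deduplicated lemma names of the in-domain synsets) is plain string work. Note that iterating a `set` gives no guaranteed order; a deterministic variant sorts before joining (the lemma list here is made up for illustration):

```python
# Deduplicate and join lemma names into the "(a|b|c)" form used above;
# sorted() makes the output order reproducible, unlike a bare set()
lemmas = ['draw', 'withdraw', 'draw_off', 'take_out', 'draw']
enriched = '({})'.format('|'.join(sorted(set(lemmas))))
assert enriched == '(draw|draw_off|take_out|withdraw)'
```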