NLP: How to customize spaCy's tokenizer so that it does not split phrases described by a regex

For example, I want the tokenizer to tokenize "New York" as ['New York'] instead of the default ['New', 'York'].

The documentation suggests adding regexes when creating a custom tokenizer.

So I did the following:

import re
import spacy
from spacy.tokenizer import Tokenizer

target = re.compile(r'New York')

def custom_tokenizer(nlp):

    dflt_prefix = nlp.Defaults.prefixes
    dflt_suffix = nlp.Defaults.suffixes
    dflt_infix = nlp.Defaults.infixes

    prefix_re = spacy.util.compile_prefix_regex(dflt_prefix).search
    suffix_re = spacy.util.compile_suffix_regex(dflt_suffix).search
    infix_re = spacy.util.compile_infix_regex(dflt_infix).finditer

    return Tokenizer(nlp.vocab, prefix_search=prefix_re,
                                suffix_search=suffix_re,
                                infix_finditer=infix_re,
                                token_match=target.match)

nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u"New York")
print([t.text for t in doc])

I used the default values so that normal behaviour continues unless the function target (the argument passed to the token_match parameter) returns True.

But I still get ['New', 'York']. Any help would be appreciated.

Use the PhraseMatcher component to identify the phrases that you want treated as single tokens. Use the doc.retokenize() context manager to merge the tokens in each matched phrase into a single token. Finally, wrap the whole thing up in a custom pipeline component and add that component to your language model.

import spacy
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher
from spacy.tokens import Doc

class MatchRetokenizeComponent:
  def __init__(self, nlp, terms):
    self.terms = terms
    self.matcher = PhraseMatcher(nlp.vocab)
    patterns = [nlp.make_doc(text) for text in terms]
    self.matcher.add("TerminologyList", None, *patterns)
    Doc.set_extension("phrase_matches", getter=self.matcher, force=True) # You should probably set force=False

  def __call__(self, doc):
    matches = self.matcher(doc)
    with doc.retokenize() as retokenizer:
        for match_id, start, end in matches:
            retokenizer.merge(doc[start:end], attrs={"LEMMA": str(doc[start:end])})
    return doc

terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."]

nlp = English()
retokenizer = MatchRetokenizeComponent(nlp, terms) 
nlp.add_pipe(retokenizer, name='merge_phrases', last=True)

doc = nlp("German Chancellor Angela Merkel and US President Barack Obama "
          "converse in the Oval Office inside the White House in Washington, D.C.")

[tok for tok in doc]

#[German,
# Chancellor,
# Angela Merkel,
# and,
# US,
# President,
# Barack Obama,
# converse,
# in,
# the,
# Oval,
# Office,
# inside,
# the,
# White,
# House,
# in,
# Washington, D.C.]
Edit: the PhraseMatcher will actually throw an error if you end up trying to merge overlapping spans. If that is a problem for you, you are better off using the newer EntityRuler, which tries to keep the longest contiguous match. Using entities like this also lets us simplify the custom pipeline component a bit:

class EntityRetokenizeComponent:
  def __init__(self, nlp):
    pass
  def __call__(self, doc):
    with doc.retokenize() as retokenizer:
        for ent in doc.ents:
            retokenizer.merge(doc[ent.start:ent.end], attrs={"LEMMA": str(doc[ent.start:ent.end])})
    return doc


from spacy.pipeline import EntityRuler

nlp = English()

ruler = EntityRuler(nlp)

# I don't care about the entity label, so I'm just going to call everything an "ORG"
ruler.add_patterns([{"label": "ORG", "pattern": term} for term in terms])
nlp.add_pipe(ruler) 

retokenizer = EntityRetokenizeComponent(nlp)
nlp.add_pipe(retokenizer, name='merge_phrases')
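
For completeness, running the answer's example sentence through this EntityRuler-based pipeline should again produce the merged phrases (a quick check, reusing the nlp object and terms defined above):

doc = nlp("German Chancellor Angela Merkel and US President Barack Obama "
          "converse in the Oval Office inside the White House in Washington, D.C.")

# "Angela Merkel", "Barack Obama" and "Washington, D.C." should now each come out as a single token
print([tok.text for tok in doc])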

Can the PhraseMatcher be given a regex for the tokens? I'm not sure, but the "regex" in your question is an exact match, which it definitely supports. It also supports ignoring case, and matching on lemmas and other things like that, via the attr argument, which determines which token attribute to match on. If the PhraseMatcher or the EntityRuler isn't powerful enough for you, you can look at the pattern API of the underlying Matcher class.
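
To illustrate that last point, here is a minimal sketch, assuming spaCy 2.x to match the API used in the answer above (the "GPE" label and the example sentence are just for illustration): the PhraseMatcher's attr argument switches matching to another token attribute such as LOWER, and the Matcher takes token-level patterns directly.

from spacy.lang.en import English
from spacy.matcher import Matcher, PhraseMatcher

nlp = English()
doc = nlp("I moved to new york last year.")

# PhraseMatcher compares the LOWER attribute instead of the exact text,
# so "new york", "New York" and "NEW YORK" would all match
phrase_matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
phrase_matcher.add("GPE", None, nlp.make_doc("New York"))
print([doc[start:end].text for _, start, end in phrase_matcher(doc)])

# The underlying Matcher takes token-level patterns, one dict per token
matcher = Matcher(nlp.vocab)
matcher.add("GPE", None, [{"LOWER": "new"}, {"LOWER": "york"}])
print([doc[start:end].text for _, start, end in matcher(doc)])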