
Spacy 3 confidence scores for named entity recognition in Python


I need to get confidence scores for the labels predicted by the 'de_core_news_lg' NER model. There is a well-known solution for this in Spacy 2:

import spacy
from collections import defaultdict

nlp = spacy.load('de_core_news_lg')
text = 'ich möchte mit frau Mustermann in der Musterbank sprechen'
doc = nlp.make_doc(text)
# Beam-parse the doc and accumulate a score per (start, end, label) span
beams = nlp.entity.beam_parse([doc], beam_width=16, beam_density=0.0001)
entity_scores = defaultdict(float)
for score, ents in nlp.entity.moves.get_beam_parses(beams[0]):
    for start, end, label in ents:
        entity_scores[(start, end, label)] += score
print('entity_scores', dict(entity_scores))
However, in Spacy 3 I get the following error:

AttributeError: 'German' object has no attribute 'entity'
Apparently the Language object no longer has an entity attribute; in Spacy 3 the entity recognizer has to be fetched from the pipeline instead (e.g. nlp.get_pipe('ner')).
Does anyone know how to get the confidence scores in Spacy 3?

The core of the answer is: use the pipeline component 'beam_ner' and look at the EntityRecognizer.pyx code. Then look at the unit test test_beam_ner_scores() in test_ner.py, which pretty much shows how to do it. If you want to see how to modify the config.cfg, save the model (as done in make_nlp() below) and inspect the saved model's config.cfg.
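For a quick look without opening the file, the saved pipeline's settings can also be inspected from Python. A minimal sketch, assuming the model directory written by make_nlp() below; the exact keys depend on your spaCy 3.x version:

import spacy

nlp = spacy.load("./test_model")  # directory written by make_nlp() below
# The resolved settings of the beam_ner component mirror the add_pipe config:
print(nlp.config["components"]["beam_ner"])
# e.g. {'factory': 'beam_ner', 'beam_width': 32, 'beam_density': 0.001, ...}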

The problem is that it only works for the toy 'model' produced by the unit test. For my real model (5000 documents of ~4k text, training NER F-score about 75%) it fails miserably. By 'miserably' I mean: the 'greedy' search finds my entities, while the beam search reports hundreds of tokens (even punctuation) with 'scores' such as 0.013, and (judging by the offsets) they usually come from a small portion of the document.

This is frustrating, because I believe spacy training (for 'beam_ner') uses the same code to 'validate' the training iterations, and the scores reported during training were almost decent (well, 10% lower than Spacy 2, but that happens whether I train with 'ner' or with 'beam_ner').

So I am posting this in the hope that somebody has had better luck, or can point out what I am doing wrong.

So far Spacy 3 has been a big disaster for me: I cannot get confidences, I can no longer use the GPU (I only have 6GB), the Ray-based parallelization does not work (on Windows it is experimental), and with a 'transformer'-based model my training NER scores are 10% lower than in Spacy 2.

Code

The results should look like this (after repeating make_nlp() until it produces a usable 'model'); the actual output is shown at the end of the listing below:


There is currently no good way to get confidences for the NER scores in spaCy v3. However, there is a categorizer component in development that will make this easier. It's not certain yet, but we're hoping to release it in the next minor version. You can follow the development or read more about it.
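For readers arriving later: the component hinted at above is presumably the spancat (SpanCategorizer) pipe that shipped in spaCy v3.1, which does expose per-span scores. A minimal sketch, assuming the v3.1 spancat API, its default spans_key "sc", and the documented attrs["scores"] storage:

import spacy
from spacy.training import Example

# Train a throwaway spancat pipeline on one example, just to show the API.
nlp = spacy.blank("en")
spancat = nlp.add_pipe("spancat")  # default spans_key is "sc"
spancat.add_label("LOC")
train = [Example.from_dict(nlp.make_doc("I like London."),
                           {"spans": {"sc": [(7, 13, "LOC")]}})]
nlp.initialize(get_examples=lambda: train)
nlp.update(train)

doc = nlp("I like London and Berlin.")
# Unlike 'ner', spancat keeps the scores next to the predicted spans:
for span, score in zip(doc.spans["sc"], doc.spans["sc"].attrs["scores"]):
    print(span.text, span.label_, score)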

Here is a repro:
import spacy
from spacy.lang.en import English
from spacy.training import Example

# Based upon test_ner.py test_beam_ner_scores()

TRAIN_DATA = [
    ("Who is Shaka Khan?", {"entities": [(7, 17, "PERSON")]}),
    ("I like London and Berlin.",  {"entities": [(7, 13, "LOC"), (18, 24, "LOC")]}),
    ("You like Paris and Prague.", {"entities": [(9, 14, "LOC"), (19, 25, "LOC")]}),
]

def make_nlp(model_dir):
    # ORIGINALLY: Test that we can get confidence values out of the beam_ner pipe
    nlp = English()
    config = { "beam_width": 32, "beam_density": 0.001 }
    ner = nlp.add_pipe("beam_ner", config=config)
    train_examples = []
    for text, annotations in TRAIN_DATA:
        train_examples.append(Example.from_dict(nlp.make_doc(text), annotations))
        for ent in annotations.get("entities"):
            ner.add_label(ent[2])
    optimizer = nlp.initialize()
    # update once
    losses = {}
    nlp.update(train_examples, sgd=optimizer, losses=losses)
    # save (nlp.to_disk creates the directory if needed)
    nlp.to_disk(model_dir)
    print("Saved model to", model_dir)
    return nlp


def test_greedy(nlp, text):
    # Report predicted entities using the default 'greedy' search (no confidences)
    doc = nlp(text)
    print("GREEDY search")
    for ent in doc.ents:
        print("Greedy offset=", ent.start_char, "-", ent.end_char, ent.label_, "text=", ent.text)
 
def test_beam(nlp, text):
    # Report predicted entities using the beam search (beam_width 16 or higher)
    ner = nlp.get_pipe("beam_ner")

    # Get the prediction scores from the beam search
    doc = nlp.make_doc(text)
    docs = [doc]
    beams = ner.predict(docs)  # returns the beam states for each doc
    print("BEAM search, labels", ner.labels)

    # Show individual entities and their scores as reported
    scores = ner.scored_ents(beams)[0]
    for (start, end, label), score in scores.items():
        tok = doc[start]
        spn = doc[start:end]
        print('Beam-search', start, end, 'offset=', tok.idx, label, 'score=', score,
              'text=', spn.text.replace('\n', '  '))

MODEL_DIR = "./test_model"
TEST_TEXT = "I like London and Paris."
  
if __name__ == "__main__":
    # You may have to repeat make_nlp() several times to produce a semi-decent 'model'
    # nlp = make_nlp(MODEL_DIR)
    nlp = spacy.load(MODEL_DIR)
    test_greedy(nlp, TEST_TEXT)
    test_beam(nlp, TEST_TEXT)

Output:
GREEDY search
Greedy offset= 7 - 13 LOC text= London
Greedy offset= 18 - 23 LOC text= Paris
BEAM search, labels ('LOC', 'PERSON')
Beam-search 2 3 offset= 7 LOC score= 0.5315668466265199 text= London
Beam-search 4 5 offset= 18 LOC score= 0.7206478212662492 text= Paris
Beam-search 0 1 offset= 0 LOC score= 0.4679245513356703 text= I
Beam-search 3 4 offset= 14 LOC score= 0.4670399792743775 text= and
Beam-search 5 6 offset= 23 LOC score= 0.2799470367073933 text= .
Beam-search 1 2 offset= 2 LOC score= 0.21658368070744227 text= like
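
Given the flood of low-scoring tokens described above, one pragmatic workaround (my own suggestion, not part of the answer) is to threshold the scored_ents output; the cut-off of 0.5 below is arbitrary and would keep exactly London and Paris in the sample run above:

def beam_ents_above(nlp, text, min_score=0.5):
    # Keep only beam-scored spans whose accumulated score clears the threshold
    ner = nlp.get_pipe("beam_ner")
    doc = nlp.make_doc(text)
    beams = ner.predict([doc])
    scores = ner.scored_ents(beams)[0]
    return [(doc[start:end].text, label, score)
            for (start, end, label), score in scores.items()
            if score >= min_score]

# e.g. beam_ents_above(nlp, TEST_TEXT)
# -> [('London', 'LOC', 0.53...), ('Paris', 'LOC', 0.72...)]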