Implementing a custom POS tagger in spaCy on top of the existing English model: NLP-Python
I am trying to retrain the existing POS tagger in spaCy so that it shows the correct tags for certain misclassified words, using the code below. But it gives me this error:

Warning: Unnamed vectors -- this won't allow multiple vectors models to be loaded. (Shape: (0, 0))

Also, when I then try to check whether the tags are classified correctly, using the code below:
doc = nlp('If ThermostatFailedOpen moves from false to true, we are going to party')
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
          token.shape_, token.is_alpha, token.is_stop)
ThermostatFailedOpen ThermostatFailedOpen VERB VB nsubj XxxxxXxxxxXxxx True False
These words are not classified correctly (which I guess is expected)! Any insight on how to solve this?
#!/usr/bin/env python
# coding: utf8
import random
from pathlib import Path
import spacy
# You need to define a mapping from your data's part-of-speech tag names to the
# Universal Part-of-Speech tag set, as spaCy includes an enum of these tags.
# See here for the Universal Tag Set:
# http://universaldependencies.github.io/docs/u/pos/index.html
# You may also specify morphological features for your tags, from the universal
# scheme.
TAG_MAP = {
    'N': {'pos': 'NOUN'},
    'V': {'pos': 'VERB'},
    'J': {'pos': 'ADJ'}
}
# Usually you'll read this in, of course. Data formats vary. Ensure your
# strings are unicode and that the number of tags assigned matches spaCy's
# tokenization. If not, you can always add a 'words' key to the annotations
# that specifies the gold-standard tokenization, e.g.:
# ("Eat blue ham", {'words': ['Eat', 'blue', 'ham'], 'tags': ['V', 'J', 'N']})
TRAIN_DATA = [
    ("ThermostatFailedOpen", {'tags': ['V']}),
    ("EThermostatFailedClose", {'tags': ['V']})
]
def main(lang='en', output_dir=None, n_iter=25):
    """Create a new model, set up the pipeline and train the tagger. In order to
    train the tagger with a custom tag map, we're creating a new Language
    instance with a custom vocab.
    """
    nlp = spacy.blank(lang)
    # add the tagger to the pipeline
    # nlp.create_pipe works for built-ins that are registered with spaCy
    tagger = nlp.create_pipe('tagger')
    # Add the tags. This needs to be done before you start training.
    for tag, values in TAG_MAP.items():
        tagger.add_label(tag, values)
    nlp.add_pipe(tagger)
    nlp.vocab.vectors.name = 'spacy_pretrained_vectors'
    optimizer = nlp.begin_training()
    for i in range(n_iter):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, losses=losses)
        print(losses)
    # test the trained model
    test_text = "If ThermostatFailedOpen moves from false to true, we are going to party"
    doc = nlp(test_text)
    print('Tags', [(t.text, t.tag_, t.pos_) for t in doc])
    # save model to output directory
    if output_dir is not None:
        output_dir = Path(output_dir)
        if not output_dir.exists():
            output_dir.mkdir()
        nlp.to_disk(output_dir)
        print("Saved model to", output_dir)
        # test the saved model
        print("Loading from", output_dir)
        nlp2 = spacy.load(output_dir)
        doc = nlp2(test_text)
        print('Tags', [(t.text, t.tag_, t.pos_) for t in doc])

if __name__ == '__main__':
    main('en', 'customPOS')
Note: if you try to add the labels to the pre-trained tagger, you will get the following error:
File "pipeline.pyx", line 550, in spacy.pipeline.Tagger.add_label
ValueError: [T003] Resizing pre-trained Tagger models is not currently supported.
Initially I tried this:
If you are using the same tags and just need to train them better, you don't need to add new labels. But if you want to use a different tag set, you need to train a new model.

For the first case, you need get_pipe('tagger'), skip the add_label loop, and continue.

For the second case, you need to create a new tagger, train it, and then add it to the pipeline. For this you also need to disable the tagger when loading the model (since you will be training a new one). I also answered this in another question. You can fix the warning with nlp.vocab.vectors.name = 'spacy_pretrained_vectors' before optimizer = nlp.begin_training().
File "pipeline.pyx", line 550, in spacy.pipeline.Tagger.add_label
ValueError: [T003] Resizing pre-trained Tagger models is not currently supported.
nlp = spacy.load('en_core_web_sm')
tagger = nlp.get_pipe('tagger')
# Add the tags. This needs to be done before you start training.
for tag, values in TAG_MAP.items():
    tagger.add_label(tag, values)
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'tagger']
with nlp.disable_pipes(*other_pipes):  # only train TAGGER
    nlp.vocab.vectors.name = 'spacy_pretrained_vectors'
    optimizer = nlp.begin_training()
    for i in range(n_iter):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, losses=losses)
        print(losses)