Python 3.x: training a spaCy NER model, the loss keeps increasing with more iterations and batches


I am trying to train a spaCy NER model, following the spaCy training guide. The loss should decrease over time, but it keeps increasing as each epoch progresses. I have tried adjusting the batch size and the number of iterations, with no effect.

Example:
epoch: 0 Losses: {}
epoch: 0 Losses: {'ner': 37.49999785423279}
epoch: 0 Losses: {'ner': 72.21390223503113}
epoch: 0 Losses: {'ner': 93.70724439620972}
epoch: 0 Losses: {'ner': 124.94790315628052}
epoch: 0 Losses: {'ner': 164.6911883354187}
epoch: 0 Losses: {'ner': 182.06093049049377}
epoch: 0 Losses: {'ner': 200.32691740989685}
epoch: 0 Losses: {'ner': 210.71145126968622}
epoch: 0 Losses: {'ner': 222.89578241482377}
epoch: 0 Losses: {'ner': 233.59122055233456}
epoch: 0 Losses: {'ner': 245.26212133839726}
epoch: 0 Losses: {'ner': 258.0684297736734}


By the last batch of the epoch the loss is 11000. Any help is appreciated.

Did you find a solution?
import random
import spacy

# Create a blank 'en' model
nlp = spacy.blank("en")

# Create a new entity recognizer and add it to the pipeline
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner)
# Add a new label
ner.add_label('LABEL')

# Start the training
optimizer = nlp.begin_training()
# Loop for 10 iterations
for itn in range(10):
    # Shuffle the training data
    random.shuffle(spacy_train)
    losses = {}
    # Batch the examples and iterate over them
    for batch in spacy.util.minibatch(spacy_train, size=spacy.util.compounding(4.0, 32.0, 1.001)):
        texts = [text for text, entities in batch]
        annotations = [entities for text, entities in batch]
        print("epoch: {} Losses: {}".format(itn, losses))
        # Update the model
        nlp.update(texts, annotations, drop=0.5, losses=losses, sgd=optimizer)
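
One likely explanation for the numbers above: in spaCy v2, the `losses` dict passed to `nlp.update(..., losses=losses)` is accumulated in place, so printing it inside the batch loop shows a running total for the epoch, which necessarily grows with every batch. The value worth watching is the accumulated total once per epoch, which should trend downward across epochs. Below is a minimal sketch of that pattern, assuming spaCy v2.x; the tiny training set stands in for the original `spacy_train` and is purely hypothetical:

import random
import spacy

# Hypothetical minimal training data in spaCy v2 format: (text, {"entities": [...]}).
spacy_train = [
    ("Acme Corp hired Jane", {"entities": [(0, 9, "LABEL")]}),
    ("Jane joined Acme Corp", {"entities": [(12, 21, "LABEL")]}),
]

nlp = spacy.blank("en")
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner)
ner.add_label("LABEL")

optimizer = nlp.begin_training()
for itn in range(10):
    random.shuffle(spacy_train)
    losses = {}  # reset once per epoch; nlp.update adds each batch's loss into it
    for batch in spacy.util.minibatch(spacy_train, size=spacy.util.compounding(4.0, 32.0, 1.001)):
        texts = [text for text, entities in batch]
        annotations = [entities for text, entities in batch]
        nlp.update(texts, annotations, drop=0.5, losses=losses, sgd=optimizer)
    # Printed once per epoch, this accumulated total is the number that
    # should decrease from one epoch to the next.
    print("epoch: {} Losses: {}".format(itn, losses))

With the print moved outside the batch loop, each output line reflects one full epoch, so the epoch-over-epoch trend becomes visible instead of the within-epoch running sum.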