AttributeError during PyTorch model quantization of an NLP Flair SequenceTagger
I am working with a custom class that inherits from FlairEmbeddings. In this class, I want to implement model quantization using PyTorch's torch.quantization module. To do that, I need to run the model for a few training batches to collect statistics and pick appropriate quantization parameters. The model will be used downstream in a sequence tagger, so I use Flair's SequenceTagger class with the same parameters as in my downstream task. Here is what the class looks like:

    class CustomEmbeddings(FlairEmbeddings):
        def __init__(self,
                     tag_dictionary, tag_type, corpus, mini_batch_size, train_with_dev,   # used for training
                     model, fine_tune, chars_per_chunk, with_whitespace, tokenized_lm):   # base FlairEmbeddings parameters
            super().__init__(model, fine_tune, chars_per_chunk, with_whitespace, tokenized_lm)

            self.lm.qconfig = torch.quantization.default_qconfig
            torch.quantization.prepare(self.lm, inplace=True)

            # Mini training run to collect statistics
            tagger = SequenceTagger(hidden_size=256, embeddings=self, tag_dictionary=tag_dictionary, tag_type=tag_type)
            trainer = ModelTrainer(tagger, corpus)
            trainer.train('model', mini_batch_size=mini_batch_size, max_epochs=10, train_with_dev=train_with_dev)

            torch.quantization.convert(self.lm, inplace=True)

This code fails with the following error:
File "/home/pie3636/project/main.py", line 28, in __init__
embeddings = CustomEmbeddings(name, **params)
File "/home/pie3636/project/custom_embeddings.py", line 35, in __init__
trainer.train('model', mini_batch_size=mini_batch_size, max_epochs=10, train_with_dev=train_with_dev)
File "/usr/local/lib/python3.6/dist-packages/flair/trainers/trainer.py", line 371, in train
loss = self.model.forward_loss(batch_step)
File "/usr/local/lib/python3.6/dist-packages/flair/models/sequence_tagger_model.py", line 603, in forward_loss
features = self.forward(data_points)
File "/usr/local/lib/python3.6/dist-packages/flair/models/sequence_tagger_model.py", line 608, in forward
self.embeddings.embed(sentences)
File "/usr/local/lib/python3.6/dist-packages/flair/embeddings/base.py", line 60, in embed
self._add_embeddings_internal(sentences)
File "/usr/local/lib/python3.6/dist-packages/flair/embeddings/token.py", line 610, in _add_embeddings_internal
text_sentences, start_marker, end_marker, self.chars_per_chunk
File "/usr/local/lib/python3.6/dist-packages/flair/models/language_model.py", line 157, in get_representation
_, rnn_output, hidden = self.forward(batch, hidden)
File "/usr/local/lib/python3.6/dist-packages/flair/models/language_model.py", line 80, in forward
output, hidden = self.rnn(emb, hidden)
File "/home/m.meloux/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 726, in _call_impl
hook_result = hook(self, input, result)
File "/home/m.meloux/.local/lib/python3.6/site-packages/torch/quantization/quantize.py", line 74, in _observer_forward_hook
return self.activation_post_process(output)
File "/home/m.meloux/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/m.meloux/.local/lib/python3.6/site-packages/torch/quantization/observer.py", line 276, in forward
x = x_orig.detach() # avoid keeping autograd tape
AttributeError: 'tuple' object has no attribute 'detach'
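The last frame points at the observer that eager-mode quantization installs as a forward hook: it calls `.detach()` on whatever the hooked module returned. This can be reproduced in isolation (a minimal sketch, assuming `MinMaxObserver` is the activation observer that `default_qconfig` installs, which matches `observer.py` in the traceback):

```python
import torch

# Minimal sketch of the failing step in isolation. The observer calls
# .detach() on whatever the hooked module returned, so it only accepts
# plain tensors.
obs = torch.quantization.MinMaxObserver()

obs(torch.randn(4))  # a tensor: fine, min/max stats are recorded

try:
    # a tuple, like the (output, hidden) pair an LSTM forward returns
    obs((torch.randn(4), torch.randn(4)))
except AttributeError as e:
    print(e)  # 'tuple' object has no attribute 'detach'
```

This matches the traceback: the hook fires on `self.rnn(emb, hidden)` inside Flair's LanguageModel, whose output is a tuple, not a tensor.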
I am not sure whether the problem comes from my code or whether it should be reported on PyTorch's or Flair's issue tracker. The stack trace makes me think the failure lies in the interaction between the two libraries rather than in my code, especially since PyTorch's quantization module is still in beta, but I may be mistaken. Any input on what the error might be would be welcome.

This problem occurs when the model returns something that is not a tensor, but a list or a tuple, for example.
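If the goal is to quantize only the non-recurrent parts, one possible workaround is to set `qconfig = None` on the tuple-returning RNN submodule before calling `prepare()`, so that no observer hook is attached to it. A sketch on a hypothetical toy module standing in for Flair's LanguageModel (not tested against Flair itself):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for Flair's LanguageModel: an LSTM (whose forward
# returns a tuple) followed by a Linear decoder (which returns a tensor).
class ToyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(8, 8, batch_first=True)
        self.decoder = nn.Linear(8, 8)

    def forward(self, x):
        out, _ = self.rnn(x)
        return self.decoder(out)

m = ToyLM()
m.qconfig = torch.quantization.default_qconfig
m.rnn.qconfig = None  # exclude the LSTM: its output is a tuple, not a tensor
torch.quantization.prepare(m, inplace=True)

m(torch.randn(1, 4, 8))  # calibration pass now runs without the tuple error
```

Note that static quantization of LSTMs is not supported by this eager-mode flow anyway; PyTorch offers dynamic quantization (`torch.quantization.quantize_dynamic`) for recurrent layers instead.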