
Python: HuggingFace model "should have a 'get_encoder' function defined"

I am using the HuggingFace facebook/bart-large-cnn pretrained model for text summarization via AutoModel and AutoTokenizer. Both the model and the tokenizer load fine:

import os
import torch
from transformers import AutoTokenizer, AutoModel

torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'

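# Load the tokenizer and the model through the Auto* classes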
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn",
                                          cache_dir=os.getenv("cache_dir", "model"))

model = AutoModel.from_pretrained("facebook/bart-large-cnn",
                                  cache_dir=os.getenv("cache_dir", "model")).to(torch_device)

FRANCE_ARTICLE = ' Marseille...'  # @noqa

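# Encode the article, padding and truncating to 1024 tokens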
dct = tokenizer.batch_encode_plus(
    [FRANCE_ARTICLE],
    max_length=1024,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)

max_length = 140
min_length = 55

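# Beam-search generation; this is the call that raises the error below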
hypotheses_batch = model.generate(
    input_ids=dct["input_ids"].to(torch_device),
    attention_mask=dct["attention_mask"].to(torch_device),
    num_beams=4,
    length_penalty=2.0,
    max_length=max_length + 2,
    min_length=min_length + 1,
    no_repeat_ngram_size=3,
    do_sample=False,
    early_stopping=True,
    decoder_start_token_id=model.config.eos_token_id,
)

decoded = [
    tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in hypotheses_batch
]

print(decoded)
But when I call generate on the model with the input encoded by tokenizer.batch_encode_plus, I get this error:

Traceback (most recent call last):
  File "src/summarization/run.py", line 42, in <module>
    summary_ids = model.generate(article_input_ids,num_beams=4,length_penalty=2.0,max_length=142,min_length=56,no_repeat_ngram_size=3)
  File "/usr/local/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/transformers/generation_utils.py", line 379, in generate
    assert hasattr(self, "get_encoder"), "{} should have a 'get_encoder' function defined".format(self)
AssertionError: BartModel(
  (shared): Embedding(50264, 1024, padding_idx=1)
  (encoder): BartEncoder(
    (embed_tokens): Embedding(50264, 1024, padding_idx=1)
    (embed_positions): LearnedPositionalEmbedding(1026, 1024, padding_idx=1)
    (layers): ModuleList(
      (0): EncoderLayer(
...
      )
    )
    (layernorm_embedding): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
  )
) should have a 'get_encoder' function defined
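For context: AutoModel resolves facebook/bart-large-cnn to the bare BartModel, which in the transformers version shown in the traceback does not define get_encoder and has no language-modeling head, so generate cannot run on it. Below is a minimal sketch of the usual workaround, assuming a transformers version that provides AutoModelForSeq2SeqLM (older releases can import BartForConditionalGeneration directly); the rest of the snippet is kept from the question.

import os
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn",
                                          cache_dir=os.getenv("cache_dir", "model"))

# AutoModelForSeq2SeqLM maps this checkpoint to BartForConditionalGeneration,
# which defines get_encoder() and supports generate()
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn",
                                              cache_dir=os.getenv("cache_dir", "model")).to(torch_device)

FRANCE_ARTICLE = ' Marseille...'  # article text as in the question

dct = tokenizer.batch_encode_plus(
    [FRANCE_ARTICLE],
    max_length=1024,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)

hypotheses_batch = model.generate(
    input_ids=dct["input_ids"].to(torch_device),
    attention_mask=dct["attention_mask"].to(torch_device),
    num_beams=4,
    length_penalty=2.0,
    max_length=142,
    min_length=56,
    no_repeat_ngram_size=3,
    early_stopping=True,
)

decoded = [
    tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False)
    for g in hypotheses_batch
]
print(decoded)

The explicit decoder_start_token_id from the question is dropped here, since the seq2seq class reads it from its own config.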