Python RuntimeError: The size of tensor a (4000) must match the size of tensor b (512) at non-singleton dimension 1


I am trying to build a document classification model, using BERT with PyTorch.

I load the BERT model with the following code:

bert = AutoModel.from_pretrained('bert-base-uncased')
This is the training code:

for epoch in range(epochs):

    print('\n Epoch {:} / {:}'.format(epoch + 1, epochs))

    #train model
    train_loss, _ = modhelper.train(proc.train_dataloader)

    #evaluate model
    valid_loss, _ = modhelper.evaluate()

    #save the best model
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(modhelper.model.state_dict(), 'saved_weights.pt')

    # append training and validation loss
    train_losses.append(train_loss)
    valid_losses.append(valid_loss)

    print(f'\nTraining Loss: {train_loss:.3f}')
    print(f'Validation Loss: {valid_loss:.3f}')
preds = self.model(sent_id, mask)

This line, inside my train method (the full method is included at the end of this post), throws the following error (full traceback included):

Epoch 1 / 1
torch.Size([32, 4000]) torch.Size([32, 4000])
Traceback (most recent call last):

File "<ipython-input-39-17211d5a107c>", line 8, in <module>
train_loss, _ = modhelper.train(proc.train_dataloader)

File "E:\BertTorch\model.py", line 71, in train
preds = self.model(sent_id, mask)

File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)

File "E:\BertTorch\model.py", line 181, in forward
#pass the inputs to the model

File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)

File "E:\BertTorch\venv\lib\site-packages\transformers\modeling_bert.py", line 837, in forward
embedding_output = self.embeddings(

File "E:\BertTorch\venv\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)

File "E:\BertTorch\venv\lib\site-packages\transformers\modeling_bert.py", line 201, in forward
embeddings = inputs_embeds + position_embeddings + token_type_embeddings

RuntimeError: The size of tensor a (4000) must match the size of tensor b (512) at non-singleton dimension 1
If you notice, I have printed the tensor sizes in the code:

print(sent_id.size(), mask.size())

The output of that line is:

torch.Size([32, 4000]) torch.Size([32, 4000])

As you can see, the sizes are the same, yet the error is still thrown. Please share your thoughts. Really appreciated.


If you need further information, please comment; I will quickly add whatever is required.

The issue was BERT's sequence length limit. I had passed sequences of 4000 tokens, while the maximum supported length is 512 (two of those positions must be given up for the '[CLS]' and '[SEP]' tokens at the beginning and end of the sequence, so only 510 remain for the actual text). Reducing the sequence length, or using a different model, solves the problem. Something like Longformer, as suggested by @cronoik in the comments above.
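As a minimal sketch of the truncation fix, assuming a reasonably recent Hugging Face transformers version (the sample text and variable names here are illustrative, not from the original post):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
bert = AutoModel.from_pretrained('bert-base-uncased')

# The position-embedding table is what caps the input length at 512
print(bert.config.max_position_embeddings)  # 512

# Truncate (and pad) every document to the supported length while encoding
encoded = tokenizer(
    ['a very long document ...'],  # illustrative input
    truncation=True,
    max_length=512,
    padding='max_length',
    return_tensors='pt',
)

sent_id, mask = encoded['input_ids'], encoded['attention_mask']
print(sent_id.size(), mask.size())  # torch.Size([1, 512]) torch.Size([1, 512])

with torch.no_grad():
    out = bert(sent_id, attention_mask=mask)  # no size mismatch now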


Thanks.

The error occurs at this line in particular: embeddings = inputs_embeds + position_embeddings + token_type_embeddings. There is probably a shape mismatch between these three variables, hence the error.

@planet_pluto I hope you checked the line that prints the two tensors: torch.Size([32, 4000]) torch.Size([32, 4000]). Why the flag?

@Venkatesh I know self.model() throws the error. But if you look carefully at the stack trace, you can find out where exactly in the model's forward pass it happens. The BERT you load was trained to handle sequences of up to 512 elements. The sequences you feed it have 4000 elements, and the model is telling you it cannot handle that. You can either use a different model (such as Longformer) or use a sliding-window approach; it depends on your task.
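A minimal sketch of that sliding-window idea (the window size, stride, helper name, and the mean-pooling of per-window outputs are illustrative assumptions; a fuller version would also re-insert the [CLS] and [SEP] tokens into every window):

import torch

def window_forward(model, input_ids, attention_mask, max_len=512, stride=256):
    # Slide a fixed-size window over the long sequence and average the
    # pooled output of each window into one vector per document.
    pooled = []
    for start in range(0, input_ids.size(1), stride):
        ids = input_ids[:, start:start + max_len]
        mask = attention_mask[:, start:start + max_len]
        out = model(ids, attention_mask=mask)
        pooled.append(out[1])  # pooler output for this window
        if start + max_len >= input_ids.size(1):
            break
    return torch.stack(pooled).mean(dim=0)

For reference, here is the full train method from the question: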
def train(self, train_dataloader):
    self.model.train()
    total_loss, total_accuracy = 0, 0
    
    # empty list to save model predictions
    total_preds=[]
    
    # iterate over batches
    for step, batch in enumerate(train_dataloader):
        
        # progress update after every 50 batches.
        if step % 50 == 0 and not step == 0:
            print('  Batch {:>5,}  of  {:>5,}.'.format(step, len(train_dataloader)))
        
        # push the batch to gpu
        #batch = [r.to(device) for r in batch]
        
        sent_id, mask, labels = batch
        
        # clear previously calculated gradients 
        self.model.zero_grad()        

        print(sent_id.size(), mask.size())
        # get model predictions for the current batch
        preds = self.model(sent_id, mask) #This line throws the error
        
        # compute the loss between actual and predicted values
        self.loss = self.cross_entropy(preds, labels)
        
        # add on to the total loss
        total_loss = total_loss + self.loss.item()
        
        # backward pass to calculate the gradients
        self.loss.backward()
        
        # clip the gradients to 1.0; it helps prevent the exploding gradient problem
        torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0)
        
        # update parameters
        self.optimizer.step()
        
        # model predictions are stored on GPU. So, push it to CPU
        #preds=preds.detach().cpu().numpy()
        
        # append the model predictions
        total_preds.append(preds)
      
    # compute the training loss of the epoch
    avg_loss = total_loss / len(train_dataloader)
    
    # predictions are in the form of (no. of batches, size of batch, no. of classes).
    # reshape the predictions in form of (number of samples, no. of classes)
    total_preds  = np.concatenate(total_preds, axis=0)
      
    #returns the loss and predictions
    return avg_loss, total_preds