Deep learning ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256])

I can't tell exactly what is going on because I don't know how D is defined, but if it is a neural network that uses batch normalization, the likely problem is that you need to use a batch size greater than 1.
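
A minimal sketch of that failure mode, assuming the model contains a BatchNorm layer over 256 channels (which is what the [1, 256] size in the error suggests): in training mode BatchNorm needs more than one value per channel to compute batch statistics, so a batch of a single sample raises exactly this ValueError.

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(256)   #hypothetical layer, matching the [1, 256] size in the error
bn.train()                 #the error only occurs in training mode

out = bn(torch.randn(2, 256))   #batch of 2: works
try:
    bn(torch.randn(1, 256))     #batch of 1: "Expected more than 1 value per channel when training ..."
except ValueError as e:
    print(e)

For reference, the training loop the error comes from: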

import os
import time

import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable  #deprecated in recent PyTorch; plain tensors work too

model_path = "/content/drive/My Drive/Generating-Webpages-from-Screenshots-master/models/"
batch_count = len(dataloader)


start = time.time()
for epoch in range(epochs):
    #s = 0
    for i , (images , captions , lengths) in enumerate(dataloader):
        
        #Move the batch to the GPU (the Variable wrapper is a no-op in recent PyTorch)
        images = Variable(images.cuda())
        captions = Variable(captions.cuda())
        #lengths is a list of caption lengths in descending order
        
        #The collate_fn function does padding to the captions that are short in length
        #so we need to pad our targets too so as to compute the loss
        
        targets = nn.utils.rnn.pack_padded_sequence(input = captions, lengths = lengths, batch_first = True)[0]
        
        #Clearing out buffers
        E.zero_grad()
        D.zero_grad()
        
        features = E(images)                      #encoder features for the image batch
        output = D(features , captions , lengths) #decoder predictions over the captions
        loss = criterion(output , targets)
        
        loss.backward()
        optimizer.step()
        #s = s + 1
        
        
        if epoch % log_step == 0 and i == 0:
            
            print("Epoch : {} || Loss : {} || Perplexity : {}".format(epoch , loss.item() 
                                                                      , np.exp(loss.item())))
            
        #Uncomment this to use checkpointing
        #if (epoch + 1) % save_after_epochs == 0 and i == 0:
            
            #print("Saving Models")
            #torch.save(E.state_dict() , os.path.join(model_path , 'encoder-{}'.format(epoch + 1)))
            #torch.save(D.state_dict() , os.path.join(model_path , 'decoder-{}'.format(epoch + 1)))
print("Done Training!")
print("Time : {}".format(time.time() - start))