Python: why does the accuracy drop on the last batch?


I am using PyTorch for a classification task. For some reason the accuracy drops on the last iteration, and I would like to know why. Any reply would be appreciated.

The code is here:

import torch
import torch.nn as nn
import torch.optim as optim

class Classifier(nn.Module):
    def __init__(self):                                    
        super(Classifier, self).__init__()             
        self.layers = nn.Sequential(nn.Linear(89, 128),   
                                    nn.ReLU(),              
                                    nn.Linear(128, 64),      
                                    nn.ReLU(),              
                                    nn.Linear(64, 2))       
    def forward(self, x):               
        return self.layers(x)

def train(train_dl, model, epochs):  
    loss_function = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.1)
    for epoch in range(epochs):
        for (features, target) in train_dl:      
            optimizer.zero_grad() 
            features, target = features.to(device), target.to(device)
            output = model(features.float())
            target = target.view(-1) 
            loss = loss_function(output, target)
            loss.backward()  
            optimizer.step()
            output = torch.argmax(output, dim=1)
            correct = (output == target).float().sum()
            accuracy = correct / 512
            print(accuracy, loss)
        break
        
model = Classifier().to(device)
train(train_dl, model, 10)
And here is the last part of the output:

tensor(0.6465, device='cuda:0') tensor(0.6498, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6348, device='cuda:0') tensor(0.6574, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6582, device='cuda:0') tensor(0.6423, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6660, device='cuda:0') tensor(0.6375, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6719, device='cuda:0') tensor(0.6338, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6426, device='cuda:0') tensor(0.6523, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6699, device='cuda:0') tensor(0.6347, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6582, device='cuda:0') tensor(0.6422, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6543, device='cuda:0') tensor(0.6449, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6465, device='cuda:0') tensor(0.6502, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6992, device='cuda:0') tensor(0.6147, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6777, device='cuda:0') tensor(0.6289, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6836, device='cuda:0') tensor(0.6244, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.6738, device='cuda:0') tensor(0.6315, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.1387, device='cuda:0') tensor(0.5749, device='cuda:0', grad_fn=<NllLossBackward>)

I don't have the reputation to comment, but this could just be training instability. Does it always happen on the tenth epoch? Have you tried running it for more than 10 epochs?

It is probably because your last batch is smaller than 512. It would be better to change the line

accuracy = correct / 512

to:

accuracy = correct / features.shape[0]

Or, if you don't want the last batch to have a different size, you can drop it when creating the DataLoader by setting drop_last=True, like this:

train_dl = DataLoader(..., drop_last=True)
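The division bug can be demonstrated in isolation. A minimal sketch, using a hypothetical 10-sample toy dataset with batch size 4 (so the final batch holds only 2 samples), showing why a hardcoded divisor understates the last batch's accuracy and how drop_last=True removes the short batch entirely:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical toy dataset: 10 samples, batch size 4 -> batches of 4, 4 and 2.
ds = TensorDataset(torch.arange(10).float())

sizes = [batch[0].shape[0] for batch in DataLoader(ds, batch_size=4)]
print(sizes)            # [4, 4, 2] -- the last batch is short

# With drop_last=True the incomplete final batch is discarded.
sizes_dropped = [batch[0].shape[0]
                 for batch in DataLoader(ds, batch_size=4, drop_last=True)]
print(sizes_dropped)    # [4, 4]

# Suppose both samples in the short last batch were classified correctly:
correct = torch.tensor(2.0)
understated = correct / 4      # hardcoded divisor -> 0.5
actual = correct / sizes[-1]   # divide by the real batch size -> 1.0
print(understated.item(), actual.item())
```

This also explains the reported 0.1387: it is the number of correct predictions in a short final batch divided by 512, rather than by the batch's actual size.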

Yes, 100% correct, thank you very much. @alex No problem :)
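For completeness, a minimal self-contained sketch of the training loop with the accepted fix applied. Synthetic random data stands in for the asker's dataset (assumption: 89 input features and 2 classes, as in the question), and everything runs on CPU:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)

# Synthetic stand-in data: 100 samples, 89 features, 2 classes.
features = torch.randn(100, 89)
targets = torch.randint(0, 2, (100,))
train_dl = DataLoader(TensorDataset(features, targets), batch_size=32)

model = nn.Sequential(nn.Linear(89, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 2))

loss_function = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.1)

accuracies = []
for batch_features, batch_targets in train_dl:
    optimizer.zero_grad()
    output = model(batch_features)
    loss = loss_function(output, batch_targets)
    loss.backward()
    optimizer.step()
    predictions = torch.argmax(output, dim=1)
    correct = (predictions == batch_targets).float().sum()
    # Divide by the actual batch size, not a hardcoded constant:
    accuracy = correct / batch_features.shape[0]
    accuracies.append(accuracy.item())
```

With 100 samples and a batch size of 32, the final batch holds only 4 samples, yet every reported accuracy stays in [0, 1] because each batch is divided by its own size.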