Result of Python function is torch.FloatTensor: no 'requires_grad' attribute error

I am using a pretrained Resnet18 model. Basically, I take the model's output and feed it to CrossEntropyLoss(). Let the model's output be 'outputs' and let 'labels' be the class labels; I then call CrossEntropyLoss(outputs, labels). I checked the type of 'outputs' and it is torch.FloatTensor. I also tried different combinations for 'labels': first as a numpy array, then as a Variable, but nothing seems to work. I am using pytorch 0.3.1. Please don't suggest upgrading PyTorch, as that is not possible in my current situation. I have also attached the error stack. However, it seems to work in version 0.4.0.

The criterion function is CrossEntropyLoss().
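
A minimal sketch of the failing pattern under 0.3.x semantics (the image tensor and the class index are hypothetical placeholders, not from the original post):

import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable
from torchvision import models

model = models.resnet18(pretrained=True)
criterion = nn.CrossEntropyLoss()

img = torch.randn(3, 224, 224)                 # hypothetical single input image
outputs = model(Variable(img.unsqueeze(0)))    # forward pass; 0.3.x models take Variables
labels = torch.from_numpy(np.array([1]))       # plain LongTensor target, class 1 as placeholder
loss = criterion(outputs, labels)              # AttributeError: 'torch.LongTensor' object has no attribute 'requires_grad'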

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-76-6d5f48373efd> in <module>()
      5 
      6 # Train and evaluate
----> 7 model_ft, hist = train_model(model_ft, data, criterion, optimizer_ft, num_epochs=num_epochs, is_inception=(model_name=="inception"))

<ipython-input-70-bfd03f976e97> in train_model(model, dataloaders, criterion, optimizer, num_epochs, is_inception)
     47                 labels=(torch.from_numpy(np.array([labels])))
     48                 #print(((outputs.requires_gradient)))
---> 49                 loss = criterion(outputs, labels)  ##calculate entropy loss
     50 
     51                 _, preds = torch.max(outputs, 1)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    355             result = self._slow_forward(*input, **kwargs)
    356         else:
--> 357             result = self.forward(*input, **kwargs)
    358         for hook in self._forward_hooks.values():
    359             hook_result = hook(self, input, result)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    675 
    676     def forward(self, input, target):
--> 677         _assert_no_grad(target)
    678         return F.cross_entropy(input, target, self.weight, self.size_average,
    679                                self.ignore_index, self.reduce)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in _assert_no_grad(variable)
      8 
      9 def _assert_no_grad(variable):
---> 10     assert not variable.requires_grad, \
     11         "nn criterions don't compute the gradient w.r.t. targets - please " \
     12         "mark these variables as volatile or not requiring gradients"

AttributeError: 'torch.LongTensor' object has no attribute 'requires_grad'
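
The assert itself explains the failure: in PyTorch 0.3.x, requires_grad is an attribute of autograd Variable objects, not of plain tensors, so passing a raw torch.LongTensor as the target raises an AttributeError before the loss is even computed. A quick check (sketch, 0.3.x semantics):

import torch
from torch.autograd import Variable

t = torch.LongTensor([1])
print(hasattr(t, 'requires_grad'))   # False on 0.3.x: plain tensors carry no autograd flags
v = Variable(t)                      # wrapping in a Variable adds the flag
print(v.requires_grad)               # False by default, which is what _assert_no_grad expects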

The requires_grad flag does not seem to be supported there (try searching the 0.3.1 sources for it). If at all possible, I would suggest upgrading PyTorch; it should work then.
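
Since upgrading is ruled out here, a workaround consistent with the 0.3.x API is to wrap the target (and the input) in torch.autograd.Variable before calling the criterion: a Variable always carries requires_grad (False by default), so the _assert_no_grad check passes. A minimal, untested sketch of the changed lines inside the training loop:

import numpy as np
import torch
from torch.autograd import Variable

# inside the training loop of train_model (0.3.x style):
inputs_var = Variable(inputs.unsqueeze(0))                     # model inputs must be Variables in 0.3.x
labels_var = Variable(torch.from_numpy(np.array([labels])))    # wrap the LongTensor target too
outputs = model(inputs_var)
loss = criterion(outputs, labels_var)                          # Variable.requires_grad is False, so the assert passes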

What version of PyTorch are you using? – PyTorch version 0.3.1.

The full train_model function, for reference (the signature is taken from the traceback above):

import copy
import time

import numpy as np
import torch


def train_model(model, dataloaders, criterion, optimizer, num_epochs, is_inception):
    since = time.time()  # start time, used for the elapsed-time report at the end
    val_acc_history = []

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            count = 0
            for inputs, labels in dataloaders[phase]:

                # zero the parameter gradients
                optimizer.zero_grad()

                outputs = model(inputs.unsqueeze(0))  # input to the model and output produced
                labels = torch.from_numpy(np.array([labels]))
                loss = criterion(outputs, labels)  # calculate cross-entropy loss

                _, preds = torch.max(outputs, 1)

                # backward + optimize only if in training phase
                if phase == 'train':
                    loss.backward()   # loss gradient going backward
                    optimizer.step()  # optimizer performs parameter update based on current gradient

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
                count = count + 1

            epoch_loss = running_loss / count
            epoch_acc = running_corrects.double() / count

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
            if phase == 'val':
                val_acc_history.append(epoch_acc)

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model, val_acc_history
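
For completeness, a hypothetical driver mirroring the call site in the traceback; all names (model_ft, optimizer_ft, data, the class count) are assumptions, and the stand-in data matches the loop's per-sample (3-D image tensor, int label) handling:

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Hypothetical setup mirroring the traceback's call site.
model_ft = models.resnet18(pretrained=True)
model_ft.fc = nn.Linear(model_ft.fc.in_features, 2)   # assuming 2 target classes
criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Stand-in data: real image datasets are assumed in practice.
data = {
    'train': [(torch.randn(3, 224, 224), 1)],
    'val':   [(torch.randn(3, 224, 224), 0)],
}

model_ft, hist = train_model(model_ft, data, criterion, optimizer_ft,
                             num_epochs=1, is_inception=False)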