Why does PyTorch F.mse_loss behave differently w.r.t. Tensor and Parameter?


Here is my code:

import torch as pt
from torch.nn import functional as F

# mse_loss on plain Tensors
a = pt.Tensor([[0, 1], [2, 3]])
b = pt.Tensor([[1, 0], [5, 4]])
print(F.mse_loss(a, b), F.mse_loss(a, b, reduction='elementwise_mean'))

# mse_loss on the same values wrapped as nn.Parameter (requires_grad=True)
a = pt.nn.Parameter(a)
b = pt.nn.Parameter(b)
print(F.mse_loss(a, b), F.mse_loss(a, b, reduction='elementwise_mean'))
The output is:

tensor(3.) tensor(3.)
tensor(12., grad_fn=<SumBackward0>) tensor(12., grad_fn=<SumBackward0>)
I would like to know why the two cases give different results.

Environment:
python 3.6
pytorch 0.4.1

Judging by the documented behaviour, it is a bug: F.mse_loss should return the same value whether its inputs are plain Tensors or Parameters, but in PyTorch 0.4.1 the call on Parameters (i.e. inputs that require grad) appears to apply a sum reduction instead of the requested mean.
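
As a quick sanity check (a minimal sketch, assuming PyTorch 0.4.1 as in the question): 3. is exactly the mean of the squared differences and 12. is their sum, and computing the mean explicitly gives a consistent result for both Tensors and Parameters:

import torch as pt

a = pt.Tensor([[0, 1], [2, 3]])
b = pt.Tensor([[1, 0], [5, 4]])

sq_diff = (a - b) ** 2      # tensor([[1., 1.], [9., 1.]])
print(sq_diff.mean())       # tensor(3.)  -- what F.mse_loss returns for plain Tensors
print(sq_diff.sum())        # tensor(12.) -- what it returns once a and b are Parameters

# Possible workaround until an upgrade is possible: compute the mean explicitly,
# which behaves the same regardless of whether the inputs require grad.
a_p = pt.nn.Parameter(a)
b_p = pt.nn.Parameter(b)
loss = ((a_p - b_p) ** 2).mean()
print(loss)                 # tensor(3., grad_fn=<MeanBackward...>)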