Neural network: manually updating the weights in PyTorch


I set up the following model:

import torch
import math


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
Then I print the weights:

parameters = list(model.parameters())
print(parameters)
Update the weights:

y_pred = model(xx)
loss = loss_fn(y_pred, y)
model.zero_grad()
loss.backward()
Then:

with torch.no_grad():  
    for param in model.parameters():
        param -= 1e-6 * param.grad
The weights are updated. I'm confused: how is that possible? I thought only the variable inside the for loop was changed, not model.parameters().

But when you modify the code slightly:

list(model.parameters())
[Parameter containing:
 tensor([[ 0.0532,  0.2472, -0.0393]], requires_grad=True),
 Parameter containing:
 tensor([-0.0167], requires_grad=True)]
with torch.no_grad():  
    for param in model.parameters():
        param -= 1e-6

the weights are not changed. So I guess it has something to do with param.grad. Could you explain it to me?

The param variable in the loop is bound, on each iteration, to one element of model.parameters(): the parameter tensor itself, not a copy. An in-place operation such as param -= ... therefore mutates that tensor directly, so updating param is the same as updating the elements of model.parameters().
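
You can verify the aliasing directly. A minimal sketch, continuing from the model in the question (the is check and the data_ptr() comparison are just two ways to confirm the loop variable and the stored parameter are the same tensor):

first = next(model.parameters())          # the object the loop variable is bound to
print(first is model[0].weight)           # True: same tensor object, not a copy
print(first.data_ptr() == model[0].weight.data_ptr())  # True: same storage

with torch.no_grad():
    first -= 1.0                          # large in-place update through the alias

print(model[0].weight)                    # the change is visible through the model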

As for your second example, the weights do change there as well; a decrement of 1e-6 is simply too small to show up at PyTorch's default print precision of four decimal places. Try param -= 1. and see whether that has a visible effect on model.parameters().
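
One way to see that the second example really does update the weights is to raise the print precision before printing them. A minimal sketch, again continuing from the question's model (torch.set_printoptions controls tensor display only, not computation):

torch.set_printoptions(precision=8)

with torch.no_grad():
    for param in model.parameters():
        param -= 1e-6                 # the update from the second example

print(list(model.parameters()))       # the 1e-6 shift now shows in the extra digits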

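For completeness, the setup code in the question appears to come from the PyTorch polynomial-fitting tutorial, where this manual update runs inside a training loop. A minimal sketch of that loop, assuming the model, xx, y, loss_fn, and learning_rate defined above (the iteration count of 2000 is arbitrary):

for t in range(2000):
    # Forward pass: compute predictions and the loss.
    y_pred = model(xx)
    loss = loss_fn(y_pred, y)

    # Zero the old gradients, then backpropagate to fill param.grad.
    model.zero_grad()
    loss.backward()

    # Manual SGD step: update each parameter tensor in place.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad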