Understanding PyTorch autograd
I am trying to understand how PyTorch autograd works. If I have the functions y = 2x and z = y**2 and I differentiate by hand, I get dz/dx = 8 at x = 1 (dz/dx = dz/dy * dy/dx = 2y * 2 = 2(2x) * 2 = 8x). Alternatively, z = (2x)**2 = 4x^2 and dz/dx = 8x, so at x = 1 it is 8. But if I do the same with PyTorch autograd, I get 4:
import torch

x = torch.ones(1, requires_grad=True)
y = 2*x
z = y**2
x.backward(z)
print(x.grad)
which prints
tensor([4.])
Where am I going wrong?

You are using backward incorrectly. To get the result you are asking for, you should use:
x = torch.ones(1,requires_grad=True)
y = 2*x
z = y**2
z.backward() # <-- fixed
print(x.grad)
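To see why the original call returned 4: `x.backward(z)` starts backpropagation at `x` itself, with `z` passed as the upstream gradient, so `x.grad = z * dx/dx = z = 4`. A minimal sketch contrasting the two calls (variable names are mine):

```python
import torch

# Correct: differentiate z with respect to x
x = torch.ones(1, requires_grad=True)
y = 2 * x
z = y ** 2
z.backward()            # dz/dx = 8x
print(x.grad)           # tensor([8.])

# The original call: backward starts at x, with z (= 4) used
# as the upstream gradient, so x.grad = z * dx/dx = 4
x2 = torch.ones(1, requires_grad=True)
y2 = 2 * x2
z2 = y2 ** 2
x2.backward(z2)
print(x2.grad)          # tensor([4.])
```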
If you still have some confusion about autograd in PyTorch, the following may help. This is a basic XOR-gate representation:
import torch
import torch.nn.functional as F
inputs = torch.tensor(  # float dtype so it matches the float weights below
    [
        [0., 0.],
        [0., 1.],
        [1., 0.],
        [1., 1.]
    ]
)
outputs = torch.tensor(
    [
        0.,
        1.,
        1.,
        0.
    ],
)
weights = torch.randn(1, 2)
weights.requires_grad = True #set it as true for gradient computation
bias = torch.randn(1, requires_grad=True) #set it as true for gradient computation
preds = F.linear(inputs, weights, bias).squeeze(1) #create a basic linear model; squeeze (4,1) to (4,) to match outputs
loss = (outputs - preds).mean()
loss.backward()
print(weights.grad) # this will print the gradient of the loss w.r.t. the weights
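Once `loss.backward()` has populated `.grad`, the gradients can be applied with a manual gradient-descent step. A sketch of one update (the learning rate 0.1 is an assumed value, not from the original):

```python
import torch
import torch.nn.functional as F

inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
outputs = torch.tensor([0., 1., 1., 0.])

weights = torch.randn(1, 2, requires_grad=True)
bias = torch.randn(1, requires_grad=True)

preds = F.linear(inputs, weights, bias).squeeze(1)  # shape (4,)
loss = (outputs - preds).mean()
loss.backward()  # fills weights.grad and bias.grad

# One manual gradient-descent step (lr = 0.1 assumed);
# no_grad so the update itself is not tracked by autograd
with torch.no_grad():
    weights -= 0.1 * weights.grad
    bias -= 0.1 * bias.grad
    weights.grad.zero_()  # clear grads before the next backward pass
    bias.grad.zero_()
```

In practice you would wrap this in a loop over epochs, or hand the parameters to `torch.optim.SGD`, which performs the same update.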
Thanks! I was really confused by the PyTorch tutorials, but your explanation and the link you provided helped a lot!