Python: what is the difference between output.backward(gradient=df) and loss.backward()?

I am currently testing an open-source repository for surface normal estimation (), and the following function is used as the surface normal loss:

import torch
import torch.nn.functional as F

def l1norm(input, label, mask, train=True):
    # input: bs*ch*h*w
    # label: bs*h*w*ch
    # mask: bs*h*w
    bz, ch, h, w = input.size()
    
    # normalization
    input = input.permute(0,2,3,1).contiguous().view(-1,ch)  # (bs*h*w) x ch
    input_v = F.normalize(input, p=2)  # L2-normalize each predicted normal vector
    label_v = label.contiguous().view(-1,ch)
    target = torch.ones([input.size(0),], dtype=torch.float).cuda()
    mask_t = mask.contiguous().view(-1,1)
    mask_t = torch.squeeze(mask_t)
    target[torch.eq(mask_t,0)] = -1
    
    if train:  # use mask from surface normal
        loss = F.l1_loss(input_v, label_v, reduce=False)  # elementwise L1 loss (no reduction)
        loss[torch.eq(mask_t,0)] = 0 #rm the masked pixels 
        loss = torch.mean(loss)
        df = torch.autograd.grad(loss, input_v, only_inputs=True)  # d loss / d input_v
        df = df[0]
        df = torch.autograd.grad(input_v, input, grad_outputs=df, only_inputs=True)  # chain rule through F.normalize back to the un-normalized input
        df = df[0]
        mask = mask.contiguous().view(-1,1).expand_as(df)
        df[torch.eq(mask,0)] = 0
        df = df.view(-1, h, w, ch)
        df = df.permute(0,3,1,2).contiguous()
    else:  # use mask from depth valid
        loss = F.l1_loss(input_v, label_v, reduce=False)
        loss[torch.eq(mask_t, 0)] = 0
        loss = torch.mean(loss)
        df = None
    return loss, df
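
For concreteness, here is a minimal sketch of how I call this function; the tensor shapes, the random data, and the device placement are just placeholders, not the repository's actual data (a GPU is assumed, since the function itself calls .cuda()):

import torch
import torch.nn.functional as F

# Hypothetical toy inputs (bs=2, ch=3, h=w=4); in the repository these come
# from the network output and the data loader instead.
pred = torch.randn(2, 3, 4, 4, device='cuda', requires_grad=True)       # bs*ch*h*w
gt = F.normalize(torch.randn(2, 4, 4, 3, device='cuda'), p=2, dim=-1)   # bs*h*w*ch
valid = torch.ones(2, 4, 4, device='cuda')                              # bs*h*w

loss, df = l1norm(pred, gt, valid, train=True)
# df has the same shape as the network output (bs*ch*h*w) and is what gets
# passed to output.backward(gradient=df) in the training loop.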
During training, the gradient is then backpropagated by calling output.backward(gradient=df) rather than loss.backward(), even though the second form seems far more common to me. My question is: in which situations should we compute the gradient manually with torch.autograd.grad(), what is the advantage of doing so, and can I simply replace it with loss.backward()?
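
For reference, here is a minimal, self-contained sketch of how I understand the two variants (the single linear layer and the random tensors are hypothetical, not taken from the repository). As far as I can tell, seeding output.backward(gradient=df) with df = d loss / d output should accumulate the same parameter gradients as loss.backward():

import torch
import torch.nn.functional as F

torch.manual_seed(0)
layer = torch.nn.Linear(4, 4)   # stand-in for the actual network
x = torch.randn(8, 4)
target = torch.randn(8, 4)

# Variant 1: compute df = d loss / d output explicitly, then seed backward() with it.
output = layer(x)
loss = F.l1_loss(output, target)
df = torch.autograd.grad(loss, output, retain_graph=True)[0]
output.backward(gradient=df)    # continues the chain rule from `output` into the parameters
grads_v1 = [p.grad.clone() for p in layer.parameters()]

# Variant 2: the usual one-liner.
layer.zero_grad()
output = layer(x)
loss = F.l1_loss(output, target)
loss.backward()
grads_v2 = [p.grad.clone() for p in layer.parameters()]

# Both variants should produce identical parameter gradients.
for g1, g2 in zip(grads_v1, grads_v2):
    assert torch.allclose(g1, g2)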

Thanks in advance for any help.