
Python: Why are PyTorch gradients arrays and not vectors?


I am trying to compute the dot product of the gradients of the same layer at two different epochs, but when I call print(model.layer1[0].weight.grad) it returns:

tensor([[[[-1.1855e-03, -3.7884e-03, -2.8973e-03, -2.8847e-03, -9.6510e-04],
          [-2.0213e-03, -4.4927e-03, -5.4852e-03, -6.6060e-03, -3.5726e-03],
          [ 7.4499e-04, -1.8440e-03, -5.0472e-03, -5.6322e-03, -1.9532e-03],
          [-4.5696e-04,  9.6445e-04, -1.4923e-03, -2.9467e-03, -1.4610e-03],
          [ 2.4987e-04,  2.2086e-03, -7.6576e-04, -2.7009e-03, -2.8571e-03]]],


        [[[ 2.1447e-03,  3.1090e-03,  6.8175e-03,  6.4778e-03,  3.0501e-03],
          [ 2.0214e-03,  3.9936e-03,  7.9528e-03,  6.0224e-03,  1.7545e-03],
          [ 3.8781e-03,  5.6659e-03,  6.6901e-03,  5.4041e-03,  7.8014e-04],
          [ 4.4273e-03,  3.4548e-03,  5.7185e-03,  4.1650e-03,  9.9067e-04],
          [ 4.6075e-03,  4.1176e-03,  6.8392e-03,  3.4005e-03,  1.0009e-03]]],


        [[[-3.8654e-04, -2.9567e-03, -6.1341e-03, -8.3991e-03, -8.2343e-03],
          [-2.9113e-03, -5.4605e-03, -6.3008e-03, -8.2075e-03, -9.6702e-03],
          [-1.5218e-03, -4.4105e-03, -5.5651e-03, -6.8926e-03, -6.6076e-03],
          [-6.0357e-04, -3.1118e-03, -4.4441e-03, -4.0519e-03, -3.9733e-03],
          [-2.8683e-04, -1.6281e-03, -4.2213e-03, -5.5304e-03, -5.0142e-03]]],


        [[[-3.7607e-04, -1.7234e-04, -1.4569e-03, -3.5825e-04,  1.4530e-03],
          [ 2.6226e-04,  8.5076e-04,  1.2195e-03,  2.7885e-03,  2.5953e-03],
          [-7.7404e-04,  1.0984e-03,  7.8208e-04,  5.1286e-03,  4.6842e-03],
          [-1.8183e-03,  8.9730e-04,  1.0955e-03,  4.9259e-03,  6.4677e-03],
          [ 1.1674e-03,  4.0651e-03,  4.5886e-03,  8.3678e-03,  8.9893e-03]]],
Are those the gradients? If so, why aren't they vectors? Below is my neural network:

import torch
import torch.nn as nn

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.drop_out = nn.Dropout()
        self.fc1 = nn.Linear(7 * 7 * 64, 1000)
        self.fc2 = nn.Linear(1000, 10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.drop_out(out)
        out = self.fc1(out)
        out = self.fc2(out)
        return out
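
For reference, the first module in layer1 is nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2), and a Conv2d stores its weight as a 4-D tensor of shape (out_channels, in_channels, kernel_height, kernel_width); weight.grad has the same shape. A minimal sketch that checks this, using only the layer definition above:

import torch.nn as nn

conv = nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2)
print(conv.weight.shape)    # torch.Size([32, 1, 5, 5]): (out_channels, in_channels, kH, kW)
print(conv.weight.numel())  # 800 scalar parameters, so the flattened gradient has 800 elements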
And here is the code I use to train the model and collect the gradients:

model = ConvNet()
klisi=[]
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
loss_list = []
acc_list = []
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Run the forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss_list.append(loss.item())

        # Backprop and perform Adam optimisation
        optimizer.zero_grad()
        loss.backward()
        
        optimizer.step()

        # Track the accuracy
        total = labels.size(0)
        _, predicted = torch.max(outputs.data, 1)
        correct = (predicted == labels).sum().item()
        acc_list.append(correct / total)

        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}, Accuracy: {:.2f}%'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item(),
                          (correct / total) * 100))
    print(model.layer1[0].weight.grad)
    klisi.append(model.layer1[0].weight.grad)
    print(optimizer.param_groups[0]['lr'])
    optimizer.param_groups[0]['lr'] *= 0.9999

Please don't include images in your question. Can you copy and paste the text and include it in the question instead? OK, I will edit it.

Could you also include the code you use to compute the gradients? PyTorch normally accumulates gradients, so if you compute the gradients of two outputs (two different epochs) without zeroing them in between, the values in .grad will be the sum of the two gradients.

I use model.layer1[0].weight.grad to get the gradients.

Why are you surprised? The layer in question is a Conv2d, so its weight is a 4-D tensor (out_channels, in_channels, kernel_height, kernel_width), and its gradient is a 4-D tensor of the same shape rather than a vector.
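
To get the dot product the question is after, one approach (a minimal sketch, assuming the model, training loop, and klisi list from the code above) is to store an independent copy of the gradient at the end of each epoch and flatten the 4-D tensors into vectors before calling torch.dot. Depending on the PyTorch version, appending weight.grad directly can leave klisi holding references to a tensor that is later zeroed or updated in place, so cloning is the safer option:

import torch

# Inside the epoch loop above, store an independent copy instead of a live reference:
#     klisi.append(model.layer1[0].weight.grad.detach().clone())

# After training, each stored gradient is a 4-D tensor:
print(klisi[0].shape)                       # torch.Size([32, 1, 5, 5])

# Flatten two of them into 800-element vectors and take their dot product:
g_first = klisi[0].flatten()
g_last = klisi[-1].flatten()
print(torch.dot(g_first, g_last).item())    # scalar dot product of the two gradient vectors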