PyTorch: a dimension of 1 in the input data


I don't understand why this code works:

# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)

# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0,:])
The size of images is [images.shape[0], 1, 784], but our network has input_size = 784. How does the network handle the extra dimension of 1 in the input images? I tried changing images.resize_(images.shape[0], 1, 784) to images = images.view(images.shape[0], -1), but I get this error:

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
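That error message matches what Softmax(dim=1) raises when it is handed a 1-D tensor: after view(images.shape[0], -1) the batch is [64, 784], so images[0,:] is a 1-D tensor of shape [784], which has no dim 1. A minimal sketch reproducing this (assuming torch is installed; the tensor here is random data standing in for an MNIST image):

```python
import torch
import torch.nn as nn

sm = nn.Softmax(dim=1)
x = torch.randn(784)  # 1-D tensor, like images[0, :] after view(batch, -1)

try:
    sm(x)
except IndexError as e:
    # a 1-D tensor only has dims -1 and 0, so dim=1 is out of range
    print(e)
```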
For reference, the data loader is created like this:

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                              transforms.Normalize((0.5,), (0.5,)),
                              ])
# Download and load the training data
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

A PyTorch network takes its input as [batch_size, input_size].

In your case, images[0,:] has shape [1, 784], where the "1" is the batch size. That is why the code works.
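The shape bookkeeping can be sketched as follows (assuming torch is installed; a random tensor stands in for the MNIST batch, and the layer sizes match the network above):

```python
import torch
import torch.nn as nn

# Same architecture as in the question
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 10), nn.Softmax(dim=1))

batch = torch.randn(64, 1, 784)  # shape after images.resize_(64, 1, 784)

# Indexing dim 0 keeps the middle dimension: [1, 784],
# which the network reads as a batch of size 1
x = batch[0, :]
print(x.shape)           # torch.Size([1, 784])
print(model(x).shape)    # torch.Size([1, 10])

# Flattening instead gives a full batch of 64 flat images
flat = batch.view(batch.shape[0], -1)
print(flat.shape)        # torch.Size([64, 784])
print(model(flat).shape) # torch.Size([64, 10])
```

Note that flat[0, :] would be 1-D ([784]), which is exactly the case where Softmax(dim=1) fails; flat[0:1, :] keeps the batch dimension and works.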