Python PyTorch: Predicting a single example


The following example is from:

This code trains successfully:

# Code in file tensor/two_layer_net_tensor.py
import torch

device = torch.device('cpu')
# device = torch.device('cuda') # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in, device=device)
y = torch.randn(N, D_out, device=device)

# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device)
w2 = torch.randn(H, D_out, device=device)

learning_rate = 1e-6
for t in range(500):
  # Forward pass: compute predicted y
  h = x.mm(w1)
  h_relu = h.clamp(min=0)
  y_pred = h_relu.mm(w2)

  # Compute and print loss; loss is a scalar, and is stored in a PyTorch Tensor
  # of shape (); we can get its value as a Python number with loss.item().
  loss = (y_pred - y).pow(2).sum()
  print(t, loss.item())

  # Backprop to compute gradients of w1 and w2 with respect to loss
  grad_y_pred = 2.0 * (y_pred - y)
  grad_w2 = h_relu.t().mm(grad_y_pred)
  grad_h_relu = grad_y_pred.mm(w2.t())
  grad_h = grad_h_relu.clone()
  grad_h[h < 0] = 0
  grad_w1 = x.t().mm(grad_h)

  # Update weights using gradient descent
  w1 -= learning_rate * grad_w1
  w2 -= learning_rate * grad_w2
Elsewhere I make a prediction for a single example by running forward propagation, where sigmoid_activation_2 is the vector attribute containing the predictions.

Is the idiomatic PyTorch way the same, i.e. use forward propagation to make a single prediction?
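For reference, something like the following is what I mean by a manual single-example forward pass (a rough sketch reusing the trained w1 and w2 from the loop above; x_new is a made-up new input):

# Sketch: manual forward pass for one new, unseen input (x_new is hypothetical)
x_new = torch.randn(D_in, device=device)   # one example, shape (D_in,)
h = x_new.unsqueeze(0).mm(w1)              # add a batch dimension -> shape (1, H)
h_relu = h.clamp(min=0)
y_single = h_relu.mm(w2)                   # prediction, shape (1, D_out)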

The code you posted is a simple demonstration intended to expose the inner mechanics of deep learning frameworks. Frameworks such as PyTorch, Keras and TensorFlow handle the forward computation and the tracking and application of gradients for you automatically, as long as you have defined the network structure. The code you show, however, still performs these operations manually. That is why you run into trouble when predicting a single example: you are still doing everything from scratch.

In practice, we define a model class that inherits from torch.nn.Module, initialize all of the network components (e.g. linear layers, GRU or LSTM layers) in the __init__ function, and define how these components interact with the network input in the forward function.

Taking the page you referenced as an example:

# Code in file nn/two_layer_net_module.py
import torch

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign
        them as member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary (differentiable) operations on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above.
model = TwoLayerNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
loss_fn = torch.nn.MSELoss(reduction='sum')  # size_average=False is deprecated
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
The code defines a model called TwoLayerNet. It initializes two linear layers in the __init__ function and then defines, in the forward function, how those layers interact with the input x. Once the model is defined, a single feed-forward operation is performed simply by calling the model instance, as shown at the end of the snippet:

y_pred = model(x)

Thanks, but y_pred = model(x) performs forward propagation on x as a whole, not on a single instance. How does that answer the question when I am trying to predict a new, unseen example? In other words, y_pred = model(x) predicts all of the examples in x, not just one.

PyTorch seems to make this "too easy" :) — y_pred = model(x[0]) appears to give the expected behaviour.

Yes, y_pred = model(x) predicts all of the examples in x. I am not sure what your use case is for predicting only one example. You can either do it the way you just did, or wrap the single example in an extra dimension and then pass it to the model instance.

My use case is predicting a single unseen example after the model has been trained.
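To illustrate the last comment (a minimal sketch, assuming the trained model from the answer above; x_new is a made-up unseen input), you can wrap the single example in a batch dimension and call the model instance:

# Sketch: predict one unseen example after training (x_new is hypothetical)
x_new = torch.randn(D_in)                  # a single unseen example, shape (D_in,)
model.eval()                               # switch to evaluation mode
with torch.no_grad():                      # no gradient tracking needed for inference
    y_single = model(x_new.unsqueeze(0))   # add batch dimension -> output shape (1, D_out)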