PyTorch: how to correctly define a network's forward method


I am trying to define the feedforward function in my neural network model:

class FeedForward(nn.Module):
    def __init__(self):
        super(FeedForward,self).__init__() 
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 64)
        self.fc2 = nn.Linear(64, 10)

    def feedforward(self, x):
        x = x.view(x.shape[0], -1)  # make sure inputs are flattened 

        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.log_softmax(x, dim=1)  # preserve batch dim

        return x
The error message says:

NotImplementedError

I'm not sure what I'm missing.

The method name must be forward, not feedforward. When you call the model, nn.Module dispatches to forward, and the base-class forward raises NotImplementedError if you have not overridden it:

class FeedForward(nn.Module):
    def __init__(self):
        super(FeedForward, self).__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 64)
        self.fc3 = nn.Linear(64, 10)  # was assigned to fc2 twice; fc3 is what forward uses

    def forward(self, x):  # this is exactly the name PyTorch expects
        x = x.view(x.shape[0], -1)  # make sure inputs are flattened
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.log_softmax(x, dim=1)  # preserve batch dim
        return x
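
To see why the wrong method name produces this exact error, here is a minimal, self-contained sketch (the class name Broken is invented for illustration): calling a module instance goes through nn.Module's __call__, which dispatches to forward; a method named feedforward is simply never invoked, so the un-overridden base forward raises NotImplementedError.

```python
import torch
from torch import nn

# Hypothetical minimal module that reproduces the question's mistake:
# the forward pass is defined under the wrong name.
class Broken(nn.Module):
    def __init__(self):
        super(Broken, self).__init__()
        self.fc = nn.Linear(4, 2)

    def feedforward(self, x):  # nn.Module never calls a method by this name
        return self.fc(x)

model = Broken()
try:
    model(torch.randn(1, 4))  # __call__ dispatches to forward(), never overridden here
except NotImplementedError:
    print("forward() is not implemented")
```

Renaming feedforward to forward makes model(x) work, because __call__ then finds the overridden method.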