Parameters (PyTorch): model parameter initialization


I am learning PyTorch through a Jupyter notebook.

I loaded the breast cancer dataset from the sklearn module and am trying to do binary classification as follows:

# Assumes: import torch; import torch.nn.functional as F
# x[0] and y[0] hold the training inputs/targets as tensors.
for i in range(n_epochs):
    # Shuffle the training set each epoch.
    indices = torch.randperm(x[0].size(0))
    x_ = torch.index_select(x[0], dim=0, index=indices)
    y_ = torch.index_select(y[0], dim=0, index=indices)

    # Split into mini-batches.
    x_ = x_.split(batch_size, dim=0)
    y_ = y_.split(batch_size, dim=0)

    train_loss = 0

    for x_i, y_i in zip(x_, y_):
        y_hat_i = model(x_i)
        loss = F.binary_cross_entropy(y_hat_i, y_i)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        train_loss += float(loss)

    train_loss = train_loss / len(x_)

    if (i + 1) % print_interval == 0:
        print("Epoch %d : loss = %.4e" % (i + 1, train_loss))
The loss then seems to converge to zero:

Epoch 100 : loss = 6.3986e-05

Epoch 200 : loss = 1.4759e-05

Epoch 1700 : loss = 5.1606e-10

Epoch 1800 : loss = 0.0000e+00

Epoch 1900 : loss = 0.0000e+00

However, when I run the code again afterwards, the loss starts at zero and never changes.

Shouldn't the model parameters be randomized at the first epoch when the model is run again?

Instead, the previously optimized weight parameters seem to carry over into later runs.

If so, should the weight parameters be initialized manually to avoid interference from the previous results? Please give me some advice.
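For what it's worth, one way to re-randomize an existing `nn.Sequential` model is to call each submodule's `reset_parameters()` via `model.apply`. Below is a minimal sketch assuming a small stand-in model (not the exact network from the question); `reinit_weights` is a hypothetical helper name:

```python
import torch
import torch.nn as nn

# Stand-in model; the same idea applies to any nn.Sequential of Linear layers.
model = nn.Sequential(nn.Linear(4, 3), nn.LeakyReLU(), nn.Linear(3, 1), nn.Sigmoid())

def reinit_weights(m):
    # Modules like nn.Linear define reset_parameters(); activations do not,
    # so we only call it where it exists.
    if hasattr(m, "reset_parameters"):
        m.reset_parameters()

before = model[0].weight.clone()
model.apply(reinit_weights)  # applies reinit_weights to every submodule
# The weights are re-drawn from the default initialization, so they differ.
assert not torch.equal(before, model[0].weight)
```

Note that the optimizer should also be re-created after re-initializing, since stateful optimizers (e.g. Adam) keep per-parameter statistics from the previous run.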

For reference, the model:

model = nn.Sequential(
    nn.Linear(x[0].size(-1), 25),
    nn.LeakyReLU(),
    nn.Linear(25, 20),
    nn.LeakyReLU(),
    nn.Linear(20, 15),
    nn.LeakyReLU(),
    nn.Linear(15, 10),
    nn.LeakyReLU(),
    nn.Linear(10, 5),
    nn.LeakyReLU(),
    nn.Linear(5, 1),
    nn.Sigmoid()
)
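A likely explanation, sketched below: in a Jupyter notebook, `model` and `optimizer` live in the kernel's memory, so re-running only the training cell continues from the already-trained weights. Wrapping construction in a function (a hypothetical `build_model` helper, with a smaller network than the one above) gives fresh random weights on every call:

```python
import torch
import torch.nn as nn

def build_model(n_features):
    # Each call constructs a new model with freshly initialized random weights.
    return nn.Sequential(
        nn.Linear(n_features, 25), nn.LeakyReLU(),
        nn.Linear(25, 1), nn.Sigmoid(),
    )

m1 = build_model(30)
m2 = build_model(30)
# Two independently constructed models start from different random weights.
assert not torch.equal(m1[0].weight, m2[0].weight)

# Re-create the optimizer alongside the model so no stale state carries over.
optimizer = torch.optim.SGD(m1.parameters(), lr=1e-2)
```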