
Python: Changing learning rate after loading scheduler's state dict

I have my model:

import torch
import torch.nn as nn
import torch.optim as optim

class net_x(nn.Module):
    def __init__(self):
        super(net_x, self).__init__()
        # three fully connected layers: 2 -> 20 -> 20 -> 4
        self.fc1 = nn.Linear(2, 20)
        self.fc2 = nn.Linear(20, 20)
        self.out = nn.Linear(20, 4)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.out(x)
        return x

nx = net_x()


r = torch.tensor([1.0,2.0])
optimizer = optim.Adam(nx.parameters(), lr = 0.1)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-2, max_lr=0.1, step_size_up=1, mode="triangular2", cycle_momentum=False)

path = 'opt.pt'
for epoch in range(10):
    optimizer.zero_grad()
    net_predictions = nx(r)
    loss = torch.sum(torch.randint(0,10,(4,)) - net_predictions)
    loss.backward()
    optimizer.step()
    scheduler.step()
    print('loss:', loss)
    torch.save({'epoch': epoch,
                'net_x_state_dict': nx.state_dict(),
                'optimizer_state_dict': optimizer.state_dict(),
                'scheduler': scheduler.state_dict(),
                }, path)

checkpoint = torch.load(path)        
nx.load_state_dict(checkpoint['net_x_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
scheduler.load_state_dict(checkpoint['scheduler'])

It trains and loads just fine, but I'm interested in lowering the scheduler's base_lr from 10^-2 to 10^-3 after training. Is there a way to do this without creating a brand-new scheduler? As I understand it, if I did that I would lose the parameters the current scheduler is using, and training would get worse / start over from scratch.
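One way to keep the rest of the scheduler's state while changing only the base rate is to edit the saved state dict before loading it. This is a minimal sketch of my own, not code from the thread; it assumes the checkpoint layout from the script above and relies on PyTorch's _LRScheduler.state_dict() exposing the scheduler's attributes, including the base_lrs list (one entry per optimizer param group):

# Sketch: lower base_lr by editing the saved scheduler state before
# load_state_dict, so counters such as last_epoch and the position
# within the cycle are preserved.
checkpoint = torch.load(path)
print(checkpoint['scheduler'])  # inspect what the scheduler actually saved

# one entry per param group; here there is just one
checkpoint['scheduler']['base_lrs'] = [1e-3 for _ in checkpoint['scheduler']['base_lrs']]

nx.load_state_dict(checkpoint['net_x_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
scheduler.load_state_dict(checkpoint['scheduler'])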

Your scheduler could also just be recreated, since you don't have a particularly complicated/long routine anyway. Either way you are only changing one bound, so it shouldn't matter as long as you load its state. I tried setting a new base_lr / base_lrs on the scheduler but couldn't change the behavior, although that depends on self.base_lrs in this line:
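Following up on that comment, here is a hedged sketch (my own assumption, not code from the thread) of mutating base_lrs directly on the live scheduler. The override has to come after load_state_dict, because loading restores the old base_lrs from the checkpoint; CyclicLR reads self.base_lrs when computing each step's learning rate:

# Sketch: override the cycle's lower bound on the restored scheduler.
scheduler.load_state_dict(checkpoint['scheduler'])
scheduler.base_lrs = [1e-3 for _ in scheduler.base_lrs]  # was [1e-2]

# continue training as before; the cycle now bottoms out at 1e-3
for epoch in range(10):
    optimizer.zero_grad()
    loss = torch.sum(torch.randint(0, 10, (4,)) - nx(r))
    loss.backward()
    optimizer.step()
    scheduler.step()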