TensorFlow can't load a saved policy (TF-Agents)
I saved a trained policy with PolicySaver as follows:
from tf_agents.policies import policy_saver

tf_policy_saver = policy_saver.PolicySaver(agent.policy)
tf_policy_saver.save(policy_dir)
I want to continue training from the saved policy, so I tried to initialize training with it, which led to an error:
agent = dqn_agent.DqnAgent(
    tf_env.time_step_spec(),
    tf_env.action_spec(),
    q_network=q_net,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)
agent.initialize()

agent.policy = tf.compat.v2.saved_model.load(policy_dir)
Error:
File "C:/Users/Rohit/PycharmProjects/pythonProject/waypoint.py", line 172, in <module>
agent.policy=tf.compat.v2.saved_model.load('waypoints\\Two_rewards')
File "C:\Users\Rohit\anaconda3\envs\btp36\lib\site-packages\tensorflow\python\training\tracking\tracking.py", line 92, in __setattr__
super(AutoTrackable, self).__setattr__(name, value)
AttributeError: can't set attribute
I just want to avoid retraining from scratch every time. How can I load the saved policy and continue training with it? Thanks in advance.

You should use a Checkpointer for this. Have a look at the example code below:
from tf_agents.utils import common

agent = ...  # Agent definition
policy = agent.policy
# Policy --> Y
policy_checkpointer = common.Checkpointer(ckpt_dir='path/to/dir',
                                          policy=policy)
...  # Train the agent
# Policy --> X
policy_checkpointer.save(global_step=epoch_counter.numpy())
When you later want to reload the policy, just run the same initialization code again:
agent = ...  # Agent definition
policy = agent.policy
# Policy --> Y1; possibly Y1 == Y depending on the agent class you are
# using. For a DQN agent they differ, because the network weights are
# randomly initialized.
policy_checkpointer = common.Checkpointer(ckpt_dir='path/to/dir',
                                          policy=policy)
# Policy --> X
Upon creation, policy_checkpointer will automatically check whether any pre-existing checkpoint exists in ckpt_dir. If one does, it updates the values of the tracked variables automatically on creation.
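If you want to verify that a restore actually happened, the Checkpointer records whether a checkpoint was found at construction time. A minimal sketch, assuming the checkpoint_exists property of tf_agents.utils.common.Checkpointer is available in your TF-Agents version (check against your installed release):

policy_checkpointer = common.Checkpointer(ckpt_dir='path/to/dir',
                                          policy=policy)
# True iff a checkpoint was found in ckpt_dir when the Checkpointer was
# constructed (assumption: property exists in your TF-Agents version).
if policy_checkpointer.checkpoint_exists:
    print('Policy variables restored from the latest checkpoint.')
else:
    print('No checkpoint found; the policy keeps its fresh initialization.')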
A couple of things to note:
In the case of DqnAgent, agent.policy and agent.collect_policy are essentially wrappers around the QNetwork. The snippet at the end of this answer demonstrates this (look at the comments about the state of the policy variable). It works because tensors in TF are shared across the runtime: when agent.train updates the weights of the agent's QNetwork, the same weights are implicitly updated in your policy variable's QNetwork as well. Strictly speaking, the policy's tensors are not updated at all; they simply are the same tensors as the ones in the agent.

A Checkpointer can track the training state, the policy state, and the replay buffer state. The replay buffer plays no role in saving the model itself, so if your goal is only to save the weights and restore them when needed, checkpointing the policy is enough. If, however, you want to interrupt training and pick it up again at another time, you should also save the replay buffer, since it is part of the training process. And if your only goal is to train the agent and then keep the best policy, you may want to use PolicySaver to save the agent's GreedyPolicy instead (see the sketch at the very end). A setup that checkpoints all three parts looks like this:
train_checkpointer = common.Checkpointer(ckpt_dir='first/dir',
                                         agent=tf_agent,  # tf_agent.TFAgent
                                         train_step=train_step,  # tf.Variable
                                         epoch_counter=epoch_counter,  # tf.Variable
                                         metrics=metric_utils.MetricsGroup(
                                             train_metrics, 'train_metrics'))
policy_checkpointer = common.Checkpointer(ckpt_dir='second/dir',
                                          policy=agent.policy)
rb_checkpointer = common.Checkpointer(ckpt_dir='third/dir',
                                      max_to_keep=1,
                                      replay_buffer=replay_buffer)  # TFUniformReplayBuffer
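To resume an interrupted run, recreate these checkpointers with the same directories and apply the restore explicitly. A minimal usage sketch based on the Checkpointer's initialize_or_restore method:

train_checkpointer.initialize_or_restore()
rb_checkpointer.initialize_or_restore()
policy_checkpointer.initialize_or_restore()

Finally, the snippet below shows why agent.policy keeps tracking the training: the policy is just a wrapper around the agent's QNetwork (watch the comments about the state of the policy variable):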
agent = DqnAgent(...)
policy = agent.policy  # Random initial policy ---> X

dataset = replay_buffer.as_dataset(...)
for data in dataset:
    experience, _ = data
    loss_agent_info = agent.train(experience=experience)
# The policy variable now holds a trained Policy object ---> Y, even though
# it was never reassigned
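If all you want to keep is the finished policy for inference, the PolicySaver round trip from the question already works; only the assignment back onto agent.policy fails, because policy is a read-only property on the agent (hence the AttributeError). A minimal sketch (policy_dir is a placeholder path; agent and tf_env are assumed to be the objects defined above):

import tensorflow as tf
from tf_agents.policies import policy_saver

# Save the agent's (greedy) policy as a SavedModel.
tf_policy_saver = policy_saver.PolicySaver(agent.policy)
tf_policy_saver.save(policy_dir)

# Reload it later for inference: use the loaded policy directly instead of
# assigning it to agent.policy.
loaded_policy = tf.saved_model.load(policy_dir)
time_step = tf_env.reset()
action_step = loaded_policy.action(time_step)  # act with the saved policy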