Python TensorFlow: how to reuse Adam optimizer variables?


After recently upgrading my TensorFlow version, I ran into an error that I cannot resolve:

Traceback (most recent call last):
  File "cross_train.py", line 177, in <module>
    train_network(use_gpu=True)
  File "cross_train.py", line 46, in train_network
    with tf.control_dependencies([s_opt.apply_gradients(s_grads), s_increment_step]):

...

ValueError: Variable image-conv1-layer/weights/Adam/ already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

  File "cross_train.py", line 34, in train_network
    with tf.control_dependencies([e_opt.apply_gradients(e_grads), e_increment_step]):
  File "cross_train.py", line 177, in <module>
    train_network(use_gpu=True)

But I'm not sure whether I can do the same thing for Adam. Any ideas? Help is much appreciated.

It turns out I don't need to instantiate two different Adam optimizers. I create just a single instance, and there is no name collision and no need to share variables. I use the same optimizer no matter which branch of the network is being updated:

    e_grads = opt.compute_gradients(e_loss)
    with tf.control_dependencies([opt.apply_gradients(e_grads), e_increment_step]):
        e_train = tf.no_op(name='english_train')
and

    s_grads = opt.compute_gradients(s_loss)
    with tf.control_dependencies([opt.apply_gradients(s_grads), s_increment_step]):
        s_train = tf.no_op(name='spanish_train')
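To make the fix above concrete, here is a minimal self-contained sketch of the single-optimizer pattern, assuming TensorFlow 1.x (written against `tf.compat.v1` so it also builds on a TF2 install). The placeholder input, the toy losses, and the variable `weights` are stand-ins for the real network; the point is that one Adam instance creates its slot variables (`m`, `v`) once per trainable variable, so the second `apply_gradients` call reuses them instead of trying to re-create `weights/Adam`:

```python
import tensorflow.compat.v1 as tf  # TF1-style graph API, works on TF2 installs

tf.disable_eager_execution()
tf.reset_default_graph()

# Stand-in inputs and losses for the two branches; the real network
# (image-conv1-layer etc.) is omitted here.
x = tf.placeholder(tf.float32, [None, 4])
w = tf.get_variable('weights', [4, 1])
e_loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))  # "English" branch loss
s_loss = tf.reduce_mean(tf.abs(tf.matmul(x, w)))     # "Spanish" branch loss

global_step = tf.train.get_or_create_global_step()
e_increment_step = tf.assign_add(global_step, 1)
s_increment_step = tf.assign_add(global_step, 1)

# One shared Adam instance for both branches.
opt = tf.train.AdamOptimizer(learning_rate=0.0001)

e_grads = opt.compute_gradients(e_loss)
with tf.control_dependencies([opt.apply_gradients(e_grads), e_increment_step]):
    e_train = tf.no_op(name='english_train')

# Second apply_gradients on the same instance reuses the existing
# m/v slot variables -- no "already exists" ValueError.
s_grads = opt.compute_gradients(s_loss)
with tf.control_dependencies([opt.apply_gradients(s_grads), s_increment_step]):
    s_train = tf.no_op(name='spanish_train')
```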

Interestingly, older versions of TensorFlow had no problem with two Adam instances, even with the colliding Adam variable names…

Did you set a name for the optimization op? `optimizer = tf.train.AdamOptimizer(learning_rate=0.0001, name='first_optimizer').minimize(loss)`
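A minimal sketch of that suggestion, assuming TensorFlow 1.x (via `tf.compat.v1`); the toy losses and the `'first_optimizer'`/`'second_optimizer'` names are illustrative placeholders. Giving each `AdamOptimizer` a distinct `name` gives its slot variables distinct names, so two instances no longer both try to create `weights/Adam`:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
tf.reset_default_graph()

x = tf.placeholder(tf.float32, [None, 4])
w = tf.get_variable('weights', [4, 1])
loss_a = tf.reduce_mean(tf.square(tf.matmul(x, w)))
loss_b = tf.reduce_mean(tf.abs(tf.matmul(x, w)))

# Distinct `name` arguments: slot variables become weights/first_optimizer
# and weights/second_optimizer instead of colliding on weights/Adam.
train_a = tf.train.AdamOptimizer(0.0001, name='first_optimizer').minimize(loss_a)
train_b = tf.train.AdamOptimizer(0.0001, name='second_optimizer').minimize(loss_b)
```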