Strange ordering of variable evaluation and loss in TensorFlow
I want to use tf.identity to copy the loss and the variables either before or after an optimization step:

Here is the former case:

1. Copy the current loss and variables (saving the variables together with the corresponding loss)
2. Run one optimization step (which changes the loss and the variable values)
3. Repeat

Here is the latter case:

1. Run one optimization step (which changes the loss and the variable values)
2. Copy the current loss and variables (saving the variables together with the corresponding loss)
3. Repeat

By "copy" I mean creating nodes in the computation graph that store the current values of the loss and the variables with tf.identity.

Somehow, this is what actually happens:

1. Copy the loss
2. Run one optimization step (which changes the loss and the variable values)
3. Copy the variables (their values no longer correspond to the loss saved in step 1)
4. Repeat

How can I get the loss and the variables copied together, either both before or both after the optimization step?
To test:

import numpy as np
import tensorflow as tf

x = tf.get_variable('x', initializer=np.array([1], dtype=np.float64))
loss = x * x
optim = tf.train.AdamOptimizer(1)

## Control Dependencies ##
loss_ident = tf.identity(loss)  # <-- copy loss
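The question's snippet breaks off here; a minimal reconstruction of how the rest of the test presumably looked, mirroring the structure of the working version further down (an assumption, not the original code):

x_ident = tf.identity(x)  # <-- copy variable
with tf.control_dependencies([loss_ident, x_ident]):
    train_op = optim.minimize(loss)

init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    for i in range(10):
        a, x1 = sess.run([loss_ident, x_ident])
        b, x2, _ = sess.run([loss_ident, x_ident, train_op])
        # as described above, the copied loss and the copied variable
        # end up straddling the update, so these do not both hold:
        assert np.allclose(a, b)
        assert np.allclose(x1, x2)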
I am not entirely sure why control dependencies do not work with plain tensors, but you can make this work with variables and tf.assign(). That is my workaround. As far as I understand, all you need is for the copies to happen before train_op, and from the few quick tests I did, this seems to work:
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
x = tf.get_variable('x', initializer=np.array([1], dtype=np.float64))
x_ident = tf.get_variable('x_ident', initializer=np.array([1], dtype=np.float64))
loss = x * x
loss_ident = tf.get_variable('loss', initializer=np.array([1.0]), dtype=tf.float64)
optim = tf.train.AdamOptimizer(1)

## Control Dependencies ##
loss_ident = tf.assign(loss_ident, loss, name='loss_assign')  # <-- copy loss
x_ident = tf.assign(x_ident, x, name='x_assign')  # <-- copy variable
with tf.control_dependencies([x_ident, loss_ident]):
    train_op = optim.minimize(loss)

## Run ##
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    for i in range(10):
        # step 1: run only the copies
        a, x1 = sess.run([loss_ident, x_ident])
        # step 2: run the copies together with the training op
        b, x2, _ = sess.run([loss_ident, x_ident, train_op])
        print('ab', a, b)
        print('x1x2', x1, x2)
        # thanks to the control dependencies, the copies in step 2 are
        # taken before the update and match the values from step 1
        assert np.allclose(a, b)
        assert np.allclose(x1, x2)
Hopefully, this is what you're looking for.
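A plausible explanation for the original behaviour (my assumption; the answer itself does not commit to one): tf.identity on a ref variable may simply forward the variable's underlying buffer instead of snapshotting it, so even though the identity op is forced to run before the update, the value you fetch reflects the buffer after the update; assigning into a separate variable, as above, takes a genuine snapshot. On the same reasoning, the question's "copy after the step" variant could be sketched by inverting the dependency and recreating the loss computation inside the scope so it reads the updated x (an untested sketch; loss_copy and x_copy are hypothetical fresh variables, created like loss_ident and x_ident above):

train_op = optim.minimize(loss)
with tf.control_dependencies([train_op]):
    # ops created inside this scope run after the update,
    # so x * x here reads the new value of x
    loss_after = tf.assign(loss_copy, x * x, name='loss_assign_after')
    x_after = tf.assign(x_copy, x, name='x_assign_after')

Fetching loss_after and x_after in a single sess.run would then perform one step and return the post-update loss and variable together.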