Python 3.x TensorFlow: "No gradients provided for any variable" error
I am trying to build a simple neural network using the reuse option, but I get a strange error. I don't understand where the problem is. Possibly I am not using mse correctly:
import tensorflow as tf
n_inputs = 8
x_ = tf.placeholder(tf.float32, [None, n_inputs])
l1 = tf.layers.dense(x_, 100, activation=tf.nn.relu, use_bias=True, name='l1', reuse=None)
l2 = tf.layers.dense(l1, 100, activation=tf.nn.relu, use_bias=True, name='l2', reuse=None)
l3 = tf.layers.dense(l2, 20, activation=tf.nn.relu, use_bias=True, name='l3', reuse=None)
y_ = tf.placeholder(tf.float32, [None, n_inputs])
w1 = tf.layers.dense(y_, 100, activation=tf.nn.relu, use_bias=True, name='l1', reuse=True)
w2 = tf.layers.dense(w1, 100, activation=tf.nn.relu, use_bias=True, name='l2', reuse=True)
w3 = tf.layers.dense(w2, 20, activation=tf.nn.relu, use_bias=True, name='l3', reuse=True)
z_ = tf.placeholder(tf.float32, [None, n_inputs])
u1 = tf.layers.dense(z_, 100, activation=tf.nn.relu, use_bias=True, name='l1', reuse=True)
u2 = tf.layers.dense(u1, 100, activation=tf.nn.relu, use_bias=True, name='l2', reuse=True)
u3 = tf.layers.dense(u2, 20, activation=tf.nn.relu, use_bias=True, name='l3', reuse=True)
mse1, _ = tf.metrics.mean_squared_error(l3, w3)
mse2, _ = tf.metrics.mean_squared_error(l3, u3)
cost = tf.subtract(mse1, mse2)
opts = tf.train.AdamOptimizer().minimize(cost)
sess = tf.InteractiveSession()
Error:
ValueError Traceback (most recent call last)
<ipython-input-4-0e3679c2a898> in <module>()
----> 1 __pyfile = open('''/tmp/py3823Cbm''');exec(compile(__pyfile.read(), '''/home/lpuggini/mlp/scratch/Kerberos/flow_ui.py''', 'exec'));__pyfile.close()
/home/lpuggini/mlp/scratch/Kerberos/flow_ui.py in <module>()
33 cost = tf.subtract(mse1, mse2)
34
---> 35 opts = tf.train.AdamOptimizer().minimize(cost)
36 sess = tf.InteractiveSession()
37
/home/lpuggini/MyApps/scientific_python_2_7/lib/python2.7/site-packages/tensorflow/python/training/optimizer.pyc in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
    320         "No gradients provided for any variable, check your graph for ops"
    321         " that do not support gradients, between variables %s and loss %s." %
--> 322         ([str(v) for _, v in grads_and_vars], loss))
    323
    324     return self.apply_gradients(grads_and_vars, global_step=global_step,
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'l1/kernel:0' shape=(8, 100) dtype=float32_ref>", "<tf.Variable 'l1/bias:0' shape=(100,) dtype=float32_ref>", "<tf.Variable 'l2/kernel:0' shape=(100, 100) dtype=float32_ref>", "<tf.Variable 'l2/bias:0' shape=(100,) dtype=float32_ref>", "<tf.Variable 'l3/kernel:0' shape=(100, 20) dtype=float32_ref>", "<tf.Variable 'l3/bias:0' shape=(20,) dtype=float32_ref>"] and loss Tensor("Sub:0", shape=(), dtype=float32).
tf.metrics are metrics, not losses. Metrics record running statistics over time; it makes no sense to differentiate through them, so no gradient path exists from their output back to your variables. Besides the core TF documentation on metrics, there is a good example here.
What you want is tf.losses, more specifically tf.losses.mean_squared_error.
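The fix can be sketched as follows: swap tf.metrics.mean_squared_error (which returns a (value, update_op) pair backed by local accumulator variables) for tf.losses.mean_squared_error (a plain differentiable tensor). The sketch uses tf.compat.v1 only so it also runs on a TensorFlow 2.x install; with the 1.x API in the question, plain tf works the same way:

```python
import tensorflow.compat.v1 as tf  # assumption: TF 2.x install; plain `tf` on 1.x
tf.disable_eager_execution()

n_inputs = 8

def net(x, reuse):
    # Three dense layers sharing one set of weights, as in the question.
    l1 = tf.layers.dense(x, 100, activation=tf.nn.relu, use_bias=True,
                         name='l1', reuse=reuse)
    l2 = tf.layers.dense(l1, 100, activation=tf.nn.relu, use_bias=True,
                         name='l2', reuse=reuse)
    return tf.layers.dense(l2, 20, activation=tf.nn.relu, use_bias=True,
                           name='l3', reuse=reuse)

x_ = tf.placeholder(tf.float32, [None, n_inputs])
y_ = tf.placeholder(tf.float32, [None, n_inputs])
z_ = tf.placeholder(tf.float32, [None, n_inputs])

l3 = net(x_, reuse=None)   # first call creates the variables
w3 = net(y_, reuse=True)   # later calls reuse them
u3 = net(z_, reuse=True)

# tf.losses.mean_squared_error returns an ordinary differentiable scalar,
# unlike the stateful running average produced by tf.metrics.
mse1 = tf.losses.mean_squared_error(l3, w3)
mse2 = tf.losses.mean_squared_error(l3, u3)
cost = tf.subtract(mse1, mse2)

# This now builds without the ValueError: gradients flow from `cost`
# back to every l1/l2/l3 kernel and bias.
opts = tf.train.AdamOptimizer().minimize(cost)
```

With this change minimize can find a gradient for every variable, because the loss is a pure function of the layer outputs rather than a metric's internal state.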