Python: Running a TensorFlow session fails due to Tensor data type and shape


I try to load the model and graph using the following:

import tensorflow as tf

# Rebuild the graph structure from the latest checkpoint's .meta file
saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path) + ".meta")
graph = tf.get_default_graph()
outputs = graph.get_tensor_by_name('output:0')
outputs = tf.cast(outputs, dtype=tf.float32)
X = graph.get_tensor_by_name('input:0')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
# Restore the trained weights into the session
if tf.train.checkpoint_exists(tf.train.latest_checkpoint(model_path)):
    saver.restore(sess, tf.train.latest_checkpoint(model_path))
    print(tf.train.latest_checkpoint(model_path) + "Session Loaded for Testing")
That works! But when I try to run the session with:

y_test_output = sess.run(outputs, feed_dict={X: x_test})
I get the following error:

Caused by op 'output', defined at:
  File "testing_reality.py", line 21, in <module>
    saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path)+".meta")
  File "C:\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1674, in import_meta_graph
    meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
  File "C:\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1696, in _import_meta_graph_with_return_elements
    **kwargs))
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
    return_elements=return_elements)
  File "C:\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def
    _ProcessNewOps(graph)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\importer.py", line 234, in _ProcessNewOps
    for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in _add_new_tf_operations
    for c_op in c_api_util.new_tf_operations(self)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in <listcomp>
    for c_op in c_api_util.new_tf_operations(self)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3299, in _create_op_from_tf_operation
    ret = Operation(c_op, self)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'output' with dtype float and shape [?,1]
         [[node output (defined at testing_reality.py:21)  = Placeholder[dtype=DT_FLOAT, shape=[?,1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

This happens when you try to evaluate a node in the graph that depends on the value of a placeholder; that is why you get an error saying a value must be fed for the placeholder. Look at this example:

tf.reset_default_graph()
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = a + b
d = a

with tf.Session() as sess:
    print(c.eval(feed_dict={a:1.0}))
# Error because in order to evaluate c we must have the value for b.

with tf.Session() as sess:
    print(d.eval(feed_dict={a:1.0}))
# It works because d is not dependent on b.
Now, in your case, you should not be evaluating the output placeholder. What you should evaluate is the operation the model uses to make predictions, while feeding a value into the X placeholder (assuming that is the placeholder you use to feed the model's input). On the other hand, I guess that during training you used the output placeholder to feed the labels, so there is no need to feed data into that placeholder at inference time.
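To make this concrete, here is a minimal sketch under assumed shapes and a toy linear model (only the 'input'/'output' placeholder names mirror the question): the 'output' placeholder only feeds labels into the loss, so the prediction op can be evaluated with just the input fed.

import tensorflow as tf

tf.reset_default_graph()

# Placeholders named as in the question: 'input' feeds features,
# 'output' feeds labels and is only needed when computing the loss.
X = tf.placeholder(tf.float32, shape=[None, 3], name="input")
y = tf.placeholder(tf.float32, shape=[None, 1], name="output")

W = tf.Variable(tf.zeros([3, 1]))
pred = tf.matmul(X, W, name="prediction")        # depends only on X
loss = tf.reduce_mean(tf.square(pred - y))       # depends on X and y

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Inference: only X needs a value; the 'output' placeholder is untouched.
    print(sess.run(pred, feed_dict={X: [[1.0, 2.0, 3.0]]}))
    # sess.run(loss, feed_dict={X: [[1.0, 2.0, 3.0]]}) would raise the same
    # "must feed a value for placeholder tensor 'output'" error.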

Based on your latest update:

By doing outputs = graph.get_tensor_by_name('output:0') you are loading the placeholder named output. That is not what you need; what you need is the operation that slices the outputs. In the part of the code where you create the graph, do the following:

outputs = tf.identity(outputs[:,n_steps-1,:], name="prediction")
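For context, here is a sketch of where that naming step could sit in the graph-construction code. The RNN layout and the dimensions are assumptions on my part; only the 'input'/'output'/'prediction' names matter.

import tensorflow as tf

n_steps, n_inputs, n_neurons, n_outputs = 20, 1, 100, 1   # assumed dimensions

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs], name="input")
y = tf.placeholder(tf.float32, [None, n_outputs], name="output")   # labels only

cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
outputs = tf.layers.dense(rnn_outputs, n_outputs)

# Give the inference tensor a unique name so it can be retrieved later
# with graph.get_tensor_by_name('prediction:0').
prediction = tf.identity(outputs[:, n_steps - 1, :], name="prediction")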
Then, when loading the model, load these two tensors:

X = graph.get_tensor_by_name('input:0')
prediction = graph.get_tensor_by_name('prediction:0')
Finally, to get the prediction for the desired input:

sess = tf.Session()
sess.run(tf.global_variables_initializer())   
sess.run(prediction, feed_dict={X: x_test})
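One caveat worth adding (my note, continuing the snippets above rather than part of the original answer): running only the initializer leaves the variables at freshly initialized values, so the checkpoint still has to be restored before requesting predictions, just as in the loading code at the top.

sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Restore the trained weights; without this step 'prediction' would be
# computed from randomly initialized variables.
saver.restore(sess, tf.train.latest_checkpoint(model_path))
y_test_output = sess.run(prediction, feed_dict={X: x_test})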

I am not asking for debugging; I just want to know why this error appears if everything is loaded correctly.
That does not seem like a correct approach. tf.train.latest_checkpoint(model_path) will load a computation graph that we neither have nor have enough information to reproduce. Also, x_test in the line that causes the error is not defined anywhere in the question.
From the graph-creation code it looks like you want to retrieve the tensor holding the inference output, not the placeholder named output. You must give it a unique operation name so it can be retrieved later.
I am running the session from a socket server. I even tried declaring outputs as a global, but I still get the error.
That does not matter at all. Please see my updated answer.
I tried using outputs = graph.get_operation_by_name('output') instead of the tensor and got the same error. I do not know what mistake I am making.
Could you include the graph creation in the question?
I have edited my question. Let me know if anything else is needed.
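Regarding the get_operation_by_name attempt mentioned in the comments, here is a short sketch of the difference between the two lookups (continuing from the loading code above; the outcome is the same either way).

op = graph.get_operation_by_name('output')      # a tf.Operation (the placeholder op)
tensor = graph.get_tensor_by_name('output:0')   # the tf.Tensor that op produces

# Fetching either one still asks the runtime to produce the placeholder's
# value, so the same "must feed a value for placeholder tensor 'output'"
# error is raised unless it is fed. The fix is to fetch 'prediction:0':
y_test_output = sess.run(graph.get_tensor_by_name('prediction:0'),
                         feed_dict={X: x_test})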