Python - How to retrieve the predicted labels in TensorFlow's CIFAR-10 example?

Tags: python, machine-learning, tensorflow, deep-learning

I'm new to TensorFlow and am currently trying to use the CIFAR-10 example on a different dataset that has no labels. In the evaluation part of the example, it only outputs the prediction accuracy. I'd like to know how to modify this code so that it outputs the predicted label for each test case.

Here is the code from the tutorial:

# Imports used by this excerpt (FLAGS and the cifar10 module are defined
# in the full cifar10_eval.py script from the TensorFlow models repo).
from datetime import datetime
import math
import time

import numpy as np
import tensorflow as tf


def eval_once(saver, summary_writer, top_k_op, summary_op):
  """Run Eval once.

  Args:
    saver: Saver.
    summary_writer: Summary writer.
    top_k_op: Top K op.
    summary_op: Summary op.
  """
  with tf.Session() as sess:
    ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
    if ckpt and ckpt.model_checkpoint_path:
      # Restores from checkpoint
      saver.restore(sess, ckpt.model_checkpoint_path)
      # Assuming model_checkpoint_path looks something like:
      #   /my-favorite-path/cifar10_train/model.ckpt-0,
      # extract global_step from it.
      global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
    else:
      print('No checkpoint file found')
      return

    # Start the queue runners.
    coord = tf.train.Coordinator()
    try:
      threads = []
      for qr in tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS):
        threads.extend(qr.create_threads(sess, coord=coord, daemon=True,
                                         start=True))

      num_iter = int(math.ceil(FLAGS.num_examples / FLAGS.batch_size))
      true_count = 0  # Counts the number of correct predictions.
      total_sample_count = num_iter * FLAGS.batch_size
      step = 0
      while step < num_iter and not coord.should_stop():
        predictions = sess.run([top_k_op])
        true_count += np.sum(predictions)
        step += 1

      # Compute precision @ 1.
      precision = true_count / total_sample_count
      print('%s: precision @ 1 = %.3f' % (datetime.now(), precision))

      summary = tf.Summary()
      summary.ParseFromString(sess.run(summary_op))
      summary.value.add(tag='Precision @ 1', simple_value=precision)
      summary_writer.add_summary(summary, global_step)
    except Exception as e:  # pylint: disable=broad-except
      coord.request_stop(e)

    coord.request_stop()
    coord.join(threads, stop_grace_period_secs=10)


def evaluate():
  """Eval CIFAR-10 for a number of steps."""
  with tf.Graph().as_default() as g:
    # Get images and labels for CIFAR-10.
    eval_data = FLAGS.eval_data == 'test'
    images, labels = cifar10.inputs(eval_data=eval_data)

    # Build a Graph that computes the logits predictions from the
    # inference model.
    logits = cifar10.inference(images)

    # Calculate predictions.
    top_k_op = tf.nn.in_top_k(logits, labels, 1)

    # Restore the moving average version of the learned variables for eval.
    variable_averages = tf.train.ExponentialMovingAverage(
        cifar10.MOVING_AVERAGE_DECAY)
    variables_to_restore = variable_averages.variables_to_restore()
    saver = tf.train.Saver(variables_to_restore)

    # Build the summary operation based on the TF collection of Summaries.
    summary_op = tf.summary.merge_all()

    summary_writer = tf.summary.FileWriter(FLAGS.eval_dir, g)

    while True:
      eval_once(saver, summary_writer, top_k_op, summary_op)
      if FLAGS.run_once:
        break
      time.sleep(FLAGS.eval_interval_secs)


def main(argv=None):  # pylint: disable=unused-argument
  cifar10.maybe_download_and_extract()
  if tf.gfile.Exists(FLAGS.eval_dir):
    tf.gfile.DeleteRecursively(FLAGS.eval_dir)
  tf.gfile.MakeDirs(FLAGS.eval_dir)
  evaluate()
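One way to get predicted labels (a sketch, not from the original post, so treat the op name `predicted_op` as hypothetical): `logits` has shape `[batch_size, num_classes]`, so the predicted label for each image is the index of the largest logit per row. In `evaluate()` one could add `predicted_op = tf.argmax(logits, axis=1)` alongside `top_k_op`, pass it into `eval_once`, and call `sess.run(predicted_op)` inside the while loop to collect one label per example. Unlike `tf.nn.in_top_k`, argmax needs no ground-truth labels, which matters for an unlabeled dataset. The core argmax step, shown with plain NumPy and made-up logit values:

```python
import numpy as np

# Hypothetical logits for a batch of 3 images over 10 CIFAR-10 classes.
logits = np.array([
    [0.1, 2.5, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2],
    [3.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 9.9],
])

# The predicted label is the index of the largest logit in each row;
# this is exactly what tf.argmax(logits, axis=1) computes in-graph.
predicted_labels = logits.argmax(axis=1)
print(predicted_labels)  # [1 0 9]
```

Accumulating the per-batch results across the `num_iter` loop iterations (e.g. appending each batch's labels to a Python list) would then give predictions for the whole test set.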