
Python: How do I write summary logs for logistic regression on MNIST data using TensorFlow?


I am new to tensorflow and tensorboard. This is my first time implementing logistic regression on MNIST data with tensorflow. I have successfully run logistic regression on the data, and now I am trying to log summaries to a log file using tf.summary.FileWriter.

Below is the code that affects the summary parameters:

x = tf.placeholder(dtype=tf.float32, shape=(None, 784))
y = tf.placeholder(dtype=tf.float32, shape=(None, 10)) 

loss_op = tf.losses.mean_squared_error(y, pred)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

tf.summary.scalar("loss", loss_op)
tf.summary.scalar("training_accuracy", accuracy_op)
summary_op = tf.summary.merge_all()
This is how I train my model:

with tf.Session() as sess:   
    sess.run(init)
    writer = tf.summary.FileWriter('./graphs', sess.graph)

    for iter in range(50):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        _, loss, tr_acc,summary = sess.run([optimizer_op, loss_op, accuracy_op, summary_op], feed_dict={x: batch_x, y: batch_y})
        summary = sess.run(summary_op, feed_dict={x: batch_x, y: batch_y})
        writer.add_summary(summary, iter)
After adding the summary line to fetch the merged summary, I get the error below:


InvalidArgumentError (see above for traceback): 
You must feed a value for placeholder tensor 'Placeholder_37' 
with dtype float and shape [?,10]

The error points to y:

y = tf.placeholder(dtype=tf.float32, shape=(None, 10))

Can you tell me what I am doing wrong?

From the error message it looks like you are running the code in some kind of Jupyter environment. Try restarting the kernel/runtime and then running everything again. Running graph-mode code twice does not work in Jupyter: each run adds a fresh set of placeholders to the same default graph, and tf.summary.merge_all() also picks up summaries attached to the stale placeholders from the previous run, which can never be fed. If the code below returns no error the first time I run it, and I then run it a second time (without restarting the kernel/runtime), it crashes in the same way yours does.

I was too lazy to check it against a real model, so I just set pred = y ;) But the code below does not crash, so you should be able to adapt it to your needs. I have tested it on Google Colab.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

x = tf.placeholder(dtype=tf.float32, shape=(None, 784), name='x-input')
y = tf.placeholder(dtype=tf.float32, shape=(None, 10), name='y-input')

# Dummy "model": predictions are just the labels themselves.
pred = y
loss_op = tf.losses.mean_squared_error(y, pred)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Register the scalars in a named collection, so that merge_all(key=...)
# merges only these summaries and not anything left over in the graph.
with tf.name_scope('summaries'):
  tf.summary.scalar("loss", loss_op, collections=["train_summary"])
  tf.summary.scalar("training_accuracy", accuracy_op, collections=["train_summary"])

with tf.Session() as sess:
  summary_op = tf.summary.merge_all(key='train_summary')
  train_writer = tf.summary.FileWriter('./graphs', sess.graph)
  sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])

  for iter in range(50):
    batch_x, batch_y = mnist.train.next_batch(1)
    loss, acc, summary = sess.run([loss_op, accuracy_op, summary_op],
                                  feed_dict={x: batch_x, y: batch_y})
    train_writer.add_summary(summary, iter)
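Once the loop has written an events file, the logged scalars can be inspected by pointing TensorBoard at the same log directory that was passed to the FileWriter:

```shell
tensorboard --logdir ./graphs
```

Then open the URL it prints (by default http://localhost:6006) in a browser.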

Thanks, MP. Sorry for the late reply. Somehow, with your suggestion, I was finally able to get the final result. However, I don't understand why we need to restart the kernel every time we run code involving a tensorflow session?

I think it's because once GPU memory is allocated, it cannot be released. You are also not removing the graph from memory, so it stays stuck there. If you used a different memory-allocation mode for the graph (allow_growth=True), you could probably run it a few times without restarting the kernel before hitting an OOM problem, but I think it's better to restart it every time - AFAIR there is a keyboard shortcut for it (0, 0).
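The allow_growth option mentioned above is set through the session config; a minimal sketch using the TF1 API from this thread:

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory incrementally as needed instead
# of grabbing nearly all of it up front, so repeated runs in the same
# process are less likely to hit an immediate OOM.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    pass  # build and run the graph as usual
```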