Python: Why is my neural network's test-set cross-entropy fine, but the accuracy of the outputs against the labels always only 10%? (MNIST handwritten digits)

Tags: python, tensorflow, neural-network, deep-learning, mnist

I am running my code in Spyder. The test-set cross-entropy looks reasonable, but the test-set accuracy is always very low. Here is my code; I am using MNIST. Any suggestions for how I can improve the performance?

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import tensorflow as tf
import numpy as np
from tensorflow.contrib.layers import fully_connected
from tensorflow.examples.tutorials.mnist import input_data

# Placeholders for the flattened 28x28 images and the one-hot labels
x = tf.placeholder(dtype=tf.float32, shape=[None, 784])
y = tf.placeholder(dtype=tf.float32, shape=[None, 10])
test_x = tf.placeholder(dtype=tf.float32, shape=[None, 784])
test_y = tf.placeholder(dtype=tf.float32, shape=[None, 10])

mnist = input_data.read_data_sets("/home/xuenzhu/mnist_data", one_hot=True)

# Two hidden layers with ReLU activations
hidden1 = fully_connected(x, 100, activation_fn=tf.nn.relu, weights_initializer=tf.random_normal_initializer())

hidden2 = fully_connected(hidden1, 100, activation_fn=tf.nn.relu, weights_initializer=tf.random_normal_initializer())

# Output layer, also with a ReLU activation
outputs = fully_connected(hidden2, 10, activation_fn=tf.nn.relu, weights_initializer=tf.random_normal_initializer())

# Softmax cross-entropy loss, averaged over the batch
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=outputs)
reduce_mean_loss = tf.reduce_mean(loss)

# Accuracy: fraction of examples whose predicted class matches the label
equal_result = tf.equal(tf.argmax(outputs, 1), tf.argmax(y, 1))
cast_result = tf.cast(equal_result, dtype=tf.float32)
accuracy = tf.reduce_mean(cast_result)

train_op = tf.train.AdamOptimizer(0.001).minimize(reduce_mean_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
    for i in range(10000):
        xs, ys = mnist.train.next_batch(128)
        sess.run(train_op, feed_dict={x: xs, y: ys})
        if i % 1000 == 0:
            print(sess.run(equal_result, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
            print(sess.run(reduce_mean_loss, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
            print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
Try changing this:

for i in range(10000): — try increasing this value to more than 10k. Try 100k or even higher.

After that, you should be able to see the accuracy improve.
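As a minimal sketch of that suggestion (keeping the rest of the question's training loop unchanged; 100000 is only an example value, not taken from the question):

for i in range(100000):  # was range(10000): run many more optimisation steps
    xs, ys = mnist.train.next_batch(128)
    sess.run(train_op, feed_dict={x: xs, y: ys})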

Using a ReLU activation right before applying softmax_cross_entropy is not useful. Change the activation function of the last fully connected layer to None and you will get good accuracy:

from tensorflow.examples.tutorials.mnist import input_data

import tensorflow as tf
from tensorflow.contrib.layers import fully_connected

x = tf.placeholder(dtype=tf.float32, shape=[None, 784])
y = tf.placeholder(dtype=tf.float32, shape=[None, 10])
test_x = tf.placeholder(dtype=tf.float32, shape=[None, 784])
test_y = tf.placeholder(dtype=tf.float32, shape=[None, 10])

mnist = input_data.read_data_sets("/home/xuenzhu/mnist_data", one_hot=True)

hidden1 = fully_connected(x, 100, activation_fn=tf.nn.relu, weights_initializer=tf.random_normal_initializer())

hidden2 = fully_connected(hidden1, 100, activation_fn=tf.nn.relu, weights_initializer=tf.random_normal_initializer())

# The only change: no activation on the output layer, so `outputs` are raw logits
outputs = fully_connected(hidden2, 10, activation_fn=None, weights_initializer=tf.random_normal_initializer())

loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=outputs)
reduce_mean_loss = tf.reduce_mean(loss)

equal_result = tf.equal(tf.argmax(outputs, 1), tf.argmax(y, 1))
cast_result = tf.cast(equal_result, dtype=tf.float32)
accuracy = tf.reduce_mean(cast_result)

train_op = tf.train.AdamOptimizer(0.001).minimize(reduce_mean_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
    for i in range(10000):
        xs, ys = mnist.train.next_batch(128)
        sess.run(train_op, feed_dict={x: xs, y: ys})
        if i % 1000 == 0:
            print(sess.run(equal_result, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
            print(sess.run(reduce_mean_loss, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
            print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))

Thanks for the answer. Unfortunately, I don't think that's it. I tried running the OP's code with for i in range(100000), but the accuracy never got above 0.5. Update: @marco_gorelli I tried changing the output layer's activation function to activation_fn=tf.nn.sigmoid, and with otherwise exactly the same code as in the OP I reached over 90% accuracy.

"The test-set cross-entropy is correct" — what does that mean? How do you know it is correct?

Thank you very much. I took your suggestion and the accuracy reached 95%. But I don't understand why changing relu to None in the last layer has such a good effect. Can you explain it to me?

Glad to help. ReLU sets some of its inputs to zero. That is fine in the hidden layers, but in the output layer you want to keep whatever the previous layer passes on.
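To make that last point concrete, here is a minimal, self-contained NumPy sketch (not part of the original thread; the logit values are made up) of what a ReLU in front of softmax does: every negative logit is clipped to zero, so the classes behind those logits become indistinguishable from one another and receive no useful gradient.

import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D vector of logits
    e = np.exp(z - z.max())
    return e / e.sum()

# hypothetical logits for the 10 MNIST classes
logits = np.array([2.0, -1.0, 0.5, -3.0, -0.2, 1.0, -4.0, -2.5, -0.7, -1.5])

print(softmax(logits))                 # every class gets a distinct probability
print(softmax(np.maximum(logits, 0)))  # after ReLU, the 7 negative logits all become 0,
                                       # so those 7 classes share one identical probability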