
Python digit recognition problem

Tags: python, python-3.x, tensorflow, neural-network, artificial-intelligence

I am completely new to neural networks and artificial intelligence. I was following a blog to build a digit recognition system.

Here is the error I get:

 File "main.py", line 61, in <module>
     X: batch_x, Y: batch_y, keep_prob: dropout
   File "C:\Users\umara\AppData\Local\Programs\Python\Python37\lib\site- 
   packages\tensorflow\python\client\session.py", line 929, in run
     run_metadata_ptr)
   File "C:\Users\umara\AppData\Local\Programs\Python\Python37\lib\site- 
   packages\tensorflow\python\client\session.py", line 1128, in _run
     str(subfeed_t.get_shape())))
 ValueError: Cannot feed value of shape (128, 28, 28, 1) for Tensor 'Placeholder:0', which has shape '(?, 784)'
Code:

import numpy as np
from PIL import Image
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

#Import data from MNIST DATA SET and save it in a folder
mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)
#n_train = [d.reshape(28, 28, 1) for d in mnist.train.num_examples]
n_train = mnist.train.num_examples
#train_features = 
#test_features = [d.reshape(28, 28, 1) for d in mnist.test.images]
#n_validation = [d.reshape(28, 28, 1) for d in mnist.validation.num_examples]
n_validation = mnist.validation.num_examples
n_test = mnist.test.num_examples
n_input = 784
n_hidden1 = 522
n_hidden2 = 348
n_hidden3 = 232
n_output = 10
learning_rate = 1e-4
n_iterations = 1000
batch_size = 128
dropout = 0.5
#X = tf.placeholder(tf.float32,[None, 28, 28, 1])
#X = tf.placeholder("float", [None, n_input])
X = tf.placeholder(tf.float32, [None, 784])
#X = tf.reshape(X , [-1 , 784])
Y = tf.placeholder("float", [None, n_output])
keep_prob = tf.placeholder(tf.float32)
weights = {
    'w1': tf.Variable(tf.truncated_normal([n_input, n_hidden1], stddev=0.1)),
    'w2': tf.Variable(tf.truncated_normal([n_hidden1, n_hidden2], stddev=0.1)),
    'w3': tf.Variable(tf.truncated_normal([n_hidden2, n_hidden3], stddev=0.1)),
    'out': tf.Variable(tf.truncated_normal([n_hidden3, n_output], stddev=0.1)),
}
biases = {
    'b1': tf.Variable(tf.constant(0.1, shape=[n_hidden1])),
    'b2': tf.Variable(tf.constant(0.1, shape=[n_hidden2])),
    'b3': tf.Variable(tf.constant(0.1, shape=[n_hidden3])),
    'out': tf.Variable(tf.constant(0.1, shape=[n_output]))
}
layer_1 = tf.add(tf.matmul(X, weights['w1']), biases['b1'])
layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2'])
layer_3 = tf.add(tf.matmul(layer_2, weights['w3']), biases['b3'])
layer_drop = tf.nn.dropout(layer_3, keep_prob)
output_layer = tf.matmul(layer_drop, weights['out']) + biases['out']
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(
        labels=Y, logits=output_layer
        ))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_pred = tf.equal(tf.argmax(output_layer, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(n_iterations):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
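    # note: next_batch() already returns batch_x flattened to shape (128, 784);
    # the reshape below turns it into (128, 28, 28, 1), which no longer matches
    # the (None, 784) placeholder X and is what triggers the ValueError above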
    batch_x = np.reshape(batch_x,(-1,28,28,1))
    sess.run(train_step, feed_dict={
        X: batch_x, Y: batch_y, keep_prob: dropout
        })

    # print loss and accuracy (per minibatch)
    if i % 100 == 0:
        minibatch_loss, minibatch_accuracy = sess.run(
            [cross_entropy, accuracy],
            feed_dict={X: batch_x, Y: batch_y, keep_prob: 1.0}
            )
        print(
            "Iteration",
            str(i),
            "\t| Loss =",
            str(minibatch_loss),
            "\t| Accuracy =",
            str(minibatch_accuracy)
            )
test_accuracy = sess.run(accuracy, feed_dict={X: mnist.test.images, Y: mnist.test.labels, keep_prob: 1.0})
print("\nAccuracy on test set:", test_accuracy)
img = np.invert(Image.open("n55.png").convert('L')).ravel()
prediction = sess.run(tf.argmax(output_layer, 1), feed_dict={X: [img]})
print ("Prediction for test image:", np.squeeze(prediction))
Your error:

 ValueError: Cannot feed value of shape (128, 28, 28, 1) for Tensor 'Placeholder:0', which has shape '(?, 784)'
means that the placeholder expects values of shape (?, 784), but you fed it 128 images of shape (28, 28, 1).

28 * 28 = 784, so the pixel data just needs to be flattened.
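You can verify the shape arithmetic quickly in NumPy (the array below is a made-up stand-in for a batch, only to illustrate the shapes):

import numpy as np

batch = np.zeros((128, 28, 28, 1))   # same shape as the batch being fed
flat = batch.reshape(-1, 784)        # 28 * 28 * 1 = 784 values per image
print(flat.shape)                    # (128, 784) -- matches the placeholder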

So try reshaping the data to (128, 784), for example like this:

# .images is the array of pixel data; .num_examples is just an integer count
train_features = [d.reshape(784) for d in mnist.train.images]
test_features = [d.reshape(784) for d in mnist.test.images]
validation_features = [d.reshape(784) for d in mnist.validation.images]
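Since the error is raised at the feed_dict in the training loop, the same idea applies there: keep each batch flat (or simply drop the extra reshape, because next_batch() already returns flattened images). A minimal sketch of that loop, reusing the names from the question's code:

for i in range(n_iterations):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    # batch_x is already (128, 784); if it ever arrives as (128, 28, 28, 1),
    # flatten it back before feeding the (None, 784) placeholder
    batch_x = batch_x.reshape(-1, 784)
    sess.run(train_step, feed_dict={X: batch_x, Y: batch_y, keep_prob: dropout})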
If you convert the list to a NumPy array, you can print its shape:

import numpy as np
print(np.array(train_features).shape)
It should print:
(num_samples, 784)
