
Python: TensorFlow with an MNIST-trained model always prints the wrong digit


I'm using TensorFlow with Python for text recognition. When I try digit recognition, training works fine, but when I restore the model and use it, there are no errors, yet the prediction is always wrong. Below is my code for training and for using the model. Can anyone point out what's going wrong?

Training:

import input_data
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
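# read_data_sets loads MNIST with pixel values scaled to floats in [0, 1]
# and, with one_hot=True, one-hot encoded labels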
mn = input_data.read_data_sets("tmp/data", one_hot=True)
training_epoch = 10000
learning_rate = 0.001
batch_size = 20000
display_step = 1
n_hidden1 = 512
n_hidden2 = 512
input_size = 784
n_class = 10
x = tf.placeholder("float", [None, input_size])
y = tf.placeholder("float", [None, n_class])
h = tf.Variable(tf.random_normal([input_size, n_hidden1]))
layer1_bias = tf.Variable(tf.random_normal([n_hidden1]))
layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x,h),layer1_bias))
w = tf.Variable(tf.random_normal([n_hidden1, n_hidden2]))
layer2_bias = tf.Variable(tf.random_normal([n_hidden2]))
layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1,w),layer2_bias))
output = tf.Variable(tf.random_normal([n_hidden2, n_class]))
bias_output = tf.Variable(tf.random_normal([n_class]))
output_layer = tf.matmul(layer_2, output) + bias_output
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=output_layer))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
avg_set = []
epoch_set = []
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    saver = tf.train.Saver()
    for epoch in range(training_epoch):
        avg_cost = 0.
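        # with the standard 55,000-image MNIST train split, int(55000/20000) = 2,
        # so only 2 batches run per epoch and the remaining examples are dropped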
        batch_total = int(mn.train.num_examples/batch_size)
        for i in range(batch_total):
            batch_x, batch_y = mn.train.next_batch(batch_size)
            print(batch_x.shape)
            sess.run(optimizer, feed_dict={x:batch_x, y:batch_y})
            avg_cost += sess.run(cost, feed_dict={x:batch_x, y:batch_y})/batch_total            
        if(epoch % display_step == 0):
            print("Epoch:%d " % (epoch), "cost:", "{:.9f}".format(avg_cost))
        avg_set.append(avg_cost)
        epoch_set.append(epoch+1)
    print("Training finished")
    plt.plot(epoch_set,avg_set, 'o', label='MLP Training phase')
    plt.ylabel('cost')
    plt.xlabel('epoch')
    plt.legend()
    plt.show()
    correct_prediction = tf.equal(tf.argmax(output_layer, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Model Accuracy:", accuracy.eval({x: mn.test.images, y: mn.test.labels}))
    saver.save(sess, "model-batchsize-20000-epoch-10000-learningrate-0.001/tf_mlp_model.ckpt")
Testing:

import numpy as np
import tensorflow as tf
import input_data
import cv2
import os
dir = os.path.dirname(os.path.realpath(__file__))
img = cv2.imread('6-1.png')
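# note: astype returns a new array; the result of the next line is discarded,
# so img keeps its original uint8 values in [0, 255]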
img.astype("float")
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = img.flatten()
img = np.expand_dims(img, axis=0)
n_hidden1 = 512
n_hidden2 = 512
input_size = 784
n_class = 10
x = tf.placeholder("float", [1, input_size])
y = tf.placeholder("float", [None, n_class])
h = tf.Variable(tf.random_normal([input_size, n_hidden1]))
layer1_bias = tf.Variable(tf.random_normal([n_hidden1]))
layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x,h),layer1_bias))
w = tf.Variable(tf.random_normal([n_hidden1, n_hidden2]))
layer2_bias = tf.Variable(tf.random_normal([n_hidden2]))
layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1,w),layer2_bias))
output = tf.Variable(tf.random_normal([n_hidden2, n_class]))
bias_output = tf.Variable(tf.random_normal([n_class]))
output_layer = tf.matmul(layer_2, output) + bias_output
with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, dir + "/model-batchsize-20000-epoch-10000-learningrate-0.001/tf_mlp_model.ckpt")
    pred = tf.argmax(sess.run(output_layer, feed_dict={x:img}), 1)
    print(pred.eval())
Test output:

[2]
What does sess.run(output_layer, feed_dict={x:img}) return?

Edit 1: I forgot to mention that the actual digit is 6, not 2. The image is a 28x28 screenshot taken on my MacBook, link:
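A detail worth checking here: read_data_sets serves white-on-black digits scaled to [0, 1], while cv2.imread returns raw 0-255 values, and a screenshot is usually black-on-white. A minimal preprocessing sketch that would match the training distribution (the file name 6-1.png comes from the question; the resize and inversion are assumptions about the screenshot):

import cv2
import numpy as np
img = cv2.imread('6-1.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.resize(img, (28, 28))               # guarantee a 28x28 input
img = img.astype("float32") / 255.0           # 0-255 ints -> [0, 1] floats, like read_data_sets
img = 1.0 - img                               # invert so the digit is white on black, like MNIST
img = np.expand_dims(img.flatten(), axis=0)   # shape (1, 784), matching the placeholder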

Edit 2: For the digit 2, it returns:

[3]

It turns out this is normal behavior for a plain MLP. I should have used a convolutional neural network instead. Problem solved.
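For reference, here is a minimal sketch of such a CNN in the same TensorFlow 1.x style as the code above; the filter counts, dense width, and training loop are illustrative assumptions, not the exact model the poster switched to:

import tensorflow as tf
import input_data
mn = input_data.read_data_sets("tmp/data", one_hot=True)
x = tf.placeholder("float", [None, 784])
y = tf.placeholder("float", [None, 10])
x_img = tf.reshape(x, [-1, 28, 28, 1])              # NHWC layout for the conv layers
# two conv/pool stages, then a small dense head
conv1 = tf.layers.conv2d(x_img, 32, 5, padding="same", activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, 2, 2)        # 28x28 -> 14x14
conv2 = tf.layers.conv2d(pool1, 64, 5, padding="same", activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, 2, 2)        # 14x14 -> 7x7
flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
dense = tf.layers.dense(flat, 512, activation=tf.nn.relu)
logits = tf.layers.dense(dense, 10)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        batch_x, batch_y = mn.train.next_batch(100)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
    correct = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, "float"))
    print("CNN accuracy:", accuracy.eval({x: mn.test.images, y: mn.test.labels}))

The conv/pool stages look at local 5x5 neighborhoods instead of treating the image as an unordered vector of 784 pixels, which makes the model far more tolerant of digits that are shifted or styled differently from the training set.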

Have you sorted it out? SimpleHTR is a similar project on GitHub.