Large output values when predicting on the MNIST database in TensorFlow

After training the network, I cannot get a sensible result for a test example. This is the standard multilayer_perceptron.py example from the documentation.

I tried to get the result this way:

examples_to_show = 5
y_result = sess.run(y_pred, feed_dict={x:mnist.test.images[:examples_to_show]})
print("y_result=",y_result)
I expected something like [0 0 1 0 0 0 0 0 0 0], but instead I receive unclear numbers.

In [20]:
'''
A Multilayer Perceptron implementation example using TensorFlow library.
This example is using the MNIST database of handwritten digits
Author: Aymeric Damien
'''

In [21]:
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf

Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

In [22]:
# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1
# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

In [23]:
# Create model
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer

In [24]:
# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost =  tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

The designer for predictions!!!
In [25]:

# Prediction
y_pred = pred

In [30]:
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                          y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=",
                  "{:.9f}".format(avg_cost))
    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

    # We will try to receive result of training!!!
    examples_to_show = 5
    y_result = sess.run(y_pred, feed_dict={x: mnist.test.images[:examples_to_show]})
    print("y_result=",y_result)


Epoch: 0001 cost= 142.664078834
Epoch: 0002 cost= 37.176684845
Epoch: 0003 cost= 23.608409217
Epoch: 0004 cost= 16.678811304
Epoch: 0005 cost= 12.175642554
Epoch: 0006 cost= 9.083989911
Epoch: 0007 cost= 6.624555320
Epoch: 0008 cost= 4.970751049
Epoch: 0009 cost= 3.595181121
Epoch: 0010 cost= 2.671157273
Epoch: 0011 cost= 2.032964239
Epoch: 0012 cost= 1.588672840
Epoch: 0013 cost= 1.133152580
Epoch: 0014 cost= 0.805134769
Epoch: 0015 cost= 0.689760053
Optimization Finished!
Accuracy: 0.941

y_result= [[ -203.50767517  -437.82525635   186.90861511   590.15588379  -471.18536377  -283.88424683 -1150.14709473  1022.75799561  -391.6159668    432.9206543 ]
 [ -855.87487793     6.88715792   903.70776367   252.00227356 -1407.09313965   441.29104614   344.09405518 -1691.98535156    40.62039566 -1391.43688965]
 [ -244.32698059   618.91705322    12.79210854   -36.14464951    -8.12554073   183.12348938    50.32661057   147.05378723   152.9332428   -210.40829468]
 [ 1091.7199707   -919.26574707  -333.54571533  -953.7399292  -1072.82226562    73.99294281   305.2588501   -166.91053772  -985.14654541   452.14318848]
 [  200.62698364    89.34638214  -280.01904297  -342.19534302  1240.4128418    229.24633789  -424.91091919   298.81100464  -194.70623779   934.27703857]]

Shouldn't the result be y_result = [0 0 1 0 0 0 0 0 0 0]??? Why isn't it?

Your y_result is computed here: out_layer = tf.matmul(layer_2, weights['out']) + biases['out']. Obviously it is not a one-hot vector, but a matrix or a vector (depending on layer_2 and weights['out']). Looking at your result, it is a matrix.
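For reference, here is a minimal sketch (assuming numpy is available as np) of how such a matrix of raw logits is reduced to predicted class indices:

import numpy as np

# Each row of y_result holds 10 raw logits, one per digit class.
# The predicted class for each example is the index of the largest logit.
predicted_classes = np.argmax(y_result, axis=1)
print("predicted classes:", predicted_classes)  # e.g. [7 2 1 0 4] for the first five test images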

Your pred has no output activation, so it does not convert the logits into probabilities. Apply tf.nn.softmax(pred) and use that as the prediction. Remember not to pass the softmaxed tensor to softmax_cross_entropy_with_logits(), because that function applies softmax internally. You can change your code to:

# Construct model
logits = multilayer_perceptron(x, weights, biases)
cost =  tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

# Apply softmax to obtain probabilities
pred = tf.nn.softmax(logits)
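With that change, each row of the prediction sums to 1. A sketch of how the test cell might then look, assuming it runs inside the same with tf.Session() as sess: block after training:

# Probabilities for the first 5 test images; each row now sums to 1.
probs = sess.run(pred, feed_dict={x: mnist.test.images[:5]})
# The predicted digit is the class with the highest probability.
print("predicted digits:", probs.argmax(axis=1))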

Yes, following examples_to_show = 5 examples, it is a matrix of vectors. But the network is trained on pure one-hot vectors like [0 0 1 0 0 0 0 0 0 0]. How do I interpret a result such as [-203.50767517 -437.82525635 186.90861511 590.15588379 -471.18536377 -283.88424683 -1150.14709473 1022.75799561 -391.6159668 432.9206543]? Or is it an arbitrary set of scales? Then why is the accuracy 0.941? How do I receive a normal result?

I declared an output model without biases:

def multilayer_perceptron_res(x, weights, biases):
    layer_1_multiplication = tf.matmul(x, weights['h1'])
    layer_1 = tf.nn.relu(layer_1_multiplication)
    layer_2_multiplication = tf.matmul(layer_1, weights['h2'])
    layer_2 = tf.nn.relu(layer_2_multiplication)
    out_layer_multiplication = tf.matmul(layer_2, weights['out'])
    return out_layer_multiplication

For the input digits 7 2 1 0 4 the result is:

[[ -367.19445801   128.42245483   152.20715332   433.81906128  -540.16162109  -159.51565552 -1093.81323242  1366.43115234  -319.55679321   252.82269287]  MAX=7
 [  205.2769165    927.01916504  1632.89233398   517.36358643  -798.52838135   447.796875    -681.25366211  -701.4230957    272.33825684  -381.25271606]] MAX=2

My mistake. Without biases it is impossible. Everything works fine!

# Prediction
# y_pred = pred
y_pred = tf.nn.softmax(pred)

I have understood how to apply the function!!!
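For completeness, a sketch of how the corrected prediction cell from In [25] plus a readable comparison against the true labels might look (same graph, session, and placeholders as above; the argmax comparison is an addition for illustration, not from the original post):

# Prediction: apply softmax so each row of raw logits becomes
# a probability distribution over the 10 digit classes.
y_pred = tf.nn.softmax(pred)

# Inside the session, after training:
examples_to_show = 5
y_result = sess.run(y_pred, feed_dict={x: mnist.test.images[:examples_to_show]})
print("predicted:", y_result.argmax(axis=1))
print("actual:   ", mnist.test.labels[:examples_to_show].argmax(axis=1))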