Python: faulty prediction with a TensorFlow model

Tags: python, tensorflow, machine-learning, deep-learning

I trained a deep neural network on the MNIST dataset. Here is the training code:

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST with one-hot labels (as in the full listing further down).
mnist = input_data.read_data_sets('mnist_data/', one_hot=True)

n_classes = 10
batch_size = 100

x = tf.placeholder(tf.float32, [None, 784],name='Xx')
y = tf.placeholder(tf.float32,[None,10],name='Yy')

input = 784
n_nodes_1 = 300
n_nodes_2 = 300


def neural_network_model(data):
    variables = {'w1':tf.Variable(tf.random_normal([input,n_nodes_1])),
               'w2':tf.Variable(tf.random_normal([n_nodes_1,n_nodes_2])),
               'w3':tf.Variable(tf.random_normal([n_nodes_2,n_classes])),
                 'b1':tf.Variable(tf.random_normal([n_nodes_1])),
                 'b2':tf.Variable(tf.random_normal([n_nodes_2])),
                 'b3':tf.Variable(tf.random_normal([n_classes]))}
    output1 = tf.add(tf.matmul(data,variables['w1']),variables['b1'])
    output2 = tf.nn.relu(output1)
    output3 = tf.add(tf.matmul(output2, variables['w2']), variables['b2'])
    output4 = tf.nn.relu(output3)
    output5 = tf.add(tf.matmul(output4, variables['w3']), variables['b3'],name='last')
    return output5

def train_neural_network(x):
    prediction = neural_network_model(x)
    name_of_final_layer = 'fin'
    final = tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction,
                                                       labels=y,name=name_of_final_layer)
    cost = tf.reduce_mean(final)
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epochs = 3
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples/batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                _,c=sess.run([optimizer,cost],feed_dict={x:epoch_x,y:epoch_y})
                epoch_loss += c
            print("Epoch",epoch+1,"Completed Total Loss:",epoch_loss)
            correct = tf.equal(tf.argmax(prediction,1),tf.argmax(y,1))
            accuracy = tf.reduce_mean(tf.cast(correct,'float'))
            print('Accuracy on val_set:',accuracy.eval({x:mnist.test.images,y:mnist.test.labels}))
        path = saver.save(sess,"net/network")
        print("Saved to",path)

train_neural_network(x)
Here is my code for evaluating a single data point:

def eval_neural_network():
    with tf.Session() as sess:
        new_saver = tf.train.import_meta_graph('net/network.meta')
        new_saver.restore(sess, "net/network")
        sing = np.reshape(mnist.test.images[0],(-1,784))
        output = sess.run([y],feed_dict={x:sing})
    print(output)
eval_neural_network()
The error that pops up is:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Yy' with dtype float and shape [?,10]
     [[Node: Yy = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

I have been researching this online for days and still can't get it to work. Any suggestions?

This complete example based on the TensorFlow GitHub worked for me. (I changed a few lines of code: I removed the name scope around x, kept keep_prob but switched it to tf.placeholder_with_default. There may be a better way to do this.)
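For reference, this is roughly what the keep_prob change looks like; a minimal sketch only, not the actual modified example (the dropout layer and names are assumed from the TensorFlow deep-MNIST tutorial):

import tensorflow as tf

# A plain placeholder must always be fed:
#   keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# With a default value it can be left out of feed_dict at inference time:
keep_prob = tf.placeholder_with_default(1.0, shape=(), name='keep_prob')

fc1 = tf.placeholder(tf.float32, [None, 1024], name='fc1')  # stand-in for the layer before dropout
fc1_drop = tf.nn.dropout(fc1, keep_prob)                     # uses 1.0 unless explicitly fed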

If you want to restore from a fresh Python run, do the following:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import numpy as np

import argparse
import sys
import tempfile
from tensorflow.examples.tutorials.mnist import input_data

sess = tf.Session()
# Rebuild the graph from the saved meta file and restore the latest checkpoint.
saver = tf.train.import_meta_graph('/tmp/network.meta')
saver.restore(sess, tf.train.latest_checkpoint('/tmp'))
graph = tf.get_default_graph()
mnist = input_data.read_data_sets("/tmp")
simg = np.reshape(mnist.test.images[0], (-1, 784))
# Fetch the output and input tensors of the restored graph by name,
# then feed only the input image; no label placeholder is needed.
op_to_restore = graph.get_tensor_by_name("fc2/MatMul:0")
x = graph.get_tensor_by_name("x:0")
output = sess.run(op_to_restore, feed_dict={x: simg})
print("Result = ", np.argmax(output))

The loss oscillates like this, but the predictions don't seem bad; it works:

It also keeps re-extracting the MNIST archive. With a simpler network the accuracy can still reach 0.98.

Epoch 1 Completed Total Loss: 47.47844
Accuracy on val_set: 0.8685
Epoch 2 Completed Total Loss: 10.217445
Accuracy on val_set: 0.9
Epoch 3 Completed Total Loss: 14.013474
Accuracy on val_set: 0.9104
Here is the full code:
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data
import numpy as np
import matplotlib.pyplot as plt

n_classes = 10
batch_size = 100

x = tf.placeholder(tf.float32, [None, 784], name='Xx')
y = tf.placeholder(tf.float32, [None, 10], name='Yy')

input = 784
n_nodes_1 = 300
n_nodes_2 = 300

mnist = input_data.read_data_sets('mnist_data/', one_hot=True)

def neural_network_model(data):
    variables = {'w1': tf.Variable(tf.random_normal([input, n_nodes_1])),
                 'w2': tf.Variable(tf.random_normal([n_nodes_1, n_nodes_2])),
                 'w3': tf.Variable(tf.random_normal([n_nodes_2, n_classes])),
                 'b1': tf.Variable(tf.random_normal([n_nodes_1])),
                 'b2': tf.Variable(tf.random_normal([n_nodes_2])),
                 'b3': tf.Variable(tf.random_normal([n_classes]))}
    output1 = tf.add(tf.matmul(data, variables['w1']), variables['b1'])
    output2 = tf.nn.relu(output1)
    output3 = tf.add(tf.matmul(output2, variables['w2']), variables['b2'])
    output4 = tf.nn.relu(output3)
    output5 = tf.add(tf.matmul(output4, variables['w3']), variables['b3'], name='last')
    return output5

def train_neural_network(x):
    prediction = neural_network_model(x)
    name_of_final_layer = 'fin'
    final = tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction,
                                                       labels=y, name=name_of_final_layer)
    cost = tf.reduce_mean(final)
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epochs = 3
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            for _ in range(int(mnist.train.num_examples / batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
            print("Epoch", epoch + 1, "Completed Total Loss:", c)
            correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
            accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
            print('Accuracy on val_set:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
        # path = saver.save(sess, "net/network")
        # print("Saved to", path)
        return prediction

def eval_neural_network(prediction):
    with tf.Session() as sess:
        new_saver = tf.train.import_meta_graph('net/network.meta')
        new_saver.restore(sess, "net/network")
        singleprediction = tf.argmax(prediction, 1)
        sing = np.reshape(mnist.test.images[1], (-1, 784))
        output = singleprediction.eval(feed_dict={x: sing}, session=sess)
        digit = mnist.test.images[1].reshape((28, 28))
        plt.imshow(digit, cmap='gray')
        plt.show()
        print(output)

prediction = train_neural_network(x)
eval_neural_network(prediction)

You are trying to evaluate y, which is a placeholder that is never assigned when you are not training. It looks like you want to pass the prediction tensor to sess.run instead of y.

That did not work. The current error is: InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Xx_1' with dtype float and shape [?,784]
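A minimal sketch of the fix the comment describes, assuming a fresh Python process and the tensor names from the training code above ('Xx' for the input placeholder, 'last' for the logits): fetch the logits from the restored graph and feed only the input, so the label placeholder Yy is never evaluated.

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('mnist_data/', one_hot=True)

with tf.Session() as sess:
    # Assumes no graph has been built in this process yet, so the restored
    # tensors keep their original names ('Xx:0', 'last:0') rather than 'Xx_1:0'.
    saver = tf.train.import_meta_graph('net/network.meta')
    saver.restore(sess, 'net/network')
    graph = tf.get_default_graph()
    logits = graph.get_tensor_by_name('last:0')  # network output
    x_in = graph.get_tensor_by_name('Xx:0')      # input placeholder
    sing = np.reshape(mnist.test.images[0], (-1, 784))
    # Only the input needs to be fed; 'Yy' is not part of the fetched subgraph.
    output = sess.run(logits, feed_dict={x_in: sing})
    print('Predicted digit:', np.argmax(output))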