
Python: How do I prepare training data and prediction data?


I am new to TensorFlow and machine learning (Python). As the first step toward an image-recognition program, I got confused while preparing the data to feed in. Can anyone help me? I worked through the tutorial carefully, but the data-preparation part is confusing.

I am not expecting a complete, polished program from this question; instead, I would love to hear whether you can tell me how TensorFlow works with feed_dict. In my head right now it works "like a for loop over imageHolder: take 2352 bytes (one image) of data and put it into the training op, where it makes a prediction based on the current model, compares it with the label at the same index in labelHolder, and then corrects the model." So I expected to put in a single 2352-byte chunk (another image of the same size) and get a prediction. I will also put my code here, in case my idea is right and the error comes from a poor implementation.
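(An editorial aside: feed_dict is not an implicit for loop. sess.run substitutes the whole array into the placeholder at once, and the matrix math inside the model processes every row in one vectorized pass. A minimal NumPy sketch of that idea, with made-up sizes that are not from the question:)

```python
import numpy as np

# Hypothetical sizes for illustration: 4 "images", 6 "pixels", 3 classes.
batch = np.arange(24, dtype=np.float32).reshape(4, 6)   # 4 flattened images
weights = np.ones((6, 3), dtype=np.float32)             # dummy model weights

# One vectorized pass over the whole batch...
all_at_once = batch @ weights                           # shape (4, 3)

# ...gives the same result as an explicit per-image loop.
one_by_one = np.stack([row @ weights for row in batch])

assert np.allclose(all_at_once, one_by_one)
```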


Say I have a dataset with 5 classes, 3670 images in total. When loading the data into feed_dict for training, I converted all images to 28x28 pixels with 3 channels. That produced a tensor of shape (3670, 2352) for the image placeholder in the feed. After that, I prepared a tensor of shape (3670,) for the label placeholder in the feed. The training code looks like this:

for step in xrange(FLAGS.max_steps):
        feed_dict = {
            imageHolder: imageTrain,
            labelHolder: labelTrain,
        }
        _, loss_rate = sess.run([train_op, loss_op], feed_dict=feed_dict)
Then I have my code to predict a new image with the model above:

testing_dataset = do_get_file_list(FLAGS.guess_dir)
x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
for data in testing_dataset:
    image = Image.open(data)
    image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
    image = np.array(image).reshape(IMAGE_PIXELS)
    prediction = session.run(tf.argmax(logits, 1), feed_dict={x: image})
But the problem is that the prediction line always raises a "Can not feed value of shape..." error, no matter what shape my test data is: (2352,) or (1, 2352) (it asks for a (3670, 2352) shape, but there is no way to provide that).


These are the constants I used:

IMAGE_SIZE = 28
CHANNELS = 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS
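(A quick NumPy check of where 2352 comes from and how a stack of images flattens to the shapes described above; the sizes match the constants, the array contents are dummies:)

```python
import numpy as np

IMAGE_SIZE = 28
CHANNELS = 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS       # 28 * 28 * 3 = 2352

# A dummy stand-in for one decoded, resized RGB image.
image = np.zeros((IMAGE_SIZE, IMAGE_SIZE, CHANNELS), dtype=np.float32)
flat = image.reshape(IMAGE_PIXELS)                      # shape (2352,)

# A dummy stand-in for the whole training set of 3670 images.
batch = np.zeros((3670, IMAGE_SIZE, IMAGE_SIZE, CHANNELS), dtype=np.float32)
flat_batch = batch.reshape(3670, IMAGE_PIXELS)          # shape (3670, 2352)

assert IMAGE_PIXELS == 2352
assert flat.shape == (2352,) and flat_batch.shape == (3670, 2352)
```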
The training op and the loss computation:

def do_get_op_compute_loss(logits, labels):
    labels = tf.to_int64(labels)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='xentropy')
    loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
    return loss

def do_get_op_training(loss_op, training_rate):
    optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = optimizer.minimize(loss_op, global_step=global_step)
    return train_op
The placeholders:

imageHolder = tf.placeholder(tf.float32, [data_count, IMAGE_PIXELS])
labelHolder = tf.placeholder(tf.int32, [data_count])
The full program:

import os
import math
import tensorflow as tf
from PIL import Image
import numpy as np
from six.moves import xrange

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
flags.DEFINE_integer('max_steps', 200, 'Number of steps to run trainer.')
flags.DEFINE_integer('hidden1', 128, 'Number of units in hidden layer 1.')
flags.DEFINE_integer('hidden2', 32, 'Number of units in hidden layer 2.')
flags.DEFINE_integer('batch_size', 4, 'Batch size.  '
                     'Must divide evenly into the dataset sizes.')
flags.DEFINE_string('train_dir', 'data', 'Directory to put the training data.')
flags.DEFINE_string('save_file', '.\\data\\model.ckpt', 'Directory to put the training data.')
flags.DEFINE_string('guess_dir', 'work', 'Directory to put the testing data.')
#flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data '
#                    'for unit testing.')

IMAGE_SIZE = 28
CHANNELS = 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS

def do_inference(images, hidden1_units, hidden2_units, class_count):
    #HIDDEN LAYER 1
    with tf.name_scope('hidden1'):
        weights = tf.Variable(
            tf.truncated_normal([IMAGE_PIXELS, hidden1_units], stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
            name='weights')
        biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')
        hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
    #HIDDEN LAYER 2
    with tf.name_scope('hidden2'):
        weights = tf.Variable(
            tf.truncated_normal([hidden1_units, hidden2_units], stddev=1.0 / math.sqrt(float(hidden1_units))),
            name='weights')
        biases = tf.Variable(tf.zeros([hidden2_units]), name='biases')
        hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
    #LINEAR
    with tf.name_scope('softmax_linear'):
        weights = tf.Variable(
            tf.truncated_normal([hidden2_units, class_count], stddev=1.0 / math.sqrt(float(hidden2_units))),
            name='weights')
        biases = tf.Variable(tf.zeros([class_count]), name='biases')
        logits = tf.matmul(hidden2, weights) + biases
    return logits

def do_get_op_compute_loss(logits, labels):
    labels = tf.to_int64(labels)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='xentropy')
    loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
    return loss

def do_get_op_training(loss_op, training_rate):
    optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = optimizer.minimize(loss_op, global_step=global_step)
    return train_op

def do_get_op_evaluate(logits, labels):
    correct = tf.nn.in_top_k(logits, labels, 1)
    return tf.reduce_sum(tf.cast(correct, tf.int32))

def do_evaluate(session, eval_correct_op, imageset_holder, labelset_holder, train_images, train_labels):
    true_count = 0
    num_examples = FLAGS.batch_size * FLAGS.batch_size
    for step in xrange(FLAGS.batch_size):
        feed_dict = {imageset_holder: train_images, labelset_holder: train_labels,}
        true_count += session.run(eval_correct_op, feed_dict=feed_dict)
        precision = true_count / num_examples
    # print('  Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
        # (num_examples, true_count, precision))

def do_init_param(data_count, class_count): 
    # Generate placeholder
    imageHolder = tf.placeholder(tf.float32, shape=(data_count, IMAGE_PIXELS))
    labelHolder = tf.placeholder(tf.int32, shape=(data_count))

    # Build a graph for prediction from inference model
    logits = do_inference(imageHolder, FLAGS.hidden1, FLAGS.hidden2, class_count)

    # Add loss calculating op
    loss_op = do_get_op_compute_loss(logits, labelHolder)

    # Add training op
    train_op = do_get_op_training(loss_op, FLAGS.learning_rate)

    # Add evaluate correction op
    evaluate_op = do_get_op_evaluate(logits, labelHolder)

    # Create session for op operating
    sess = tf.Session()

    # Init param
    init = tf.initialize_all_variables()
    sess.run(init)
    return sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits

def do_get_class_list():
    return [{'name': name, 'path': os.path.join(FLAGS.train_dir, name)} for name in os.listdir(FLAGS.train_dir)
            if os.path.isdir(os.path.join(FLAGS.train_dir, name))]

def do_get_file_list(folderName):
    return [os.path.join(folderName, name) for name in os.listdir(folderName)
            if (os.path.isdir(os.path.join(folderName, name)) == False)]

def do_init_data_list():
    file_list = []
    for classItem in do_get_class_list():
        for dataItem in do_get_file_list(classItem['path']):
            file_list.append({'name': classItem['name'], 'path': dataItem})

    # Renew data feeding dictionary
    imageTrainList, labelTrainList = do_seperate_data(file_list)
    imageTrain = []
    for imagePath in imageTrainList:
        image = Image.open(imagePath)
        image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
        imageTrain.append(np.array(image))

    imageCount = len(imageTrain)
    imageTrain = np.array(imageTrain)
    imageTrain = imageTrain.reshape(imageCount, IMAGE_PIXELS)

    id_list, id_map = do_generate_id_label(labelTrainList)
    labelTrain = np.array(id_list)
    return imageTrain, labelTrain, id_map

def do_init():
    imageTrain, labelTrain, id_map = do_init_data_list()
    sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits = do_init_param(len(imageTrain), len(id_map))
    return sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain, id_map, logits

def do_seperate_data(data):
    images = [item['path'] for item in data]
    labels = [item['name'] for item in data]
    return images, labels

def do_generate_id_label(label_list):
    trimmed_label_list = list(set(label_list))
    id_map = {trimmed_label_list.index(label): label for label in trimmed_label_list}
    reversed_id_map = {label: trimmed_label_list.index(label) for label in trimmed_label_list}
    id_list = [reversed_id_map.get(item) for item in label_list]
    return id_list, id_map

def do_training(sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain):
    # Training state checkpoint saver
    saver = tf.train.Saver()
    # feed_dict = {
        # imageHolder: imageTrain,
        # labelHolder: labelTrain,
    # }

    for step in xrange(FLAGS.max_steps):
        feed_dict = {
            imageHolder: imageTrain,
            labelHolder: labelTrain,
        }
        _, loss_rate = sess.run([train_op, loss_op], feed_dict=feed_dict)

        if step % 100 == 0:
            print('Step {0}: loss = {1}'.format(step, loss_rate))
        if (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
            saver.save(sess, FLAGS.save_file, global_step=step)
            print('Evaluate training data')
            do_evaluate(sess, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain)

def do_predict(session, logits):
    # xentropy
    testing_dataset = do_get_file_list(FLAGS.guess_dir)
    x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
    print('Perform predict')
    print('==================================================================================')
    # TEMPORARY CODE
    for data in testing_dataset:
        image = Image.open(data)
        image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
        image = np.array(image).reshape(IMAGE_PIXELS)
        print(image.shape)
        prediction = session.run(logits, {x: image})
        print('{0}: {1}'.format(data, prediction))

def main(_):
    # TF notice default graph
    with tf.Graph().as_default():
        sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain, id_map, logits = do_init()
        print("done init")
        do_training(sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain)
        print("done training")
        do_predict(sess, logits)

# NO IDEA
if __name__ == '__main__':
    tf.app.run()

Understanding the error is important. You said:

But the problem is that the prediction line always raises a "Can not feed value of shape..." error, no matter what shape my test data is: (2352,) or (1, 2352) (it asks for a (3670, 2352) shape, but there is no way to provide that).

Oh yes it does, my friend, yes it does. It is telling you there is a problem with your shapes, and you need to check them. It asks for 3670. Why?

Because your model accepts input of shape (data_count, IMAGE_PIXELS), which you declared below:

def do_init_param(data_count, class_count): 
    # Generate placeholder
    imageHolder = tf.placeholder(tf.float32, shape=(data_count, IMAGE_PIXELS))
    labelHolder = tf.placeholder(tf.int32, shape=(data_count))
This function is called here:

sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits = do_init_param(len(imageTrain), len(id_map))
len(imageTrain) is the length of your dataset, presumably 3670 images.

Then you have your prediction function:

def do_predict(session, logits):
    # xentropy
    testing_dataset = do_get_file_list(FLAGS.guess_dir)
    x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
    ...
    prediction = session.run(logits, {x: image})
Note that x here is of no use. You are feeding your image into your model to get a prediction, and the model does not expect that shape; it expects the original placeholder shape, (3670, 2352), because that is what you told it.

The solution is to declare x as a placeholder with a non-specific first dimension, for example:

imageHolder = tf.placeholder(tf.float32, shape=(None, IMAGE_PIXELS))
When you predict the label of an image, you can feed a single image or several images (a mini-batch), but the feed must always have shape [num_images, IMAGE_PIXELS].
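(A small NumPy sketch of that rule, using dummy data: whether you predict one image or many, you reshape the feed to two dimensions first. IMAGE_PIXELS matches the constant above.)

```python
import numpy as np

IMAGE_PIXELS = 28 * 28 * 3                              # 2352, as above

# Dummy stand-ins for decoded, resized images.
one_image = np.zeros(IMAGE_PIXELS, dtype=np.float32)    # shape (2352,)
five_images = [np.zeros(IMAGE_PIXELS, dtype=np.float32) for _ in range(5)]

# A single image becomes a batch of one: shape (1, 2352).
single_feed = one_image.reshape(1, IMAGE_PIXELS)

# Several images become a mini-batch: shape (5, 2352).
batch_feed = np.stack(five_images)

assert single_feed.shape == (1, IMAGE_PIXELS)
assert batch_feed.shape == (5, IMAGE_PIXELS)
```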


Does that make sense?

Hi Vega, thank you so much for taking the trouble to explain. I am trying to understand, but it doesn't make sense to me: do we have to predict with the same number of images we used to train the model, or is that the reason not to feed too much data in the training phase? I tried a "non-specific" first dimension before, but it seemed to be just a matter of code and function parameters. (In my imagination, the [num_images, IMAGE_PIXELS] holder, 3670x2352 bytes in this case, is kept, but I only need to predict 1 image, 2352 bytes; the rest could be set to 0, but why? Would TensorFlow ignore the extras?)
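(One way to see why no zero-padding is needed, sketched in NumPy with dummy weights: the matrix multiply inside the model only constrains the inner dimension, IMAGE_PIXELS, so a batch of 1 and a batch of 3670 both flow through the same weights unchanged. That is exactly what the None first dimension on the placeholder expresses.)

```python
import numpy as np

IMAGE_PIXELS, CLASSES = 2352, 5

# Dummy weight matrix standing in for the model's parameters.
weights = np.zeros((IMAGE_PIXELS, CLASSES), dtype=np.float32)

one = np.zeros((1, IMAGE_PIXELS), dtype=np.float32)      # batch of 1
many = np.zeros((3670, IMAGE_PIXELS), dtype=np.float32)  # batch of 3670

# Both batch sizes are valid; only the inner dimension must match.
assert (one @ weights).shape == (1, CLASSES)
assert (many @ weights).shape == (3670, CLASSES)
```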