Python TensorFlow image classifier accuracy won't change

Tags: python, tensorflow, machine-learning, computer-vision

I'm new to TensorFlow. I'm building a simple fully connected neural network for image classification. The images have shape (-1, 224, 224, 3) and the labels have shape (-1, 2). However, the accuracy never improves: it stays at 47% and does not change, even after changing the learning rate, the optimizer, and the test set. Any help would be greatly appreciated. Thanks, everyone!

import matplotlib.pyplot as plt 
from util.MacOSFile import MacOSFile
import numpy as np
import _pickle as pickle
import tensorflow as tf

def pickle_load(file_path):
    with open(file_path, "rb") as f:
        return pickle.load(MacOSFile(f))

###hyperparameters###
batch_size = 32
iterations = 10

###loading training data start###
data = pickle_load('training.pickle')
x_train = []
y_train = []

for features, labels in data:
    x_train.append(features)
    y_train.append(labels)

x_train = np.array(x_train)
y_train = np.array(y_train)

###################################

###loading test data start###
data = pickle_load('testing.pickle')
x_test = []
y_test = []

for features, labels in data:
    x_test.append(features)
    y_test.append(labels)

x_test = np.array(x_test)
y_test = np.array(y_test)

###################################


###neural network###

x_s = tf.placeholder(tf.float32, [None, 224, 224, 3])
y_s = tf.placeholder(tf.float32, [None, 2])
x_image = tf.reshape(x_s, [-1, 150528])

W_1 = tf.Variable(tf.truncated_normal([150528, 8224]))
b_1 = tf.Variable(tf.zeros([8224]))
h_fc1 = tf.nn.relu(tf.matmul(x_image, W_1) + b_1)

W_2 = tf.Variable(tf.truncated_normal([8224, 1028]))
b_2 = tf.Variable(tf.zeros([1028]))
h_fc2 = tf.nn.relu(tf.matmul(h_fc1, W_2) + b_2)

W_3 = tf.Variable(tf.truncated_normal([1028, 2]))
b_3 = tf.Variable(tf.zeros([2]))
prediction = tf.nn.softmax(tf.matmul(h_fc2, W_3) + b_3)

cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_s, logits=prediction)
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)
init = tf.global_variables_initializer()

###neural network end###


with tf.Session() as sess:
    sess.run(init)

    train_sample_size = len(data) #how many data points?
    max_batches_in_data = int(train_sample_size/batch_size) #max number of batches possible; 623 

    for iteration in range(iterations):
        print('Iteration ', iteration)
        epoch = int(iteration/max_batches_in_data)
        start_idx = (iteration-epoch*max_batches_in_data)*batch_size
        end_idx = (iteration+1 - epoch*max_batches_in_data)*batch_size
        mini_x_train = x_train[start_idx: end_idx]
        mini_y_train = y_train[start_idx: end_idx]

        ##actual training is here
        sess.run(train_step, feed_dict={x_s: mini_x_train, y_s: mini_y_train})

        #test accuracy#
        y_pre = sess.run(prediction, feed_dict={x_s: x_train[:100]})
        correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(y_train[:100], 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        result = sess.run(accuracy, feed_dict={x_s: x_train[:100], y_s: y_train[:100]})
        print("Result: {0}".format(result))

A few observations. First, your code is a bit outdated: you don't have to set up the fully connected layers by hand anymore. And since you're loading images, why not use convolutional layers? I'd also suggest leaving the layer parameters at their default values (a rough sketch of both ideas follows below). I hope this helps a little :)
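As a minimal sketch of that suggestion (mine, not the answerer's code), here is the same classifier built with the TensorFlow 1.x tf.layers API: two convolution/pooling blocks feed a dense head, and the initializers and other layer parameters are left at their defaults. The filter counts (32, 64) and the 256-unit hidden layer are illustrative choices, not values from the question.

import tensorflow as tf

x_s = tf.placeholder(tf.float32, [None, 224, 224, 3])
y_s = tf.placeholder(tf.float32, [None, 2])

# two conv/pool blocks instead of feeding all 150528 raw pixels into a dense layer
conv1 = tf.layers.conv2d(x_s, filters=32, kernel_size=3, activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)
conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=3, activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)

# dense head with the default (glorot_uniform) initializers
flat = tf.layers.flatten(pool2)
fc = tf.layers.dense(flat, 256, activation=tf.nn.relu)
logits = tf.layers.dense(fc, 2)  # raw logits; softmax is applied inside the loss

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_s, logits=logits))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

Compared with the original 150528 x 8224 weight matrix, the convolutional front end has far fewer parameters and is generally much easier to train on image data.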

Your learning rate is set to zero, which effectively means training does nothing.

Hi Matias, the zero was only there for debugging and I forgot to change it back. But even after setting the learning rate back to 0.1, the network still doesn't learn; accuracy stays at 52.99% over the whole run.

Hi T.Kelher! Thanks for your answer. Switching my fully connected layers to TensorFlow's dense layers seems to do the trick, which is strange, since it doesn't work with my initial code.

Hi, just so you're sure: a dense layer and a fully connected layer are exactly the same thing. By the way, you're also applying softmax twice in your network.

I tried again. It turns out the problem was the relu activation function I was using: for some reason the relus "died" and kept outputting zero, so the network never optimized. Switching to leaky_relu or tanh did the trick!
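To make the fixes from this thread concrete, here is a minimal sketch (not from the original discussion) of the question's hidden layers rewritten so that softmax is applied only once, inside the loss, and leaky_relu replaces relu. It assumes the x_image and y_s placeholders defined in the question's code; the stddev=0.01 initializer is an illustrative choice to keep the initial activations small.

W_1 = tf.Variable(tf.truncated_normal([150528, 8224], stddev=0.01))
b_1 = tf.Variable(tf.zeros([8224]))
h_fc1 = tf.nn.leaky_relu(tf.matmul(x_image, W_1) + b_1)  # keeps a gradient for negative inputs

W_2 = tf.Variable(tf.truncated_normal([8224, 1028], stddev=0.01))
b_2 = tf.Variable(tf.zeros([1028]))
h_fc2 = tf.nn.leaky_relu(tf.matmul(h_fc1, W_2) + b_2)

W_3 = tf.Variable(tf.truncated_normal([1028, 2], stddev=0.01))
b_3 = tf.Variable(tf.zeros([2]))
logits = tf.matmul(h_fc2, W_3) + b_3  # raw logits, no softmax here

# softmax_cross_entropy_with_logits_v2 applies softmax itself, so passing
# tf.nn.softmax(...) as the logits (as in the question) softmaxes twice
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_s, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)
prediction = tf.nn.softmax(logits)  # softmax only where probabilities are reported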