
Python: High training accuracy, but inputs are always classified as the same class in a Keras DNN model


I'm using the Xception architecture with data augmentation on 3 classes. My original dataset has 3 images in each of the three classes, organized in a directory:

# import the necessary packages (unused imports removed; everything is
# pulled from tensorflow.keras to avoid mixing keras and tf.keras)
from tensorflow import keras
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    fill_mode='constant',
    cval=255.0,
    rotation_range=90,
    zoom_range=[1.0, 1.3],
    rescale=1.0/255.0
)
it = train_datagen.flow_from_directory('training_data/',
    target_size=(260, 380),
    batch_size=9,
    save_to_dir='augmented_data/',
    save_format='jpeg'
)

validation_ds = image_dataset_from_directory(
    directory='validation_data/',
    labels='inferred',
    label_mode='categorical',
    batch_size=1,
    image_size=(380, 260))

# scale data to the range of [0, 1]
def normalize(data, labels):
    return data / 255.0, labels 
validation_ds = validation_ds.map(normalize)

# initialize the optimize and model
print("[INFO] compiling model...")
model = keras.applications.Xception(weights=None, input_shape=(380, 260, 3), classes=3)
opt = SGD(learning_rate=0.01)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])

print("[INFO] training...")
model.fit(it, steps_per_epoch=1, epochs=100, verbose=1)

# show the accuracy on the testing set
print("[INFO] evaluating...")
(loss, accuracy) = model.evaluate(validation_ds, batch_size=3, verbose=1)
print("[INFO] accuracy: {:.2f}%".format(accuracy * 100))

for img, label in validation_ds:
    probs = model.predict(img)
    prediction = probs.argmax(axis=1)

    print("PREDICTION: " + str(probs))
    print("ACTUAL LABEL: " + str(label))
My training accuracy converges to 1.0000, but the calls to model.predict look like this:

[INFO] accuracy: 33.33%
PREDICTION: [[0.30813622 0.3550096  0.3368542 ]]
ACTUAL LABEL: tf.Tensor([[1. 0. 0.]], shape=(1, 3), dtype=float32)
PREDICTION: [[0.3081677  0.35502157 0.33681074]]
ACTUAL LABEL: tf.Tensor([[0. 1. 0.]], shape=(1, 3), dtype=float32)
PREDICTION: [[0.3081628  0.35502544 0.3368117 ]]
ACTUAL LABEL: tf.Tensor([[0. 1. 0.]], shape=(1, 3), dtype=float32)
PREDICTION: [[0.30813095 0.3550423  0.33682677]]
ACTUAL LABEL: tf.Tensor([[0. 0. 1.]], shape=(1, 3), dtype=float32)
...

For some reason the second class is always chosen, which is why my model.evaluate accuracy is stuck at 33.33%. I have tried varying the batch size, learning rate, and other hyperparameters, but I cannot change this result.

It may be that your network simply isn't trained enough. With a batch size of 9, 1 step per epoch, and 100 epochs, only 900 samples have passed through the network by the end of training. I'm not sure at what point Keras shuffles, but it could even be the same images 900 times over.
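The sample count in that reasoning can be checked directly (batch size 9, 1 step per epoch, 100 epochs, as in the posted code):

```python
# total samples seen during training = batch_size * steps_per_epoch * epochs
batch_size = 9
steps_per_epoch = 1
epochs = 100

samples_seen = batch_size * steps_per_epoch * epochs
print(samples_seen)  # 900 -- very few for training Xception from scratch
```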


I would suggest increasing steps_per_epoch substantially, and probably increasing the batch size as well.
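As a sketch of that suggestion: pick a target number of augmented samples per epoch and derive steps_per_epoch from the batch size. The 900-samples-per-epoch target and batch size of 32 below are illustrative choices, not values from the original post:

```python
import math

# Hypothetical numbers for illustration: draw ~900 augmented samples
# from the generator each epoch, with a larger batch size.
samples_per_epoch = 900
batch_size = 32
steps_per_epoch = math.ceil(samples_per_epoch / batch_size)

print(steps_per_epoch)  # 29

# The training call would then become (model and iterator as in the question):
# model.fit(it, steps_per_epoch=steps_per_epoch, epochs=100, verbose=1)
```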

At one point your image size is (260, 380) and later (380, 260). Is that normal? Yes, that is because target_size is (height, width) for some reason, while the other calls take (width, height). In any case, thanks for the suggestion, I'll try it.