Python ValueError: Input 0 of layer sequential is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: [None, 32, 32]

Tags: python, tensorflow, machine-learning, keras, deep-learning

I have a CNN for image classification. trainImages and trainLabels (labels from 0 to 8) are the training data, and validationImages and validationLabels are used for testing. The images are 32*32. I can't get this to work; please tell me if you spot any other mistakes.

I can't say exactly where the problem is, since I don't have access to the loaded images, but the issue is that the samples you feed in have no "channel" axis, which in the specified input_shape=(32, 32, 3) appears as the dimension of size 3. Each sample (image) must have 3 dimensions (width, height, channels), but instead you are passing samples with only 2 dimensions (width and height).

This is most likely because you loaded grayscale images with a single channel, and numpy did not add that axis explicitly. If that is the case, make sure both trainImages and validationImages have shape (32, 32, 1) per sample; otherwise simply expand the last dimension with np.expand_dims(trainImages, axis=-1) before feeding them to the model (and do the same for the validation set). Adjust the input_shape in the first Conv2D layer to (32, 32, 1) accordingly.


Hope that helps; otherwise let me know more details.

@Babenco5 just call model.predict(...) on a batch of images you want to classify. Then, since you are outputting logits, you need to apply np.argmax(predictions, axis=-1) to get the batch of predicted labels corresponding to the given inputs. On that note, since you have 9 labels (from 0 to 8, as you said), I believe you should have 9 neurons in the last Dense layer (not 10, as you currently do). Please accept the answer if it helped.
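To make that concrete, here is a short sketch of the prediction step; the model and imagesToClassify below are placeholders (untrained weights and a zero batch) used only to show the shapes involved:

import numpy as np
from tensorflow.keras import layers, models

# Placeholder stand-ins; in practice use the trained model from the code below and real images
model = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 1)),
    layers.Dense(9)  # 9 classes (labels 0..8), logits output
])
imagesToClassify = np.zeros((4, 32, 32, 1), dtype=np.float32)

# model.predict returns one row of 9 logits per image
predictions = model.predict(imagesToClassify)

# argmax over the class axis picks the predicted label (0..8) for each image
predictedLabels = np.argmax(predictions, axis=-1)
print(predictedLabels.shape)  # (4,)
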
import imageio
import glob
import numpy as np
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

# Load the training images (32x32 PNGs) into a single numpy array
trainImages = []
for imagePath in glob.glob('C:/Users/razva/*.png'):
    image = imageio.imread(imagePath)
    trainImages.append(image)
trainImages = np.array(trainImages)

# Parse the training labels (digits 0-8) from the annotation file
with open('C:/Users/razva/train.txt') as f:
    trainLabels = f.readlines()
for i in range(len(trainLabels)):
    trainLabels[i] = int(trainLabels[i][11])
trainLabels = np.array(trainLabels)

# Load the validation images the same way
validationImages = []
for imagePath in glob.glob('C:/Users/razva/*.png'):
    image = imageio.imread(imagePath)
    validationImages.append(image)
validationImages = np.array(validationImages)

# Parse the validation labels
with open('C:/Users/razva/validation.txt') as f:
    validationLabels = f.readlines()
for i in range(len(validationLabels)):
    validationLabels[i] = int(validationLabels[i][11])
validationLabels = np.array(validationLabels)

# Standardize both sets (subtract the per-pixel mean, divide by the global std)
mean_image = np.mean(trainImages, axis=0)
sd = np.std(trainImages)
trainImages = (trainImages - mean_image) / sd

mean_image1 = np.mean(validationImages, axis=0)
sd1 = np.std(validationImages)
validationImages = (validationImages - mean_image1) / sd1

# CNN: three Conv2D blocks followed by a small dense classifier
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

# Train with integer labels and logits output
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(trainImages, trainLabels, epochs=10,
                    validation_data=(validationImages, validationLabels))