How to reshape Fashion-MNIST images for ResNet50 in Keras

I load the Fashion-MNIST dataset with "fashion_mnist.load_data()" and I am trying to train a ResNet50 network on it. But I don't know how to reshape the dataset images from (28, 28, 1) to (224, 224, 3), which is the input shape ResNet expects.

I'm using Python 3 and Keras 2.2.4.

Here is my code:

from __future__ import absolute_import, division, print_function
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import time
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Flatten, Dense, Dropout
from tensorflow.python.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.python.keras.optimizers import Adam
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.preprocessing import image
from PIL import Image

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat','Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

IMAGE_SIZE    = (224,224)
NUM_CLASSES   = 10
BATCH_SIZE    = 8  # try reducing batch size or freeze more layers if your GPU runs out of memory
FREEZE_LAYERS = 2  # freeze the first this many layers for training
NUM_EPOCHS    = 20
WEIGHTS_FINAL = 'model_fashion_resnet.h5'

train_images = preprocess_input(train_images)
train_images = np.expand_dims(train_images, axis=0)

train_labels = preprocess_input(train_labels)
train_labels = np.expand_dims(train_labels, axis=0)

test_images = preprocess_input(test_images)
test_images = np.expand_dims(test_images, axis=0)

net = ResNet50(include_top=False, weights='imagenet', input_tensor=None,
           input_shape=(IMAGE_SIZE[0],IMAGE_SIZE[1],3))
x = net.output
x = Flatten()(x)
x = Dropout(0.5)(x)
output_layer = Dense(NUM_CLASSES, activation='softmax', name='softmax')(x)
model = Model(inputs=net.input, outputs=output_layer)
for layer in model.layers[:FREEZE_LAYERS]:
    layer.trainable = False
for layer in model.layers[FREEZE_LAYERS:]:
    layer.trainable = True
model.compile(optimizer=Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
print(model.summary())


inizio=time.time()


datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)



model.fit_generator(datagen.flow(train_images, train_labels, batch_size=BATCH_SIZE),
                steps_per_epoch=len(train_images) / BATCH_SIZE, epochs=NUM_EPOCHS)
This is the error I get when I run it:

ValueError: Error when checking input: expected input_1 to have shape (224, 224, 3) but got array with shape (60000, 28, 28)

How can I change the MNIST images so that they can be fed into the ResNet50 network?
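For what it's worth, part of the shape trouble happens before the model is even involved: np.expand_dims(..., axis=0) adds a leading batch axis rather than a channel axis, preprocess_input is also applied to the integer labels, and categorical_crossentropy expects one-hot labels. A minimal sketch of the array preparation under those assumptions (not the original poster's code, and it only fixes the channel and label handling; the 28x28 to 224x224 resize still has to happen separately, see the comments below):

import numpy as np
from tensorflow import keras
from tensorflow.python.keras.applications.resnet50 import preprocess_input

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Add a channel axis at the end, not a batch axis at the front:
# (60000, 28, 28) -> (60000, 28, 28, 1)
x_train = np.expand_dims(train_images.astype('float32'), axis=-1)

# Fake RGB by repeating the single channel: (60000, 28, 28, 1) -> (60000, 28, 28, 3)
x_train = np.repeat(x_train, 3, axis=-1)

# preprocess_input is meant for image arrays only; the integer labels are
# one-hot encoded instead, since the model is compiled with categorical_crossentropy
x_train = preprocess_input(x_train)
y_train = keras.utils.to_categorical(train_labels, 10)

print(x_train.shape, y_train.shape)  # (60000, 28, 28, 3) (60000, 10)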

Comments:

Maybe I'm missing something, but what are you trying to achieve with the tiling? I'd guess the raw image format is a 28x28 array of grayscale pixel values; isn't that what Pillow expects? I think you need to tell Image.fromarray that this is grayscale input (mode='L'?) and then convert the resized image to a bitmap in RGB mode. But feeding a neural network an input upscaled 8x in each dimension feels wrong to me; I wouldn't be surprised if it doesn't work well.

I've updated the post with the full code and removed the tiling; the error is about the mismatched shapes.

OK: it sounds like the network expects an RGB image (224x224 pixels, 3 colour values per pixel), but you are giving it an array of 60000 grayscale images of 28x28 (one grayscale sample per pixel). Sorry, I don't know what else to suggest.
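Concretely, the Pillow route described in those comments might look like the following minimal sketch (fashion_to_rgb224 is just an illustrative helper name; converting all 60000 images up front takes roughly 9 GB even as uint8, so in practice it would run per batch, for example inside a generator):

import numpy as np
from PIL import Image

def fashion_to_rgb224(img28):
    # 28x28 uint8 grayscale array -> 224x224x3 uint8 RGB array
    pil = Image.fromarray(img28, mode='L')        # tell Pillow the input is grayscale
    pil = pil.resize((224, 224), Image.BILINEAR)  # 8x upscaling per dimension, quality caveat above
    pil = pil.convert('RGB')                      # copy the grey channel into R, G and B
    return np.asarray(pil)

rgb = fashion_to_rgb224(train_images[0])
print(rgb.shape)  # (224, 224, 3), the input shape the ResNet50 model was built with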