TensorFlow Keras: using a pretrained InceptionV3 model + CIFAR10 gives an error about batch size

Tags: tensorflow, machine-learning, keras, deep-learning, conv-neural-network

I'm new to machine learning, Keras, and so on.

To improve accuracy by using a pretrained model, I followed Jerry Kurata's course on Pluralsight and used InceptionV3, only modifying the last layer to train it to recognize birds.

The dataset I'm using is the CIFAR10 dataset built into Keras, loaded in the code below.

Here is the error message:

F tensorflow/stream_executor/cuda/cuda_dnn.cc:516] Check failed: cudnnSetTensorNdDescriptor(handle_.get(), elem_type, nd, dims.data(), strides.data()) == CUDNN_STATUS_SUCCESS (3 vs. 0) batch_descriptor: {count: 32 feature_map_count: 288 spatial: %d 0%d 0 value_min: 0.000000 value_max: 0.000000 layout: BatchDepthYX}
Aborted (core dumped)

From that, I found a possible cause:

The image samples in CIFAR10 (32*32) are too small, which causes this issue.
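As a quick sanity check (a minimal sketch using the Keras built-in loader), the CIFAR10 images really are 32x32 with 3 channels:

# Quick check of the CIFAR10 image dimensions via the Keras built-in loader
from keras.datasets import cifar10

(images, labels), _ = cifar10.load_data()
print(images.shape)  # (50000, 32, 32, 3): 50,000 training images, 32x32 pixels, 3 color channels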

But I can't figure out how to fix it.

Here is my code:

import matplotlib.pyplot as plt
import keras
from keras import backend as K
with K.tf.device("/device:GPU:0"):
    config = K.tf.ConfigProto(intra_op_parallelism_threads=4,
           inter_op_parallelism_threads=4, allow_soft_placement=True,
           device_count = {'CPU' : 1, 'GPU' : 1})
    session = K.tf.Session(config=config)
    K.set_session(session)

from keras.callbacks import EarlyStopping
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import SGD
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.datasets import cifar10
# "/device:GPU:0"
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'


def create_generator():
    return ImageDataGenerator(
            featurewise_center=False,  # set input mean to 0 over the dataset
            samplewise_center=False,  # set each sample mean to 0
            featurewise_std_normalization=False,  # divide inputs by std of the dataset
            samplewise_std_normalization=False,  # divide each input by its std
            zca_whitening=False,  # apply ZCA whitening
            zca_epsilon=1e-06,  # epsilon for ZCA whitening
            rotation_range=0,  # randomly rotate images in the range (degrees, 0 to 180)
            # randomly shift images horizontally (fraction of total width)
            width_shift_range=0.1,
            # randomly shift images vertically (fraction of total height)
            height_shift_range=0.1,
            shear_range=0.,  # set range for random shear
            zoom_range=0.,  # set range for random zoom
            channel_shift_range=0.,  # set range for random channel shifts
            # set mode for filling points outside the input boundaries
            fill_mode='nearest',
            cval=0.,  # value used for fill_mode = "constant"
            horizontal_flip=True,  # randomly flip images
            vertical_flip=False,  # randomly flip images
            # set rescaling factor (applied before any other transformation)
            rescale=None,
            # set function that will be applied on each input
            preprocessing_function=None,
            # image data format, either "channels_first" or "channels_last"
            data_format=None,
            # fraction of images reserved for validation (strictly between 0 and 1)
            validation_split=0.0)

Training_Epochs = 1
Batch_Size = 32
Number_FC_Neurons = 1024
Num_Classes = 10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, Num_Classes)
y_test = keras.utils.to_categorical(y_test, Num_Classes)


# load cifar10 data here https://keras.io/datasets/

datagen = create_generator()
datagen.fit(x_train)

Inceptionv3_model = InceptionV3(weights='imagenet', include_top=False)
print('Inception v3 model without last FC loaded')

x = Inceptionv3_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(Number_FC_Neurons, activation='relu')(x)
predictions = Dense(Num_Classes, activation='softmax')(x)

model = Model(inputs=Inceptionv3_model.input, outputs=predictions)
# print(model.summary())

print('\nFine tuning existing model')

Layers_To_Freeze = 172
for layer in model.layers[:Layers_To_Freeze]:
    layer.trainable = False
for layer in model.layers[Layers_To_Freeze:]:
    layer.trainable = True

model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy', metrics=['accuracy'])

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

cbk_early_stopping = EarlyStopping(monitor='val_acc', mode='max')

print(len(x_train))

history_transfer_learning = model.fit_generator(
    datagen.flow(x_train, y_train, batch_size=Batch_Size),
    epochs=Training_Epochs,
    validation_data=(x_test, y_test),
    workers=4,
    steps_per_epoch=len(x_train)//Batch_Size,
    callbacks=[cbk_early_stopping]
)

model.save('incepv3_transfer_cifar10.h5', overwrite=True, include_optimizer=True)

# Score trained model.
scores = model.evaluate(x_test, y_test, batch_size=12, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])

The error you are seeing comes from a mismatch in input size. The pretrained ImageNet model expects larger images than CIFAR-10's (32, 32).

You need to specify the model's input shape like this:

Inceptionv3_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(32, 32, 3))

For more explanation, check this.

Thanks! After putting in that parameter I got this error: ValueError: Input size must be at least 75x75; got input_shape=(32, 32, 3)

I suggest resizing the dataset to a larger shape (75, 96, 150) to get better results, or removing some layers from the network. I recommend the first option, since it is the one I have consistently found works best.

What is the third dimension? In my case there are 32 pixels of width, 32 of height, and 3 color channels; how do 3 color channels become 150 channels?? Also, what is the first option?

I mean trying to resize the images from (32, 32) to (75, 75), (96, 96), or (150, 150).
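To make the resize suggestion concrete, here is a minimal sketch, assuming OpenCV (cv2) is available; resizing the whole arrays up front with cv2 is just one possible way to do it:

# Sketch: upsample the CIFAR10 images so they meet InceptionV3's 75x75 minimum input size.
# Assumes OpenCV (cv2) is installed; 96 or 150 also work, at higher memory and compute cost.
import cv2
import numpy as np
from keras.datasets import cifar10
from keras.applications.inception_v3 import InceptionV3

TARGET_SIZE = 75

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

def resize_images(images, size):
    # cv2.resize expects a (width, height) tuple; keep uint8 here to limit memory use
    return np.stack([cv2.resize(img, (size, size)) for img in images])

x_train = resize_images(x_train, TARGET_SIZE)
x_test = resize_images(x_test, TARGET_SIZE)
print(x_train.shape)  # (50000, 75, 75, 3)

# The pretrained model now accepts the input shape without the 75x75 error
base_model = InceptionV3(weights='imagenet', include_top=False,
                         input_shape=(TARGET_SIZE, TARGET_SIZE, 3))

After the resize, the rest of the original script (the /255 normalization, the ImageDataGenerator, and fit_generator) should work unchanged on the larger arrays.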