
Python Keras image classification: validation accuracy higher than training accuracy


I am running an image classification model on images, and my problem is that my validation accuracy is higher than my training accuracy. The data (training/validation) was split randomly. I am using InceptionV3 as the pre-trained model. The ratio between training accuracy and validation accuracy stays constant over 100 epochs.
I have tried a lower learning rate and an additional batch normalization layer.

Does anyone have ideas on what to look into? Thanks for your help.

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, Dropout, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator

base_model = InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# add a fully-connected layer
x = Dense(468, activation='relu')(x)
x = Dropout(0.5)(x)

# and a logistic layer
predictions = Dense(468, activation='softmax')(x)

# this is the model we will train
model = Model(base_model.input, predictions)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
adam = Adam(lr=0.0001, beta_1=0.9)
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])

# train the model on the new data for a few epochs
batch_size = 64
epochs = 100
img_height = 224
img_width = 224
train_samples = 127647
val_samples = 27865

train_datagen = ImageDataGenerator(
    rescale=1./255,
    #shear_range=0.2,
    zoom_range=0.2,
    zca_whitening=True,
    #rotation_range=0.5,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'AD/AutoDetect/',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    'AD/validation/',
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='categorical')

# fine-tune the model
model.fit_generator(
    train_generator,
    samples_per_epoch=train_samples // batch_size,
    nb_epoch=epochs,
    validation_data=validation_generator,
    nb_val_samples=val_samples // batch_size)
Found 127647 images belonging to 468 classes.
Found 27865 images belonging to 468 classes.
Epoch 1/100
2048/1994 [==============================] - 48s - loss: 6.2839 - acc: 0.0073 - val_loss: 5.8506 - val_acc: 0.0179
Epoch 2/100
2048/1994 [==============================] - 44s - loss: 5.8338 - acc: 0.0430 - val_loss: 5.4865 - val_acc: 0.1004
Epoch 3/100
2048/1994 [==============================] - 45s - loss: 5.5147 - acc: 0.0786 - val_loss: 5.1474 - val_acc: 0.1161
Epoch 4/100
2048/1994 [==============================] - 44s - loss: 5.1921 - acc: 0.1074 - val_loss: 4.8049 - val_acc: 0.1786


This is because you added a Dropout layer to your model: dropout is active during training (it keeps the reported training accuracy from climbing toward 1.0) but is disabled during validation, so validation accuracy can legitimately come out higher.
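A quick way to check this (a minimal sketch, reusing the model and train_generator defined in the question) is to re-measure accuracy on training data in inference mode, where dropout is turned off; if the gap to validation accuracy shrinks, dropout explains it:

# Accuracy logged by fit_generator() is computed with dropout active;
# evaluate() runs in inference mode, where dropout is disabled.
x_batch, y_batch = next(train_generator)                 # one batch of training data
loss, acc = model.evaluate(x_batch, y_batch, verbose=0)  # returns [loss, accuracy]
print('training-batch accuracy without dropout: %.4f' % acc)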

Could you elaborate on why you scale, flip, and whiten the data? With over 100k images you seem to have enough data to at least try it without augmentation. Beyond that, you could make the fully connected layer more complex; I would try 1024 neurons or more, and I would drop the Dropout/batch norm. Just for completeness: the proper image size for Inception is 299x299 px; 224 is for VGG. See here:
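For completeness, a minimal sketch of those suggestions: 299x299 inputs (InceptionV3's native size), a wider 1024-unit top layer, no whitening and no dropout. The 1024-unit size and the plain-rescale generator are illustrative assumptions, not tested on this dataset; the rest of the training loop stays as in the question.

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator

img_height, img_width = 299, 299                        # InceptionV3's native input size

base_model = InceptionV3(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)                   # wider fully connected layer, no Dropout
predictions = Dense(468, activation='softmax')(x)
model = Model(base_model.input, predictions)

for layer in base_model.layers:
    layer.trainable = False                             # train only the new top layers first

model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Plain rescaling only: no zoom, flip or ZCA whitening, to get an unaugmented baseline.
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'AD/AutoDetect/',
    target_size=(img_height, img_width),
    batch_size=64,
    class_mode='categorical')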