
Python: transfer-learning model accuracy is 0 regardless of architecture


I am trying to develop a model using Keras and transfer learning. The dataset I am using can be found here:

I picked the 10 car-brand classes with the most samples and used transfer learning to train two models built on the VGG16 architecture, as shown in the code below.
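The selection of the 10 most-represented brands is not shown in the thread; a minimal sketch of that step, assuming per-brand sample counts are available as a plain dict (the brand names and numbers here are hypothetical):

```python
from collections import Counter

# Hypothetical per-brand sample counts, e.g. of the shape a helper like
# utils.read_dictionary in the code below might return.
brand_counts = {
    "bmw": 940, "audi": 910, "ford": 870, "toyota": 850, "honda": 830,
    "nissan": 800, "chevrolet": 780, "hyundai": 760, "kia": 740,
    "mazda": 720, "fiat": 300, "seat": 250,
}

# Keep the 10 classes with the most samples; Counter accepts a mapping
# and most_common(10) returns the 10 largest (brand, count) pairs.
top10 = dict(Counter(brand_counts).most_common(10))
print(sorted(top10))  # 'fiat' and 'seat' are dropped
```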

# Imports inferred from the identifiers used below.
import tensorflow as tf
from tensorflow.keras import layers, losses, metrics, optimizers
from tensorflow.keras import applications as apps
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tqdm.keras import TqdmCallback

import utils

samples_counts = utils.read_dictionary(utils.TOP10_BRANDS_COUNTS_NAME)

train_dataset = image_dataset_from_directory(
    directory=utils.TRAIN_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    validation_split=0.2,
    subset='training',
    interpolation='bilinear'
)

validation_dataset = image_dataset_from_directory(
    directory=utils.TRAIN_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    validation_split=0.2,
    subset='validation',
    interpolation='bilinear'
)

test_dataset = image_dataset_from_directory(
    directory=utils.TEST_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    interpolation='bilinear'
)

image_shape = (utils.RESIZE_HEIGHT, utils.RESIZE_WIDTH, 3)
base_model = apps.VGG16(include_top=False, weights='imagenet', input_shape=image_shape)
base_model.trainable = False

preprocess_input = apps.vgg16.preprocess_input
flatten_layer = layers.Flatten(name='flatten')
specialisation_layer = layers.Dense(1024, activation='relu', name='specialisation_layer')
avg_pooling_layer = layers.GlobalAveragePooling2D(name='pooling_layer')
dropout_layer = layers.Dropout(0.2, name='dropout_layer')
classification_layer = layers.Dense(10, activation='softmax', name='classification_layer')

inputs = tf.keras.Input(shape=(utils.RESIZE_HEIGHT, utils.RESIZE_WIDTH, 3))
x = preprocess_input(inputs)
x = base_model(x, training=False)

# First model
# x = flatten_layer(x)
# x = specialisation_layer(x)

# Second model
x = avg_pooling_layer(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)

model.summary()

steps_per_epoch = len(train_dataset)
validation_steps = len(validation_dataset)
base_learning_rate = 0.0001
optimizer = optimizers.Adam(learning_rate=base_learning_rate)
loss_function = losses.CategoricalCrossentropy()
train_metrics = [metrics.Accuracy(), metrics.AUC(), metrics.Precision(), metrics.Recall()]

model.compile(optimizer=optimizer,
              loss=loss_function,
              metrics=train_metrics)

initial_results = model.evaluate(validation_dataset,
                                 steps=validation_steps,
                                 return_dict=True)

training_history = model.fit(train_dataset, epochs=10, verbose=0,
                             validation_data=validation_dataset,
                             callbacks=[TqdmCallback(verbose=2)],
                             steps_per_epoch=steps_per_epoch,
                             validation_steps=validation_steps)

history = training_history.history
final_results = model.evaluate(test_dataset,
                               return_dict=True,
                               callbacks=[TqdmCallback(verbose=2)])

Overall, my accuracy stays at 0 and the other metrics are also poor. I tried the solutions suggested in related answers, without success.
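An editorial note not raised in the thread: one common cause of exactly-zero accuracy with this kind of setup is compiling with `metrics.Accuracy()` (as the question's code does) rather than `metrics.CategoricalAccuracy()`. A small NumPy sketch of the difference:

```python
import numpy as np

# tf.keras.metrics.Accuracy checks element-wise equality between y_true
# and y_pred. With one-hot labels and softmax probabilities, the raw
# values virtually never match exactly, so the metric reads ~0 even for
# a perfect classifier. CategoricalAccuracy compares argmaxes instead.
y_true = np.array([[0.0, 0.0, 1.0],
                   [1.0, 0.0, 0.0]])          # one-hot labels
y_pred = np.array([[0.1, 0.2, 0.7],
                   [0.8, 0.1, 0.1]])          # softmax-style outputs

# What Accuracy effectively measures:
elementwise_match = float(np.mean(y_true == y_pred))
# What CategoricalAccuracy effectively measures:
argmax_match = float(np.mean(y_true.argmax(1) == y_pred.argmax(1)))
print(elementwise_match, argmax_match)  # 0.0 1.0
```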

The summary and results of the first model are as follows:

Model: "functional_1"
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 56, 56, 3)]       0
tf_op_layer_strided_slice (T [(None, 56, 56, 3)]       0
tf_op_layer_BiasAdd (TensorF [(None, 56, 56, 3)]       0
vgg16 (Functional)           (None, 1, 1, 512)         14714688
flatten (Flatten)            (None, 512)               0
specialisation_layer (Dense) (None, 1024)              525312
classification_layer (Dense) (None, 10)                10250
=================================================================
Total params: 15,250,250
Trainable params: 535,562
Non-trainable params: 14,714,688

The summary and results of the second model are as follows:

Model: "functional_1"
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 56, 56, 3)]       0
tf_op_layer_strided_slice (T [(None, 56, 56, 3)]       0
tf_op_layer_BiasAdd (TensorF [(None, 56, 56, 3)]       0
vgg16 (Functional)           (None, 1, 1, 512)         14714688
pooling_layer (GlobalAverage (None, 512)               0
dropout_layer (Dropout)      (None, 512)               0
classification_layer (Dense) (None, 10)                5130
=================================================================
Total params: 14,719,818
Trainable params: 5,130
Non-trainable params: 14,714,688
In this part of your code:

# Second model
x = avg_pooling_layer(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)
you need to add a Flatten layer after avg_pooling_layer. Alternatively, change avg_pooling_layer to a GlobalMaxPooling2D layer, which I think is the better option. Your second model then becomes:

x = tf.keras.layers.GlobalMaxPooling2D()(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)
Also, in VGG16 you can set the parameter pooling='avg'; the output is then a 1-D tensor, so you do not need to flatten it or add global average pooling yourself. In the test and validation datasets, set shuffle=False and seed=None. Your values for steps_per_epoch and validation_steps are incorrect; they are normally set to number_of_samples // batch_size. You can also leave these values as None in model.fit and it will determine them internally, and set verbose to 1 so you can see the training results for each epoch. Leave callbacks=None; I do not even know what TqdmCallback(verbose=2) is. It is not listed in any documentation I could find.
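The steps calculation the answer describes can be sketched as follows (the sample count here is hypothetical):

```python
# Steps per epoch as the answer suggests: total samples // batch size.
num_train_samples = 8000     # hypothetical: samples in the training split
batch_size = 32              # matches the batch_size used in the question
steps_per_epoch = num_train_samples // batch_size
print(steps_per_epoch)  # 250
```

Passing steps_per_epoch=None to model.fit lets Keras run through the whole dataset each epoch, which amounts to the same thing when the dataset is batched.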

Have you tried setting base_model.trainable=True?

No, because I do not want to retrain the VGG16 network; I only want to train the layers I put on top (that is the point of transfer learning). Thanks, I tried your suggestions. Setting shuffle=False and seed=None and using max pooling did help. However, steps_per_epoch and validation_steps were correct, because the length of a dataset is computed as the number of available batches. And TqdmCallback is simply a callback that displays a progress bar, so it does not affect anything. I no longer get 0 accuracy, but the validation-set results differ markedly from the test-set results. I will post a separate question about that.