Freezing layers when using multi_gpu_model in Keras


I am trying to fine-tune a modified InceptionV3 model in Keras.

I am following the "Fine-tune InceptionV3 on a new set of classes" example from the Keras documentation.

So I first trained the top dense layers that I added on top of the InceptionV3 base model with the following code:

model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

parallel_model = multi_gpu_model(model, gpus=2)

parallel_model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

history = parallel_model.fit_generator(generate_batches(path), steps_per_epoch = num_images/batch_size, epochs = num_epochs)
After that, I tried to fine-tune the top 2 inception blocks of InceptionV3. According to the example, what I should do is:

for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')

model.fit_generator(...)
But since I am using multi_gpu_model, I don't know how to freeze the first 249 layers.

I mean, if I freeze the layers in the non-GPU model (as in the example) and call parallel_model = multi_gpu_model(model, gpus=2) again to freeze the layers in parallel_model, then the weights of the top dense layers that I just trained and that live in parallel_model will be overwritten, right?

On the other hand, I tried freezing the layers directly on parallel_model with

for layer in parallel_model.layers[:249]:
    layer.trainable = False

but when I check the layers in parallel_model, it shows:

for i, layer in enumerate(parallel_model.layers):
   print(i, layer.name)

(0, 'input_1')
(1, 'lambda_1')
(2, 'lambda_2')
(3, 'model_1')
(4, 'dense_3')
So what are the 'lambda_1', 'lambda_2' and 'model_1' layers? And why does parallel_model show only 5 layers?


More importantly, how can I freeze the layers in parallel_model?

This example is a little involved, because you are nesting a base model:

base_model = InceptionV3(weights='imagenet', include_top=False)
inside a model that adds your own dense layers:

model = Model(inputs=base_model.input, outputs=predictions)
and then calling multi_gpu_model, which nests your model yet again: it splits the input with one Lambda layer per GPU and then concatenates the outputs back together so that the model is distributed across multiple GPUs:

parallel_model = multi_gpu_model(model, gpus=2)
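
If you want to see this nesting for yourself, you can drill down into the wrapper: the 'model_1' layer is your original model, and every one of its layers (and their weights) is still reachable there. A minimal sketch, assuming the inner model kept Keras's auto-generated name 'model_1' (check parallel_model.summary() if yours differs):

# The wrapper only holds the input, one Lambda slice per GPU, the nested model, and the merged output.
inner_model = parallel_model.get_layer('model_1')  # auto-generated name; may differ in your run
for i, layer in enumerate(inner_model.layers):
    print(i, layer.name, layer.trainable)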
In this situation, keep two things in mind: change trainability on the layers of base_model, and load the non-parallel (template) model onto the CPU for best performance.

Here is a complete fine-tuning example; just update train_data_dir to point to your own data location.

import tensorflow as tf
from keras import Model
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.layers import Dense, GlobalAveragePooling2D
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import multi_gpu_model

train_data_dir = '/home/ubuntu/work/data/train'
batch_size_per_gpu = 32
nb_classes = 3
my_gpus = 2
target_size = (224, 224)
num_epochs_to_fit_dense_layer = 2
num_epochs_to_fit_last_two_blocks = 3

batch_size = batch_size_per_gpu * my_gpus
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_iterator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=target_size,
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True)

# Check to make sure our model will match our data
assert nb_classes == train_iterator.num_classes

# Create base and template models on cpu
with tf.device('/cpu:0'):
    base_model = InceptionV3(weights='imagenet', include_top=False)
    for layer in base_model.layers:
        layer.trainable = False

    # Add prediction layer to base pre-trained model
    x = base_model.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(1024, activation='relu')(x)
    predictions = Dense(nb_classes, activation='softmax')(x)

    template_model = Model(inputs=base_model.input, outputs=predictions)

    # If you need to load weights from previous training, do so here:
    # template_model.load_weights('template_model.h5', by_name=True)

# Create parallel model on GPUs
parallel_model = multi_gpu_model(template_model, gpus=2)
parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')

# Train parallel model.
history = parallel_model.fit_generator(
    train_iterator,
    steps_per_epoch=train_iterator.n // batch_size,
    epochs=num_epochs_to_fit_dense_layer)

# Unfreeze some layers in our model
for layer in base_model.layers[:249]:
    layer.trainable = False
for layer in base_model.layers[249:]:
    layer.trainable = True

# Train parallel_model with more trainable layers
parallel_model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')
history2 = parallel_model.fit_generator(
    train_iterator,
    steps_per_epoch=train_iterator.n // batch_size,
    epochs=num_epochs_to_fit_last_two_blocks)

# Save model via the template model which shares the same weights as the parallel model.
template_model.save('template_model.h5')
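
Because template_model and parallel_model share the same weight tensors (which is why saving through template_model works), the saved file can later be reloaded on a single device for inference without the multi_gpu_model wrapper. A minimal sketch, assuming the classes above and a hypothetical image path 'test_image.jpg':

import numpy as np
from keras.models import load_model
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input

# Reload the single-device model saved above; no multi_gpu_model wrapper is needed for inference.
restored_model = load_model('template_model.h5')

# 'test_image.jpg' is a placeholder path; (224, 224) matches target_size in the training script.
img = image.load_img('test_image.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
print(restored_model.predict(x))  # softmax probabilities over the nb_classes categories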
