TensorFlow: how to resume training from an Inception-v3 checkpoint with a different set of trainable variables


I have a very common use case: freeze the bottom layers of Inception and train only the top two layers, then lower the learning rate and fine-tune the whole Inception model.

Here is the code I run for the first part:

train_dir='/home/ubuntu/pynb/TF play/log-inceptionv3flowers'
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)

    dataset = get_dataset()
    images, _, labels = load_batch(dataset, batch_size=32)

    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=5, is_training=True)

    # Specify the loss function:
    one_hot_labels = slim.one_hot_encoding(labels, 5)
    tf.losses.softmax_cross_entropy(one_hot_labels, logits)
    total_loss = tf.losses.get_total_loss()

    # Create some summaries to visualize the training process:
    tf.summary.scalar('losses/Total Loss', total_loss)

    # Specify the optimizer and create the train op:
    optimizer = tf.train.RMSPropOptimizer(0.001, 0.9,
                                    momentum=0.9, epsilon=1.0)
    train_op = slim.learning.create_train_op(total_loss, optimizer, variables_to_train=get_variables_to_train())

    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=4500,
        save_summaries_secs=30,
        save_interval_secs=30,
        session_config=tf.ConfigProto(gpu_options=gpu_options))

print('Finished training. Last batch loss %f' % final_loss)
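
(The snippets call helpers such as get_dataset(), get_init_fn() and get_variables_to_train() that are not shown. As a rough idea of what get_variables_to_train() presumably does in this first phase, here is a hypothetical sketch that collects only the variables of the top two layer scopes; the scope names are assumptions based on the standard slim Inception-v3 layout:)

def get_variables_to_train():
    # Hypothetical helper: collect only the trainable variables of the top two
    # Inception v3 scopes so that all lower layers stay frozen.
    scopes = ['InceptionV3/Logits', 'InceptionV3/AuxLogits']
    variables_to_train = []
    for scope in scopes:
        variables_to_train += tf.get_collection(
            tf.GraphKeys.TRAINABLE_VARIABLES, scope)
    return variables_to_train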
That code runs correctly. Here is the code that runs the second part:

train_dir='/home/ubuntu/pynb/TF play/log-inceptionv3flowers'
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)

    dataset = get_dataset()
    images, _, labels = load_batch(dataset, batch_size=32)

    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=5, is_training=True)

    # Specify the loss function:
    one_hot_labels = slim.one_hot_encoding(labels, 5)
    tf.losses.softmax_cross_entropy(one_hot_labels, logits)
    total_loss = tf.losses.get_total_loss()
    # Create some summaries to visualize the training process:
    tf.summary.scalar('losses/Total Loss', total_loss)

    # Specify the optimizer and create the train op:
    optimizer = tf.train.RMSPropOptimizer(0.0001, 0.9,
                                    momentum=0.9, epsilon=1.0)
    train_op = slim.learning.create_train_op(total_loss, optimizer)

    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=10000,
        save_summaries_secs=30,
        save_interval_secs=30,
        session_config=tf.ConfigProto(gpu_options=gpu_options))

print('Finished training. Last batch loss %f' % final_loss)
Note that in the second part I pass nothing to the variables_to_train argument of create_train_op. It then shows this error:

NotFoundError (see above for traceback): Key InceptionV3/Conv2d_4a_3x3/BatchNorm/beta/RMSProp not found in checkpoint
     [[Node: save_1/RestoreV2_49 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_1/Const_0, save_1/RestoreV2_49/tensor_names, save_1/RestoreV2_49/shape_and_slices)]]
     [[Node: save_1/Assign_774/_1550 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_2911_save_1/Assign_774", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

I suspect it is looking for the RMSProp variables of the InceptionV3/Conv2d_4a_3x3 layer, which do not exist because I did not train that layer in the previous checkpoint. I am not sure how to achieve what I want, as I cannot find an example of how to do it in the documentation.
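
One way to confirm that suspicion is to list the keys actually stored in the latest checkpoint. A minimal sketch using the standard checkpoint reader (train_dir as above):

checkpoint_path = tf.train.latest_checkpoint(train_dir)
reader = tf.train.NewCheckpointReader(checkpoint_path)
# Print every RMSProp slot variable the checkpoint contains; for layers that
# were frozen in the first phase, no such keys should appear.
for name in sorted(reader.get_variable_to_shape_map()):
    if 'RMSProp' in name:
        print(name)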

TF-Slim supports reading from checkpoints whose variable names do not match, as described here:

You can specify how the names of the variables in a checkpoint map to the variables in your model.

I hope this helps.
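
A minimal sketch of what such a mapping can look like with slim.assign_from_checkpoint_fn. The checkpoint path and the excluded scopes below are placeholders; ignore_missing_vars=True additionally skips any variables (such as the missing RMSProp slots) that the checkpoint does not contain:

# Build the list of model variables to restore, leaving out the new logits layers.
variables_to_restore = slim.get_variables_to_restore(
    exclude=['InceptionV3/Logits', 'InceptionV3/AuxLogits'])

# Keys are the names as stored in the checkpoint, values are the model variables.
# Here the names happen to match, but any renaming can be expressed in this dict.
restore_map = {var.op.name: var for var in variables_to_restore}

init_fn = slim.assign_from_checkpoint_fn(
    '/path/to/inception_v3.ckpt',  # placeholder checkpoint path
    restore_map,
    ignore_missing_vars=True)

Passing the resulting init_fn as the init_fn argument of slim.learning.train restores those variables once at startup, while everything not in the mapping is initialized from scratch.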