Python: transfer learning with the tf.estimator.Estimator framework


I'm trying to do transfer learning on an Inception-ResNet-v2 model pre-trained on ImageNet, using my own dataset and classes. My original code base was a modification of a tf.slim sample that I can no longer find, and now I'm trying to rewrite the same code with the tf.estimator.* framework.

The problem I'm running into, however, is how to load only some of the weights from the pre-trained checkpoint, while initializing the remaining layers with their default initializers.

Researching the problem, I found a related question and an issue, both mentioning the need to use tf.train.init_from_checkpoint in my model_fn. I tried it, but given the lack of examples in either, I think I got something wrong.
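
For reference, this is my understanding of the assignment-map forms that tf.train.init_from_checkpoint accepts in TF 1.x, condensed into a sketch; the 'ckpt_scope', 'graph_scope' and some_variable names below are placeholders for illustration, not real names from my graph:

# The assignment map's keys refer to names in the checkpoint,
# its values to names or tf.Variable objects in the current graph.
assignment_map = {
    'ckpt_scope/': 'graph_scope/',               # copy a whole scope, name by name
    # 'ckpt_scope/some_var': 'graph_scope/var',  # a single tensor, addressed by name
    # 'ckpt_scope/some_var': some_variable,      # a single tensor into an existing tf.Variable
}
tf.train.init_from_checkpoint('inception_resnet_v2_2016_08_30.ckpt', assignment_map)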

This is my minimal example:

import sys
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
import tensorflow as tf
import numpy as np

import inception_resnet_v2

NUM_CLASSES = 900
IMAGE_SIZE = 299

def input_fn(mode, num_classes, batch_size=1):
  # some code that loads images, reshapes them to 299x299x3 and batches them
  return tf.constant(np.zeros([batch_size, 299, 299, 3], np.float32)), tf.one_hot(tf.constant(np.zeros([batch_size], np.int32)), NUM_CLASSES)


def model_fn(images, labels, num_classes, mode):
  with tf.contrib.slim.arg_scope(inception_resnet_v2.inception_resnet_v2_arg_scope()):
    logits, end_points = inception_resnet_v2.inception_resnet_v2(images,
                                             num_classes, 
                                             is_training=(mode==tf.estimator.ModeKeys.TRAIN))
  predictions = {
      'classes': tf.argmax(input=logits, axis=1),
      'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
  }

  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

  exclude = ['InceptionResnetV2/Logits', 'InceptionResnetV2/AuxLogits']
  variables_to_restore = tf.contrib.slim.get_variables_to_restore(exclude=exclude)
  scopes = { os.path.dirname(v.name) for v in variables_to_restore }
  tf.train.init_from_checkpoint('inception_resnet_v2_2016_08_30.ckpt',
                                {s+'/':s+'/' for s in scopes})
  
  tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)
  total_loss = tf.losses.get_total_loss()    #obtain the regularization losses as well
  
  # Configure the training op
  if mode == tf.estimator.ModeKeys.TRAIN:
    global_step = tf.train.get_or_create_global_step()
    optimizer = tf.train.AdamOptimizer(learning_rate=0.00002)
    train_op = optimizer.minimize(total_loss, global_step)
  else:
    train_op = None
  
  return tf.estimator.EstimatorSpec(
    mode=mode,
    predictions=predictions,
    loss=total_loss,
    train_op=train_op)

def main(unused_argv):
  # Create the Estimator
  classifier = tf.estimator.Estimator(
      model_fn=lambda features, labels, mode: model_fn(features, labels, NUM_CLASSES, mode),
      model_dir='model/MCVE')

  # Train the model  
  classifier.train(
      input_fn=lambda: input_fn(tf.estimator.ModeKeys.TRAIN, NUM_CLASSES, batch_size=1),
      steps=1000)
    
  # Evaluate the model and print results
  eval_results = classifier.evaluate(
      input_fn=lambda: input_fn(tf.estimator.ModeKeys.EVAL, NUM_CLASSES, batch_size=1))
  print()
  print('Evaluation results:\n    %s' % eval_results)
 
if __name__ == '__main__':
  tf.app.run(main=main, argv=[sys.argv[0]])
where inception_resnet_v2 is the TF-slim model definition from the TensorFlow models repository.

If I run this script, I get a bunch of INFO logs from the checkpoint, but at session creation time it apparently tries to load the Logits weights from the checkpoint as well, and fails because of incompatible shapes. This is the full traceback:

Traceback (most recent call last):

  File "<ipython-input-6-06fadd69ae8f>", line 1, in <module>
    runfile('C:/Users/1/Desktop/transfer_learning_tutorial-master/MCVE.py', wdir='C:/Users/1/Desktop/transfer_learning_tutorial-master')

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
    execfile(filename, namespace)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "C:/Users/1/Desktop/transfer_learning_tutorial-master/MCVE.py", line 77, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]])

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))

  File "C:/Users/1/Desktop/transfer_learning_tutorial-master/MCVE.py", line 68, in main
    steps=1000)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 302, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 780, in _train_model
    log_step_count_steps=self._config.log_step_count_steps) as mon_sess:

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 368, in MonitoredTrainingSession
    stop_grace_period_secs=stop_grace_period_secs)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 673, in __init__
    stop_grace_period_secs=stop_grace_period_secs)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 493, in __init__
    self._sess = _RecoverableSession(self._coordinated_creator)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 851, in __init__
    _WrappedSession.__init__(self, self._create_session())

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 856, in _create_session
    return self._sess_creator.create_session()

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 554, in create_session
    self.tf_sess = self._session_creator.create_session()

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\monitored_session.py", line 428, in create_session
    init_fn=self._scaffold.init_fn)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\training\session_manager.py", line 279, in prepare_session
    sess.run(init_op, feed_dict=init_feed_dict)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
    run_metadata_ptr)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
    options, run_metadata)

  File "C:\Users\1\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)

InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [900] rhs shape= [1001]    [[Node: Assign_1145 = Assign[T=DT_FLOAT,
_class=["loc:@InceptionResnetV2/Logits/Logits/biases"], use_locking=true, validate_shape=true,
_device="/job:localhost/replica:0/task:0/device:CPU:0"](InceptionResnetV2/Logits/Logits/biases, checkpoint_initializer_1145)]]
If instead I try to use a {v.name: v} mapping, i.e. name: variable, I get the following error:

ValueError: Tensor InceptionResnetV2/Conv2d_2a_3x3/weights:0 is not found in
inception_resnet_v2_2016_08_30.ckpt checkpoint
{'InceptionResnetV2/Repeat_2/block8_4/Branch_1/Conv2d_0c_3x1/BatchNorm/moving_mean': [256], 
'InceptionResnetV2/Repeat/block35_9/Branch_0/Conv2d_1x1/BatchNorm/beta': [32], ...
The error goes on, listing what I think are all the variable names in the checkpoint (or could they be the scopes?).
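
Incidentally, a quick way to see exactly which names (and shapes) the checkpoint contains is something like the following minimal sketch; tf.train.list_variables should be available in recent TF 1.x releases:

import tensorflow as tf

# Print the (name, shape) pairs stored in the checkpoint; note the names carry no ':0' suffix.
for name, shape in tf.train.list_variables('inception_resnet_v2_2016_08_30.ckpt'):
    print(name, shape)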

Update (2): after inspecting the latest error above, I see that InceptionResnetV2/Conv2d_2a_3x3/weights is in the list of checkpointed variables. The problem is the :0 at the end of my variable names. I'll now verify whether this indeed solves the problem and, if so, post an answer.

Thanks to @KathyWu's comment, I got on the right track and found the problem.

Indeed, the way I was computing the scopes would include the InceptionResnetV2/ scope, which triggers the loading of all variables "under" that scope (i.e., every variable in the network). Replacing it with the correct dictionary, however, turned out not to be trivial.

Of the possible scope modes init_from_checkpoint accepts, the one I had to use is the 'scope_variable_name': variable one, but without using the actual variable.name attribute.

The variable.name looks like 'some_scope/variable_name:0'. The names of the checkpointed variables do not have that :0 suffix, so using scopes = {v.name: v.name for v in variables_to_restore} will raise a "Variable not found" error.
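
To make the mismatch concrete, a tiny illustration (the scope and variable names here are made up):

import tensorflow as tf

with tf.variable_scope('some_scope'):
    v = tf.get_variable('variable_name', shape=[1])

print(v.name)                # 'some_scope/variable_name:0'  <- graph tensor name
print(v.name.split(':')[0])  # 'some_scope/variable_name'    <- what the checkpoint key looks like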

The trick to make it work is stripping the tensor index from the name:

tf.train.init_from_checkpoint('inception_resnet_v2_2016_08_30.ckpt', 
                              {v.name.split(':')[0]: v for v in variables_to_restore})
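
Put in context, the relevant portion of the model_fn ends up looking roughly like this (a sketch of the working version, not a drop-in replacement for the file above):

# Inside model_fn, after building the network:
exclude = ['InceptionResnetV2/Logits', 'InceptionResnetV2/AuxLogits']
variables_to_restore = tf.contrib.slim.get_variables_to_restore(exclude=exclude)
# Map checkpoint names (no ':0') to the corresponding graph variables.
assignment_map = {v.name.split(':')[0]: v for v in variables_to_restore}
tf.train.init_from_checkpoint('inception_resnet_v2_2016_08_30.ckpt', assignment_map)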

I found that {s+'/': s+'/' for s in scopes} didn't work simply because variables_to_restore contains something like 'global_step', so the scopes end up including the global scope, which can contain everything. You need to print variables_to_restore, find the 'global_step' entry, and put it in 'exclude'.
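
In other words, something along these lines (a sketch; the exact name of the global-step variable may vary):

exclude = ['InceptionResnetV2/Logits', 'InceptionResnetV2/AuxLogits', 'global_step']
variables_to_restore = tf.contrib.slim.get_variables_to_restore(exclude=exclude)
# Print what will actually be restored and check that no global step (or optimizer slot) is left.
for v in variables_to_restore:
    print(v.name)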

Comments:

Are there any checkpoints in the Estimator's directory model/MCVE? — No, the directory is empty.

The line mapping scopes to variables is adding InceptionResnetV2 to the list of scopes, so all variables under InceptionResnetV2/ get loaded. Instead of building a list of scopes, you could try listing the variables directly: tf.train.init_from_checkpoint('inception_resnet_v2_2016_08_30.ckpt', {v.name: v.name for v in variables})

That's possible, yes. However, if I try the code you suggest I get the following error: ValueError: Assignment map with scope only name InceptionResnetV2/Conv2d_2a_3x3 should map to scope only InceptionResnetV2/Conv2d_2a_3x3/weights:0. Should be 'scope/': 'other_scope/'.

The variable names have to be used differently if you take them directly from slim's tf.contrib.slim.get_variables_to_restore. It's similar, but it's just a bookkeeping issue (an annoying one).
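
For what it's worth, the bookkeeping difference in a nutshell: slim's helper takes the variable objects directly, while init_from_checkpoint wants checkpoint names as keys (a sketch, assuming the TF 1.x contrib APIs):

# slim style: pass the variable list; the ':0' handling is done for you.
init_fn = tf.contrib.slim.assign_from_checkpoint_fn(
    'inception_resnet_v2_2016_08_30.ckpt', variables_to_restore)

# estimator style: build the {checkpoint_name: variable} map yourself, stripping the ':0'.
tf.train.init_from_checkpoint('inception_resnet_v2_2016_08_30.ckpt',
                              {v.name.split(':')[0]: v for v in variables_to_restore})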