
Python: Create an Estimator from a checkpoint and save it as a SavedModel without further training

Tags: python, tensorflow, tensorflow-estimator

I have created an Estimator from a TF-Slim ResNet V2 checkpoint and tested it for prediction. What I did is basically the same as for an ordinary Estimator, plus assigning the weights from the checkpoint:

def model_fn(features, labels, mode, params):
  ...
  # Build the restore function for the pretrained TF-Slim ResNet V2 variables
  slim.assign_from_checkpoint_fn(
      os.path.join(checkpoint_dir, 'resnet_v2_50.ckpt'),
      slim.get_model_variables('resnet_v2'))
  ...
  if mode == tf.estimator.ModeKeys.PREDICT:
    predictions = {
      'class_ids': predicted_classes[:, tf.newaxis],
      'probabilities': tf.nn.softmax(logits),
      'logits': logits,
    }
    return tf.estimator.EstimatorSpec(mode, predictions=predictions)
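Note that slim.assign_from_checkpoint_fn only returns a restore callback; it does not restore anything by itself. A minimal sketch of one way to let the Estimator run that callback, wiring it into the EstimatorSpec through a tf.train.Scaffold (this wiring is an illustration and an assumption, not part of the snippet above):

  # Sketch: have the Estimator run the slim restore callback when the session is created
  init_fn = slim.assign_from_checkpoint_fn(
      os.path.join(checkpoint_dir, 'resnet_v2_50.ckpt'),
      slim.get_model_variables('resnet_v2'))
  scaffold = tf.train.Scaffold(init_fn=lambda _, sess: init_fn(sess))
  return tf.estimator.EstimatorSpec(mode, predictions=predictions, scaffold=scaffold)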
To export the Estimator as a SavedModel, I wrote a serving input function like this:

def image_preprocess(image_buffer):
    image = tf.image.decode_jpeg(image_buffer, channels=3)
    image_preprocessing_fn = preprocessing_factory.get_preprocessing('inception', is_training=False)
    image = image_preprocessing_fn(image, FLAGS.image_size, FLAGS.image_size)
    return image

def serving_input_fn():
    input_ph = tf.placeholder(tf.string, shape=[None], name='image_binary')
    image_tensors = image_preprocess(input_ph)
    return tf.estimator.export.ServingInputReceiver(image_tensors, input_ph)
In the main function, I use export_saved_model to try to export the Estimator in the SavedModel format:

def main():
    ...
    classifier = tf.estimator.Estimator(model_fn=model_fn)
    classifier.export_saved_model(dir_path, serving_input_fn)

However, when I try to run the code, it fails with "Couldn't find trained model at /tmp/tmpn3spty2z". As far as I understand, export_saved_model looks for a trained model in the Estimator's model directory to export as a SavedModel. So my question is: is there any way to restore the pretrained checkpoint into the Estimator and export it as a SavedModel without any further training?
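For context, the temporary path in the error message comes from the Estimator itself. A quick sketch (assuming TF 1.14) showing that, when no model_dir is supplied, the Estimator falls back to a fresh temporary directory that contains no checkpoint for export_saved_model to load:

classifier = tf.estimator.Estimator(model_fn=model_fn)   # no model_dir given
print(classifier.model_dir)   # a fresh temp directory such as /tmp/tmpXXXXXXXX, with no checkpoint inside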

I have solved my problem. To export a TF-Slim ResNet checkpoint to a SavedModel with TF 1.14, you can use warm start together with export_savedmodel, as follows:

def image_preprocess(image_buffer):
    # Decode the raw JPEG bytes and apply the Inception preprocessing used by TF-Slim
    image = tf.image.decode_jpeg(image_buffer, channels=3)
    image_preprocessing_fn = preprocessing_factory.get_preprocessing('inception', is_training=False)
    image = image_preprocessing_fn(image, FLAGS.image_size, FLAGS.image_size)
    return image

def serving_input_fn():
    # The SavedModel accepts serialized JPEG strings as input
    input_ph = tf.placeholder(tf.string, shape=[None], name='image_binary')
    image_tensors = image_preprocess(input_ph)
    return tf.estimator.export.ServingInputReceiver(image_tensors, input_ph)

# No training happens, so disable summary and checkpoint saving
config = tf.estimator.RunConfig(save_summary_steps=None, save_checkpoints_secs=None)
# Warm-start the Estimator from the pretrained TF-Slim checkpoint
warm_start = tf.estimator.WarmStartSettings(checkpoint_dir, checkpoint_name)
classifier = tf.estimator.Estimator(model_fn=model_fn, warm_start_from=warm_start, config=config)
classifier.export_savedmodel(export_dir_base=FLAGS.output_dir, serving_input_receiver_fn=serving_input_fn)
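To sanity-check the exported model, one way (just a sketch, not part of the solution above; saved_model_dir and test.jpg are placeholder names, and the exact input key depends on how the receiver tensor is registered in the serving signature, so inspect it with saved_model_cli first) is to load it back with tf.contrib.predictor:

from tensorflow.contrib import predictor

# saved_model_dir is the timestamped directory created under FLAGS.output_dir
predict_fn = predictor.from_saved_model(saved_model_dir)
with open('test.jpg', 'rb') as f:
    image_bytes = f.read()
# Inspect the exact input/output names with:
#   saved_model_cli show --dir <saved_model_dir> --all
outputs = predict_fn({'input': [image_bytes]})
print(outputs['class_ids'], outputs['probabilities'])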