How to load a model and call a predict function with tf.saved_model [TensorFlow 2.0 API]


I am very new to TensorFlow, especially 2.0, since there are not many examples for that API, but it seems more convenient than 1.x. So far I trained a linear model using the tf.estimator API, then saved it using tf.estimator.exporter.

After that I wanted to load this model with the tf.saved_model API, and I think I managed to do so, but I have some doubts about my process, so here is a quick look at my code:

First, I created an array of feature columns using the tf.feature_column API, like so:

feature_columns = 
[NumericColumn(key='geoaccuracy', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None),
 NumericColumn(key='longitude', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None),
 NumericColumn(key='latitude', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None),
 NumericColumn(key='bidfloor', shape=(1,), default_value=None, dtype=tf.float32, normalizer_fn=None),
 VocabularyListCategoricalColumn(key='adid', vocabulary_list=('115', '124', '139', '122', '121', '146', '113', '103', '123', '104', '147', '114', '149', '148'), dtype=tf.string, default_value=-1, num_oov_buckets=0),
 VocabularyListCategoricalColumn(key='campaignid', vocabulary_list=('36', '31', '33', '28'), dtype=tf.string, default_value=-1, num_oov_buckets=0),
 VocabularyListCategoricalColumn(key='exchangeid', vocabulary_list=('1241', '823', '1240', '1238'), dtype=tf.string, default_value=-1, num_oov_buckets=0),
...]
After that I defined an estimator with my feature-column array this way and trained it. Up to here, no problem.

linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns)
After training my model I wanted to save it, and this is where the doubts begin. Here is how I proceeded, but I am not sure it is correct:

serving_input_parse = tf.feature_column.make_parse_example_spec(feature_columns=feature_columns)

""" view of the variable : serving_input_parse = 
 {'adid': VarLenFeature(dtype=tf.string),
 'at': VarLenFeature(dtype=tf.string),
 'basegenres': VarLenFeature(dtype=tf.string),
 'bestkw': VarLenFeature(dtype=tf.string),
 'besttopic': VarLenFeature(dtype=tf.string),
 'bidfloor': FixedLenFeature(shape=(1,), dtype=tf.float32, default_value=None),
 'browserid': VarLenFeature(dtype=tf.string),
 'browserlanguage': VarLenFeature(dtype=tf.string)
 ...} """

# exporting the model :
linear_est.export_saved_model(export_dir_base='./saved',
 serving_input_receiver_fn=tf.estimator.export.build_parsing_serving_input_receiver_fn(serving_input_parse),
 as_text=True)
Now I am trying to load it, but I have no idea how to call a prediction with the loaded model, for example using raw data from a pandas DataFrame.

loaded = tf.saved_model.load('saved/1573144361/')
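Since the model was exported with a parsing serving input receiver, its signatures take a single batch of serialized tf.train.Example strings rather than one tensor per column. So one way to predict from raw pandas rows is to serialize each row into a tf.train.Example first. Here is a minimal sketch; the helper name and the string-vs-float mapping are my own assumptions, not from the question:

```python
import tensorflow as tf

# Hypothetical helper: turn one row of raw features (a dict, e.g. from
# df.iloc[0].to_dict()) into a serialized tf.train.Example, which is the
# input the 'predict' signature of a parsing-receiver export expects.
def row_to_example(row):
    feature = {}
    for name, value in row.items():
        if isinstance(value, str):
            feature[name] = tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
        else:
            feature[name] = tf.train.Feature(
                float_list=tf.train.FloatList(value=[float(value)]))
    return tf.train.Example(
        features=tf.train.Features(feature=feature)).SerializeToString()

# Usage against the loaded model (paths and feature names are illustrative):
# loaded = tf.saved_model.load('saved/1573144361/')
# predict_fn = loaded.signatures['predict']
# serialized = row_to_example({'longitude': 2.35, 'latitude': 48.85, 'adid': '115'})
# outputs = predict_fn(examples=tf.constant([serialized]))
# print(outputs['probabilities'].numpy())
```

The signature's only argument is named `examples` (matching the `inputs['examples']` entry in the signature dump), and it takes a rank-1 string tensor, one serialized Example per row.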
One more thing: I tried to look at the model's signatures, but I cannot really make sense of the input shapes:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['classification']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: input_example_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['classes'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 2)
        name: head/Tile:0
    outputs['scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: head/predictions/probabilities:0
  Method name is: tensorflow/serving/classify

signature_def['predict']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['examples'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: input_example_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['all_class_ids'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 2)
        name: head/predictions/Tile:0
    outputs['all_classes'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 2)
        name: head/predictions/Tile_1:0
    outputs['class_ids'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 1)
        name: head/predictions/ExpandDims:0
    outputs['classes'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: head/predictions/str_classes:0
    outputs['logistic'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: head/predictions/logistic:0
    outputs['logits'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: linear/linear_model/linear/linear_model/linear/linear_model/weighted_sum:0
    outputs['probabilities'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: head/predictions/probabilities:0
  Method name is: tensorflow/serving/predict

signature_def['regression']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: input_example_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['outputs'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: head/predictions/logistic:0
  Method name is: tensorflow/serving/regress

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: input_example_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['classes'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 2)
        name: head/Tile:0
    outputs['scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: head/predictions/probabilities:0
  Method name is: tensorflow/serving/classify

It looks like you used the saved_model_cli command-line tool for the last part of your output. So you have a 'predict' function that shows the input types, columns, etc. When I do this, I see all of my input columns. In your case it shows only one input, a string named examples. That does not look right.

Below is an excerpt of the output of

$ saved_model_cli show --dir /somedir/export/exporter/123456789 --all

In the output, the dots mark lines I removed because they look similar.

signature_def['predict']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['feature_num_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: Placeholder_29:0
...
...
 The given SavedModel SignatureDef contains the following output(s):
    outputs['all_class_ids'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 2)
        name: dnn/head/predictions/Tile:0
    outputs['all_classes'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 2)
        name: dnn/head/predictions/Tile_1:0
...
...
  Method name is: tensorflow/serving/predict
The docs for saved_model.load(...) demonstrate the basic mechanics like this:

    imported = tf.saved_model.load(path)
    f = imported.signatures["serving_default"]
    print(f(x=tf.constant([[1.]])))

I am still new to this myself, but serving_default seems to be the default signature when using saved_model.save(...).

(My understanding is that saved_model.save(...) does not save a model, but a graph. To interpret the graph, you need to explicitly store 'signatures' defining operations on the graph. If you don't do this explicitly, 'serving_default' will be your only signature.)
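To make that concrete, here is a minimal sketch of explicitly registering a signature at save time and calling it after loading. The Doubler module is my own toy example, not the asker's estimator:

```python
import tempfile
import tensorflow as tf

class Doubler(tf.Module):
    # Fixing an input_signature makes __call__ traceable into a
    # concrete, exportable function.
    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 1], dtype=tf.float32)])
    def __call__(self, x):
        return x * 2.0

module = Doubler()
export_dir = tempfile.mkdtemp()

# Explicitly register the signature under a name of our choosing; without
# the `signatures` argument, a plain tf.Module is saved with no callable
# signatures at all.
tf.saved_model.save(module, export_dir,
                    signatures={'serving_default': module.__call__})

reloaded = tf.saved_model.load(export_dir)
f = reloaded.signatures['serving_default']
out = f(x=tf.constant([[1.0]]))  # a dict of output tensors, values [[2.0]]
```

Note that a signature always returns a dict of named output tensors, which is why the answer below iterates over the result instead of using it directly.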

I have provided an implementation below. A few details worth noting:

  • The input needs to be a tensor, so I had to do the conversion manually.
  • The output is a dictionary. The documentation describes the loaded object as "a trackable object with a signatures attribute mapping from signature keys to functions."
  • In my case the dictionary key was a fairly arbitrary 'dense_83'. That seemed a bit... specific. So I generalized the solution to ignore the key by using an iterator:

    import tensorflow as tf

    input_data = tf.constant(input_data, dtype=tf.float32)
    prediction_tensors = signature_collection.signatures["serving_default"](input_data)
    for _, values in prediction_tensors.items():
        predictions = values.numpy()[0]
        return predictions
    raise Exception("Expected a response from predict(...)")
    
Thanks for your answer, this is very interesting. Could you share the code that generates this saved model? Did your model use the tf.estimator API? Because I tried with tf.keras and it indeed produced nice signatures, but I am struggling with tf.estimator...

I used an estimator. I was thinking maybe you are using a single string as input. Maybe I am wrong; I am learning too, so keep that in mind. I don't understand why you have six or more features but your predict function takes a single feature that is a string. Does your serving_input function take one argument or several?

My serving input takes several arguments, more than six in fact. Actually, I was rather expecting the kind of solution you provided.

Thank you sir, this helped me a lot, @ThinkTeam. This community has helped me a lot; I am happy to give a little back.