Python / TensorFlow / Bazel: error summarizing a tf.estimator model


I am trying to use tf.estimator and export_savedmodel() to build a *.pb model. It is a simple classifier for the iris dataset (4 features, 3 classes):

This produces a saved_model.pb file. I have verified that the model works, and I can also write another program that loads and runs it. Now I want to summarize and freeze the model using Bazel. After building the summarize_graph tool with Bazel, I run the following command:

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
--in_graph=saved_model.pb
I get the following error:

[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/text_format.cc:307] Error parsing text-format tensorflow.GraphDef: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/text_format.cc:307] Error parsing text-format tensorflow.GraphDef: 1:4: Error while interpreting non-ascii codepoint 218.
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/text_format.cc:307] Error parsing text-format tensorflow.GraphDef: 1:4: Expected identifier, got: �
2018-08-14 11:50:17.759617: E tensorflow/tools/graph_transforms/summarize_graph_main.cc:320] Loading graph 'saved_model.pb' failed with Can't parse saved_model.pb as binary proto
(both text and binary parsing failed for file saved_model.pb)
2018-08-14 11:50:17.759670: E tensorflow/tools/graph_transforms/summarize_graph_main.cc:322] usage: bazel-bin/tensorflow/tools/graph_transforms/summarize_graph
Flags:
  --in_graph=""            string  input graph file name
  --print_structure=false  bool    whether to print the network connections of the graph

I don't understand this error. I have used the summarize_graph tool before and it worked fine, so I suspect the problem is in how tf.estimator builds the .pb file.

Am I missing something when creating the saved model with export_savedmodel() from tf.estimator?

UPDATE

TensorFlow version: v1.9.0-0-g25c197e023 1.9.0

Output of tf_env_collect.sh:

== cat /etc/issue ===============================================
Linux rianadam 4.15.0-32-generic #35-Ubuntu SMP Fri Aug 10 17:58:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
VERSION="18.04.1 LTS (Bionic Beaver)"
VERSION_ID="18.04"
VERSION_CODENAME=bionic

== are we in docker =============================================
No

== compiler =====================================================
c++ (Ubuntu 7.3.0-16ubuntu3) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


== uname -a =====================================================
Linux rianadam 4.15.0-32-generic #35-Ubuntu SMP Fri Aug 10 17:58:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

== check pips ===================================================
numpy               1.15.0 
protobuf            3.6.0  
tensorflow-gpu      1.9.0  

== check for virtualenv =========================================
True

== tensorflow import ============================================
tf.VERSION = 1.9.0
tf.GIT_VERSION = v1.9.0-0-g25c197e023
tf.COMPILER_VERSION = v1.9.0-0-g25c197e023
Sanity check: array([1], dtype=int32)
/home/rian/NgodingYuk/tf_env/env/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
/home/rian/NgodingYuk/tf_env/env/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)

== env ==========================================================
LD_LIBRARY_PATH /usr/local/cuda/lib64:/usr/local/cuda-9.0/lib64:/usr/local/cuda/lib64:/usr/local/cuda-9.0/lib64:
DYLD_LIBRARY_PATH is unset

== nvidia-smi ===================================================
Tue Aug 21 11:13:55 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.77                 Driver Version: 390.77                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce 920M        Off  | 00000000:04:00.0 N/A |                  N/A |
| N/A   51C    P0    N/A /  N/A |    367MiB /  2004MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+

== cuda libs  ===================================================
/usr/local/cuda-9.0/lib64/libcudart_static.a
/usr/local/cuda-9.0/lib64/libcudart.so.9.0.176
/usr/local/cuda-9.0/doc/man/man7/libcudart.7
/usr/local/cuda-9.0/doc/man/man7/libcudart.so.7

I ran into the same problem when trying to find the input/output nodes of a model I had trained. The error occurs because the output you get from export_savedmodel is a servable (which, as I currently understand it, is a GraphDef plus other metadata), not solely a GraphDef.
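Concretely, saved_model.pb serializes a SavedModel protocol buffer that wraps one or more MetaGraphDefs, which is why summarize_graph cannot parse it directly as a GraphDef. A sketch of unwrapping it by hand (the path is a placeholder):

```python
from tensorflow.core.protobuf.saved_model_pb2 import SavedModel

def extract_graph_def(saved_model_path):
    """Parse a saved_model.pb and return the GraphDef of its first MetaGraph."""
    sm = SavedModel()
    with open(saved_model_path, "rb") as f:
        sm.ParseFromString(f.read())
    return sm.meta_graphs[0].graph_def

# graph_def = extract_graph_def("/path/to/saved/model/saved_model.pb")
```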

To find the input and output nodes, you can do the following:

# -*- coding: utf-8 -*-

import tensorflow as tf
from tensorflow.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    gf = tf.saved_model.loader.load(
        sess,
        [tag_constants.SERVING],
        "/path/to/saved/model/")

    nodes = gf.graph_def.node
    print([n.name + " -> " + n.op for n in nodes
           if n.op in ('Softmax', 'Placeholder')])

    # ... ['Placeholder -> Placeholder',
    #      'dnn/head/predictions/probabilities -> Softmax']
I also used a canned DNNEstimator, so the OP's nodes should be the same as mine; for other readers, your op names may differ from Placeholder and Softmax depending on your classifier.

Now that you have the input/output node names, you can freeze the graph:

If you want to use the trained parameter values, for example to quantize weights, you'll need to run tensorflow/python/tools/freeze_graph.py to convert the checkpoint values into embedded constants within the graph file itself.
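One way to do that, assuming a TensorFlow source checkout, is the freeze_graph.py command line; the output node name below is the Softmax node found above, and the paths are placeholders:

```shell
#!/bin/bash

# Freeze the SavedModel: fold checkpoint variables into constants in one GraphDef.
python tensorflow/python/tools/freeze_graph.py \
  --input_saved_model_dir=/path/to/saved/model/ \
  --output_node_names=dnn/head/predictions/probabilities \
  --output_graph=frozen_graph.pb
```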

Then, assuming you have already built the graph_transforms tools:

#!/bin/bash

tensorflow/bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
  --in_graph=pruned_saved_model_or_whatever.pb
Output:

Found 1 possible inputs: (name=Placeholder, type=string(7), shape=[?])
No variables spotted.
Found 1 possible outputs: (name=dnn/head/predictions/probabilities, op=Softmax)
Found 256974297 (256.97M) const parameters, 0 (0) variable parameters, and 0 
control_edges
Op types used: 155 Const, 41 Identity, 32 RegexReplace, 18 Gather, 9 
StridedSlice, 9 MatMul, 6 Shape, 6 Reshape, 6 Relu, 5 ConcatV2, 4 BiasAdd, 4 
Add, 3 ExpandDims, 3 Pack, 2 NotEqual, 2 Where, 2 Select, 2 StringJoin, 2 Cast, 
2 DynamicPartition, 2 Fill, 2 Maximum, 1 Size, 1 Unique, 1 Tanh, 1 Sum, 1 
StringToHashBucketFast, 1 StringSplit, 1 Equal, 1 Squeeze, 1 Square, 1 
SparseToDense, 1 SparseSegmentSqrtN, 1 SparseFillEmptyRows, 1 Softmax, 1 
FloorDiv, 1 Rsqrt, 1 FloorMod, 1 HashTableV2, 1 LookupTableFindV2, 1 Range, 1 
Prod, 1 Placeholder, 1 ParallelDynamicStitch, 1 LookupTableSizeV2, 1 Max, 1 Mul
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- \
  --graph=pruned_saved_model.pb --show_flops --input_layer=Placeholder \
  --input_layer_type=string --input_layer_shape=-1 \
  --output_layer=dnn/head/predictions/probabilities
Hope this helps.

UPDATE (2018-12-03)


A related issue I opened appears to have been resolved in a detailed blog post listed at the end of the ticket.

Did you file an issue with TensorFlow about this? I'm not sure whether it's a bug; the comments on TensorFlow issues say to ask here first :/

Wow, thanks, your code helped me find the input and output nodes. I just realized that my pruned_saved_model_or_whatever.pb can't be opened with tensorflow.contrib.from_saved_model. The error says no MetaGraphDef associated with tag 'serve' could be found in the SavedModel. Do you know why this happens?

Once you freeze the graph, it is no longer a SavedModel.
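Since a frozen graph is a bare GraphDef rather than a SavedModel, it has to be loaded with import_graph_def instead of the SavedModel loader. A sketch (the path and tensor names below are placeholders):

```python
import tensorflow as tf
from tensorflow.core.framework.graph_pb2 import GraphDef

def load_frozen_graph(pb_path):
    """Read a frozen GraphDef from disk and import it into a fresh tf.Graph."""
    graph_def = GraphDef()
    with open(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

# graph = load_frozen_graph("frozen_graph.pb")
# in_t = graph.get_tensor_by_name("Placeholder:0")
# out_t = graph.get_tensor_by_name("dnn/head/predictions/probabilities:0")
```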