Tensorflow: Can't convert ONNX model to TFLite using TF 2.4.1

I have an ONNX model that I can successfully convert to TF with TF 2.4.1. However, when converting the resulting saved model to TFLite, an error occurs.

The code:

import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

print(tf.__version__)

# Convert model.onnx to Tensorflow
onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model) 
tf_rep = prepare(onnx_model)  
tf_rep.export_graph('model')  

# Convert saved model to tflite
converter = tf.lite.TFLiteConverter.from_saved_model('model')
tf_lite_model = converter.convert()
open('model.tflite', 'wb').write(tf_lite_model)
Everything works fine until the conversion step, which fails as follows:

 >>> tf_lite_model = converter.convert()
    2021-04-22 18:18:14.715046: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
    2021-04-22 18:18:14.715072: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
    2021-04-22 18:18:14.715078: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
    2021-04-22 18:18:14.716044: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: model
    2021-04-22 18:18:14.778050: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
    2021-04-22 18:18:14.778083: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: model
    2021-04-22 18:18:14.998062: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
    2021-04-22 18:18:15.043862: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
    2021-04-22 18:18:15.438804: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: model
    2021-04-22 18:18:15.809851: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1093808 microseconds.
    2021-04-22 18:18:18.757257: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
    loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): error: operand #0 does not dominate this use
    Traceback (most recent call last):
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
        model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
        return _pywrap_toco_api.TocoConvert(
    Exception: <unknown>:0: error: loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
    <unknown>:0: note: loc("PartitionedCall"): called from
    <unknown>:0: note: loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here


    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
        result = _convert_saved_model(**converter_kwargs)
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
        data = toco_convert_protos(
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
        raise ConverterError(str(e))
    tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
    <unknown>:0: note: loc("PartitionedCall"): called from
    <unknown>:0: note: loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here

    
I have no idea what this message means, but if I switch to TF 2.2 the conversion passes without errors. The bad thing is that, due to another problem, the initial ONNX-to-TF conversion now fails with that version.

Does anyone have an idea what this message means and what can be done about it?


TIA

Is it possible to share your saved model directory with me? I can help with debugging.

The general advice is that there are two possibilities:

(1) The TF Lite converter may not handle the saved model correctly.

(2) The onnx conversion tool may not have created a valid TF saved model.

Using the latest TF version (2.5, or tf-nightly) may help resolve this problem in case (1), but it is not guaranteed.
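One way to tell the two cases apart is to check whether the SavedModel produced by onnx-tf loads and exposes a serving signature in plain TensorFlow before involving the TFLite converter; if it does, case (1) is the more likely culprit. A minimal sketch, assuming the export path 'model' from the question and the default signature key 'serving_default':

import tensorflow as tf

# Load the SavedModel exported by onnx-tf (path 'model' from the question).
loaded = tf.saved_model.load('model')

# If loading fails or no serving signature is present, suspect the
# onnx-tf export (case 2); otherwise the TFLite converter (case 1)
# is the more likely source of the error.
print(list(loaded.signatures.keys()))
infer = loaded.signatures['serving_default']  # signature key is an assumption
print(infer.structured_input_signature)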


I confirmed that the tf-nightly version can convert the attached saved model without any issue:

converter = tf.lite.TFLiteConverter.from_saved_model(
      "/tmp/onnx_model")
tflite_model = converter.convert()
with open("/tmp/onnx.tflite", "wb") as f:
  f.write(tflite_model)
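If needed, the resulting /tmp/onnx.tflite can then be sanity-checked with the TFLite interpreter. This is only a minimal sketch: the dummy input takes its shape and dtype from whatever the interpreter reports for the first input, which assumes a single, fully defined input tensor.

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="/tmp/onnx.tflite")
interpreter.allocate_tensors()

# Query the input/output metadata reported by the interpreter.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run one inference on a zero-filled dummy input.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)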

Sure. Sorry. Can you share the saved model directory instead of the onnx model? Do you mean this one? Thanks. I tried the same conversion code with the tf-nightly version, and I was able to convert the attached saved model to the corresponding TFLite model file. Thanks. Could you provide it to me? How did you install the nightly version?