Deep learning: converting a pb file to a tflite file to run on a Coral Dev Board (Segmentation fault (core dumped))

Tags: deep-learning, tensorflow2.0, google-coral, tensorflow-lite

How can I convert a pb file to a tflite file, using python3 or the terminal?
I don't know any details of the model.

(Edited) I have converted the pb file to a tflite file with the following code:

import tensorflow.compat.v1 as tf
import numpy as np

graph_def_file = "./models/20170512-110547.pb"

def representative_dataset_gen():
  for _ in range(num_calibration_steps):
    # Get sample input data as a numpy array in a method of your choosing.
    yield [input]


converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file,
                                                      input_arrays=["input","phase_train"],
                                                      output_arrays=["embeddings"],
                                                      input_shapes={"input":[1,160,160,3],"phase_train":False})                                                                 

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

print("converting")
open("./models/converted_model.tflite", "wb").write(tflite_model)
print("Done")
Error: getting Segmentation fault (core dumped)
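A side note on the snippet above: num_calibration_steps is never defined (and input falls back to the Python builtin), so the generator fails as soon as the converter iterates it for calibration. A minimal concrete generator matching the declared [1,160,160,3] input, with random data purely for illustration and ignoring the phase_train input, might look like this:

import numpy as np

num_calibration_steps = 100  # hypothetical count; any reasonable number of samples

def representative_dataset_gen():
  for _ in range(num_calibration_steps):
    # Random data stands in for real samples shaped like the declared input [1, 160, 160, 3].
    yield [np.random.random((1, 160, 160, 3)).astype(np.float32)]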


It is not possible to convert the model to a .tflite model without any details about it. I suggest taking another look at the documentation, since far too many details are missing here.
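If the input and output node names are unknown, one way to recover them is to load the frozen GraphDef and list its nodes. A minimal sketch (the .pb path is the one from the question):

import tensorflow.compat.v1 as tf

graph_def = tf.GraphDef()
with open('./models/20170512-110547.pb', 'rb') as f:
  graph_def.ParseFromString(f.read())

# Placeholder nodes are typically the model inputs; nodes near the end of the
# list are typically the outputs.
for node in graph_def.node:
  print(node.op, node.name)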

Below is a post-training quantization example for a frozen graph of mobilenet_v1_0.25_192.

import sys, os, glob
import tensorflow as tf
import pathlib
import numpy as np

if len(sys.argv) != 2:
  print('Usage: ' + sys.argv[0] + ' <frozen_graph_file>')
  exit()

tf.compat.v1.enable_eager_execution()
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.DEBUG)

# Fake calibration data; replace with real images for a meaningful quantization.
def fake_representative_data_gen():
  for _ in range(100):
    fake_image = np.random.random((1,192,192,3)).astype(np.float32)
    yield [fake_image]

frozen_graph = sys.argv[1]
input_array = ['input']
output_array = ['MobilenetV1/Predictions/Reshape_1']

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(frozen_graph, input_array, output_array)

# Full-integer quantization, as required for the Edge TPU.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = fake_representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()

quant_dir = pathlib.Path(os.getcwd(), 'output')
quant_dir.mkdir(exist_ok=True, parents=True)

tflite_model_file = quant_dir/'mobilenet_v1_0.25_192_quant.tflite'
tflite_model_file.write_bytes(tflite_model)
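Once conversion succeeds, it is worth sanity-checking the .tflite file before moving to the board. A minimal sketch using the TFLite interpreter (the file name is the one produced above; the random uint8 input is only for illustration):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='output/mobilenet_v1_0.25_192_quant.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['dtype'])  # should be uint8 after full-integer quantization

fake_input = np.random.randint(0, 256, size=(1, 192, 192, 3), dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], fake_input)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)

To actually run the model on the Coral Dev Board's Edge TPU, the quantized .tflite file still has to be passed through the edgetpu_compiler tool afterwards.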

Try tensorboard or a similar tool to understand the model.

Hi, thank you for the reply. Now I am getting the segmentation fault. Could you help me with it?

@Amulya I'm not sure about that fault, since it could mean just about anything, unfortunately :/ This is really a question for the tensorflow team:
2020-01-20 11:42:18.153263: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2020-01-20 11:42:18.153363: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2020-01-20 11:42:18.153385: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2020-01-20 11:42:18.905028: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-20 11:42:18.906845: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-01-20 11:42:18.906874: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (kalgudi-GA-78LMT-USB3-6-0): /proc/driver/nvidia/version does not exist
2020-01-20 11:42:18.934144: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3616020000 Hz
2020-01-20 11:42:18.934849: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x39aa0f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-20 11:42:18.934910: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Segmentation fault (core dumped)
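If this does get reported upstream, including the exact TensorFlow build alongside the log helps; a minimal way to print it (a standard attribute, nothing environment-specific assumed):

import tensorflow as tf
print(tf.__version__)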