
Python: How do I convert a TensorFlow model to a TensorRT-optimized model using trt.TrtGraphConverterV2 (or another suggested approach)?

Tags: python, tensorflow, tensorrt, nvidia-jetson-nano

I have run into a problem with TensorRT and TensorFlow. I am using an NVIDIA Jetson Nano and I am trying to convert a simple TensorFlow model into a TensorRT-optimized model. I am using TensorFlow 2.1.0 and Python 3.6.9, and I tried to follow this code example:

To test this, I took a simple example from the TensorFlow website. To convert it to a TensorRT model, I save the model as a SavedModel and load it into the trt.TrtGraphConverterV2 function:

#https://www.tensorflow.org/tutorials/quickstart/beginner

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import os

#mnist = tf.keras.datasets.mnist

#(x_train, y_train), (x_test, y_test) = mnist.load_data()
#x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  #tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10)
])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])


# create paths to save models
model_name = "simpleModel"
pb_model  = os.path.join(os.path.dirname(os.path.abspath(__file__)),(model_name+"_pb")) 
trt_model = os.path.join(os.path.dirname(os.path.abspath(__file__)),(model_name+"_trt")) 

if not os.path.exists(pb_model):
    os.mkdir(pb_model)

if not os.path.exists(trt_model):
    os.mkdir(trt_model)

tf.saved_model.save(model, pb_model)


# https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usage-example
print("\nconverting to trt-model")
converter = trt.TrtGraphConverterV2(input_saved_model_dir=pb_model )
print("\nconverter.convert")
converter.convert()
print("\nconverter.save")
converter.save(trt_model)

print("trt-model saved under: ",trt_model)
When I run this code it saves the TRT-optimized model, but the model is unusable. For example, when I load the model and call model.summary(), I get:

Traceback (most recent call last):
  File "/home/al/Code/Benchmark_70x70/test-load-pb.py", line 45, in <module>
    model.summary()
AttributeError: '_UserObject' object has no attribute 'summary'
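
For reference, a minimal sketch (reusing the trt_model path from the script above) that inspects what the converted SavedModel exposes when it is loaded back; the loaded object is a generic SavedModel wrapper rather than a tf.keras.Model:

import tensorflow as tf

# trt_model is the directory the converter saved to in the script above
loaded = tf.saved_model.load(trt_model)

# Not a tf.keras.Model, so Keras methods such as .summary() are unavailable
print(type(loaded))
print(list(loaded.signatures.keys()))  # e.g. ['serving_default']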

It seems the conversion was successful; I have tried this with both the Keras .pb and the TensorRT-converted .pb file.

Below is the sample code:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

mnist = tf.keras.datasets.mnist

# Load the converted SavedModel (works for both the Keras .pb and the TensorRT .pb)
saved_model_loaded = tf.saved_model.load('path to trt converted model')

# Get the serving signature and freeze its variables into constants
graph_func = saved_model_loaded.signatures['serving_default']
frozen_func = convert_variables_to_constants_v2(graph_func)

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Convert the test data to a float32 tensor and run inference on one sample
input_tensors = tf.cast(x_test, dtype=tf.float32)

output = frozen_func(input_tensors[:1])[0].numpy()
print(output)

Note: I have tried this with both the Keras and the TensorRT model, and the result is the same.

Regarding the model.summary() error: it seems that once the model has been converted, some methods such as .summary() are removed. However, if you want to inspect the graph of the TensorRT-converted model, you can use TensorBoard as an alternative.

Below is the sample code:

import argparse
import sys
import tensorflow as tf
%load_ext tensorboard
from tensorflow.python.platform import app
from tensorflow.python.summary import summary

def import_to_tensorboard(model_dir, log_dir):
  """View an imported protobuf model (`.pb` file) as a graph in Tensorboard.

  Args:
    model_dir: The location of the protobuf (`pb`) model to visualize
    log_dir: The location for the Tensorboard log to begin visualization from.

  Usage:
    Call this function with your model location and desired log directory.
    Launch Tensorboard by pointing it to the log directory.
    View your imported `.pb` model as a graph.
  """

  with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.compat.v1.saved_model.loader.load(
        sess, [tf.compat.v1.saved_model.tag_constants.SERVING], model_dir)

    pb_visual_writer = summary.FileWriter(log_dir)
    pb_visual_writer.add_graph(sess.graph)
    print("Model Imported. Visualize by running: "
          "tensorboard --logdir={}".format(log_dir))
Call the function:

import_to_tensorboard('path to trt model', '/logs/')
Open TensorBoard:

%tensorboard --logdir='path to logs'

Let me know if this helps.


Thank you very much for your reply, it contains everything I needed. To test the converter script I ran the code in Colab, where it worked fine, so I guess I need to check my environment for errors. Regarding the model.summary() issue: as you correctly pointed out, the Keras API methods seem to be removed when the model is converted. I specifically need the model.predict() method to run predictions with the new model. Fortunately, there are other ways to run inference. Besides the one you posted, I also found the approach described in another post and used that. I have summarized the whole example and the instructions in this post.
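
A minimal sketch of one such alternative, assuming a placeholder model path: it runs inference directly through the serving_default signature instead of freezing the graph first, and looks the input name up from the signature rather than hard-coding it:

import tensorflow as tf

# Placeholder path: point this at the TRT-converted SavedModel directory
loaded = tf.saved_model.load('path to trt converted model')
infer = loaded.signatures['serving_default']

# Signature functions take keyword arguments named after the model inputs,
# so read the expected input name from the signature itself.
input_name = list(infer.structured_input_signature[1].keys())[0]

(_, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x = tf.cast(x_test[:1] / 255.0, dtype=tf.float32)

result = infer(**{input_name: x})  # dict keyed by the output tensor name
print({name: tensor.numpy() for name, tensor in result.items()})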


Let me know if you find any of this helpful. Thanks.
Where does the convert_variables_to_constants_v2 method come from?
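
For reference, the helper used in the answer above is available in TensorFlow 2.x from the convert_to_constants module:

# TF 2.x import for the freezing helper used in the inference snippet above
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2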