Python: how to pass arguments to an already loaded TensorFlow graph (in memory)


I have an object detection model trained with the ssd mobilenet architecture. I am using my webcam to run real-time inference with this model, and the output is bounding boxes overlaid on the webcam image.

I am accessing my webcam as follows:

import cv2
cap = cv2.VideoCapture(0)
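
The code below assumes that detection_graph, category_index and vis_util are already set up. A minimal sketch of that one-time setup, assuming a TF1 frozen graph exported with the TensorFlow Object Detection API and its usual utility modules (the two paths are placeholders):

import numpy as np
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

PATH_TO_FROZEN_GRAPH = 'path/to/frozen_inference_graph.pb'  # placeholder
PATH_TO_LABELS = 'path/to/label_map.pbtxt'                  # placeholder

# Load the frozen graph once; this is the expensive step that should not be
# repeated for every frame or image.
detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    od_graph_def.ParseFromString(fid.read())
    tf.import_graph_def(od_graph_def, name='')

# Mapping from class ids to human-readable labels, used by vis_util.
category_index = label_map_util.create_category_index_from_labelmap(
    PATH_TO_LABELS, use_display_name=True)
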
The function used to run inference in real time on the video feed:

with detection_graph.as_default():
  with tf.Session(graph=detection_graph) as sess:
    while True:
      ret, image_np = cap.read()
      # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
      image_np_expanded = np.expand_dims(image_np, axis=0)
      image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
      # Each box represents a part of the image where a particular object was detected.
      boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
      # Each score represents the level of confidence for each of the objects.
      # The score is shown on the result image, together with the class label.
      scores = detection_graph.get_tensor_by_name('detection_scores:0')
      classes = detection_graph.get_tensor_by_name('detection_classes:0')
      num_detections = detection_graph.get_tensor_by_name('num_detections:0')
      # Actual detection.
      (boxes, scores, classes, num_detections) = sess.run(
          [boxes, scores, classes, num_detections],
          feed_dict={image_tensor: image_np_expanded})
      # Visualization of the results of a detection.
      vis_util.visualize_boxes_and_labels_on_image_array(
          image_np,
          np.squeeze(boxes),
          np.squeeze(classes).astype(np.int32),
          np.squeeze(scores),
          category_index,
          use_normalized_coordinates=True,
          line_thickness=8)

      #print(boxes)


      # Convert the normalized box coordinates to pixels using the frame size.
      height, width = image_np.shape[:2]
      for i, box in enumerate(np.squeeze(boxes)):
          if np.squeeze(scores)[i] > 0.98:
              print("ymin={}, xmin={}, ymax={}, xmax={}".format(
                  box[0]*height, box[1]*width, box[2]*height, box[3]*width))

      cv2.imshow('object detection', cv2.resize(image_np, (300,300)))
      if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break
When an object is detected, my terminal prints its normalized coordinates.

This works well for the video feed because:

  • The model is loaded into memory
  • Whenever a new object appears in front of the webcam, the loaded model predicts it and outputs its coordinates
I would like the same behaviour for still images, that is:

  • The model stays loaded in memory
  • Whenever a new argument specifies an image location, the loaded model predicts the object in that image and outputs its coordinates

How should I modify the code above to achieve this? I do not want a separate server for this task (as is done with TensorFlow Serving).


How can I do this locally on my machine?

You can use os.listdir() to list all the files in a given directory and then follow the same pipeline:

import os
import cv2
path = "./path/to/image/folder"
images = os.listdir(path)

with detection_graph.as_default():
  with tf.Session(graph=detection_graph) as sess:
    for image in images:
      image_path = os.path.join(path, image)
      image_np = cv2.imread(image_path)
      # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
      image_np_expanded = np.expand_dims(image_np, axis=0)
      image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
      # Each box represents a part of the image where a particular object was detected.
      boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
      # Each score represents the level of confidence for each of the objects.
      # The score is shown on the result image, together with the class label.
      scores = detection_graph.get_tensor_by_name('detection_scores:0')
      classes = detection_graph.get_tensor_by_name('detection_classes:0')
      num_detections = detection_graph.get_tensor_by_name('num_detections:0')
      # Actual detection.
      (boxes, scores, classes, num_detections) = sess.run(
          [boxes, scores, classes, num_detections],
          feed_dict={image_tensor: image_np_expanded})
      # Visualization of the results of a detection.
      vis_util.visualize_boxes_and_labels_on_image_array(
          image_np,
          np.squeeze(boxes),
          np.squeeze(classes).astype(np.int32),
          np.squeeze(scores),
          category_index,
          use_normalized_coordinates=True,
          line_thickness=8)

      #print(boxes)


      # Convert the normalized box coordinates to pixels using the image size.
      height, width = image_np.shape[:2]
      for i, box in enumerate(np.squeeze(boxes)):
          if np.squeeze(scores)[i] > 0.98:
              print("ymin={}, xmin={}, ymax={}, xmax={}".format(
                  box[0]*height, box[1]*width, box[2]*height, box[3]*width))

      cv2.imshow('object detection', cv2.resize(image_np, (300,300)))
      if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break

Hope this helps.
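
One caveat, as an aside to the above: os.listdir() returns every entry in the directory, and cv2.imread() returns None for anything it cannot decode, which will make the sess.run feed fail. A small filter along these lines avoids that (the extension list is only an assumption):

import os

path = "./path/to/image/folder"
# Keep only files OpenCV is likely to decode.
valid_ext = ('.jpg', '.jpeg', '.png', '.bmp')
images = [f for f in os.listdir(path) if f.lower().endswith(valid_ext)]

Inside the loop it is also worth checking that cv2.imread() did not return None before feeding the image to the graph.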

If I understand correctly, you want to feed the images stored in a specific directory to your model and get the predictions out?

What I want is for the model to stay loaded in RAM, so that when I give it a command-line argument containing an image path it returns the inference quickly, without reloading the whole model.

No, this does not help. I think it is similar to Google's open-source TensorFlow object detection example (). What I want is for the model to stay loaded in RAM, so that when I give it a command-line argument containing an image path it returns the inference quickly, without reloading the whole model.
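
For what is being asked in these comments, keeping the model resident in memory and handing it new image paths on demand without a separate server, one option is to open the session once and then read paths interactively. This is only a minimal sketch under the same assumptions as above (detection_graph already loaded); reading from standard input is just one way of passing the "argument", and a watched folder or a small local socket would work the same way:

import os
import cv2
import numpy as np
import tensorflow as tf

with detection_graph.as_default():
  with tf.Session(graph=detection_graph) as sess:
    # Look the tensors up once; only sess.run() happens per image.
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes_t = detection_graph.get_tensor_by_name('detection_boxes:0')
    scores_t = detection_graph.get_tensor_by_name('detection_scores:0')
    classes_t = detection_graph.get_tensor_by_name('detection_classes:0')
    num_t = detection_graph.get_tensor_by_name('num_detections:0')

    # The graph is loaded at this point; each iteration is inference only.
    while True:
      image_path = input("image path (empty line to quit): ").strip()
      if not image_path:
        break
      if not os.path.isfile(image_path):
        print("no such file:", image_path)
        continue
      image_np = cv2.imread(image_path)
      if image_np is None:
        print("could not read image:", image_path)
        continue
      image_np_expanded = np.expand_dims(image_np, axis=0)
      (boxes, scores, classes, num) = sess.run(
          [boxes_t, scores_t, classes_t, num_t],
          feed_dict={image_tensor: image_np_expanded})
      height, width = image_np.shape[:2]
      for i, box in enumerate(np.squeeze(boxes)):
        if np.squeeze(scores)[i] > 0.98:
          print("ymin={}, xmin={}, ymax={}, xmax={}".format(
              box[0]*height, box[1]*width, box[2]*height, box[3]*width))

The key point is that tf.Session() and the graph are created once, outside the loop, so each new path only costs a single sess.run().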