
Python: getting the class name in the Object Detection API


I need some help. I am working on a project with the Object Detection API. The project identifies the names of objects in an image along with their confidence scores, but I need to get the name of a detected object so I can use it in a condition, or print it to the screen. I am using the following code:

def run_inference_for_single_image(model, image):
  image = np.asarray(image)
  # The input needs to be a tensor; convert it using `tf.convert_to_tensor`.
  input_tensor = tf.convert_to_tensor(image)
  # The model expects a batch of images, so add an axis with `tf.newaxis`.
  input_tensor = input_tensor[tf.newaxis, ...]

  # Run inference.
  model_fn = model.signatures['serving_default']
  output_dict = model_fn(input_tensor)

  # All outputs are batched tensors.
  # Convert to numpy arrays, and take index [0] to remove the batch dimension.
  # We're only interested in the first num_detections.
  num_detections = int(output_dict.pop('num_detections'))
  output_dict = {key: value[0, :num_detections].numpy()
                 for key, value in output_dict.items()}
  output_dict['num_detections'] = num_detections

  # detection_classes should be ints.
  output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)

  # Handle models with masks:
  if 'detection_masks' in output_dict:
    # Reframe the bbox masks to the image size.
    detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
        output_dict['detection_masks'], output_dict['detection_boxes'],
        image.shape[0], image.shape[1])
    detection_masks_reframed = tf.cast(detection_masks_reframed > 0.8,
                                       tf.uint8)
    output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()

  return output_dict


def show_inference(model, image_np):
  # The array-based representation of the image will be used later to prepare
  # the result image with boxes and labels on it.
  # Actual detection.
  output_dict = run_inference_for_single_image(model, image_np)

  # Visualization of the results of a detection.
  final_img = vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks_reframed', None),
      use_normalized_coordinates=True,
      line_thickness=8)
  return final_img
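To answer the question itself: the class IDs in `output_dict['detection_classes']` can be translated to names through `category_index`, the same id-to-`{'id', 'name'}` mapping already passed to `visualize_boxes_and_labels_on_image_array` above (normally built with `label_map_util.create_category_index_from_labelmap`). A minimal sketch, using a hand-made `category_index` and stand-in detection arrays so it runs on its own:

```python
import numpy as np

# Hypothetical category_index in the usual Object Detection API format:
# class id -> {'id': ..., 'name': ...}.
category_index = {1: {'id': 1, 'name': 'person'},
                  2: {'id': 2, 'name': 'dog'}}

# Stand-in for the relevant parts of run_inference_for_single_image()'s output.
output_dict = {
    'detection_classes': np.array([1, 2, 1], dtype=np.int64),
    'detection_scores': np.array([0.95, 0.80, 0.30]),
}

def detected_names(output_dict, category_index, min_score=0.5):
    """Return the names of detections whose score is at least min_score."""
    names = []
    for cls, score in zip(output_dict['detection_classes'],
                          output_dict['detection_scores']):
        if score >= min_score:
            names.append(category_index[int(cls)]['name'])
    return names

print(detected_names(output_dict, category_index))  # ['person', 'dog']
```

In your code you would call `detected_names` on the real `output_dict` returned by `run_inference_for_single_image`, and then test or print the resulting names as needed.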

My thanks to you.