
OpenCV: How do I read the characters inside a detected object's rectangle?

Tags: opencv, tensorflow, object-detection

I have successfully used the TensorFlow Object Detection API, as shown in the image below.


But now I want to read the characters inside the green box. How can I do that?

You first need to crop the plate's bounding box out of the image; then you can use pytesseract to extract the text. Pseudocode for your problem might look like this:

import cv2
import pytesseract

original_img = cv2.imread("/path/to/your/img.png")

# 'detected_plates' is a placeholder for whatever structure your detector returns
for plate in detected_plates:
    if plate.confidence > 0.98:
        b_box = plate.bounding_rect  # bounding box in [x, y, w, h]
        img_cropped = original_img[b_box[1]:b_box[1] + b_box[3], b_box[0]:b_box[0] + b_box[2]]
        print(pytesseract.image_to_string(img_cropped))
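
A small, hedged addition (not part of the original answer): Tesseract's page segmentation mode is often worth tuning for plate-style crops. For example, --psm 7 tells Tesseract to treat the crop as a single line of text:

# Optional: '--psm 7' assumes the crop contains a single line of text
print(pytesseract.image_to_string(img_cropped, config="--psm 7"))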

Comments:

You can check [link] and [link].

Is there any way other than OCR?

Why reinvent the wheel? There are plenty of ANPR systems already.

Is there any way other than OCR? What is wrong with using OCR? You can use different libraries for OCR, but the basic concept stays the same; the technique of converting an input image into ASCII text is called OCR.

The detection code is as follows:
# Load image using OpenCV and
# expand image dimensions to have shape: [1, None, None, 3]
# i.e. a single-column array, where each item in the column has the pixel RGB value
image = cv2.imread(PATH_TO_IMAGE)
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image_expanded = np.expand_dims(image_rgb, axis=0)

# Perform the actual detection by running the model with the image as input
# (image_tensor, detection_boxes, detection_scores, detection_classes and
# num_detections are tensors fetched earlier from the loaded detection graph)
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_expanded})

# Draw the results of the detection (i.e. visualize the results)

vis_util.visualize_boxes_and_labels_on_image_array(
    image,
    np.squeeze(boxes),
    np.squeeze(classes).astype(np.int32),
    np.squeeze(scores),
    category_index,
    use_normalized_coordinates=True,
    line_thickness=1,
    min_score_thresh=0.60)

# All the results have been drawn on image. Now display the image.
cv2.imshow('Object detector', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
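
To connect this detection output back to the OCR step above, here is a minimal sketch (not from the original answer). It assumes you keep an unmodified copy of the frame, e.g. clean_image = image.copy() taken before visualize_boxes_and_labels_on_image_array draws on image, and it reuses the 0.60 score threshold from above:

import pytesseract

height, width = clean_image.shape[:2]
for box, score in zip(np.squeeze(boxes), np.squeeze(scores)):
    if score < 0.60:
        continue
    # Boxes from the TF Object Detection API are [ymin, xmin, ymax, xmax],
    # normalized to [0, 1], so scale them to pixel coordinates before cropping
    ymin, xmin, ymax, xmax = box
    top, bottom = int(ymin * height), int(ymax * height)
    left, right = int(xmin * width), int(xmax * width)
    plate_crop = clean_image[top:bottom, left:right]
    print(pytesseract.image_to_string(plate_crop).strip())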