
Python pytesseract not giving the expected output

Tags: python, python-3.x, opencv, python-tesseract

I'm new to Python and I'm building a licence plate recognition system with a Haar cascade. My code detects the plate and its contour just fine, but pytesseract OCR cannot recognize the characters and gives strange results. Please help.


The approach uses contour detection to locate the plate region and applies a perspective transform to it. It then uses adaptive thresholding to pick out the digits, followed by a median blur to remove noise that interferes with pytesseract.

The blue box comes from the original image. The red box comes from my licence plate detector's Haar cascade. Green marks the contour detection of the plate.

This is the output of the perspective transform. I used the imutils module for this step.

This is the output of the adaptive thresholding and blur; I used the skimage module for it.

Using this, I got the following result:

\fUP1ADN7120

Removing every character that is not an uppercase letter or a digit gives:

UP1ADN7120

Only one character is wrong.

This approach isn't bad, but if you want to do better than this you could create a CNN (see the sketch below).
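
A minimal sketch of that CNN idea, not part of the original answer: it assumes the plate has already been segmented into individual 28x28 grayscale character crops, and the training arrays x_train/y_train, the 36-class layout (digits 0-9 plus letters A-Z), and all hyperparameters are my own placeholder assumptions.

# Hypothetical sketch of a small character-classification CNN (Keras).
# Assumes 28x28 grayscale character crops and 36 classes (0-9 and A-Z).
from tensorflow.keras import layers, models

NUM_CLASSES = 36  # digits 0-9 plus letters A-Z

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: (N, 28, 28, 1) floats in [0, 1]; y_train: (N,) integer labels.
# Both are placeholders you would build from labelled character crops.
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)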

Here is the code:

import cv2
import numpy as np
import pytesseract
import imutils.perspective
from skimage.filters import threshold_local

plate_cascade = cv2.CascadeClassifier('/usr/local/lib/python3.5/dist-packages/cv2/data/haarcascade_russian_plate_number.xml')

img = cv2.imread('GNA0d.jpg', cv2.IMREAD_COLOR)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.array(gray, dtype='uint8')

plates = plate_cascade.detectMultiScale(gray, 1.3, 5)

# draw a red box around each detected plate and keep its grayscale/colour crop
# (if several plates are detected, only the last one's crop survives the loop)
for (x,y,w,h) in plates:
    cv2.rectangle(img, (x,y), (x+w,y+h),(0,0,255),2)
    roiGray = gray[y:y+h, x:x+w]
    roiImg = img[y:y+h, x:x+w]

blur = cv2.GaussianBlur(roiGray, (5, 5), 0)
edges = cv2.Canny(blur, 75, 200)

contours = cv2.findContours(edges.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[1]
contours = sorted(contours, key = cv2.contourArea, reverse = True)[:5]  #sort the contours, only getting the biggest to improve speed

for contour in contours:
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * peri, True)

    #find contours with 4 edges
    if len(approx) == 4:
        screenCnt = approx
        break

orig = roiImg.copy()
cv2.drawContours(roiImg, [screenCnt], -1, (0, 255, 0), 2)

#do a perspective transform
warped = imutils.perspective.four_point_transform(orig, screenCnt.reshape(4, 2))
graywarp = imutils.perspective.four_point_transform(roiGray, screenCnt.reshape(4, 2))

#threshold using adaptive thresholding
T = threshold_local(graywarp, 11, offset = 10, method = "gaussian")
graywarp = (graywarp > T).astype("uint8") * 255

#do a median blur to remove noise
graywarp = cv2.medianBlur(graywarp, 3)

#run OCR on the cleaned-up plate, then strip everything that is not an uppercase letter or digit
text = pytesseract.image_to_string(graywarp)
print(text)
print("".join([c for c in text if c.isupper() or c.isdigit()]))


cv2.imshow("warped", warped)
cv2.imshow("graywarp", graywarp)
cv2.imshow('img', img)


cv2.waitKey(0)
cv2.destroyAllWindows()
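
One aside that is not from the original answer: cv2.findContours returns (image, contours, hierarchy) on OpenCV 3 but only (contours, hierarchy) on OpenCV 4, so the [1] index above only picks up the contour list on OpenCV 3. A version-agnostic variant (a sketch using imutils.grab_contours, which handles both layouts):

import cv2
import imutils

# 'edges' is the Canny output from the code above.
found = cv2.findContours(edges.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = imutils.grab_contours(found)  # picks the right tuple element for the installed OpenCV
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:5]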

A fair amount of this was taken from him, and he explains it better than I do.

My suggestion is to crop the contour at the selected image location and pass it directly to Tesseract; that should get you the output you want. Could you upload the original image so I can test it? Thanks.
I have uploaded the original image. Thank you for your help, much appreciated.
Sure. If JaiAhuja's answer solved your original problem, please consider marking it as the accepted answer.
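
A minimal sketch of that suggestion, not from the original thread: crop the bounding box of the detected plate contour and hand it straight to Tesseract. The --psm 7 mode (single text line) and the character whitelist are my own hedged additions, not something the commenter specified.

import cv2
import pytesseract

# 'roiGray' and 'screenCnt' come from the answer's code above: crop the
# bounding box of the plate contour and pass the crop directly to Tesseract.
x, y, w, h = cv2.boundingRect(screenCnt)
plate_crop = roiGray[y:y + h, x:x + w]

# --psm 7 treats the image as a single text line; the whitelist restricts the
# output to plate characters. Both settings are assumptions, not the commenter's.
config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
print(pytesseract.image_to_string(plate_crop, config=config).strip())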