
How to detect white areas in an image using OpenCV & Python

Tags: python, opencv, machine-learning, image-processing, computer-vision

I am trying to extract the coordinates of a large white area in an image, as shown below. Here is the original image:

Using a small square kernel, I applied a closing operation to fill small holes and help identify the larger structures in the image, like so:

import cv2
import numpy as np
import imutils 

original = cv2.imread("Plates\\24.png")
original = cv2.resize(original, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)
gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    
# next, find regions in the image that are light
squareKern = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
light = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, squareKern)
light = cv2.threshold(light, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
The resulting image looks like this:

Here is another example:

What I would like to be able to do is detect the larger white area of the plate, like so:


Keep in mind that, for the one image you provided, the contours used in many of the examples will not apply:

I came up with two methods to solve this problem:

Method 1: Contour area comparison

As you can see, there are 3 large contours in the image: the rectangle at the top and the two rectangles below it, which you want to detect as one whole.

So I applied a threshold to your image, detected the contours of the thresholded image, and indexed the second-largest and third-largest contours (the largest one being the top rectangle, which you want to ignore).

Here is the thresholded image:

I stacked the two contours together and detected the bounding box of the combined contours:

import cv2
import numpy as np

img = cv2.imread("image.png")

def process(img):
    # Threshold the grayscale image, blur it slightly, and run Canny
    # so that only the edges of the bright blobs remain.
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(img_gray, 128, 255, cv2.THRESH_BINARY)
    img_blur = cv2.GaussianBlur(thresh, (5, 5), 2)
    img_canny = cv2.Canny(img_blur, 0, 0)
    return img_canny

def get_contours(img):
    # Take the second- and third-largest contours (the largest is the top
    # rectangle, which we ignore), stack their points together, and draw
    # the bounding box of the combined contour.
    contours, _ = cv2.findContours(process(img), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    r1, r2 = sorted(contours, key=cv2.contourArea)[-3:-1]
    x, y, w, h = cv2.boundingRect(np.r_[r1, r2])
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

get_contours(img)
cv2.imshow("img_processed", img)
cv2.waitKey(0)
Output:


Method 2: Threshold masking

Since the two rectangles at the bottom are whiter than the top rectangle of the plate, I used a threshold to mask out the top of the plate:

I used the Canny edge detector on the mask shown above:

import cv2

def process(img):
    # Mask out the top of the plate with a higher threshold, then use
    # Canny plus dilation/erosion to close the outline of the white area.
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(img_gray, 163, 255, cv2.THRESH_BINARY)
    img_canny = cv2.Canny(thresh, 0, 0)
    img_dilate = cv2.dilate(img_canny, None, iterations=7)
    return cv2.erode(img_dilate, None, iterations=7)

def get_contours(img):
    # Draw the bounding box of the largest contour found in the mask.
    contours, _ = cv2.findContours(process(img), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

img = cv2.imread("egypt.png")
get_contours(img)
cv2.imshow("img_processed", img)
cv2.waitKey(0)
Output:


Of course, this method may not work properly if the top of the plate is not brighter than the bottom.
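As a more general, hedged sketch of white-region detection (not part of either method above), you could also mask low-saturation, high-value pixels in HSV space with cv2.inRange and box the sufficiently large blobs. The file name, colour bounds, and area threshold below are assumptions you would need to tune per image:

import cv2
import numpy as np

img = cv2.imread("image.png")  # hypothetical file name

# "White" pixels have low saturation and high value in HSV.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough bounds for white; tune these for your images.
lower_white = np.array([0, 0, 180])
upper_white = np.array([179, 60, 255])
mask = cv2.inRange(hsv, lower_white, upper_white)

# Keep only reasonably large white blobs and draw their bounding boxes.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:  # area threshold is an assumption
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imshow("white_regions", img)
cv2.waitKey(0)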

Comments:

Are you doing license plate detection and extraction? @KnowledgeGainer, yes.
This will help you:
Does the original image contain the coordinates?
I would suggest training a small CNN model; something like YOLOv4-tiny is very well suited for this problem. Check out this repo.
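Following up on that last comment, here is a rough sketch of how a trained YOLOv4-tiny detector could be run through OpenCV's DNN module. This assumes you have already trained such a model and exported its Darknet .cfg/.weights files; the file names below are placeholders:

import cv2

# Placeholder paths: assumes a YOLOv4-tiny model trained for this task.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny-custom.cfg", "yolov4-tiny-custom.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255, swapRB=True)

img = cv2.imread("Plates\\24.png")
class_ids, scores, boxes = model.detect(img, confThreshold=0.5, nmsThreshold=0.4)

# Draw every detection the model returns.
for box, score in zip(boxes, scores):
    x, y, w, h = box
    cv2.rectangle(img, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)

cv2.imshow("detections", img)
cv2.waitKey(0)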