C++: How to determine the region of interest and then crop an image using OpenCV

Tags: c++, python, opencv, image-processing, numpy

I asked a similar question before, but it was more focused on Tesseract.

I have a sample image (shown below). I want the white square to be my region of interest, then crop out that part (the square) and create a new image with it. I will be working with different images, so the square will not always be in the same position in every image. So I need to detect the edges of the square.


What kind of preprocessing methods can I use to get this result?

Given that the text is the only large blob, and everything else is barely larger than a pixel, a simple morphological opening should suffice.

You can do this with OpenCV, or with ImageMagick.
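For reference, here is a minimal Python sketch of such an opening with OpenCV; the file names, kernel size and iteration count are placeholders to tune for your own image:

import cv2
import numpy as np

# Load the scanned page as grayscale (placeholder file name).
img = cv2.imread('sample.png', cv2.IMREAD_GRAYSCALE)

# Opening = erosion followed by dilation: the erosion deletes the thin
# noise specks, the dilation restores the surviving text blob to its size.
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel, iterations=2)

cv2.imwrite('opened.png', opened)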

Afterwards, the white rectangle should be the only thing left in the image. You can find it with OpenCV's findContours, the CvBlobs library, or ImageMagick's -crop function.
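A possible sketch of the findContours route on the cleaned image; the threshold value and file names are assumptions:

import cv2

opened = cv2.imread('opened.png', cv2.IMREAD_GRAYSCALE)

# Binarize (assumed threshold) so the contours are well defined.
_, binary = cv2.threshold(opened, 127, 255, cv2.THRESH_BINARY)

# The remaining white rectangle should be the largest external contour.
cnts = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]   # handle both OpenCV return conventions
largest = max(cnts, key=cv2.contourArea)

# The axis-aligned bounding box of that contour is the region to crop.
x, y, w, h = cv2.boundingRect(largest)
print(x, y, w, h)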

Here is your image with two erosion steps followed by two dilation steps applied:
You simply have to plug this image into the OpenCV findContours function to get the position.

With your test image, I was able to remove all the noise with a simple erosion operation.

After that, a simple iteration over the Mat to find the corner pixels is trivial, and I talked about that in a previous answer. For testing purposes, we can draw green lines between those points to display the region of interest in the original image:
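In Python, the same corner search can be done without an explicit pixel loop; here is a sketch under the assumption that the eroded image is already available (file names are placeholders):

import cv2
import numpy as np

img = cv2.imread('original.png')                        # placeholder paths
gray = cv2.imread('eroded.png', cv2.IMREAD_GRAYSCALE)

# Row/column indices of every pixel that survived the erosion.
ys, xs = np.nonzero(gray)
left, right = int(xs.min()), int(xs.max())
top, bottom = int(ys.min()), int(ys.max())

# Draw green lines between the four corner points for inspection.
corners = [(left, top), (right, top), (right, bottom), (left, bottom)]
for i in range(4):
    cv2.line(img, corners[i], corners[(i + 1) % 4], (0, 255, 0), 1)
cv2.imwrite('debug_box.png', img)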

Finally, I set the ROI in the original image and crop out that part.

The final result is displayed in the image below:

I wrote a sample code that performs this task using the C++ interface of OpenCV. I'm confident in your ability to translate this code to Python. If you can't, forget the code and stick with the roadmap I shared in this answer. (A rough Python sketch is also included after the C++ code below.)

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char* argv[])
{
    cv::Mat img = cv::imread(argv[1]);
    std::cout << "Original image size: " << img.size() << std::endl;

    // Convert BGR Mat to GRAY
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    std::cout << "Gray image size: " << gray.size() << std::endl;

    // Erode image to remove unwanted noises
    int erosion_size = 5;
    cv::Mat element = cv::getStructuringElement(cv::MORPH_CROSS,
                                       cv::Size(2 * erosion_size + 1, 2 * erosion_size + 1),
                                       cv::Point(erosion_size, erosion_size) );
    cv::erode(gray, gray, element);

    // Scan the image searching for points and store them in a vector
    std::vector<cv::Point> points;
    cv::Mat_<uchar>::iterator it = gray.begin<uchar>();
    cv::Mat_<uchar>::iterator end = gray.end<uchar>();
    for (; it != end; it++)
    {
        if (*it) 
            points.push_back(it.pos()); 
    }

    // From the points, figure out the size of the ROI
    int left, right, top, bottom;
    for (int i = 0; i < points.size(); i++)
    {
        if (i == 0) // initialize corner values
        {
            left = right = points[i].x;
            top = bottom = points[i].y;
        }

        if (points[i].x < left)
            left = points[i].x;

        if (points[i].x > right)
            right = points[i].x;

        if (points[i].y < top)
            top = points[i].y;

        if (points[i].y > bottom)
            bottom = points[i].y;
    }
    std::vector<cv::Point> box_points;
    box_points.push_back(cv::Point(left, top));
    box_points.push_back(cv::Point(left, bottom));
    box_points.push_back(cv::Point(right, bottom));
    box_points.push_back(cv::Point(right, top));

    // Compute minimal bounding box for the ROI
    // Note: for some unknown reason, width/height of the box are switched.
    cv::RotatedRect box = cv::minAreaRect(cv::Mat(box_points));
    std::cout << "box w:" << box.size.width << " h:" << box.size.height << std::endl;

    // Draw bounding box in the original image (debugging purposes)
    //cv::Point2f vertices[4];
    //box.points(vertices);
    //for (int i = 0; i < 4; ++i)
    //{
    //    cv::line(img, vertices[i], vertices[(i + 1) % 4], cv::Scalar(0, 255, 0), 1, cv::LINE_AA);
    //}
    //cv::imshow("Original", img);
    //cv::waitKey(0);

    // Set the ROI to the area defined by the box
    // Note: because the width/height of the box are switched, 
    // they were switched manually in the code below:
    cv::Rect roi;
    roi.x = box.center.x - (box.size.height / 2);
    roi.y = box.center.y - (box.size.width / 2);
    roi.width = box.size.height;
    roi.height = box.size.width;
    std::cout << "roi @ " << roi.x << "," << roi.y << " " << roi.width << "x" << roi.height << std::endl;

    // Crop the original image to the defined ROI
    cv::Mat crop = img(roi);

    // Display cropped ROI
    cv::imshow("Cropped ROI", crop);
    cv::waitKey(0);

    return 0;
}
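For the Python translation mentioned above, here is a rough sketch of the same pipeline; it is an assumed equivalent using OpenCV's Python bindings and NumPy, not a verbatim port. The minAreaRect step of the C++ code is skipped, because the min/max of the surviving pixel coordinates already gives an axis-aligned box:

import sys
import cv2
import numpy as np

img = cv2.imread(sys.argv[1])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Erode to remove unwanted noise, mirroring erosion_size = 5 in the C++ code.
erosion_size = 5
element = cv2.getStructuringElement(cv2.MORPH_CROSS,
                                    (2 * erosion_size + 1, 2 * erosion_size + 1),
                                    (erosion_size, erosion_size))
gray = cv2.erode(gray, element)

# The min/max coordinates of the remaining white pixels define the ROI.
ys, xs = np.nonzero(gray)
left, right = int(xs.min()), int(xs.max())
top, bottom = int(ys.min()), int(ys.max())

# Crop the original image to that ROI and display it.
crop = img[top:bottom + 1, left:right + 1]
cv2.imshow('Cropped ROI', crop)
cv2.waitKey(0)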

Comments on this answer:

This will work for detecting the square; however, the result image you provided blurs the text. I want the text inside the square to stay as it is, because in the end I want to feed that text to OCR. Let me know if there is a better way to achieve what I am trying to do.

If you have the rectangle (even unfilled), you can easily find its contour with the findContours() function (it may find several contours, just take the biggest one) and then fill it with white. You will then have a filled rectangle, so just use bitwise_and() on this picture and the original one.

Thank you for the response. I am now trying to convert the C++ code you provided to JavaCV :) Thanks again.

No need to thank me, just vote up. Consider clicking the checkbox near the answer to mark it as the accepted answer. By doing these things you will help future visitors.

What is the line if (*it) doing, and what does it stand for? I am really confused.

I put an if/else block on that line of code, for example if (*it) { std::... — you can print the pixel coordinates there, or the pixel's color.
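A small Python sketch of the fill-and-mask idea from that comment, assuming the cleaned image contains only the (possibly hollow) rectangle; file names are placeholders:

import cv2
import numpy as np

original = cv2.imread('original.png')
cleaned = cv2.imread('opened.png', cv2.IMREAD_GRAYSCALE)

# Take the biggest contour (the rectangle) and fill it to build a mask.
cnts = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
biggest = max(cnts, key=cv2.contourArea)
mask = np.zeros(cleaned.shape, np.uint8)
cv2.drawContours(mask, [biggest], -1, 255, cv2.FILLED)

# bitwise_and keeps the original pixels inside the filled rectangle,
# so the text stays crisp for OCR.
kept = cv2.bitwise_and(original, original, mask=mask)
cv2.imwrite('masked.png', kept)

The Python code below takes a fuller approach: it shrinks large inputs, finds the regions of interest, and saves each ROI from top to bottom.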
#objective:
#1) compress large images to less than 1000x1000
#2) identify regions of interest
#3) save ROIs in top-to-bottom order
import cv2
import os

def get_contour_precedence(contour, cols):
    tolerance_factor = 10
    origin = cv2.boundingRect(contour)
    return ((origin[1] // tolerance_factor) * tolerance_factor) * cols + origin[0]

# Load image, grayscale, Gaussian blur, adaptive threshold
image = cv2.imread('./images/sample_0.jpg')

#compress the image if it is larger than 1000x1000
height, width, color = image.shape #unpacking tuple (height, width, colour) returned by image.shape
while(width > 1000):
    height = height/2
    width = width/2
print(int(height), int(width))
height = int(height)
width = int(width)
image = cv2.resize(image, (width, height))

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (9,9), 0)  # computed but not used below; the threshold is applied to gray
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV,11,30)
# Dilate to combine adjacent text contours
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,9))
ret,thresh3 = cv2.threshold(image,127,255,cv2.THRESH_BINARY_INV)  # only used by the commented-out findContours call below
dilate = cv2.dilate(thresh, kernel, iterations=4)

# Find contours, highlight text areas, and extract ROIs
cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
#cnts = cv2.findContours(thresh3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

cnts = cnts[0] if len(cnts) == 2 else cnts[1]

#ORDER CONTOURS top to bottom
cnts.sort(key=lambda x:get_contour_precedence(x, image.shape[1]))

#delete previous roi images in folder roi to avoid mixing them with new results
dir = './roi/'
for f in os.listdir(dir):
    os.remove(os.path.join(dir, f))

ROI_number = 0
for c in cnts:
    area = cv2.contourArea(c)
    if area > 10000:
        x,y,w,h = cv2.boundingRect(c)
        #cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 3)
        cv2.rectangle(image, (x, y), (x + w, y + h), (100,100,100), 1)
        #use below code to write roi when results are good
        ROI = image[y:y+h, x:x+w]
        cv2.imwrite('roi/ROI_{}.jpg'.format(ROI_number), ROI)
        ROI_number += 1

cv2.imshow('thresh', thresh)
cv2.imshow('dilate', dilate)
cv2.imshow('image', image)
cv2.waitKey()