Counting white-area pixels inside contours with OpenCV and JavaCV


I have developed a program with JavaCV to detect motion. So far I have got as far as running cvFindContours on the processed image. The source code is given below.

public class MotionDetect {

public static void main(String args[]) throws Exception, InterruptedException {

    //FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(new File("D:/pool.avi"));
    OpenCVFrameGrabber grabber = new OpenCVFrameGrabber("D:/2.avi");
    final CanvasFrame canvas = new CanvasFrame("My Image");
    final CanvasFrame canvas2 = new CanvasFrame("ROI");
    canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
    grabber.start();
    IplImage frame = grabber.grab();
    CvSize imgsize = cvGetSize(frame);
    IplImage grayImage = cvCreateImage(imgsize, IPL_DEPTH_8U, 1);
    IplImage ROIFrame = cvCreateImage(cvSize((265 - 72), (214 - 148)), IPL_DEPTH_8U, 1);
    IplImage colorImage;
    IplImage movingAvg = cvCreateImage(imgsize, IPL_DEPTH_32F, 3);
    IplImage difference = null;
    IplImage temp = null;
    IplImage motionHistory = cvCreateImage(imgsize, IPL_DEPTH_8U, 3);


    CvRect bndRect = cvRect(0, 0, 0, 0);
    CvPoint pt1 = new CvPoint(), pt2 = new CvPoint();
    CvFont font = null;

    //Capture the movie frame by frame.
    int prevX = 0;
    int numPeople = 0;
    char[] wow = new char[65];

    int avgX = 0;

    //Indicates whether this is the first time in the loop of frames.
    boolean first = true;

    //Indicates the contour which was closest to the left boundary before the object
    //entered the region between the buildings.
    int closestToLeft = 0;
    //Same as above, but for the right.
    int closestToRight = 320;


    while (true) {
        colorImage = grabber.grab();
        if (colorImage != null) {
            if (first) {
                difference = cvCloneImage(colorImage);
                temp = cvCloneImage(colorImage);
                cvConvertScale(colorImage, movingAvg, 1.0, 0.0);
                first = false;
                //cvShowImage("My Window1", difference);
            } //else, make a running average of the motion.
            else {
                cvRunningAvg(colorImage, movingAvg, 0.020, null);
            }

            //Convert the scale of the moving average.
            cvConvertScale(movingAvg, temp, 1.0, 0.0);

            //Minus the current frame from the moving average.
            cvAbsDiff(colorImage, temp, difference);

            //Convert the image to grayscale.
            cvCvtColor(difference, grayImage, CV_RGB2GRAY);
            //canvas.showImage(grayImage);
            //Convert the image to black and white.
            cvThreshold(grayImage, grayImage, 70, 255, CV_THRESH_BINARY);

            //Dilate and erode to get people blobs
            cvDilate(grayImage, grayImage, null, 18);
            cvErode(grayImage, grayImage, null, 10);
            canvas.showImage(grayImage);


            ROIFrame = cvCloneImage(grayImage);
            cvSetImageROI(ROIFrame, cvRect(72, 148, (265 - 72), (214 - 148)));
            //cvOr(outFrame, tempFrame, outFrame);
            cvShowImage("ROI Frame", ROIFrame);



            cvRectangle(colorImage, /* the dest image */
                    cvPoint(72, 148), /* top left point */
                    cvPoint(265, 214), /* bottom right point */
                    cvScalar(255, 0, 0, 0), /* the color; blue */
                    1, 8, 0);

            CvMemStorage storage = cvCreateMemStorage(0);
            CvSeq contour = new CvSeq(null);
            cvFindContours(grayImage, storage, contour, Loader.sizeof(CvContour.class),  CV_RETR_CCOMP,  CV_CHAIN_APPROX_SIMPLE);

                //Show the frame.
                cvShowImage("My Window", colorImage);

                //Wait for the user to see it.
                cvWaitKey(10);
            }
        }
    }
}

Within this code, I need to calculate the white contour area, or the number of white pixels. Is there any way to proceed using the function cvContourArea() (see its documentation)?

In your code, after cvFindContours, loop over all the contours like this:

CvSeq currContour = contour;

while (currContour != null && !currContour.isNull()) {
    double area = Math.abs(cvContourArea(currContour, CV_WHOLE_SEQ, 0));
    currContour = currContour.h_next();
}

Don't forget to store each area somewhere as you go.
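For intuition, cvContourArea computes a polygon area from the contour's points (Green's theorem, i.e. the shoelace formula). A minimal plain-Java sketch of that computation, with illustrative names and no OpenCV dependency:

```java
// Sketch of the shoelace (Green's theorem) area that cvContourArea
// computes for a polygonal contour. Points are {x, y} pairs given in
// order around the polygon; names here are illustrative only.
public class ContourArea {
    static double area(int[][] pts) {
        double sum = 0;
        for (int i = 0; i < pts.length; i++) {
            int[] p = pts[i];
            int[] q = pts[(i + 1) % pts.length]; // next vertex, wrapping around
            sum += (double) p[0] * q[1] - (double) q[0] * p[1];
        }
        return Math.abs(sum) / 2.0; // abs() handles either winding direction
    }

    public static void main(String[] args) {
        // A 3x2 axis-aligned rectangle: area 6.
        int[][] rect = {{0, 0}, {3, 0}, {3, 2}, {0, 2}};
        System.out.println(area(rect)); // prints 6.0
    }
}
```

Note that an area computed this way runs through the contour points themselves, so for small blobs it can differ slightly from the raw count of white pixels inside the contour.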

Thanks a lot for your answer. If I wanted to count the white pixels instead, how would I do that?

You binarize the image before finding the contours, right? After thresholding, everything is either black or white, so cvContourArea will give you the area (i.e. the white pixels) of each contour.

Thanks, I managed to do it for that region. Thanks again.
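The point made in the comments can be illustrated without OpenCV at all: after cvThreshold(..., 70, 255, CV_THRESH_BINARY) every pixel is either 0 or 255, so the "white area" of the binary image is simply a count of 255s (which is what cvCountNonZero returns for a binary image). A minimal sketch, with illustrative names, operating on a flat array of gray values:

```java
// Sketch of the thresholding + white-pixel-counting idea from the thread.
// threshold() mirrors cvThreshold with CV_THRESH_BINARY: pixels strictly
// above the threshold become maxVal (255), everything else becomes 0.
// countWhite() then mirrors cvCountNonZero on the resulting binary image.
public class WhiteArea {
    static int[] threshold(int[] gray, int thresh, int maxVal) {
        int[] out = new int[gray.length];
        for (int i = 0; i < gray.length; i++)
            out[i] = gray[i] > thresh ? maxVal : 0;
        return out;
    }

    static int countWhite(int[] bin) {
        int n = 0;
        for (int v : bin)
            if (v == 255) n++;
        return n;
    }

    public static void main(String[] args) {
        int[] gray = {10, 69, 70, 71, 200, 255};
        int[] bin = threshold(gray, 70, 255);   // {0, 0, 0, 255, 255, 255}
        System.out.println(countWhite(bin));    // prints 3
    }
}
```

In the actual JavaCV program the equivalent would be to set the ROI on the thresholded grayImage and call cvCountNonZero on it, since the image is already binary at that point.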