Android: how to find the centroid of the largest blob using OpenCV contourArea and moments


I want to detect a yellow object and draw the centroid position on the largest yellow object detected.

I perform the steps in the following order:

  • Convert the input rgba frame to HSV using the cvtColor() method
  • Perform color segmentation in HSV using the inRange() method, bounding it to the yellow color range only and returning a binary threshold mask
  • Perform a morphological operation (specifically MORPH_CLOSE), i.e. dilation followed by erosion of the mask, to remove any noise
  • Apply a Gaussian blur to smooth the mask
  • Run Canny edge detection to make the edges more pronounced, in preparation for contour detection in the next step. (I am starting to doubt whether this step is useful at all? See the note after the code below.)
  • Apply the findContours() algorithm to find the contours in the image, along with the hierarchy
  • Here I intended to use feature2d.FeatureDetection(SIMPLEBLOB) with the blob area as a detection parameter, but there seems to be no implementation that supports Android, so I had to work around that limitation and use Imgproc.contourArea() to find the largest blob

    Is there a way to do this?

  • I pass the contours obtained earlier from the findContours() method as a parameter to Imgproc.moments to compute the centroid positions of the detected objects

    However, I would like to point out that the current implementation computes a centroid for every detected contour (yellow object). *Please see/refer to Figures 1 and 2 for what is drawn onto the frame returned to the user.

    What I want to achieve is to find a way to take only the contour with the largest contour area (via largestContourArea) and pass that information as a parameter into Imgproc.moments(), so that I compute the centroid of only the largest detected contour (object); as a result, only 1 centroid position should be drawn on screen at any point in time.

    I have tried several approaches, such as passing the contour of the largest object as a parameter into Imgproc.moments(), but it either did not work because of mismatched data types, or, when it did work, the output was not as desired: multiple centroid points were drawn inside or along the perimeter of the object instead of a single point at the centre of the largest contour object.

    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    
         InputFrame = inputFrame.rgba();
    
         Core.transpose(InputFrame,mat1); //transpose InputFrame (src) into mat1 (dst) as the first step of the orientation fix.
         Imgproc.resize(mat1,mat2,InputFrame.size(),0,0,0);    // params:(Mat src, Mat dst, Size dsize, fx, fy, interpolation)   Extract the dimensions of the new Screen Orientation, obtain the new orientation's surface width & height.  Try to resize to fit to screen.
         Core.flip(mat2,InputFrame,-1);   // InputFrame now holds the transposed, resized, flipped version of the original inputFrame.rgba().
    
         int rowWidth = InputFrame.rows();
         int colWidth = InputFrame.cols();
    
         Imgproc.cvtColor(InputFrame,InputFrame,Imgproc.COLOR_RGBA2RGB);
         Imgproc.cvtColor(InputFrame,InputFrame,Imgproc.COLOR_RGB2HSV);
    
    
         Lower_Yellow = new Scalar(21,150,150);    //HSV color scale: H selects the hue (0-179 in OpenCV), S controls colour saturation, V is how much light must fall on the object for it to be seen.
         Upper_Yellow = new Scalar(31,255,255);    //HSV color scale: V capped at 255; 8-bit channels never exceed 255, so the earlier upper bound of 360 behaved the same as 255.
    
    
         Core.inRange(InputFrame,Lower_Yellow, Upper_Yellow, maskForYellow);
    
    
         final Size kernelSize = new Size(5, 5);  //must be odd num size & greater than 1.
         final Point anchor = new Point(-1, -1);   //default (-1,-1) means that the anchor is at the center of the structuring element.
         final int iterations = 1;   //number of times dilation is applied.  https://docs.opencv.org/3.4/d4/d76/tutorial_js_morphological_ops.html
    
         Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, kernelSize);
    
         Imgproc.morphologyEx(maskForYellow, yellowMaskMorphed, Imgproc.MORPH_CLOSE, kernel, anchor, iterations);   //MORPH_CLOSE = dilate then erode: white regions become more solid and small black holes/noise inside them are filled.
    
    
    
         Mat mIntermediateMat = new Mat();
         Imgproc.GaussianBlur(yellowMaskMorphed,mIntermediateMat,new Size(9,9),0,0);   //a 9x9 kernel gave a better result than 3x3, maybe cos the wider reference area makes the in-range / out-of-range decision more stable.
         Imgproc.Canny(mIntermediateMat, mIntermediateMat, 5, 120);   //try adjust threshold   //https://stackoverflow.com/questions/25125670/best-value-for-threshold-in-canny
    
         List<MatOfPoint> contours = new ArrayList<>();
         Mat hierarchy = new Mat();
         Imgproc.findContours(mIntermediateMat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));   
    
         byte[] arr = new byte[100];
         //List<double>hierarchyHolder = new ArrayList<>();
         int cols = hierarchy.cols();
         int rows = hierarchy.rows();
         for (int i=0; i < rows; i++) {
             for (int j = 0; j < cols; j++) {
                //hierarchyHolder.add(hierarchy.get(i,j));
                //hierarchy.get(i,j) is a double[] type, not byte.
                Log.d("hierarchy"," " + hierarchy.get(i,j).toString());   
    
            }
        }
    
    
         double maxArea1 = 0;
         int maxAreaIndex1 = 0;
         //MatOfPoint max_contours = new MatOfPoint();
         Rect r = null;
         ArrayList<Rect> rect_array = new ArrayList<Rect>();
    
         for(int i=0; i < contours.size(); i++) {
             double contourArea1 = Imgproc.contourArea(contours.get(i));    //area of the Mat contour @ that particular index in the ArrayList.
             if (maxArea1 < contourArea1){
                 maxArea1 = contourArea1;
                 maxAreaIndex1 = i;    //remember the index of the largest contour seen so far.
             }
         }

         if (!contours.isEmpty()) {
             r = Imgproc.boundingRect(contours.get(maxAreaIndex1));    //bounding box of the largest contour only.
             rect_array.add(r);  //rect_array ends up holding just this 1 rect, the one w the largest contourArea.
         }
    
    
         Imgproc.cvtColor(InputFrame, InputFrame, Imgproc.COLOR_HSV2RGB);
    
    
         if (rect_array.size() > 0) {   //if a rect was found (i.e. at least 1 contour existed), draw it out!
    
             Iterator<Rect> it2 = rect_array.iterator();    //only 1 rect in here now; drawing a rectangle is much faster than drawContour, wont lag. =D
             while (it2.hasNext()) {
                 Rect obj = it2.next();
                 //if
                 Imgproc.rectangle(InputFrame, obj.br(), obj.tl(),
                         new Scalar(0, 255, 0), 1);
             }
    
         }
    
    
     //========= Compute CENTROID POS! WHAT WE WANT TO SHOW ON SCREEN EVENTUALLY!====================== 
    
         List<Moments> mu = new ArrayList<>(contours.size());    //image moments of each contour (not Hu moments).
         for (int i = 0; i < contours.size(); i++) {
             mu.add(Imgproc.moments(contours.get(i)));
         }
    
         List<Point> mc = new ArrayList<>(contours.size());   //the Circle centre Point!
         for (int i = 0; i < contours.size(); i++) {
             //add 1e-5 to avoid division by zero
             mc.add(new Point(mu.get(i).m10 / (mu.get(i).m00 + 1e-5), mu.get(i).m01 / (mu.get(i).m00 + 1e-5)));
         }
    
    
         for (int i = 0; i < contours.size(); i++) {
             Scalar color = new Scalar(150, 150, 150);
    
             Imgproc.circle(InputFrame, mc.get(i), 20, color, -1);   //just to plot the small central point as a dot on the detected ImgObject.
         }
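
On the Canny doubt raised in the step list: inRange() plus MORPH_CLOSE already yields a binary mask, and findContours() works directly on binary images, so the GaussianBlur + Canny passes can usually be skipped. A minimal sketch, assuming the yellowMaskMorphed Mat from the code above (contoursNoCanny, hierarchyNoCanny and maskCopy are illustrative names, not from the original code):

         // Sketch: find contours straight from the binary mask, skipping GaussianBlur + Canny.
         List<MatOfPoint> contoursNoCanny = new ArrayList<>();
         Mat hierarchyNoCanny = new Mat();
         Mat maskCopy = yellowMaskMorphed.clone();   //clone because findContours can modify its input on OpenCV builds older than 3.2.
         Imgproc.findContours(maskCopy, contoursNoCanny, hierarchyNoCanny,
                 Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));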
    
    
The snippet below restricts the moments computation to the single largest contour (by contourArea), so only 1 centroid is drawn on screen:

    //========= Compute CENTROID POS! WHAT WE WANT TO SHOW ON SCREEN EVENTUALLY!======================
            List<Moments> mu = new ArrayList<>(contours.size());
        mu.add(Imgproc.moments(contours.get(maxAreaIndex1)));    //Just adding that 1 single largest contour (largest contourArea) to the list, so MOMENTS is computed only for it to get the CENTROID POS!
    
            List<Point> mc = new ArrayList<>(contours.size());   //the Circle centre Point!
            //add 1e-5 to avoid division by zero
            mc.add(new Point(mu.get(0).m10 / (mu.get(0).m00 + 1e-5), mu.get(0).m01 / (mu.get(0).m00 + 1e-5)));   //index 0 cos there shld only be 1 contour now, the largest one only!
            //notice that it only adds 1 point, the centroid point. Hence only 1 point in the mc list<Point>, so ltr reference that point w an index 0!
    
            Scalar color = new Scalar(150, 150, 150);
    
            Imgproc.circle(InputFrame, mc.get(0), 15, color, -1);   //just to plot the small central point as a dot on the detected ImgObject.
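
For reference, here is a more compact variant (a sketch, not from the original post) that skips the intermediate lists and passes the single largest contour straight to Imgproc.moments(); contours, maxAreaIndex1 and InputFrame are assumed from the code above:

        // Sketch: centroid of the single largest contour, no intermediate lists.
        if (!contours.isEmpty()) {
            Moments m = Imgproc.moments(contours.get(maxAreaIndex1));
            double cx = m.m10 / (m.m00 + 1e-5);   //+1e-5 avoids division by zero for degenerate contours.
            double cy = m.m01 / (m.m00 + 1e-5);
            Imgproc.circle(InputFrame, new Point(cx, cy), 15, new Scalar(150, 150, 150), -1);
        }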