C# Emgu CV - How to get all occurrences of a pattern in an image


Hi, I already have a working solution, but there is one problem:

            // The screenshot will be stored in this bitmap.
            Bitmap capture = new Bitmap(rec.Width, rec.Height, PixelFormat.Format24bppRgb);
            using (Graphics g = Graphics.FromImage(capture))
            {
                g.CopyFromScreen(rec.Location, new System.Drawing.Point(0, 0), rec.Size);
            }

            MCvSURFParams surfParam = new MCvSURFParams(500, false);
            SURFDetector surfDetector = new SURFDetector(surfParam);

            // Template image 
            Image<Gray, Byte> modelImage = new Image<Gray, byte>("template.jpg");
            // Extract features from the object image
            ImageFeature[] modelFeatures = surfDetector.DetectFeatures(modelImage, null);

            // Prepare current frame
            Image<Gray, Byte> observedImage = new Image<Gray, byte>(capture);
            ImageFeature[] imageFeatures = surfDetector.DetectFeatures(observedImage, null);


            // Create a SURF Tracker using k-d Tree
            Features2DTracker tracker = new Features2DTracker(modelFeatures);

            Features2DTracker.MatchedImageFeature[] matchedFeatures = tracker.MatchFeature(imageFeatures, 2);
            matchedFeatures = Features2DTracker.VoteForUniqueness(matchedFeatures, 0.8);
            matchedFeatures = Features2DTracker.VoteForSizeAndOrientation(matchedFeatures, 1.5, 20);
            HomographyMatrix homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(matchedFeatures);

            // Merge the object image and the observed image into one image for display
            Image<Gray, Byte> res = modelImage.ConcateVertical(observedImage);

            #region draw lines between the matched features

            foreach (Features2DTracker.MatchedImageFeature matchedFeature in matchedFeatures)
            {
                PointF p = matchedFeature.ObservedFeature.KeyPoint.Point;
                p.Y += modelImage.Height;
                res.Draw(new LineSegment2DF(matchedFeature.SimilarFeatures[0].Feature.KeyPoint.Point, p), new Gray(0), 1);
            }

            #endregion

            #region draw the project region on the image

            if (homography != null)
            {
                // draw a rectangle along the projected model
                Rectangle rect = modelImage.ROI;
                PointF[] pts = new PointF[] { 
                    new PointF(rect.Left, rect.Bottom),
                    new PointF(rect.Right, rect.Bottom),
                    new PointF(rect.Right, rect.Top),
                    new PointF(rect.Left, rect.Top)
                };

                homography.ProjectPoints(pts);

                for (int i = 0; i < pts.Length; i++)
                    pts[i].Y += modelImage.Height;

                res.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Gray(255.0), 2);
            }

            #endregion

            pictureBoxScreen.Image = res.ToBitmap();
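As an aside, the `MatchFeature(imageFeatures, 2)` call followed by `VoteForUniqueness(matchedFeatures, 0.8)` in the code above is Lowe's ratio test: each observed feature is matched against its two nearest model features, and the match is kept only when the best distance is clearly smaller than the second-best. A minimal, language-neutral sketch (illustrative names, not Emgu's API):

```python
def ratio_test(matches, ratio=0.8):
    """Keep matches whose nearest-neighbour distance d1 is clearly
    smaller than the second-nearest distance d2 (Lowe's ratio test)."""
    # matches: list of (d1, d2) distance pairs per observed feature
    return [m for m in matches if m[1] > 0 and m[0] / m[1] < ratio]

ambiguous = (0.9, 1.0)  # two nearly equal candidates -> rejected
distinct = (0.3, 1.0)   # one clear winner -> kept
print(ratio_test([ambiguous, distinct]))  # -> [(0.3, 1.0)]
```

Dropping ambiguous matches this way removes most false correspondences before the homography is estimated.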
The result is:

My problem is that the function
homography.ProjectPoints(pts)
only finds the first occurrence of the pattern (the white rectangle in the image above).
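This is expected: `GetHomographyMatrixFromMatchedFeatures` fits a single 3x3 matrix H, and `ProjectPoints` just applies that one H to the model corners, so it can only describe one placement of the template. A sketch of what the projection does, assuming NumPy (this is standard homography math, not Emgu's implementation):

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography H to an (n, 2) array of 2D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    proj = pts_h @ H.T                                # apply H
    return proj[:, :2] / proj[:, 2:3]                 # back to Euclidean

# A pure translation by (100, 50): one H encodes exactly one placement
H = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0, 50.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [40.0, 0.0], [40.0, 40.0], [0.0, 40.0]])
print(project_points(H, corners))
```

When the template occurs several times, the matched features of all instances are mixed together, and the single-homography fit locks onto one instance (or a compromise), which is why only one rectangle is drawn.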


How can I project all occurrences of the template, i.e. obtain a rectangle for every match of the template in the image?

I ran into a problem similar to yours in my master's thesis. Basically you have two options:

  • Use clustering, or a point-density approach (the latter depends on two parameters, but it can be made threshold-free in the 2D R^2 space)
  • Use a multiple robust model fitting estimation technique. With this more advanced approach, you cluster the points that share the same homography, rather than the points that are simply close to each other in Euclidean space
  • Once the matches are partitioned into "clusters", you can estimate a homography from the matches belonging to each cluster
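The first option can be sketched very simply: group the observed keypoint locations by proximity, then run the homography estimation once per group (in Emgu, call GetHomographyMatrixFromMatchedFeatures on each group's matches). Below is a minimal single-link clustering in Python, with an illustrative eps threshold; it is a sketch of the idea, not production code:

```python
from collections import deque

def cluster_points(points, eps=30.0):
    """Single-link clustering: a point joins a cluster if it lies
    within eps of any point already in that cluster."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            # collect all unvisited neighbours of point i
            near = [j for j in unvisited
                    if (points[i][0] - points[j][0]) ** 2
                     + (points[i][1] - points[j][1]) ** 2 <= eps ** 2]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append([points[k] for k in cluster])
    return clusters

# Two well-separated groups of matched keypoints -> two template instances
pts = [(10, 10), (12, 14), (15, 11), (200, 200), (205, 198)]
print(len(cluster_points(pts)))  # -> 2
```

Note that this simple Euclidean grouping breaks down when instances overlap or sit very close together; that is exactly the case where the second option (clustering by shared homography rather than by distance) pays off.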