Filtering out false positives from feature matching / homography – OpenCV C++

Tags: c++, opencv, computer-vision, opencv3.0

I have a program that takes an input picture and whose goal is to determine whether a certain object (essentially an image) is contained in that picture. If it is, the program tries to estimate its position. This works very well when the object is actually in the picture. However, when I put something sufficiently complex into the picture, I get a lot of false positives.

I am wondering whether there is a good way to filter out these false positives, ideally one that is not too computationally expensive.

My program is based on . The only difference is that I use BRISK instead of SURF, so I don't need the contrib modules.

How I find matches

descriptorMatcher->match(descImg1, descImg2, matches, Mat());
double max_dist = 0; double min_dist = 100;

//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descImg1.rows; i++ )
{ double dist = matches[i].distance;
  if( dist < min_dist ) min_dist = dist;
  if( dist > max_dist ) max_dist = dist;
}
Good matches

std::vector< DMatch > good_matches;

for( int i = 0; i < descImg1.rows; i++ )
{ if( matches[i].distance < 4*min_dist )
  { good_matches.push_back( matches[i] ); }
}
Homography

std::vector<Point2f> obj;
std::vector<Point2f> scene;

for( int i = 0; i < good_matches.size(); i++ )
{
  //-- Get the keypoints from the good matches
  obj.push_back( keyImg1[ good_matches[i].queryIdx ].pt );
  scene.push_back( keyImg2[ good_matches[i].trainIdx ].pt );
}

Mat H = findHomography( obj, scene, RANSAC );
Object corners

std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img1.cols, 0 );
obj_corners[2] = cvPoint( img1.cols, img1.rows ); obj_corners[3] = cvPoint( 0, img1.rows );
std::vector<Point2f> scene_corners(4);

perspectiveTransform( obj_corners, scene_corners, H);

You cannot completely eliminate false positives. That is why the RANSAC algorithm is used to find the homography in the first place. However, you can check whether the estimated homography is "good". See for details. If the estimated homography is wrong, you can discard it and assume that no object was found. Since you need at least 4 corresponding points to estimate a homography, you can reject homographies estimated from fewer inliers than a predefined threshold (such as 6). This will probably filter out most wrongly estimated homographies:

int minInliers = 6; // can be any value > 4
double reprojectionError = 3; // default value; lower it to get a more reliable estimation
Mat mask;
Mat H = findHomography( obj, scene, RANSAC, reprojectionError, mask );
int inliers = 0;
for (int i = 0; i < mask.rows; ++i)
{
    if (mask.at<uchar>(i) == 1) inliers++;
}
if (inliers > minInliers)
{
    // homography is good
}
You can also test the approach proposed in the original SIFT paper to get better matches. You need to find the two nearest descriptors for each query point and then check whether the ratio of their distances is below a threshold (David Lowe suggests 0.8). See for details:

std::vector< std::vector<DMatch> > knn_matches;
descriptorMatcher->knnMatch( descImg1, descImg2, knn_matches, 2 );
//-- Filter matches using the Lowe's ratio test
const float ratio_thresh = 0.8f;
std::vector<DMatch> good_matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
    if (knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance)
    {
        good_matches.push_back(knn_matches[i][0]);
    }
}
Comment: Hey, thanks for the suggestions! Are you sure this is the right way to check the inliers? It always comes out as the same number as my matches :(

Reply: You actually have to check whether `mask.at<uchar>(i) == 1` to tell whether the point at index `i` in `scene` is an inlier. `mask` always has the same number of rows as `scene`... Thanks @AntersBear for spotting this, the answer has been corrected.