C++ OpenCV SIFT/SURF/ORB: the drawMatches function is not working properly
I am using SIFT/SURF and ORB, but sometimes I have a problem with the drawMatches function. Here is the error:

OpenCV Error: Assertion failed (i2 >= 0 && i2 < static_cast<int>(keypoints2.size())) in drawMatches, file /home/OpenCV-2.4.6.1/modules/features2d/src/draw.cpp, line 208

The call that fails:
drawMatchPoints(img1,keypoints_img1,img2,keypoints_img2,matches);
I tried swapping img1/keypoints_img1 with img2/keypoints_img2, like this:
drawMatchPoints(img2,keypoints_img2,img1,keypoints_img1,matches);
Here is my function, which performs the homography:
void drawMatchPoints(cv::Mat image1, std::vector<KeyPoint> keypoints_img1,
                     cv::Mat image2, std::vector<KeyPoint> keypoints_img2,
                     std::vector<cv::DMatch> matches)
{
    cv::Mat img_matches;
    drawMatches(image1, keypoints_img1, image2, keypoints_img2,
                matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    std::cout << "Number of good matches " << (int)matches.size() << "\n" << endl;

    //-- Localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;
    for (size_t i = 0; i < matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_img1[matches[i].queryIdx].pt);
        scene.push_back(keypoints_img2[matches[i].trainIdx].pt);
    }
    Mat H = findHomography(obj, scene, CV_RANSAC);
    std::cout << "Size of homography " << *H.size << std::endl;

    //-- Get the corners from image_1 (the object to be "detected")
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = cvPoint(0, 0);
    obj_corners[1] = cvPoint(image1.cols, 0);
    obj_corners[2] = cvPoint(image1.cols, image1.rows);
    obj_corners[3] = cvPoint(0, image1.rows);
    std::vector<Point2f> scene_corners(4);
    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2)
    line(img_matches, scene_corners[0] + Point2f(image1.cols, 0), scene_corners[1] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[1] + Point2f(image1.cols, 0), scene_corners[2] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[2] + Point2f(image1.cols, 0), scene_corners[3] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[3] + Point2f(image1.cols, 0), scene_corners[0] + Point2f(image1.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    cv::imshow("Good Matches & Object detection", img_matches);
    cv::waitKey(5000);
}
The matching part:
std::cout << "Type of matcher : " << type_of_matcher << std::endl;
if (type_of_matcher == "FLANN" || type_of_matcher == "BF") {
    std::vector<KeyPoint> keypoints_img1 = keyfeatures.compute_Keypoints(img1);
    std::vector<KeyPoint> keypoints_img2 = keyfeatures.compute_Keypoints(img2);
    cv::Mat descriptor_img1 = keyfeatures.compute_Descriptors(img1);
    cv::Mat descriptor_img2 = keyfeatures.compute_Descriptors(img2);
    std::cout << "Size keyPoint1 " << keypoints_img1.size() << "\n" << std::endl;
    std::cout << "Size keyPoint2 " << keypoints_img2.size() << "\n" << std::endl;

    // FLANN with SIFT or SURF
    if (type_of_matcher == "FLANN") {
        Debug::info("USING Matcher FLANN");
        fLmatcher.match(descriptor_img1, descriptor_img2, matches);

        double max_dist = 0; double min_dist = 100;
        //-- Quick calculation of max and min distances between keypoints
        for (int i = 0; i < descriptor_img1.rows; i++) {
            double dist = matches[i].distance;
            if (dist < min_dist) min_dist = dist;
            if (dist > max_dist) max_dist = dist;
        }

        std::vector<DMatch> good_matches;
        for (int i = 0; i < descriptor_img1.rows; i++) {
            if (matches[i].distance <= max(2 * min_dist, 0.02)) {
                good_matches.push_back(matches[i]);
            }
        }
        std::cout << "Size of good match : " << (int)good_matches.size() << std::endl;

        //-- Draw only "good" matches
        if (!good_matches.empty()) {
            drawMatchPoints(img1, keypoints_img1, img2, keypoints_img2, good_matches);
        }
        else {
            Debug::error("Flann Matcher : Pas de match");
            cv::Mat img_matches;
            drawMatches(img1, keypoints_img1, img2, keypoints_img2,
                        matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                        vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
            cv::imshow("No match", img_matches);
            cv::waitKey(5000);
        }
    }
    // Brute force with SIFT or SURF
    else if (type_of_matcher == "BF") {
        Debug::info("USING Matcher Brute Force");
        bFmatcher.match(descriptor_img1, descriptor_img2, matches);
        if (!matches.empty()) {
            std::nth_element(matches.begin(),      // initial position
                             matches.begin() + 24, // position of the sorted element
                             matches.end());       // end position
            matches.erase(matches.begin() + 25, matches.end());
            drawMatchPoints(img1, keypoints_img1, img2, keypoints_img2, matches);
            //drawMatchPoints(img2, keypoints_img2, img1, keypoints_img1, matches);
        }
        else {
            Debug::error("Brute Force matcher : Pas de match");
            cv::Mat img_matches;
            drawMatches(img1, keypoints_img1, img2, keypoints_img2,
                        matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                        vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
            cv::imshow("No match", img_matches);
            cv::waitKey(5000);
        }
    }
}
The fix was to pass the keypoints explicitly to the descriptor computation, instead of calling:

cv::Mat descriptor_img1 = keyfeatures.compute_Descriptors(img1);

I think there was a conflict when I matched... but I don't know exactly why I shouldn't declare the keypoints in my .h and should instead make them a local parameter of my function.
Thank you all! For anyone who, like me, searched for this and could not find a solution to:

Assertion failed (i2 >= 0 && i2 < static_cast<int>(keypoints2.size()))
For me, this happened because I discarded some keypoints before calling drawMatches, but after computing the descriptors, i.e. after calling DescriptorExtractor#compute. That meant drawMatches was referring, through the descriptors, to the old keypoints, while I had changed those keypoints. The end result was that some matches had a large index while the keypoint vector was small, hence the error.

From the comment thread:

As far as I know there is no such thing as cv::drawMatchPoints(); there is, however, cv::drawMatches(). Can you provide more information and code? Since cv::drawMatches() uses the match data to display the actual matches, a difference in the number of keypoints between the two images should not cause a problem. As can be seen from the source (around #L189), any remaining keypoints are drawn with cv::drawKeypoints(). As for the "I don't know" part: before calling cv::drawMatches() you do in fact know the size of each image's keypoint vector (otherwise you could not call it ;)). As another workaround (though it still does not explain the problem at hand), you could check the sizes of the two keypoint vectors and swap them if necessary. The same problem appears in a similar question, where swapping seems to have solved it. That is also why I asked for more code in my first comment, in particular the matching process.

Sorry, I have edited my post. drawMatchPoints is actually a function I wrote that wraps cv::drawMatches(). I tried swapping the two sets of parameters, but neither order works; I get the same error. As for the answers.opencv link, I had already seen it... thanks.

Hmm, which matcher are you using? Also, can you show the exact code that builds the match vector? I just checked with my own code, where I stitch aerial images together with ORB and cross-match every image against the others. I set my ORB to detect 600 features per image; some images return 600 but some return fewer (e.g. 569), which by your account should fail automatically, yet no such error occurred and everything went as planned. A tip for better output: you can pass the mask produced by findHomography() with RANSAC to drawMatches() to display only the points that survive the transform and their corresponding matches. Displaying the result should always be your last step, not your first.
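The inlier-mask tip from the last comment can be sketched as follows. This is only a sketch against the OpenCV 2.4-era API used in the question; the function name drawInlierMatches is illustrative, while the variable names follow the question's code:

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

void drawInlierMatches(const cv::Mat& img1, const std::vector<cv::KeyPoint>& keypoints_img1,
                       const cv::Mat& img2, const std::vector<cv::KeyPoint>& keypoints_img2,
                       const std::vector<cv::DMatch>& good_matches) {
    std::vector<cv::Point2f> obj, scene;
    for (size_t i = 0; i < good_matches.size(); ++i) {
        obj.push_back(keypoints_img1[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_img2[good_matches[i].trainIdx].pt);
    }

    // findHomography fills 'mask' with one entry per match: non-zero = RANSAC inlier.
    std::vector<uchar> mask;
    cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC, 3.0, mask);

    // drawMatches accepts the mask as its matchesMask argument,
    // so only the RANSAC inliers are drawn.
    cv::Mat img_matches;
    cv::drawMatches(img1, keypoints_img1, img2, keypoints_img2, good_matches,
                    img_matches, cv::Scalar::all(-1), cv::Scalar::all(-1),
                    std::vector<char>(mask.begin(), mask.end()),
                    cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    cv::imshow("Inlier matches", img_matches);
    cv::waitKey(0);
}
```

Compared with drawing all "good" matches, this makes it immediately visible whether the homography is built from geometrically consistent correspondences.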
I had a keypoints attribute on my class:

class keyFeatures {
public:
    ...
    std::vector<cv::KeyPoint> keypoints;
    ...
};

I removed it and used

cv::Mat descriptor_img1 = keyfeatures.compute_Descriptors(img1, keypoints_img1);

instead of

cv::Mat descriptor_img1 = keyfeatures.compute_Descriptors(img1);
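The general rule behind this fix: the keypoint vector handed to the matcher and to drawMatches must be exactly the one the descriptors were computed from, because DescriptorExtractor::compute may remove keypoints it cannot describe. A sketch of the consistent pattern with OpenCV 2.4-style SURF classes (the question's keyfeatures class is not shown, so this implementation is assumed):

```cpp
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>  // SURF lives in the nonfree module in 2.4

// Detect and describe with ONE keypoints vector per image. compute() may
// drop keypoints it cannot describe, so the same vector must flow through
// to matching and to drawMatches, keeping row i of 'descriptors' paired
// with keypoints[i].
void computeFeatures(const cv::Mat& img,
                     std::vector<cv::KeyPoint>& keypoints,  // stays in sync with descriptors
                     cv::Mat& descriptors) {
    cv::SurfFeatureDetector detector(400);  // Hessian threshold (assumed value)
    cv::SurfDescriptorExtractor extractor;
    detector.detect(img, keypoints);
    extractor.compute(img, keypoints, descriptors);
}
```

Keeping a stale keypoints member in a class, as in the question, breaks this pairing: the descriptors come from one detection pass while the drawn keypoints come from another, which is precisely what triggers the assertion.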