OpenCV: how to apply RANSAC to SURF, SIFT, and ORB matching results
I'm doing image processing, and I want to match 2D features. I have run many tests on SURF, SIFT, and ORB. How do I apply RANSAC to the SURF/SIFT/ORB matching results in OpenCV?
OpenCV has the function cv::findHomography, which can optionally use RANSAC to find the homography matrix relating two images. You can see an example of this function in the OpenCV feature-matching tutorial.

Specifically, the part of the code you are interested in is:
FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

// good_matches is assumed to hold the subset of matches that passed a
// distance-based filter (see the full tutorial for how it is built)
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( size_t i = 0; i < good_matches.size(); i++ )
{
  //-- Get the keypoints from the good matches
  obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
  scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
You can then use cv::perspectiveTransform to map points through the homography matrix (or cv::warpPerspective to warp a whole image). Other options to cv::findHomography besides CV_RANSAC are 0, which uses every point, and CV_LMEDS, which uses the least-median method. More information can be found in the OpenCV camera calibration documentation.

Here is a Python implementation that applies RANSAC (via skimage) to the obtained SIFT/SURF keypoints, using a ProjectiveTransform or AffineTransform (i.e. homography) model. This implementation first performs Lowe's ratio test on the obtained keypoints, then runs RANSAC on the keypoints that survive the ratio test:
import cv2
from skimage.measure import ransac
from skimage.transform import ProjectiveTransform, AffineTransform
import numpy as np

def siftMatching(img1, img2):
    # Input : image1 and image2 in opencv format
    # Output : corresponding keypoints for source and target images
    # Output Format : Numpy matrix of shape: [No. of Correspondences X 2]
    surf = cv2.xfeatures2d.SURF_create(100)
    # surf = cv2.xfeatures2d.SIFT_create()
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    FLANN_INDEX_KDTREE = 0
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # Lowe's Ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            good.append(m)

    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 2)

    # Ransac
    model, inliers = ransac(
        (src_pts, dst_pts),
        AffineTransform, min_samples=4,
        residual_threshold=8, max_trials=10000
    )
    n_inliers = np.sum(inliers)

    inlier_keypoints_left = [cv2.KeyPoint(point[0], point[1], 1) for point in src_pts[inliers]]
    inlier_keypoints_right = [cv2.KeyPoint(point[0], point[1], 1) for point in dst_pts[inliers]]
    placeholder_matches = [cv2.DMatch(idx, idx, 1) for idx in range(n_inliers)]
    image3 = cv2.drawMatches(img1, inlier_keypoints_left, img2, inlier_keypoints_right, placeholder_matches, None)

    cv2.imshow('Matches', image3)
    cv2.waitKey(0)

    src_pts = np.float32([inlier_keypoints_left[m.queryIdx].pt for m in placeholder_matches]).reshape(-1, 2)
    dst_pts = np.float32([inlier_keypoints_right[m.trainIdx].pt for m in placeholder_matches]).reshape(-1, 2)

    return src_pts, dst_pts
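To see what skimage.measure.ransac does in isolation, here is a small self-contained sketch (synthetic correspondences, not part of the code above): a pure translation with a few gross outliers, fitted with ProjectiveTransform so the returned boolean inlier mask can be inspected directly:

```python
import numpy as np
from skimage.measure import ransac
from skimage.transform import ProjectiveTransform

# Synthetic data: 30 points related by a pure translation,
# with the first 3 correspondences corrupted into gross outliers.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (30, 2))
dst = src + np.array([5.0, 10.0])
dst[:3] += 50

# residual_threshold is in pixels; the clean points have residual 0,
# the corrupted ones are ~70 px off, so RANSAC separates them cleanly.
model, inliers = ransac((src, dst), ProjectiveTransform,
                        min_samples=4, residual_threshold=2, max_trials=1000)
print(inliers.sum())  # number of correspondences kept as inliers
```

The same pattern applies to the src_pts/dst_pts arrays produced by siftMatching; only the data source differs.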