
Coordinates of matched SIFT keypoints in Python


I want to print the feature keypoints detected with the FLANN-based matcher algorithm. The matching works fine and, as in the tutorial, shows the keypoints in red (all matches) and green (good matches). I just want to print the (x, y) coordinates for the second image (the scene), named "kp2" here, but it does not work. Here is my code:

import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('img1.jpg',0)          # queryImage
img2 = cv2.imread('img2.jpg',0) # trainImage
# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50)   # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in range(len(matches))]

# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]
        print(i,kp2[i].pt)

draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
plt.imshow(img3,),plt.show()
The number of matched keypoints looks good, but the coordinates printed by print(i, kp2[i].pt) are wrong; I checked them against the original image. What am I doing wrong, and which lines do I need to add so that only the coordinates of the matched keypoints are printed? Thank you all.
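As the update below shows, the problem is that i indexes the list of matches, not the keypoints; each cv2.DMatch stores the keypoint indices in queryIdx and trainIdx. A minimal sketch of the corrected loop, using the same variable names as the code above:

# ratio test as per Lowe's paper, printing the matched coordinates
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]
        # m.queryIdx indexes kp1 (query image), m.trainIdx indexes kp2 (train image)
        print(i, kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)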

Update:

I found a useful resource.

I used these two images for testing: android.png and android_small.png (as referenced in the code below).

The matching result is as follows. Some of the printed results:

0 (42.05057144165039, 134.98709106445312) (139.18690490722656, 24.550437927246094)
1 (53.74299621582031, 249.95252990722656) (26.700265884399414, 124.75701904296875)
2 (56.41600799560547, 272.58843994140625) (139.18690490722656, 24.550437927246094)
3 (82.96114349365234, 124.731201171875) (41.35136795043945, 62.25730895996094)
4 (82.96114349365234, 124.731201171875) (41.35136795043945, 62.25730895996094)
5 (82.96114349365234, 124.731201171875) (41.35136795043945, 62.25730895996094)
6 (91.90446472167969, 293.59735107421875) (139.18690490722656, 24.550437927246094)
8 (94.516845703125, 296.0242919921875) (139.18690490722656, 24.550437927246094)
9 (98.97846221923828, 134.186767578125) (49.89073944091797, 67.37061309814453)

The code and explanation are as follows:

#!/usr/bin/python3
# 2017.10.06 22:36:44 CST
# 2017.10.06 23:18:25 CST

"""
Environment:
    OpenCV 3.3  + Python 3.5

Aims:
(1) Detect SIFT keypoints and compute descriptors.
(2) Use the FLANN matcher to match descriptors.
(3) Do the ratio test and output the matched pair coordinates; draw some pairs in purple.
(4) Draw matched pairs in blue, single points in red.
"""
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgname = "android.png"          # query image (large scene)
imgname2 = "android_small.png"   # train image (small object)

## Create SIFT object
sift = cv2.xfeatures2d.SIFT_create()

## Create flann matcher
FLANN_INDEX_KDTREE = 1  # bug: flann enums are missing
flann_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
#matcher = cv2.FlannBasedMatcher_create()
matcher = cv2.FlannBasedMatcher(flann_params, {})

## Detect and compute
img1 = cv2.imread(imgname)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
kpts1, descs1 = sift.detectAndCompute(gray1,None)

## Same for the train image
img2 = cv2.imread(imgname2)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
kpts2, descs2 = sift.detectAndCompute(gray2,None)

## Ratio test
matches = matcher.knnMatch(descs1, descs2, 2)
matchesMask = [[0,0] for i in range(len(matches))]
for i, (m1,m2) in enumerate(matches):
    if m1.distance < 0.7 * m2.distance:
        matchesMask[i] = [1,0]
        ## Notice: How to get the index
        pt1 = kpts1[m1.queryIdx].pt
        pt2 = kpts2[m1.trainIdx].pt
        print(i, pt1,pt2 )
        if i % 5 ==0:
            ## Draw pairs in purple, to make sure the result is ok
            cv2.circle(img1, (int(pt1[0]),int(pt1[1])), 5, (255,0,255), -1)
            cv2.circle(img2, (int(pt2[0]),int(pt2[1])), 5, (255,0,255), -1)


## Draw match in blue, error in red
draw_params = dict(matchColor = (255, 0,0),
                   singlePointColor = (0,0,255),
                   matchesMask = matchesMask,
                   flags = 0)

res = cv2.drawMatchesKnn(img1,kpts1,img2,kpts2,matches,None,**draw_params)
cv2.imshow("Result", res);cv2.waitKey();cv2.destroyAllWindows()
I found the problem. I changed it to this:

# ratio test as per Lowe's paper
good = []
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i] = [1, 0]
        good.append(m)

dst_pt = [kp2[m.trainIdx].pt for m in good]
print(dst_pt)
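If the corresponding coordinates in the first image are needed as well, the same pattern works with queryIdx and kp1 (a minimal sketch mirroring the line above):

src_pt = [kp1[m.queryIdx].pt for m in good]
print(src_pt)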

For you, Sunreef: the good matches are the green ones (as in the tutorial). I have the original images and checked the coordinate points with Paint (yes, Paint), which shows pixel coordinates. I had about 10 points to check.
