
Python OpenCV: extracting SURF features from user-defined keypoints


I would like to compute SURF features at keypoints that I specify myself. I am using OpenCV's Python wrapper. Below is the code I am trying to use, but I cannot find a working example anywhere.

import cv2
import numpy as np

surf = cv2.SURF()
keypoints, descriptors = surf.detect(np.asarray(image[:, :]), None,
                                     useProvidedKeypoints=True)
How do I specify the keypoints for this function to use?

A similar, unanswered question:


If I read the source of the Python bindings correctly, the "keypoints" argument that exists in the C++ interface is never used in the Python bindings. So I'll go out on a limb and say that it is not possible to do what you are trying to do with the current bindings. A possible solution would be to write your own bindings. I know it's not the answer you were hoping for...

An example of how to do this with the previously mentioned Mahotas:

import mahotas
from mahotas.features import surf
import numpy as np


def process_image(imagename):
    '''Process an image and return descriptors and keypoint locations'''
    # Load the image as greyscale
    f = mahotas.imread(imagename, as_grey=True)
    f = f.astype(np.uint8)

    spoints = surf.dense(f, spacing=12, include_interest_point=True)
    # spoints includes both the detection information (such as the position
    # and the scale) as well as the descriptor (i.e., what the area around
    # the point looks like). We only want to use the descriptor for
    # clustering. The descriptor starts at position 5:
    desc = spoints[:, 5:]
    kp = spoints[:, :2]

    return kp, desc
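To make the slicing above concrete, here is the same column split applied to a dummy array shaped like the surf.dense output (assumed layout: 5 interest-point columns — y, x, scale, and the remaining detection information — followed by the 64 descriptor values):

```python
import numpy as np

# Dummy stand-in for surf.dense(..., include_interest_point=True) output:
# 10 points, 5 metadata columns plus 64 SURF descriptor values each.
spoints = np.zeros((10, 69))

desc = spoints[:, 5:]   # descriptor part, used for clustering/matching
kp = spoints[:, :2]     # (y, x) location of each point
print(desc.shape, kp.shape)  # → (10, 64) (10, 2)
```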

Try creating it with cv2.DescriptorMatcher_create.

For example, in the code below I am using pylab, but you get the idea ;)

It computes keypoints using GFTT, then computes SURF descriptors for them and matches them. The output of each code section is shown beneath it.


The output looks something like this:

(For this example I will cheat and use the same image for both sets of keypoints and descriptors, which is why every match distance below is 0.000.)

#keypoints in image1: 1000, image2: 1000
#Descriptors size in image1: (1000, 64), image2: (1000, 64)
#matches: 1000
distance: min: 0.000
distance: mean: 0.000
distance: max: 0.000


Comments:
I was starting to suspect the same thing... I've begun looking at other Python libraries that can do SURF; writing my own custom bindings for the function shouldn't be too hard either.
(author of mahotas): mahotas can do what you want.
Did you ever get this working?
I did, and I even posted an answer here, but I just noticed it was deleted for some reason. Strange. Anyway, you can use mahotas for this, or have a look at some of the other answers that were posted in the meantime.
import cv2
import random
from pylab import *   # provides zeros, uint8, etc. used below

# 'gray' is a grayscale image loaded earlier
img1 = gray
img2 = gray
detector = cv2.FeatureDetector_create("GFTT")
descriptor = cv2.DescriptorExtractor_create("SURF")
matcher = cv2.DescriptorMatcher_create("FlannBased")

# detect keypoints
kp1 = detector.detect(img1)
kp2 = detector.detect(img2)

print '#keypoints in image1: %d, image2: %d' % (len(kp1), len(kp2))
# descriptors
k1, d1 = descriptor.compute(img1, kp1)
k2, d2 = descriptor.compute(img2, kp2)

print '#Descriptors size in image1: %s, image2: %s' % ((d1.shape), (d2.shape))
# match the keypoints
matches = matcher.match(d1,d2)

# visualize the matches
print '#matches:', len(matches)
dist = [m.distance for m in matches]

print 'distance: min: %.3f' % min(dist)
print 'distance: mean: %.3f' % (sum(dist) / len(dist))
print 'distance: max: %.3f' % max(dist)
# threshold: half the mean
thres_dist = (sum(dist) / len(dist)) * 0.5 + 0.5

# keep only the reasonable matches
sel_matches = [m for m in matches if m.distance < thres_dist]

print '#selected matches:', len(sel_matches)
#Plot
h1, w1 = img1.shape[:2]
h2, w2 = img2.shape[:2]
view = zeros((max(h1, h2), w1 + w2, 3), uint8)
view[:h1, :w1, 0] = img1
view[:h2, w1:, 0] = img2
view[:, :, 1] = view[:, :, 0]
view[:, :, 2] = view[:, :, 0]

for m in sel_matches:
    # draw a line between each pair of matched keypoints
    # print m.queryIdx, m.trainIdx, m.distance
    color = tuple([random.randint(0, 255) for _ in xrange(3)])
    pt1 = (int(k1[m.queryIdx].pt[0]), int(k1[m.queryIdx].pt[1]))
    pt2 = (int(k2[m.trainIdx].pt[0] + w1), int(k2[m.trainIdx].pt[1]))
    cv2.line(view, pt1, pt2, color)
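The threshold-and-filter step from the listing can be exercised on its own. The sketch below fakes a few match objects with a namedtuple (only the .distance attribute is read by the filter), using made-up distances:

```python
from collections import namedtuple

# Stand-in for cv2.DMatch; the filter only reads .distance.
Match = namedtuple('Match', ['queryIdx', 'trainIdx', 'distance'])
matches = [Match(i, i, d) for i, d in enumerate([0.1, 0.2, 0.9, 1.4, 0.05])]

dist = [m.distance for m in matches]
# threshold: half the mean (the +0.5 presumably keeps the threshold nonzero
# when an image is matched against itself and every distance is 0.000)
thres_dist = (sum(dist) / len(dist)) * 0.5 + 0.5

# keep only the reasonable matches
sel_matches = [m for m in matches if m.distance < thres_dist]
print(len(sel_matches))  # → 3 (the three distances below the 0.765 threshold)
```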