Python: How do I use OpenCV ORB for image alignment?
I have two images: an input image and a template image. For image alignment I am using ORB and a homography matrix, but the output image does not come out aligned properly. My code is:
def get(img1, img2):
    # Convert to grayscale.
    img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    height, width = img2.shape

    # Create ORB detector with 1000 features.
    orb_detector = cv2.ORB_create(1000)

    # Find keypoints and descriptors.
    # The first arg is the image, second arg is the mask
    # (which is not required in this case).
    kp1, d1 = orb_detector.detectAndCompute(img1, None)
    kp2, d2 = orb_detector.detectAndCompute(img2, None)

    # Match features between the two images.
    # We create a Brute Force matcher with
    # Hamming distance as measurement mode.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Match the two sets of descriptors.
    matches = matcher.match(d1, d2)

    # Sort matches on the basis of their Hamming distance.
    matches.sort(key=lambda x: x.distance)
    # print(len(matches))

    # Take the top 90 % matches forward.
    matches = matches[:int(len(matches)*90)]
    no_of_matches = len(matches)

    # Define empty matrices of shape no_of_matches * 2.
    p1 = np.zeros((no_of_matches, 2))
    p2 = np.zeros((no_of_matches, 2))
    for i in range(len(matches)):
        p1[i, :] = kp1[matches[i].queryIdx].pt
        p2[i, :] = kp2[matches[i].trainIdx].pt

    # Find the homography matrix.
    homography, mask = cv2.findHomography(p1, p2, cv2.RANSAC)

    # Use this matrix to transform the
    # (now grayscale) input image wrt the reference image.
    transformed_img = cv2.warpPerspective(img1,
                                          homography, (width, height))
    # Save the output.
    # cv2.imwrite(r"output.jpg", transformed_img)
    return transformed_img
if __name__ == "__main__":
    img1_color = cv2.imread(r"1.jpg")          # Image to be aligned.
    img2_color = cv2.imread(r"reference.jpg")  # Reference image.
    image_transform = get(img1_color, img2_color)
    plt.figure(figsize=(15, 10))
    plt.imshow(image_transform)
When I use the same document, meaning the template (reference image) is one picture of the document and the input image is another, tilted picture of that same document, the code works fairly well. But when I take different images, i.e. the input shows something different from the document, the result is not good. My question is: what should I do so that such different images align perfectly with the template image?
matches = matches[:int(len(matches)*90)]
makes no sense. Did you mean matches = matches[:90]?
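The difference is easy to see on a plain list: `len(matches) * 90` produces a slice index far past the end, so nothing is dropped, while a 90% cut needs `* 0.9`:

```python
matches = list(range(100))  # stand-in for 100 DMatch objects

# The posted code: the slice index is 100 * 90 = 9000, so every match is kept.
assert matches[:int(len(matches) * 90)] == matches

# Keeping the best 90% (the list is assumed already sorted by distance):
top_fraction = matches[:int(len(matches) * 0.9)]
assert len(top_fraction) == 90

# Keeping a fixed count of 90 matches:
top_count = matches[:90]
assert len(top_count) == 90
```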
Also, why would these images align perfectly at all? They are not pictures of the same page.
OK, I see. If any error comes up with the same image, I will let you know. What should be done when the template and the input image are not identical, as with the earlier images? And if I use a blank template as the target image, please tell me whether there is a related post about that. How would I adapt the code to align the image in that case?