Is Python cv2.triangulatePoints not very accurate?

Summary

I am trying to triangulate points from two images, but I am not getting accurate results at all.

Details

Here is what I am doing:

  • Measure my 16 object points in real-world coordinates

  • Determine the pixel coordinates of the 16 object points in each image

  • Use cv2.solvePnP() to get the rvecs and tvecs for each camera (a minimal sketch of these first steps appears after this list)

  • Use cv2.projectPoints to verify that the tvecs and rvecs reproject a given 3D point to the correct image coordinates (they do). For example:

    # cv2.projectPoints returns an (imagePoints, jacobian) tuple.
    img_point_right, _ = cv2.projectPoints(np.array([[0, 0, 39]], np.float64),
                                           right_rvecs,
                                           right_tvecs,
                                           right_intrinsics,
                                           right_distortion)
    
  • After verifying, get the rotation matrices with:

    left_rotation, jacobian = cv2.Rodrigues(left_rvecs)
    right_rotation, jacobian = cv2.Rodrigues(right_rvecs)
    
    and then the projection matrices with:

    # Build each 3x4 projection matrix as intrinsics * [R | t].
    RT = np.zeros((3,4))
    RT[:3, :3] = left_rotation
    RT[:3, 3] = left_translation.transpose()
    left_projection = np.dot(left_intrinsics, RT)
    
    RT = np.zeros((3,4))
    RT[:3, :3] = right_rotation
    RT[:3, 3] = right_translation.transpose()
    right_projection = np.dot(right_intrinsics, RT)
    
  • Undistort the points using cv2.undistortPoints before triangulating. For example:

    left_undist = cv2.undistortPoints(left_points, 
                                       cameraMatrix=left_intrinsics,
                                       distCoeffs=left_distortion)
    right_undist = cv2.undistortPoints(right_points,
                                        cameraMatrix=right_intrinsics,
                                        distCoeffs=right_distortion)
    
  • Triangulate the points. For example:

    # Transpose to get into OpenCV's 2xN format.
    left_points_t = np.array(left_undist[0]).transpose()
    right_points_t = np.array(right_undist[0]).transpose()
    # Note, I take the 0th index of each points matrix to get rid of the extra dimension, 
    # although it doesn't affect the output.
    
    triangulation = cv2.triangulatePoints(left_projection, right_projection, left_points_t, right_points_t)
    homog_points = triangulation.transpose()
    
    euclid_points = cv2.convertPointsFromHomogeneous(homog_points)
    
  • Unfortunately, when I look at the output of this last step, my points do not even have a positive Z direction, even though the 3D point I am trying to reproduce has a positive Z position.

    For reference, positive Z is forward, positive Y is down, and positive X is to the right.

    For example, the 3D point (0, 0, 39) - imagine a point 39 feet in front of you - gives a triangulation output of (4.47, -8.77, -44.81).
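
As a reference for steps 1-3 above, here is roughly what they look like in code (a minimal sketch with made-up placeholder points and intrinsics, not my actual measurements):

    import numpy as np
    import cv2
    
    # Placeholder stand-ins for the 16 measured correspondences: a 4x4 planar
    # grid of object points (real-world units) and their pixel locations.
    object_points = np.array([[x, y, 0.0] for y in range(4) for x in range(4)],
                             np.float64)
    image_points = np.array([[100.0 + 50 * x, 100.0 + 50 * y]
                             for y in range(4) for x in range(4)], np.float64)
    
    # Placeholder intrinsics; the calibrated camera matrix and distortion
    # coefficients go here.
    left_intrinsics = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    left_distortion = np.zeros(5)
    
    # solvePnP returns the pose (rvecs as a Rodrigues vector, plus tvecs) that
    # maps object coordinates into this camera's frame.
    ok, left_rvecs, left_tvecs = cv2.solvePnP(object_points, image_points,
                                              left_intrinsics, left_distortion)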

Question

Is this a valid approach to triangulating points?

If so, is cv2.triangulatePoints just not a good way to triangulate points, and can anyone suggest alternatives?


Thanks for your help.

It turns out that I get reasonable results if I do not call the undistortPoints function before calling the triangulatePoints function. This is because undistortPoints normalizes the points by the intrinsic parameters while undistorting, yet I was still calling triangulatePoints with projection matrices that already account for the intrinsics.

However, I get even better results by undistorting the points and then calling triangulatePoints with projection matrices built using the identity matrix as the intrinsic matrix.
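
Concretely, that better approach amounts to something like the following (a sketch reusing left_points, left_rotation, left_translation, and the other variables from the snippets above):

    # undistortPoints without a P matrix returns normalized coordinates (the
    # intrinsics are divided out), so the projection matrices must not include
    # the intrinsics either: use plain [R | t].
    left_norm = cv2.undistortPoints(left_points,
                                    cameraMatrix=left_intrinsics,
                                    distCoeffs=left_distortion)
    right_norm = cv2.undistortPoints(right_points,
                                     cameraMatrix=right_intrinsics,
                                     distCoeffs=right_distortion)
    
    left_projection = np.hstack((left_rotation, left_translation.reshape(3, 1)))
    right_projection = np.hstack((right_rotation, right_translation.reshape(3, 1)))
    
    triangulation = cv2.triangulatePoints(left_projection, right_projection,
                                          left_norm.reshape(-1, 2).T,
                                          right_norm.reshape(-1, 2).T)
    euclid_points = cv2.convertPointsFromHomogeneous(triangulation.T)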


Problem solved.

I had the same problem as you the other day. It turns out that undistortPoints works as expected if you pass it the P matrix, so that it returns results in pixels (otherwise it assumes P is the identity and returns normalized coordinates):

    left_undist = cv2.undistortPoints(left_points,
                                      cameraMatrix=left_intrinsics,
                                      distCoeffs=left_distortion,
                                      P=left_intrinsics)

This way you do not need to mess with the intrinsics, and the results are the same.

Also, make sure the arguments you pass to triangulatePoints are floats:

    projMat1 = mtx1 @ cv2.hconcat([np.eye(3), np.zeros((3,1))]) # Cam1 is the origin
    projMat2 = mtx2 @ cv2.hconcat([R, T]) # R, T from stereoCalibrate
    
    # points1 is a (N, 1, 2) float32 from cornerSubPix; pass the camera matrix
    # via the P keyword so the output stays in pixels.
    points1u = cv2.undistortPoints(points1, mtx1, dist1, P=mtx1)
    points2u = cv2.undistortPoints(points2, mtx2, dist2, P=mtx2)
    
    points4d = cv2.triangulatePoints(projMat1, projMat2, points1u, points2u)
    points3d = (points4d[:3, :]/points4d[3, :]).T
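
As a sanity check (a sketch reusing the variables above), you can reproject the triangulated points through the second camera and compare against the detected corners, the same round-trip verification the question performs with cv2.projectPoints:

    # Reproject points3d into camera 2; a large mean error indicates a bad
    # calibration or mismatched point ordering.
    rvec2, _ = cv2.Rodrigues(R)
    reproj, _ = cv2.projectPoints(points3d, rvec2, T, mtx2, dist2)
    err = np.linalg.norm(reproj.reshape(-1, 2) - points2.reshape(-1, 2), axis=1)
    print("mean reprojection error (px):", err.mean())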