OpenCV solvePnP: what is the tvec output when the object points are expressed in ECEF world coordinates?

I am trying to get a camera pose estimate for a camera mounted on an aircraft. The camera points straight down, and I associate points in the image with world coordinates expressed in ECEF.

When I run solvePnP I get an rvec and a tvec. My understanding is that rvec is the rotation vector of my camera relative to the world coordinate system, and tvec is the translation from the world origin to the camera. So my main question is: does this mean tvec is my camera's XYZ position in the world frame? If I start at [0, 0, 0] in XYZ and move by tvec to the camera origin, is that my camera position in the XYZ coordinate system, or am I misunderstanding something?

The main reason I ask is that, to get my camera's actual position in the world coordinate system, I have to use the following equation:

cam_world_pos = np.dot(-r_mat.T, t_vec)
The full code is below:

import cv2
import numpy as np

# Object points: world coordinates in ECEF (metres), one row per point
object_points = np.array([[511943.715162702, -4878626.96826019, 4062794.60137282],
                          [511940.291626567, -4878673.71821000, 4062739.26743133],
                          [511832.963975293, -4878650.96219925, 4062779.84268338],
                          [511832.736349870, -4878657.65071521, 4062771.89324488],
                          [511781.395685648, -4878656.50934662, 4062779.67897580],
                          [511681.546443719, -4878743.79853387, 4062688.05274913],
                          [511830.212307712, -4878660.65753048, 4062768.62262194],
                          [511891.605259940, -4878698.05477778, 4062716.33230839],
                          [511671.583398601, -4878750.47505608, 4062681.33520162],
                          [511829.009431822, -4878622.29096120, 4062814.53569581],
                          [511933.847068750, -4878671.90386977, 4062742.23819396],
                          [511665.756515402, -4878772.22726829, 4062656.11730890],
                          [511820.451260988, -4878629.31561360, 4062807.22788101],
                          [511784.352076247, -4878654.28205995, 4062781.96571512]])

# Corresponding image points (pixels)
image_points = np.array([[457.251949864898, 278.044548653805],
                         [561.563021437094, 395.562922992176],
                         [324.136638716142, 487.040270938409],
                         [339.336493099835, 503.664106701581],
                         [249.253927264257, 570.528444117084],
                         [284.916025242008, 918.030704517594],
                         [342.023402026407, 514.466220034638],
                         [536.326205396700, 520.598672959005],
                         [283.069716399754, 947.250674705873],
                         [249.200363292437, 424.003883259338],
                         [547.194442129373, 399.446296325373],
                         [325.818913664043, 1008.20369490127],
                         [252.238475231599, 452.579518673907],
                         [248.976782258982, 561.867810808243]])

distCoeffs = np.array([-0.4832617131991951, 0.2909008797638061, -0.0123677302055096, -0.01059311004963829, 0.005726971649077564])

K = np.array([[2304.69, 0.0, 526.54],
              [0.0, 2303.17, 504.61],
              [0.0, 0.0, 1.0]])

_, r_vec, t = cv2.solvePnP(object_points, image_points, K, distCoeffs)

r_vec = r_vec.reshape((3, 1))
t = t.reshape((3, 1))
print('rotation vector:')
print(r_vec)

# Convert the rotation vector into a rotation matrix
r_mat, jacobian = cv2.Rodrigues(r_vec)
print('rotation matrix:')
print(r_mat)

# Camera location in world coordinates
cam_world_pos = np.dot(-r_mat.T, t)
print('Given K and DCM')
print(cam_world_pos)






 