How does head pose estimation in OpenCV (Python 3.x) work?
I am trying to estimate the head pose from a single image, mostly following this guide:

The face detection works well: if I plot the image and the detected landmarks, they align nicely. I estimate the camera matrix from the image and assume no lens distortion:
import numpy as np

size = image.shape
focal_length = size[1]
center = (size[1] / 2, size[0] / 2)
camera_matrix = np.array([[focal_length, 0, center[0]],
                          [0, focal_length, center[1]],
                          [0, 0, 1]], dtype="double")
dist_coeffs = np.zeros((4, 1))  # assuming no lens distortion
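To see what this rough approximation implies, it helps to push a point through the intrinsic matrix by hand. A minimal pure-NumPy sketch (the 640x480 image size is a made-up example): a point on the optical axis projects exactly to the assumed principal point, the image centre:

```python
import numpy as np

# Hypothetical image size (h, w); the approximation above sets the
# focal length to the image width and the principal point to the centre.
h, w = 480, 640
focal_length = w
center = (w / 2, h / 2)
camera_matrix = np.array([[focal_length, 0, center[0]],
                          [0, focal_length, center[1]],
                          [0, 0, 1]], dtype="double")

# A 3D point straight ahead on the optical axis...
point = np.array([0.0, 0.0, 1.0])
uv = camera_matrix @ point
uv = uv[:2] / uv[2]
# ...lands at the image centre:
print(uv)  # [320. 240.]
```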
I try to get the head pose by matching points in the image to points of a 3D model using solvePnP:
import cv2

# 3D model points to which the points extracted from an image are matched:
model_points = np.array([
    (0.0, 0.0, 0.0),           # Nose tip
    (0.0, -330.0, -65.0),      # Chin
    (-225.0, 170.0, -135.0),   # Left eye left corner
    (225.0, 170.0, -135.0),    # Right eye right corner
    (-150.0, -150.0, -125.0),  # Left mouth corner
    (150.0, -150.0, -125.0)    # Right mouth corner
])
image_points = np.array([
    shape[30],  # Nose tip
    shape[8],   # Chin
    shape[36],  # Left eye left corner
    shape[45],  # Right eye right corner
    shape[48],  # Left mouth corner
    shape[54]   # Right mouth corner
], dtype="double")
success, rotation_vec, translation_vec = \
    cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs)
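A useful sanity check after solvePnP is to reproject the model points with the recovered pose and compare them with the detected landmarks; a large mean distance means the pose is off. Below is a pure-NumPy sketch of that reprojection (mirroring what cv2.projectPoints does with zero distortion); the pose, intrinsics, and the two model points are made-up illustration values, not the question's actual estimates:

```python
import numpy as np

def rodrigues(rvec):
    # Rotation vector -> rotation matrix (Rodrigues' formula); this is the
    # rvec -> R direction of cv2.Rodrigues.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float).ravel() / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reproject(points_3d, rvec, tvec, camera_matrix):
    # Pinhole projection without distortion, like cv2.projectPoints
    # called with zero distortion coefficients.
    R = rodrigues(rvec)
    cam = points_3d @ R.T + np.asarray(tvec, dtype=float).ravel()
    uv = cam @ camera_matrix.T
    return uv[:, :2] / uv[:, 2:]

# Hypothetical pose and intrinsics for illustration:
camera_matrix = np.array([[640.0, 0, 320.0],
                          [0, 640.0, 240.0],
                          [0, 0, 1.0]])
rvec = np.array([0.0, 0.0, 0.0])     # no rotation
tvec = np.array([0.0, 0.0, 1000.0])  # model 1000 units in front of the camera
model_points = np.array([[0.0, 0.0, 0.0],      # nose tip (model origin)
                         [0.0, -330.0, -65.0]])  # chin
projected = reproject(model_points, rvec, tvec, camera_matrix)
# With this pose, the nose tip projects to the image centre (320, 240).
print(projected)
```

Comparing `projected` against the detected `image_points` (e.g. via `np.linalg.norm(projected - image_points, axis=1).mean()`) gives a quick quality measure for the fit.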
Finally, I get the Euler angles from the rotation:
rotation_mat, _ = cv2.Rodrigues(rotation_vec)
pose_mat = cv2.hconcat((rotation_mat, translation_vec))
_, _, _, _, _, _, angles = cv2.decomposeProjectionMatrix(pose_mat)
Now, the azimuth is as I would expect: negative if I look to the left, zero in the middle, and positive to the right.
The elevation, however, is strange: if I look towards the middle, it has a roughly constant value (around 170), but the sign is random, flipping from image to image.
When I look up, the sign is positive and the value gets smaller;
when I look down, the sign is negative and the value gets smaller as well.
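This elevation behaviour looks like an angle-wrapping artifact rather than noise: cv2.decomposeProjectionMatrix reports Euler angles in the (-180°, 180°] range, so a pitch sitting near ±180° (which happens when the 3D model's "forward" convention is flipped relative to the camera) stays nearly constant in magnitude but jumps sign from image to image. A small pure-NumPy sketch of the wrap-around, using a hypothetical rotation about the x axis:

```python
import numpy as np

def rot_x(deg):
    # Rotation matrix about the x axis by `deg` degrees.
    r = np.deg2rad(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def pitch_from_R(R):
    # Recover the x-rotation angle; atan2 returns values in (-180, 180].
    return np.rad2deg(np.arctan2(R[2, 1], R[2, 2]))

# Two poses only 4 degrees apart straddle the wrap point and come back
# with opposite signs, just like the elevation described above:
print(pitch_from_R(rot_x(178)))  # ≈ 178
print(pitch_from_R(rot_x(182)))  # ≈ -178
```

One possible normalisation (an assumption, not from the question) is to fold such angles back towards zero, e.g. `np.sign(e) * 180 - e`, so that looking straight ahead maps to roughly 0 instead of ±170.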
Can someone explain this output to me?

Well, it seems I found a solution: the model points (which I had found in several blogs on this topic) appear to be wrong. The code works with the following combination of model and image points (I have no idea why; it was trial and error):
model_points = np.float32([[6.825897, 6.760612, 4.402142],
                           [1.330353, 7.122144, 6.903745],
                           [-1.330353, 7.122144, 6.903745],
                           [-6.825897, 6.760612, 4.402142],
                           [5.311432, 5.485328, 3.987654],
                           [1.789930, 5.393625, 4.413414],
                           [-1.789930, 5.393625, 4.413414],
                           [-5.311432, 5.485328, 3.987654],
                           [2.005628, 1.409845, 6.165652],
                           [-2.005628, 1.409845, 6.165652],
                           [2.774015, -2.080775, 5.048531],
                           [-2.774015, -2.080775, 5.048531],
                           [0.000000, -3.116408, 6.097667],
                           [0.000000, -7.415691, 4.070434]])
image_points = np.float32([shape[17], shape[21], shape[22], shape[26],
                           shape[36], shape[39], shape[42], shape[45],
                           shape[31], shape[35], shape[48], shape[54],
                           shape[57], shape[8]])
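One sanity check on this 3D model that needs no image at all: a face model should be left-right symmetric, and these 14 points are; mirroring the x coordinate maps the point set onto itself. A small sketch (the points are copied verbatim from above):

```python
import numpy as np

# The 14 model points from the answer above, copied verbatim.
model_points = np.float32([[6.825897, 6.760612, 4.402142],
                           [1.330353, 7.122144, 6.903745],
                           [-1.330353, 7.122144, 6.903745],
                           [-6.825897, 6.760612, 4.402142],
                           [5.311432, 5.485328, 3.987654],
                           [1.789930, 5.393625, 4.413414],
                           [-1.789930, 5.393625, 4.413414],
                           [-5.311432, 5.485328, 3.987654],
                           [2.005628, 1.409845, 6.165652],
                           [-2.005628, 1.409845, 6.165652],
                           [2.774015, -2.080775, 5.048531],
                           [-2.774015, -2.080775, 5.048531],
                           [0.000000, -3.116408, 6.097667],
                           [0.000000, -7.415691, 4.070434]])

# Mirror about the x = 0 plane; sorting both point sets lets us compare
# them as unordered sets.
mirrored = model_points * np.float32([-1, 1, 1])
a = np.array(sorted(map(tuple, model_points)))
b = np.array(sorted(map(tuple, mirrored)))
symmetric = np.allclose(a, b)
print(symmetric)  # True
```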