C++ Determine camera pose?
Tags: c++, opencv, computer-vision, camera-calibration, perspectivecamera

I am trying to determine the pose of the camera based on a fiducial marker found in a scene.

Fiducial:

Current process:

- Feature detection using SIFT
- Descriptor extraction using SIFT
- Matching using FLANN
- Finding the homography using CV_RANSAC
- Identifying the corners of the fiducial
- Using perspectiveTransform() to identify the corners of the fiducial in the scene
- Drawing lines around the corners (i.e. proof that it found the fiducial in the scene)
- Running camera calibration
- Loading the calibration results (cameraMatrix and distCoeffs)

Now I am trying to figure out the camera pose. I attempted to use:

void solvePnP(const Mat& objectPoints, const Mat& imagePoints, con…

where:
- objectPoints are the fiducial corners
- imagePoints are the fiducial corners in the scene
- cameraMatrix is from the calibration
- distCoeffs is from the calibration
- rvec and tvec should be returned to me from this function
OrbFeatureDetector detector; //Orb seems more accurate than SIFT
vector<KeyPoint> keypoints1, keypoints2;
detector.detect(marker_im, keypoints1);
detector.detect(scene_im, keypoints2);
Mat display_marker_im, display_scene_im;
drawKeypoints(marker_im, keypoints1, display_marker_im, Scalar(0,0,255));
drawKeypoints(scene_im, keypoints2, display_scene_im, Scalar(0,0,255));
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute( marker_im, keypoints1, descriptors1 );
extractor.compute( scene_im, keypoints2, descriptors2 );
BFMatcher matcher; //BF seems to match better than FLANN
vector< DMatch > matches;
matcher.match( descriptors1, descriptors2, matches );
Mat img_matches;
drawMatches( marker_im, keypoints1, scene_im, keypoints2,
matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
vector<Point2f> obj, scene;
for (int i = 0; i < matches.size(); i++) {
obj.push_back(keypoints1[matches[i].queryIdx].pt);
scene.push_back(keypoints2[matches[i].trainIdx].pt);
}
Mat H;
H = findHomography(obj, scene, CV_RANSAC);
//Get corners of fiducial
vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f(0, 0);
obj_corners[1] = Point2f(marker_im.cols, 0);
obj_corners[2] = Point2f(marker_im.cols, marker_im.rows);
obj_corners[3] = Point2f(0, marker_im.rows);
vector<Point2f> scene_corners(4);
perspectiveTransform(obj_corners, scene_corners, H);
FileStorage fs2("cal.xml", FileStorage::READ);
Mat cameraMatrix, distCoeffs;
fs2["Camera_Matrix"] >> cameraMatrix;
fs2["Distortion_Coefficients"] >> distCoeffs;
Mat rvec, tvec;
//same points as object_corners, just adding z-axis (0)
vector<Point3f> objp(4);
objp[0] = Point3f(0, 0, 0);
objp[1] = Point3f(marker_im.cols, 0, 0);
objp[2] = Point3f(marker_im.cols, marker_im.rows, 0);
objp[3] = Point3f(0, marker_im.rows, 0);
solvePnPRansac(objp, scene_corners, cameraMatrix, distCoeffs, rvec, tvec );
Mat rotation, viewMatrix = Mat::zeros(4, 4, CV_64F); //zero-init so the bottom row is 0 0 0 1 after the loop
Rodrigues(rvec, rotation);
for(int row=0; row<3; ++row)
{
for(int col=0; col<3; ++col)
{
viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
}
viewMatrix.at<double>(row, 3) = tvec.at<double>(row, 0);
}
viewMatrix.at<double>(3, 3) = 1.0;
cout << "rotation: " << rotation << endl;
cout << "viewMatrix: " << viewMatrix << endl;
OK, so solvePnP() gives you the transfer matrix from the model's frame (i.e. the cube) to the camera's frame (it's called the view matrix).
Input parameters:

- objectPoints – Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. std::vector<cv::Point3f> can also be passed here. The points are 3D, but since they are in the pattern coordinate system (of the fiducial marker) the rig is planar, so the Z coordinate of every input object point is 0.
- imagePoints – Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. std::vector<cv::Point2f> can also be passed here.
- intrinsics: camera matrix (focal lengths, principal point)
- distortion: distortion coefficients; if empty, zero distortion is assumed
- rvec: output rotation vector
- tvec: output translation vector
Building the view matrix looks like this:
cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, intrinsics, distortion, rvec, tvec);
cv::Mat rotation, viewMatrix(4, 4, CV_64F);
cv::Rodrigues(rvec, rotation);
for(int row=0; row<3; ++row)
{
for(int col=0; col<3; ++col)
{
viewMatrix.at<double>(row, col) = rotation.at<double>(row, col);
}
viewMatrix.at<double>(row, 3) = tvec.at<double>(row, 0);
}
viewMatrix.at<double>(3, 3) = 1.0;
Comments:

- …finds the extrinsic parameters of a calibration chessboard. I'm trying to find the camera pose based on a fiducial marker. Or did I misunderstand you?
- @Kornel – if I want to find the (x, y, z) position of the camera relative to the fiducial, would I take the fiducial's xyz, rotate it by the rotation matrix, and then translate?
- Sorry for the late answer to your question. So for pose estimation you should have a calibration pattern with known 3D geometry and known image-plane locations (e.g. your fiducial pattern). cv::solvePnP() will find the required object pose from the 3D–2D point correspondences, so the output of this function (rvec and tvec) takes points from the model coordinate system to the camera coordinate system. If you want to find the position of the camera relative to the fiducial pattern, you should invert the transformation. Also, in your case, is the pattern's position fixed while the camera floats around it?
- The camera floats around the fixed pattern.