C++ OpenCV 3.0: calibration doesn't fit as expected
When I use OpenCV 3.0 calibrateCamera I get unexpected results. This is my algorithm:

1. Load 30 image points
2. Load the 30 corresponding world points (coplanar in this case)
3. Calibrate the camera with the points, just for undistortion
4. Undistort the image points, but don't use the intrinsics (the world points are coplanar, so the intrinsics are unreliable)
5. Find a homography from the undistorted points to the world points (possible because they are all coplanar)
6. Use the homography with a perspective transform to map the undistorted points into world space
7. Compare the original world points with the mapped points
The points I have are noisy and cover only a small part of the image. There are 30 coplanar points in a single view, so I can't recover the camera intrinsics, but I should be able to get the distortion coefficients and a homography to create a fronto-parallel view.

As expected, the error varies with the calibration flags. However, it varies in the opposite direction to what I expected. If I allow all the variables to adjust, I expect the error to drop. I'm not saying I expect a better model; in fact I expect overfitting, but that should still reduce the error on the training data. What I actually see is that the fewer variables I free, the lower my error. The best result comes from a straight homography.

I have two suspected causes, but they seem unlikely and I'd like to hear an unprejudiced answer before airing them. I have extracted the code to do just what I describe. It's a bit long, but it includes loading the points.

The code does not seem to have a bug; when I use a "better" set of points it works very well. I want to emphasise that the solution here can't be "use better points" or "perform a better calibration"; the whole point of the exercise is to see how the various calibration models respond to calibration data of different quality.

Any ideas?

Added

To be clear, I know the results will be bad, and I expect that. I also understand that I may learn bad distortion parameters which lead to worse results when testing points that were not used to train the model. What I don't understand is how the distortion model can have more error when the training set is used as the test set. That is, cv::calibrateCamera is supposed to choose parameters that reduce the error over the training points provided, yet it produces more error than if it had simply chosen 0 for K1, K2, ... K6, P1, P2. Bad data or not, it should at least do better on the training set. Before I can say the data isn't appropriate for this model, I have to be sure I'm doing the best I can with the data available, and I can't say that at this stage.

Here is an example image. The points marked with green pins are the ones used. This is obviously just a test image.

Here is some more example material. The image below is cropped from the big image above. The centre has not changed. This is what happens when I undistort using only the points marked manually with the green pins and allow K1 (and only K1) to vary from 0:

Before

After

I would put it down to a bug, but when I use a larger set of points that covers more of the screen, even from a single plane, it works reasonably well. This looks dreadful; however, the error is not nearly as bad as you might think from looking at the picture.
// Load image points
std::vector<cv::Point2f> im_points;
im_points.push_back(cv::Point2f(1206, 1454));
im_points.push_back(cv::Point2f(1245, 1443));
im_points.push_back(cv::Point2f(1284, 1429));
im_points.push_back(cv::Point2f(1315, 1456));
im_points.push_back(cv::Point2f(1352, 1443));
im_points.push_back(cv::Point2f(1383, 1431));
im_points.push_back(cv::Point2f(1431, 1458));
im_points.push_back(cv::Point2f(1463, 1445));
im_points.push_back(cv::Point2f(1489, 1432));
im_points.push_back(cv::Point2f(1550, 1461));
im_points.push_back(cv::Point2f(1574, 1447));
im_points.push_back(cv::Point2f(1597, 1434));
im_points.push_back(cv::Point2f(1673, 1463));
im_points.push_back(cv::Point2f(1691, 1449));
im_points.push_back(cv::Point2f(1708, 1436));
im_points.push_back(cv::Point2f(1798, 1464));
im_points.push_back(cv::Point2f(1809, 1451));
im_points.push_back(cv::Point2f(1819, 1438));
im_points.push_back(cv::Point2f(1925, 1467));
im_points.push_back(cv::Point2f(1929, 1454));
im_points.push_back(cv::Point2f(1935, 1440));
im_points.push_back(cv::Point2f(2054, 1470));
im_points.push_back(cv::Point2f(2052, 1456));
im_points.push_back(cv::Point2f(2051, 1443));
im_points.push_back(cv::Point2f(2182, 1474));
im_points.push_back(cv::Point2f(2171, 1459));
im_points.push_back(cv::Point2f(2164, 1446));
im_points.push_back(cv::Point2f(2306, 1474));
im_points.push_back(cv::Point2f(2292, 1462));
im_points.push_back(cv::Point2f(2278, 1449));
// Create corresponding world / object points
std::vector<cv::Point3f> world_points;
for (int i = 0; i < 30; i++) {
    world_points.push_back(cv::Point3f(5 * (i / 3), 4 * (i % 3), 0.0f));
}
// Perform calibration
// Flags are set out so they can be commented out and "freed" easily
int calibration_flags = 0
| cv::CALIB_FIX_K1
| cv::CALIB_FIX_K2
| cv::CALIB_FIX_K3
| cv::CALIB_FIX_K4
| cv::CALIB_FIX_K5
| cv::CALIB_FIX_K6
| cv::CALIB_ZERO_TANGENT_DIST
| 0;
// Initialise matrix; the elements are CV_64F, so they must be accessed as
// double. Writing through ptr<float> (as before) corrupts a 64-bit matrix.
cv::Mat intrinsic_matrix = cv::Mat::eye(3, 3, CV_64F);
cv::Mat distortion_coeffs = cv::Mat::zeros(5, 1, CV_64F);
// Rotation and translation vectors
std::vector<cv::Mat> undistort_rvecs;
std::vector<cv::Mat> undistort_tvecs;
// Wrap in an outer vector for calibration
std::vector<std::vector<cv::Point2f>> im_points_v(1, im_points);
std::vector<std::vector<cv::Point3f>> w_points_v(1, world_points);
// Calibrate; only 1 plane, so intrinsics can't be trusted
cv::Size image_size(4000, 3000);
calibrateCamera(w_points_v, im_points_v,
                image_size, intrinsic_matrix, distortion_coeffs,
                undistort_rvecs, undistort_tvecs, calibration_flags);
// Undistort im_points
std::vector<cv::Point2f> ud_points;
cv::undistortPoints(im_points, ud_points, intrinsic_matrix, distortion_coeffs);
// ud_points have been "unintrinsiced", but we don't know the intrinsics, so reverse that
double fx = intrinsic_matrix.at<double>(0, 0);
double fy = intrinsic_matrix.at<double>(1, 1);
double cx = intrinsic_matrix.at<double>(0, 2);
double cy = intrinsic_matrix.at<double>(1, 2);
for (cv::Point2f& p : ud_points) {
    p.x = p.x * fx + cx;
    p.y = p.y * fy + cy;
}
// Find a homography mapping the undistorted points to the known world points
// on the ground plane. findHomography expects 2D points, so drop the
// constant z = 0 from the world points first.
std::vector<cv::Point2f> world_points_2d;
for (const cv::Point3f& p : world_points) {
    world_points_2d.push_back(cv::Point2f(p.x, p.y));
}
cv::Mat homography = cv::findHomography(ud_points, world_points_2d);
// Transform the undistorted image points to the world points (2d only, but z is constant)
std::vector<cv::Point2f> estimated_world_points;
std::cout << "homography" << homography << std::endl;
cv::perspectiveTransform(ud_points, estimated_world_points, homography);
// Work out error
double sum_sq_error = 0;
for (int i = 0; i < 30; i++) {
    double err_x = estimated_world_points.at(i).x - world_points.at(i).x;
    double err_y = estimated_world_points.at(i).y - world_points.at(i).y;
    sum_sq_error += err_x * err_x + err_y * err_y;
}
std::cout << "Sum squared error is: " << sum_sq_error << std::endl;