C++ OpenCV: distorting back


I have cameraMatrix and distCoeff to undistort an image or a vector of points. Now I would like to distort them back.

Is it possible with OpenCV? I remember reading something about it here on Stack Overflow, but I cannot find it now.

EDIT: I found a way to do it. It is also in the OpenCV developer zone (here).

But my results are not exactly correct. There is an error of more or less 2-4 pixels. Probably there is something wrong in my code, because in the answer I linked everything works in the unit tests. Maybe it is a type cast from float to double, or something else I cannot see.

Here is my test case:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <iostream>

using namespace cv;
using namespace std;

void distortPoints(const std::vector<cv::Point2d> & src, std::vector<cv::Point2d> & dst,
                         const cv::Mat & cameraMatrix, const cv::Mat & distorsionMatrix)
{

  dst.clear();
  double fx = cameraMatrix.at<double>(0,0);
  double fy = cameraMatrix.at<double>(1,1);
  double ux = cameraMatrix.at<double>(0,2);
  double uy = cameraMatrix.at<double>(1,2);

  double k1 = distorsionMatrix.at<double>(0, 0);
  double k2 = distorsionMatrix.at<double>(0, 1);
  double p1 = distorsionMatrix.at<double>(0, 2);
  double p2 = distorsionMatrix.at<double>(0, 3);
  double k3 = distorsionMatrix.at<double>(0, 4);

  for (unsigned int i = 0; i < src.size(); i++)
  {
    const cv::Point2d & p = src[i];
    double x = p.x;
    double y = p.y;
    double xCorrected, yCorrected;
    //Step 1 : correct distorsion
    {
      double r2 = x*x + y*y;
      //radial distorsion
      xCorrected = x * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2); // k3 term uses r^6 (r2 cubed)
      yCorrected = y * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);

      //tangential distorsion
      //The "Learning OpenCV" book is wrong here !!!
      //False equations from the "Learning OpenCv" book below :
      //xCorrected = xCorrected + (2. * p1 * y + p2 * (r2 + 2. * x * x));
      //yCorrected = yCorrected + (p1 * (r2 + 2. * y * y) + 2. * p2 * x);
      //Correct formulae found at : http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
      xCorrected = xCorrected + (2. * p1 * x * y + p2 * (r2 + 2. * x * x));
      yCorrected = yCorrected + (p1 * (r2 + 2. * y * y) + 2. * p2 * x * y);
    }
    //Step 2 : ideal coordinates => actual coordinates
    {
      xCorrected = xCorrected * fx + ux;
      yCorrected = yCorrected * fy + uy;
    }
    dst.push_back(cv::Point2d(xCorrected, yCorrected));
  }

}

int main(int /*argc*/, char** /*argv*/) {

    cout << "OpenCV version: " << CV_MAJOR_VERSION << " " << CV_MINOR_VERSION << endl; // 2 4

    Mat cameraMatrix = (Mat_<double>(3,3) << 1600, 0, 789, 0, 1600, 650, 0, 0, 1);
    Mat distorsion   = (Mat_<double>(5,1) << -0.48, 0, 0, 0, 0);

    cout << "camera matrix: " << cameraMatrix << endl;
    cout << "distorsion coefficent: " << distorsion << endl;

    // the starting points
    std::vector<Point2f> original_pts;
    original_pts.push_back( Point2f(23, 358) );
    original_pts.push_back( Point2f(8,  357) );
    original_pts.push_back( Point2f(12, 342) );
    original_pts.push_back( Point2f(27, 343) );
    original_pts.push_back( Point2f(7,  350) );
    original_pts.push_back( Point2f(-8, 349) );
    original_pts.push_back( Point2f(-4, 333) );
    original_pts.push_back( Point2f(12, 334) );
    Mat original_m = Mat(original_pts);

    // undistort
    Mat undistorted_m;
    undistortPoints(original_m, undistorted_m, 
                    cameraMatrix, distorsion);

    cout << "undistort points" << undistorted_m << endl;

    // back to array
    vector< cv::Point2d > undistorted_points;
    for(int i=0; i<original_pts.size(); ++i) {
        Point2d p;
        p.x = undistorted_m.at<float>(i, 0);
        p.y = undistorted_m.at<float>(i, 1);
        undistorted_points.push_back( p );

        // NOTE THAT HERE THERE IS AN APPROXIMATION
        // WHAT IS IT? STD::COUT? CASTING TO FLOAT?
        cout << undistorted_points[i] << endl;
    }

    vector< cv::Point2d > redistorted_points;
    distortPoints(undistorted_points, redistorted_points, cameraMatrix, distorsion);

    cout << redistorted_points << endl;

    for(int i=0; i<original_pts.size(); ++i) {
        cout << original_pts[i] << endl;
        cout << redistorted_points[i] << endl;

        Point2d o;
        o.x = original_pts[i].x;
        o.y = original_pts[i].y;
        Point2d dist = redistorted_points[i] - o;

        double norm = sqrt(dist.dot(dist));
        std::cout << "distance = " << norm << std::endl;

        cout << endl;
    }

    return 0;
}

And this is the output:

OpenCV version: 2 4
camera matrix: [1600, 0, 789;
  0, 1600, 650;
  0, 0, 1]
distorsion coefficent: [-0.48; 0; 0; 0; 0]
undistort points[-0.59175861, -0.22557901; -0.61276215, -0.22988389; -0.61078846, -0.24211435; -0.58972651, -0.23759322; -0.61597037, -0.23630577; -0.63910204, -0.24136727; -0.63765121, -0.25489968; -0.61291695, -0.24926868]
[-0.591759, -0.225579]
[-0.612762, -0.229884]
[-0.610788, -0.242114]
[-0.589727, -0.237593]
[-0.61597, -0.236306]
[-0.639102, -0.241367]
[-0.637651, -0.2549]
[-0.612917, -0.249269]
[24.45809095301274, 358.5558144841519; 10.15042938413364, 357.806737955385; 14.23419751024494, 342.8856229036298; 28.51642501095819, 343.610956960508; 9.353743900129871, 350.9029663678638; -4.488033489615646, 350.326357275197; -0.3050714463695385, 334.477016554487; 14.41516474594289, 334.9822130217053]
[23, 358]
[24.4581, 358.556]
distance = 1.56044

[8, 357]
[10.1504, 357.807]
distance = 2.29677

[12, 342]
[14.2342, 342.886]
distance = 2.40332

[27, 343]
[28.5164, 343.611]
distance = 1.63487

[7, 350]
[9.35374, 350.903]
distance = 2.521

[-8, 349]
[-4.48803, 350.326]
distance = 3.75408

[-4, 333]
[-0.305071, 334.477]
distance = 3.97921

[12, 334]
[14.4152, 334.982]
distance = 2.60725

The initUndistortRectifyMap linked in one of the answers of the question you mention does indeed do what you want. Since it is used in remap to build the full undistorted image, it gives, for each location in the destination (undistorted) image, where to find the corresponding pixel in the distorted image, so its colour can be used. So it really is an f(undistorted) = distorted map.

However, using this map only allows positions that are integer and inside the image rectangle. Thankfully, the documentation gives the full equations.
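
To illustrate the direction of that map, here is a minimal sketch (assuming the cameraMatrix and distorsion matrices from the question's test case, a placeholder cv::Size imageSize, and maps built in CV_32FC1 format) of looking up where an integer undistorted pixel comes from in the distorted image:

cv::Mat map1, map2;
cv::initUndistortRectifyMap(cameraMatrix, distorsion, cv::Mat(), cameraMatrix,
                            imageSize, CV_32FC1, map1, map2);

// For an integer pixel (u, v) of the UNDISTORTED image, the maps contain the (sub-pixel)
// position in the DISTORTED image that cv::remap would sample the colour from.
int u = 100, v = 200;                    // some undistorted pixel inside the image rectangle
float xDistorted = map1.at<float>(v, u); // x coordinate in the distorted image
float yDistorted = map2.at<float>(v, u); // y coordinate in the distorted image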

Those equations are mostly what you have, except that you are missing a preliminary step. Here is my version (it is C#, but you should be able to transpose it):

public PointF Distort(PointF point)
{
    // To relative coordinates <- this is the step you are missing.
    double x = (point.X - cx) / fx;
    double y = (point.Y - cy) / fy;

    double r2 = x*x + y*y;

    // Radial distorsion
    double xDistort = x * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);
    double yDistort = y * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);

    // Tangential distorsion
    xDistort = xDistort + (2 * p1 * x * y + p2 * (r2 + 2 * x * x));
    yDistort = yDistort + (p1 * (r2 + 2 * y * y) + 2 * p2 * x * y);

    // Back to absolute coordinates.
    xDistort = xDistort * fx + cx;
    yDistort = yDistort * fy + cy;

    return new PointF((float)xDistort, (float)yDistort);
}

You can use:

cv::Mat rVec(3, 1, cv::DataType<double>::type); // Rotation vector
rVec.at<double>(0) = 0;
rVec.at<double>(1) = 0;
rVec.at<double>(2) = 0;
cv::Mat tVec(3, 1, cv::DataType<double>::type); // Translation vector
tVec.at<double>(0) = 0;
tVec.at<double>(1) = 0;
tVec.at<double>(2) = 0;

cv::projectPoints(points, rVec, tVec, cameraMatrix, distCoeffs, result);
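
For context, cv::projectPoints expects 3D object points, so normalized undistorted coordinates have to be lifted onto the z = 1 plane first. Here is a minimal sketch of that step, reusing the names from the question's test case (the helper variables are illustrative, not part of the original answer):

// Re-distort normalized (undistorted) points by projecting them as 3D points on z = 1
// with zero rotation and translation; projectPoints then applies distorsion and cameraMatrix.
std::vector<cv::Point3d> object_pts;
for (size_t i = 0; i < undistorted_points.size(); ++i)
    object_pts.push_back(cv::Point3d(undistorted_points[i].x, undistorted_points[i].y, 1.0));

cv::Mat rVec = cv::Mat::zeros(3, 1, CV_64F); // no rotation
cv::Mat tVec = cv::Mat::zeros(3, 1, CV_64F); // no translation
std::vector<cv::Point2d> redistorted;
cv::projectPoints(object_pts, rVec, tVec, cameraMatrix, distorsion, redistorted);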

PS: in OpenCV 3 they added a distort function.

If you multiply all the distortion coefficients by -1, you can then pass them to undistort or undistortPoints; basically you will apply the inverse distortion, which will bring the distortion back.
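
A rough sketch of that trick with the matrices from the question (note that negating the coefficients only approximates the true inverse; undistorted_pixels is an assumed vector of pixel coordinates in the undistorted image):

// Flip the sign of every distortion coefficient and run the undistortion machinery with it:
// "undistorting" with the negated coefficients approximately re-applies the original distortion.
cv::Mat negatedDistorsion = -1.0 * distorsion;

std::vector<cv::Point2d> redistorted;
cv::undistortPoints(undistorted_pixels, redistorted, cameraMatrix, negatedDistorsion,
                    cv::noArray(), cameraMatrix); // P = cameraMatrix -> output in pixel coordinates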

The OCV camera model (see the documentation) describes how a 3D point is first mapped to an ideal, distortion-free pinhole camera coordinate and then "distorts" that coordinate so that it models the image of the actual, real camera.

Using the OpenCV distortion coefficients (= Brown distortion coefficients), the following two operations are simple to calculate:

  • Calculate the pixel coordinate in the original camera image from a given pixel coordinate in the distortion-free (i.e. undistorted) image. I am afraid there is no explicit OpenCV function for this, but the code in Joan Charmant's answer does exactly this.
  • Calculate the distortion-free image from the original camera image. This can be done using cv::undistort(...) or a combination of cv::initUndistortRectifyMap(...) and cv::remap(...).
However, the following two operations are computationally much more complex:

  • Calculate the pixel coordinate in the distortion-free image from a pixel coordinate in the original camera image. This can be done using cv::undistortPoints(...) (see the sketch after this list).
  • Calculate the original camera image from the distortion-free image.
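
As a sketch of the first bullet: cv::undistortPoints by default returns normalized camera coordinates (which is why the values printed in the question are around -0.6 instead of pixels); passing P = cameraMatrix returns pixel coordinates in the distortion-free image. The matrices reuse the question's names:

std::vector<cv::Point2d> distorted_pixels;          // pixel coordinates in the camera image
distorted_pixels.push_back(cv::Point2d(23, 358));

std::vector<cv::Point2d> normalized, undistorted_pixels;

// Default call: result in NORMALIZED camera coordinates.
cv::undistortPoints(distorted_pixels, normalized, cameraMatrix, distorsion);

// With P = cameraMatrix: result in pixel coordinates of the distortion-free image.
cv::undistortPoints(distorted_pixels, undistorted_pixels, cameraMatrix, distorsion,
                    cv::noArray(), cameraMatrix);
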
This may sound counter-intuitive. Here is a more detailed explanation:

For a given pixel coordinate in the distortion-free image it is easy to calculate the corresponding coordinate in the original image (i.e. to "distort" the coordinate):

x = (u - cx) / fx; // u and v are distortion free
y = (v - cy) / fy;

rr = x*x + y*y
distortion = 1 + rr * (k1 + rr * (k2 + rr * k3))
# I omit the tangential parameters for clarity

u_ = fx * distortion * x + cx
v_ = fy * distortion * y + cy
// u_ and v_ are coordinates in the original camera image

Doing it the other way round is much more difficult; basically one would need to combine all of the code lines above into one big vectorial equation and solve it for u and v. I think that for the general case, where all five distortion coefficients are used, it can only be done numerically. Which is (without having looked at the code) probably what cv::undistortPoints(...) does.
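
For illustration, here is a sketch of the kind of fixed-point iteration that can be used for this; OpenCV's implementation is commonly described as doing something along these lines, but treat the details as an assumption rather than the exact library code. It inverts the radial and tangential model for one distorted normalized point (xd, yd), using the coefficient names from the code above:

// Numerically invert the distortion: start from the distorted normalized point (xd, yd)
// and repeatedly remove the currently estimated distortion.
double x = xd, y = yd;
for (int iter = 0; iter < 10; ++iter)
{
    double r2 = x * x + y * y;
    double radial = 1. + r2 * (k1 + r2 * (k2 + r2 * k3));
    double dx = 2. * p1 * x * y + p2 * (r2 + 2. * x * x);
    double dy = p1 * (r2 + 2. * y * y) + 2. * p2 * x * y;
    x = (xd - dx) / radial;
    y = (yd - dy) / radial;
}
// (x, y) now approximates the undistorted normalized coordinates.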

However, using the distortion coefficients we can calculate an undistortion map (cv::initUndistortRectifyMap(...)) which maps from the distortion-free image coordinates to the original camera image coordinates. Each entry in the undistortion map contains a (floating point) pixel position in the original camera image. In other words, the undistortion map points from the distortion-free image into the original camera image. So the map is calculated by exactly the formula above.


The map can then be applied to get the new distortion-free image from the original (cv::remap(...)). cv::undistort() does this without explicitly computing the undistortion map.
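
A minimal sketch of that pipeline (distortedImage is an assumed input frame; the interpolation mode and the reuse of cameraMatrix as the new camera matrix are arbitrary choices here):

// Build the undistortion map once...
cv::Mat map1, map2;
cv::initUndistortRectifyMap(cameraMatrix, distorsion, cv::Mat(), cameraMatrix,
                            distortedImage.size(), CV_32FC1, map1, map2);

// ...and apply it per frame.
cv::Mat undistortedImage;
cv::remap(distortedImage, undistortedImage, map1, map2, cv::INTER_LINEAR);

// One-call equivalent, without an explicit map:
cv::Mat undistortedImage2;
cv::undistort(distortedImage, undistortedImage2, cameraMatrix, distorsion);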

There is no analytical solution to this problem: once you have distorted the coordinates there is no way to go back, at least not analytically with this specific model. It lies in the nature of the radial distortion model; the way it is defined allows distorting in a simple analytical fashion, but not vice versa. To do that, one has to solve a 7th-degree polynomial (substituting the radial model r_d = r * (1 + k1*r^2 + k2*r^4 + k3*r^6) and solving for r), and it is proven that such polynomials have no analytical solution.

However, the radial camera model is not special or sacred in any way; it is just a simple rule that stretches pixels outwards from or inwards towards the optical centre, depending on the lens you took the picture with. The closer to the optical centre, the less distortion a pixel receives. There is a multitude of other ways to define a radial distortion model, which could not only yield a similar distortion quality, but also provide a simple way to define the inverse of the distortion. But going that way means you have to find the optimal parameters of such a model yourself.

For example, in my specific case I found that a simple sigmoid function (offset and scaled) is able to approximate my existing radial model parameters with an MSE integral error of 1E-06 or less, even though the comparison between the models may seem pointless. I do not think the native radial model yields anything better:

distort = (r, alpha) -> 2/(1 + exp(-alpha*r)) - 1
undistort = (d, alpha) -> -ln((d + 1)/(d - 1))/alpha
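
Finally, the snippet below (Python) goes the other way for a whole image: it runs cv2.undistortPoints over a dense grid of pixel coordinates of the distorted image to build remap tables, and then uses them to distort a rectified image back. Here w, h, img_rect, cameraMatrix, distort, rotation and pose are assumed to be defined by the caller:
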
import cv2
import numpy as np

# Grid of all pixel coordinates of the distorted image, as an (N, 1, 2) float32 array.
X, Y = np.meshgrid(range(w), range(h))
pnts_distorted = np.stack([X, Y], axis=-1).reshape(w*h, 1, 2).astype(np.float32)

# Rectified position of every distorted pixel -> remap tables, then distort the image back.
pnts_rectified = cv2.undistortPoints(pnts_distorted, cameraMatrix, distort, R=rotation, P=pose)
mapx = pnts_rectified[:, :, 0].reshape(h, w).astype(np.float32)
mapy = pnts_rectified[:, :, 1].reshape(h, w).astype(np.float32)
img_distorted = cv2.remap(img_rect, mapx, mapy, cv2.INTER_LINEAR)