
Calibration of a multi-view camera setup in Python


I am trying to write a program that automatically calibrates my multi-camera setup. For now, I am using simulated images for the calibration.

To calibrate, I first compute the camera matrix for each camera from simulated chessboard images, using OpenCV for the actual computation. This step appears to work well, although the calibration itself (as opposed to the chessboard detection) takes a very long time.

After that, I need the fundamental matrices. I am loosely following the paper by Tomáš Svoboda et al., which suggests using a modified laser pointer to obtain a large number of point correspondences. Since I am simulating this, I have a very bright glowing point that is visible in all images and easy to separate from the background. I use a mix of OpenCV functions and the methods from the paper to extract the point positions, and all of that seems to work. Now to my problems.
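For reference, extracting the centre of a single bright blob boils down to image moments; a minimal NumPy-only sketch (the synthetic image and threshold value here are made up for illustration):

```python
import numpy as np

def blob_centroid(img, thresh):
    """Centroid of all pixels above `thresh`, via raw image moments."""
    mask = (img > thresh).astype(np.float64)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = mask.sum()
    if m00 == 0:
        return None
    cx = (xs * mask).sum() / m00  # m10 / m00
    cy = (ys * mask).sum() / m00  # m01 / m00
    return cx, cy

# synthetic frame with a bright 5x5 square centred at (x=12, y=7)
img = np.zeros((20, 30))
img[5:10, 10:15] = 255
assert blob_centroid(img, 128) == (12.0, 7.0)
```

This is the same quantity `cv.moments` gives per contour, just computed over the whole thresholded frame.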

I use some OpenCV functions to compute first the fundamental and then the essential matrices. I do get matrices, but there are a few issues.
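The algebra connecting the two matrices can be checked in isolation. A small NumPy sketch with made-up intrinsics and pose, showing that E = K2^T F K1 involves both cameras' intrinsic matrices, not one of them twice:

```python
import numpy as np

# Hypothetical intrinsics and relative pose, just to verify the algebra.
K1 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
K2 = np.array([[750., 0., 300.], [0., 750., 220.], [0., 0., 1.]])
R = np.eye(3)
t = np.array([1., 0.2, 0.1])

tx = np.array([[0., -t[2], t[1]],
               [t[2], 0., -t[0]],
               [-t[1], t[0], 0.]])             # cross-product matrix [t]x
E = tx @ R                                      # essential matrix (rank 2)
F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1) # fundamental: x2^T F x1 = 0

# going back: K2^T F K1 recovers E exactly
E_back = K2.T @ F @ K1
assert np.allclose(E_back, E)
```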

After computing the error of the fundamental matrices, I get the odd result that the first camera has a much larger error than the others. This might just be a problem with my images, though.
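For the error computation, the convention matters: with `cv.findFundamentalMat(pts1, pts2)`, the returned F satisfies pts2^T F pts1 = 0, so the residual must be evaluated in that order. A self-contained NumPy check against a synthetic two-camera geometry (all values made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared intrinsics and relative pose.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
th = 0.3
R = np.array([[np.cos(th), 0., np.sin(th)],
              [0., 1., 0.],
              [-np.sin(th), 0., np.cos(th)]])
t = np.array([1., 0.1, 0.2])
tx = np.array([[0., -t[2], t[1]], [t[2], 0., -t[0]], [-t[1], t[0], 0.]])
F = np.linalg.inv(K).T @ (tx @ R) @ np.linalg.inv(K)

X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))  # 3D points
x1 = (K @ X.T).T                                        # base camera: P = K [I|0]
x1 /= x1[:, 2:]
x2 = (K @ (R @ X.T + t[:, None])).T                     # other camera: P = K [R|t]
x2 /= x2[:, 2:]

# residual x2^T F x1 per correspondence; ~0 for a correct F
res = np.einsum('ij,jk,ik->i', x2, F, x1)
assert np.allclose(res, 0, atol=1e-9)
```

With noise-free data the residuals vanish to machine precision; averaging absolute residuals is then a reasonable per-camera error.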

The second problem concerns the essential matrices and the extracted camera poses. At first I got results that were not perfect but still looked roughly as expected. At that point, however, I was not enforcing the internal constraint on the essential matrix. Once I enforced it, my results suddenly stopped resembling what I expect at all. I don't know whether I misunderstood something or simply implemented it wrongly.
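Enforcing the internal constraint comes down to an SVD projection; a minimal sketch (note that `np.linalg.svd` already returns V transposed, and the products must be matrix products `@`, never elementwise `*`):

```python
import numpy as np

def enforce_essential(E):
    """Project E onto the space of valid essential matrices,
    i.e. singular values (s, s, 0)."""
    u, s, vh = np.linalg.svd(E)          # vh is already V^T
    sigma = (s[0] + s[1]) / 2.0
    return u @ np.diag([sigma, sigma, 0.0]) @ vh

# made-up noisy estimate, for illustration only
E_noisy = np.array([[0.10, -0.90, 0.20],
                    [1.10, 0.05, -1.00],
                    [-0.20, 0.95, 0.10]])
E_fixed = enforce_essential(E_noisy)
sv = np.linalg.svd(E_fixed, compute_uv=False)
assert np.isclose(sv[2], 0.0)   # rank 2
assert np.isclose(sv[0], sv[1]) # two equal singular values
```

Since the essential matrix is only defined up to scale, `np.diag([1, 1, 0])` works just as well as the averaged singular value.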

My last problem is the normalization of the points. To make sure my fundamental matrices are correct, I implemented the image undistortion and point normalization suggested in the paper. The undistortion works fine, although it does not actually change much. But after implementing the point normalization I ran into the same problem as before: my camera poses went from decent to completely wrong.
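For context, the usual point normalization (Hartley normalization) both centres the points at the origin and scales them so the mean distance from the origin is sqrt(2); scaling alone, without centring, is not enough. A NumPy sketch with made-up points:

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: translate the centroid to the origin,
    then scale so the mean distance from the origin is sqrt(2).
    Returns the normalized points and the 3x3 transform T, so a
    fundamental matrix Fn estimated on normalized points can be
    de-normalized afterwards via F = T2^T Fn T1."""
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1).mean()
    scale = np.sqrt(2) / d
    T = np.array([[scale, 0., -scale * centroid[0]],
                  [0., scale, -scale * centroid[1]],
                  [0., 0., 1.]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T[:, :2], T

pts = np.array([[100., 200.], [1800., 950.], [640., 480.], [300., 700.]])
npts, T = normalize_points(pts)
assert np.allclose(npts.mean(axis=0), 0)
assert np.isclose(np.linalg.norm(npts, axis=1).mean(), np.sqrt(2))
```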

In practice this means that two of the cameras end up on top of each other, while the last one has the same coordinates but inverted.

I have no idea what I am doing wrong here, so any help would be greatly appreciated :)

import cv2 as cv
import numpy as np
from PIL import Image
import scipy.ndimage


cameras = 4

chessboard = (7, 7)
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
chessboard_images = 50
# use the old matrices as first guess, to try and speed things up
cameraOldAsGuess = True

calibration_images = 100
data = []
mean = []
stDev = []
diff = []
points = []

def printProgressBar (iteration, total, prefix = '', suffix = '', decimals = 1, length = 100, fill = '█', printEnd = "\r"):
    percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total)))
    filledLength = int(length * iteration // total)
    bar = fill * filledLength + '-' * (length - filledLength)
    print(f'\r{prefix} |{bar}| {percent}% {suffix}', end = printEnd)
    # Print New Line on Complete
    if iteration == total: 
        print()

objpoints = []
imgpoints = []

objp = np.zeros((1, chessboard[0] * chessboard[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:chessboard[0], 0:chessboard[1]].T.reshape(-1, 2)
prev_img_shape = None

for c in range(cameras):

    print(f"calculating camera matrix for camera {c}")

    path = f"data\\calibration\\camera_matrix\\chessbord_images\\camera{c}\\" # no need for a variable!
    objpoints, imgpoints = [], []  # reset per camera, otherwise each calibration reuses the previous cameras' points
    img, gray = None, None

    # camera calibration
    for im in range(chessboard_images):
        printProgressBar(im, chessboard_images-1, "finding chessboard")
        p = path + f"{im+1:04}.png"
        img = cv.imread(p)
        gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
        found, corners = cv.findChessboardCorners(gray, chessboard, cv.CALIB_CB_ADAPTIVE_THRESH + cv.CALIB_CB_FAST_CHECK + cv.CALIB_CB_NORMALIZE_IMAGE)
        
        if found:
            objpoints.append(objp)
            corners2 = cv.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            imgpoints.append(corners2)
    
    print("calculating camera matrix")

    initial_guess, initial_dist = None, None
    flags = 0
    if cameraOldAsGuess:
        initial_guess = np.loadtxt(f"data/calibration/camera_matrix/resulting_data/camera{c}/cmatrix.txt")
        initial_dist = np.loadtxt(f"data/calibration/camera_matrix/resulting_data/camera{c}/distco.txt")
        # without this flag, calibrateCamera ignores the matrix passed in as a starting point
        flags = cv.CALIB_USE_INTRINSIC_GUESS

    h, w = img.shape[:2]

    ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], initial_guess, initial_dist, flags=flags)

    np.savetxt(f"data/calibration/camera_matrix/resulting_data/camera{c}/cmatrix.txt", mtx)
    np.savetxt(f"data/calibration/camera_matrix/resulting_data/camera{c}/distco.txt", dist)

    print(f"camera matrix {c} :")
    print(mtx)

# calculating the mean, standard deviation and difference images to distinguish the points
for c in range(cameras):
    print(f"calculating mask and difference images for camera {c}")
    data.append([])
    diff.append([])
    cm = np.loadtxt(f"data/calibration/camera_matrix/resulting_data/camera{c}/cmatrix.txt")
    dist = np.loadtxt(f"data/calibration/camera_matrix/resulting_data/camera{c}/distco.txt")
    for im in range(calibration_images):
        image = np.asarray(Image.open(f"data/calibration/fundamental_matrix/calibration_images/camera{c}/{im+1:04}.png"))
        undistorted = cv.undistort(image, cm, dist)
        data[c].append(undistorted)
        printProgressBar(im, calibration_images-1, "opening images    ")
    y, x = data[c][0].shape[0], data[c][0].shape[1]
    mean.append(np.zeros((y, x)))
    stDev.append(np.zeros((y, x)))
    i = 0
    for im in range(calibration_images):
        image = data[c][im][:,:,0]
        mean[c] = mean[c] + image
        i += 1
        printProgressBar(i, calibration_images, "mean              ")
    mean[c] = mean[c] / calibration_images  # true division; // would floor the mean
    i = 0
    for im in range(calibration_images):
        image = data[c][im][:,:,0]
        d = image - mean[c]
        dsq = np.square(d)
        stDev[c] = stDev[c] + dsq
        i += 1
        printProgressBar(i, calibration_images, "standard deviation")
    stDev[c] = np.sqrt(stDev[c] / calibration_images)  # true division here as well
    i = 0
    for im in range(calibration_images):
        diff[c].append(np.zeros((y, x)))
        image = data[c][im][:,:,0]
        dM = image - mean[c]
        d = dM * stDev[c]
        diff[c][im] = d
        thresh = (3/5) * diff[c][im].max()
        diff[c][im][diff[c][im] < thresh] = 0
        #diff[c][im] = scipy.ndimage.zoom(diff[c][im], 2, order=5)
        i += 1
        printProgressBar(i, calibration_images, "difference images ")

print("saving images")

for c in range(cameras):
    i = 0
    meanImg = Image.fromarray(np.uint8(mean[c]))
    stDevImg = Image.fromarray(np.uint8(stDev[c]))
    meanImg.save(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/mask_images/mean.png")
    stDevImg.save(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/mask_images/stdev.png")
    for im in range(calibration_images):
        diffImg = Image.fromarray(np.uint8(diff[c][im]))
        diffImg.save(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/difference_images/{im+1:04}.png")
        i += 1
        printProgressBar(i, calibration_images, f"camera {c}          ")

print("detecting and saving points")
# using OpenCV to calculate the centers of the points
c_col, c_row, mult_factor = 1920, 1080, 2 / 3000

for c in range(cameras):
    points.append([])
    i = 0
    for im in range(calibration_images):
        image = cv.imread(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/difference_images/{im+1:04}.png")
        # the difference images were already built from undistorted frames above,
        # so running cv.undistort on them again would apply the correction twice
        width, height = image.shape[1], image.shape[0]
        c_col, c_row = width/2, height/2
        mult_factor = 2 / (width+height)
        im_bw = cv.cvtColor(image, cv.COLOR_BGR2GRAY)  # cv.imread loads BGR, not RGB
        blurred = cv.GaussianBlur(im_bw, (5, 5), 0)
        _, thresh = cv.threshold(blurred, 40, 255, 0)  # second return value is the thresholded image
        contours, hierarchy = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
        cX, cY = 0, 0
        # note: this keeps only the centroid of the LAST contour; with several
        # blobs in a frame you would want the largest one instead
        for cnt in contours:
            M = cv.moments(cnt)
            if M["m00"] != 0:
                cX = int(M["m10"] / M["m00"])
                cY = int(M["m01"] / M["m00"])
        # note: c_col/c_row are computed above but never subtracted here,
        # so the points are scaled but not centred before findFundamentalMat
        points[c].append((cX * mult_factor, cY * mult_factor))
        i += 1
        printProgressBar(i, calibration_images, f"camera {c}          ")
    # creating an image which shows all the points
    pointsImage = np.zeros((1080, 1920))
    for p in points[c]:
        pointsImage[(int)((p[1] / mult_factor)), (int)((p[0] / mult_factor))] = 255

    pointsNp = np.array(points[c])
    np.savetxt(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/extracted_points/points.txt", pointsNp)

    img = Image.fromarray(np.uint8(pointsImage))
    img.save(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/extracted_points/points.png")

print("calculating fundamental matrices with camera 0 as the base camera")
# calculating the fundamental matrix
basePoints = np.loadtxt("data/calibration/fundamental_matrix/resulting_data/camera0/extracted_points/points.txt")

for c in range(1, cameras):
    print(f"calculating fundamental matrix for camera {c}")
    cameraPoints = np.loadtxt(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/extracted_points/points.txt")
    f, mask = cv.findFundamentalMat(basePoints, cameraPoints)
    np.savetxt(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/fmatrix.txt", f)
    print(f"fundamental matrix {c} :")
    print(f)

print("calculating error for fundamental matrices")
basePoints = np.loadtxt("data/calibration/fundamental_matrix/resulting_data/camera0/extracted_points/points.txt")

for c in range(1, cameras):
    errors = []
    error = 0
    fm = np.loadtxt(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/fmatrix.txt")
    cameraPoints = np.loadtxt(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/extracted_points/points.txt")
    for im in range(calibration_images):
        basePoint = np.append(basePoints[im], 1)
        cameraPoint = np.append(cameraPoints[im], 1)
        # findFundamentalMat(basePoints, cameraPoints) returns F such that
        # cameraPoint^T F basePoint = 0, so evaluate the residual in that order;
        # abs() keeps residuals of opposite sign from cancelling in the mean
        err = abs(cameraPoint.T.dot(fm).dot(basePoint))
        errors.append(err)
        error += err
    error /= calibration_images
    errors.append(error)
    print(f"fundamental matrix error for camera {c} is {error}")
    np.savetxt(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/errors.txt", errors)

print("calculating essential matrices with camera 0 as the base camera")
# calculating the essential matrix
for c in range(1, cameras):
    print(f"calculating essential matrix for camera {c}")
    cm0 = np.loadtxt("data/calibration/camera_matrix/resulting_data/camera0/cmatrix.txt")
    cm = np.loadtxt(f"data/calibration/camera_matrix/resulting_data/camera{c}/cmatrix.txt")
    fm = np.loadtxt(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/fmatrix.txt")
    # with cameraPoints^T F basePoints = 0, the essential matrix is
    # E = K_c^T F K_0 -- the base camera's intrinsics belong on the right,
    # not camera c's intrinsics twice
    em = cm.T.dot(fm).dot(cm0)
    print(f"essential matrix {c} :")
    print(em)
    #print(f"enforcing internal constraint of essential matrix for camera {c}")
    #u, s, vh = np.linalg.svd(em)
    ## matrix products (@), not elementwise (*); svd already returns V transposed
    #em = u @ np.diag([1, 1, 0]) @ vh
    #print(f"enforced essential matrix {c} :")
    #print(em)
    np.savetxt(f"data/calibration/essential_matrix_and_pose/resulting_data/camera{c}/ematrix.txt", em)


print("calculating pose of cameras")
# calculating the pose of the camera
basePoints = np.loadtxt("data/calibration/fundamental_matrix/resulting_data/camera0/extracted_points/points.txt")

for c in range(1, cameras):
    print(f"calculating pose for camera {c}")
    cameraPoints = np.loadtxt(f"data/calibration/fundamental_matrix/resulting_data/camera{c}/extracted_points/points.txt")
    cm = np.loadtxt(f"data/calibration/camera_matrix/resulting_data/camera{c}/cmatrix.txt")
    em = np.loadtxt(f"data/calibration/essential_matrix_and_pose/resulting_data/camera{c}/ematrix.txt")
    # note: without a camera matrix argument, recoverPose assumes the points
    # are in normalized camera coordinates (focal length 1, principal point 0)
    _, r, t, mask = cv.recoverPose(em, basePoints, cameraPoints)
    print(t)
    print(r)
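As a sanity check on a recovered pose, one can triangulate a correspondence and verify it lands in front of both cameras; this positive-depth (cheirality) test is what recoverPose itself uses to pick among the four candidate poses. A NumPy-only sketch with a made-up pose:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single correspondence;
    x1, x2 are (x, y) in normalized camera coordinates."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]                      # null vector of A, homogeneous point
    return X[:3] / X[3]

# hypothetical pose: slight rotation about y, translation along x
th = 0.1
R = np.array([[np.cos(th), 0., np.sin(th)],
              [0., 1., 0.],
              [-np.sin(th), 0., np.cos(th)]])
t = np.array([1., 0., 0.])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # base camera [I|0]
P2 = np.hstack([R, t[:, None]])                 # other camera [R|t]

X_true = np.array([0.3, -0.2, 5.0])
x1 = X_true[:2] / X_true[2]
x2_cam = R @ X_true + t
x2 = x2_cam[:2] / x2_cam[2]

X_est = triangulate(P1, P2, x1, x2)
assert np.allclose(X_est, X_true)
assert X_est[2] > 0 and (R @ X_est + t)[2] > 0  # positive depth in both views
```

If many triangulated points come out behind a camera, the chosen (R, t) is one of the wrong candidates, which matches the "overlapping / inverted cameras" symptom described above.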