Cubic to equirectangular projection algorithm

Tags: image, algorithm, image-processing, graphics, geometry

I have a cubemap texture that defines an environment, but I need to pass it to a program that only works with latitude/longitude maps. I am really lost on how to do the conversion. Any help here?

In other words, I need to go from here:

To this (I think the image has an additional -90° rotation over the x axis):


Update: I found the official name of the projection. Incidentally, I also came across the reverse projection.

The general procedure for projecting a raster image like this is as follows:

for each pixel of the destination image:
    calculate the corresponding unit vector in 3-dimensional space
    calculate the x,y coordinate for that vector in the source image
    sample the source image at that coordinate and assign the value to the destination pixel
The last step is simply interpolation. We will focus on the other two steps.

The unit vector for a given latitude and longitude (with +z pointing toward the north pole and +x toward the prime meridian) is: x = cos(lat)*cos(lon), y = cos(lat)*sin(lon), z = sin(lat).

Assume the cube is +/-1 units around the origin (i.e. 2 units on each side).
Once we have the unit vector, we can find which face of the cube it lies on by looking at the element with the largest absolute value. For example, if the unit vector is approximately (0.2099, -0.7289, 0.6516), the y element has the largest absolute value. It is negative, so the point will be found on the -y face of the cube. Normalize the other two coordinates by dividing by the magnitude of y to get the position within that face. The point will then be at x=0.2879, z=0.8939 on the -y face.
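
A minimal Python sketch of this lookup (the helper names here are illustrative and are not taken from any of the code below):

import math

def latlon_to_unit_vector(lat, lon):
    # +z points toward the north pole, +x toward the prime meridian
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def face_and_coords(v):
    # the component with the largest magnitude names the face (axis + sign)
    axis = max(range(3), key=lambda i: abs(v[i]))
    sign = '+' if v[axis] >= 0 else '-'
    m = abs(v[axis])
    # divide the remaining components by that magnitude to get the
    # position within the face, each value in [-1, 1]
    others = [v[i] / m for i in range(3) if i != axis]
    return sign + 'xyz'[axis], others

face, coords = face_and_coords((0.2099, -0.7289, 0.6516))
print(face, coords)  # "-y", roughly [0.2879, 0.8939]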

So I found a solution mixing the spherical coordinates from Wikipedia with Section 3.8.10 of the OpenGL 4.1 specification (plus a few hacks to make it work). Assuming that the cubical image has height h_o and width w_o, the equirectangular image will have height h = w_o/3 and width w = 2*h. Now, for each pixel (x, y) with 0 <= x <= w and 0 <= y <= h in the equirectangular projection, we want to find the corresponding pixel in the cube map; I solve for it with the Python code shown further below.

Update: the project has been renamed. Same advantages, with better working examples in both C and C++. It is now also available in C.


I happened to solve exactly the same problem you describe.

I wrote a small C++ library called "", and you can find a detailed explanation of the algorithm there:

The source code is available on GitHub:

It is released under the MIT license, free to use.

I would like to share my MATLAB implementation of this conversion. I also borrowed from the OpenGL 4.1 specification, Chapter 3.8.10 (), as well as Paul Bourke's website (). Make sure you look under the subheading: Converting to and from 6 cubic environment maps and a spherical map.

I also used Sambatyon's post above as inspiration. It started off as a port from Python over to MATLAB, but I wrote the code so that it is completely vectorized (i.e. no for loops). I also split the cubic image into 6 separate images, as the application I am building has the cube map in this format. There is no error checking in the code, and it assumes that all of the cube face images are the same size (n x n). It also assumes that the images are in RGB format. If you would like to do this for a monochrome image, simply comment out the lines of code that require access to more than one channel. Here we go!

function [out] = cubic2equi(top, bottom, left, right, front, back)

% Height and width of equirectangular image
height = size(top, 1);
width = 2*height;

% Flags to denote what side of the cube we are facing
% Z-axis is coming out towards you
% X-axis is going out to the right
% Y-axis is going upwards
% Assuming that the front of the cube is towards the
% negative X-axis
FACE_Z_POS = 1; % Left
FACE_Z_NEG = 2; % Right
FACE_Y_POS = 3; % Top
FACE_Y_NEG = 4; % Bottom
FACE_X_NEG = 5; % Front 
FACE_X_POS = 6; % Back

% Place in a cell array
stackedImages{FACE_Z_POS} = left;
stackedImages{FACE_Z_NEG} = right;
stackedImages{FACE_Y_POS} = top;
stackedImages{FACE_Y_NEG} = bottom;
stackedImages{FACE_X_NEG} = front;
stackedImages{FACE_X_POS} = back;

% Place in 3 3D matrices - Each matrix corresponds to a colour channel
imagesRed = uint8(zeros(height, height, 6));
imagesGreen = uint8(zeros(height, height, 6));
imagesBlue = uint8(zeros(height, height, 6));

% Place each channel into their corresponding matrices
for i = 1 : 6
    im = stackedImages{i};
    imagesRed(:,:,i) = im(:,:,1);
    imagesGreen(:,:,i) = im(:,:,2);
    imagesBlue(:,:,i) = im(:,:,3);
end

% For each co-ordinate in the normalized image...
[X, Y] = meshgrid(1:width, 1:height);

% Obtain the spherical co-ordinates
Y = 2*Y/height - 1;
X = 2*X/width - 1;
sphereTheta = X*pi;
spherePhi = (pi/2)*Y;

texX = cos(spherePhi).*cos(sphereTheta);
texY = sin(spherePhi);
texZ = cos(spherePhi).*sin(sphereTheta);

% Figure out which face we are facing for each co-ordinate
% First figure out the greatest absolute magnitude for each point
comp = cat(3, texX, texY, texZ);
[~,ind] = max(abs(comp), [], 3);
maxVal = zeros(size(ind));
% Copy those values - signs and all
maxVal(ind == 1) = texX(ind == 1);
maxVal(ind == 2) = texY(ind == 2);
maxVal(ind == 3) = texZ(ind == 3);

% Set each location in our equirectangular image, figure out which
% side we are facing
getFace = -1*ones(size(maxVal));

% Back
ind = abs(maxVal - texX) < 0.00001 & texX < 0;
getFace(ind) = FACE_X_POS;

% Front
ind = abs(maxVal - texX) < 0.00001 & texX >= 0;
getFace(ind) = FACE_X_NEG;

% Top
ind = abs(maxVal - texY) < 0.00001 & texY < 0;
getFace(ind) = FACE_Y_POS;

% Bottom
ind = abs(maxVal - texY) < 0.00001 & texY >= 0;
getFace(ind) = FACE_Y_NEG;

% Left
ind = abs(maxVal - texZ) < 0.00001 & texZ < 0;
getFace(ind) = FACE_Z_POS;

% Right
ind = abs(maxVal - texZ) < 0.00001 & texZ >= 0;
getFace(ind) = FACE_Z_NEG;

% Determine the co-ordinates along which image to sample
% based on which side we are facing
rawX = -1*ones(size(maxVal));
rawY = rawX;
rawZ = rawX;

% Back
ind = getFace == FACE_X_POS;
rawX(ind) = -texZ(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texX(ind);

% Front
ind = getFace == FACE_X_NEG;
rawX(ind) = texZ(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texX(ind);

% Top
ind = getFace == FACE_Y_POS;
rawX(ind) = texZ(ind);
rawY(ind) = texX(ind);
rawZ(ind) = texY(ind);

% Bottom
ind = getFace == FACE_Y_NEG;
rawX(ind) = texZ(ind);
rawY(ind) = -texX(ind);
rawZ(ind) = texY(ind);

% Left
ind = getFace == FACE_Z_POS;
rawX(ind) = texX(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texZ(ind);

% Right
ind = getFace == FACE_Z_NEG;
rawX(ind) = -texX(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texZ(ind);

% Concatenate all for later
rawCoords = cat(3, rawX, rawY, rawZ);

% Finally determine co-ordinates (normalized)
cubeCoordsX = ((rawCoords(:,:,1) ./ abs(rawCoords(:,:,3))) + 1) / 2;
cubeCoordsY = ((rawCoords(:,:,2) ./ abs(rawCoords(:,:,3))) + 1) / 2;
cubeCoords = cat(3, cubeCoordsX, cubeCoordsY);

% Now obtain where we need to sample the image
normalizedX = round(cubeCoords(:,:,1) * height);
normalizedY = round(cubeCoords(:,:,2) * height);

% Just in case.... cap between [1, height] to ensure
% no out of bounds behaviour
normalizedX(normalizedX < 1) = 1;
normalizedX(normalizedX > height) = height;
normalizedY(normalizedY < 1) = 1;
normalizedY(normalizedY > height) = height;

% Place into a stacked matrix
normalizedCoords = cat(3, normalizedX, normalizedY);

% Output image allocation
out = uint8(zeros([size(maxVal) 3]));

% Obtain column-major indices on where to sample from the
% input images
% getFace will contain which image we need to sample from
% based on the co-ordinates within the equirectangular image
ind = sub2ind([height height 6], normalizedCoords(:,:,2), ...
    normalizedCoords(:,:,1), getFace);

% Do this for each channel
out(:,:,1) = imagesRed(ind);
out(:,:,2) = imagesGreen(ind);
out(:,:,3) = imagesBlue(ind);
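
As a usage note: after loading the six n-by-n face images (e.g. with imread), calling out = cubic2equi(top, bottom, left, right, front, back) should return the equirectangular image directly.

Below is the Python code referenced in the earlier answer (the one combining Wikipedia's spherical coordinates with the OpenGL specification); it maps each pixel of the equirectangular output back to a pixel of the cube map.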
import math
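
# NOTE: `h` is the edge length (in pixels) of one cube-map face; it is assumed
# to be defined globally before these functions are called.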

# from wikipedia
def spherical_coordinates(x, y):
    return (math.pi*((y/h) - 0.5), 2*math.pi*x/(2*h), 1.0)

# from wikipedia
def texture_coordinates(theta, phi, rho):
    return (rho * math.sin(theta) * math.cos(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(theta))

FACE_X_POS = 0
FACE_X_NEG = 1
FACE_Y_POS = 2
FACE_Y_NEG = 3
FACE_Z_POS = 4
FACE_Z_NEG = 5

# from opengl specification
def get_face(x, y, z):
    largest_magnitude = max(abs(x), abs(y), abs(z))
    if largest_magnitude - abs(x) < 0.00001:
        return FACE_X_POS if x < 0 else FACE_X_NEG
    elif largest_magnitude - abs(y) < 0.00001:
        return FACE_Y_POS if y < 0 else FACE_Y_NEG
    elif largest_magnitude - abs(z) < 0.00001:
        return FACE_Z_POS if z < 0 else FACE_Z_NEG

# from opengl specification
def raw_face_coordinates(face, x, y, z):
    if face == FACE_X_POS:
        return (-z, -y, x)
    elif face == FACE_X_NEG:
        return (-z, y, -x)
    elif face == FACE_Y_POS:
        return (-x, -z, -y)
    elif face == FACE_Y_NEG:
        return (-x, z, -y)
    elif face == FACE_Z_POS:
        return (-x, y, -z)
    elif face == FACE_Z_NEG:
        return (-x, -y, z)

# computes the topmost leftmost coordinate of the face in the cube map
def face_origin_coordinates(face):
    if face == FACE_X_POS:
        return (2*h, h)
    elif face == FACE_X_NEG:
        return (0, 2*h)
    elif face == FACE_Y_POS:
        return (h, h)
    elif face == FACE_Y_NEG:
        return (h, 3*h)
    elif face == FACE_Z_POS:
        return (h, 0)
    elif face == FACE_Z_NEG:
        return (h, 2*h)

# from opengl specification
def raw_coordinates(xc, yc, ma):
    return ((xc/abs(ma) + 1) / 2, (yc/abs(ma) + 1) / 2)


def normalized_coordinates(face, x, y):
    face_coords = face_origin_coordinates(face)
    normalized_x = int(math.floor(x * h + 0.5))
    normalized_y = int(math.floor(y * h + 0.5))
    # eliminates black pixels at the face edges by clamping to the last valid
    # index (note: "--x" is a no-op in Python, so decrement explicitly)
    if normalized_x == h:
        normalized_x -= 1
    if normalized_y == h:
        normalized_y -= 1
    return (face_coords[0] + normalized_x, face_coords[1] + normalized_y)

def find_corresponding_pixel(x, y):
    spherical = spherical_coordinates(x, y)
    texture_coords = texture_coordinates(spherical[0], spherical[1], spherical[2])
    face = get_face(texture_coords[0], texture_coords[1], texture_coords[2])

    raw_face_coords = raw_face_coordinates(face, texture_coords[0], texture_coords[1], texture_coords[2])
    cube_coords = raw_coordinates(raw_face_coords[0], raw_face_coords[1], raw_face_coords[2])
    # this fixes some faces being rotated 90°
    if face in [FACE_X_NEG, FACE_X_POS]:
      cube_coords = (cube_coords[1], cube_coords[0])
    return normalized_coordinates(face, cube_coords[0], cube_coords[1])    
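
At the end, find_corresponding_pixel is called for every pixel of the equirectangular output. Here is a minimal driver sketch, assuming Pillow is installed, that the input cube map uses the 3h-wide by 4h-tall cross layout encoded in face_origin_coordinates above, and that the file names are placeholders:

from PIL import Image

def cubemap_to_equirectangular(cube_path, out_path):
    global h
    cube = Image.open(cube_path)
    h = cube.size[0] // 3          # face edge length: the cross is 3 faces wide
    out = Image.new(cube.mode, (2 * h, h))
    src, dst = cube.load(), out.load()
    for y in range(h):
        for x in range(2 * h):
            sx, sy = find_corresponding_pixel(x, y)
            dst[x, y] = src[sx, sy]
    out.save(out_path)

# cubemap_to_equirectangular('cubemap.png', 'equirect.png')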