GLSL: passing an arbitrarily sized object to a fragment shader using a UniformBuffer in Glium

My question came up while experimenting with a bunch of different techniques, none of which I have much experience with. Sadly, I don't even know whether I'm making a silly logic mistake, misusing the glium crate, messing up the GLSL, or something else entirely. Regardless, I managed to start a fresh Rust project and work my way toward a minimal example that shows my problem; the problem reproduces at least on my computer.

The minimal example turned out to be hard to explain on its own, though, so I first made a simpler example that does what I want it to do, albeit by bit-hacking and limited to 128 elements (four times 32 bits, in a GLSL uvec4). From there, the step to the version where my problem arises is fairly simple.

A working version, with a simple uniform and bit-shifting

The program creates a rectangle on the screen, with texture coordinates running from 0.0 to 128.0 horizontally. The program contains a vertex shader for the rectangle, and a fragment shader that uses the texture coordinates to draw vertical stripes on the rectangle: if the texture coordinate (clamped to a uint) is odd, it draws one color; when the texture coordinate is even, it draws another.

// GLIUM, the crate I'll use to do "everything OpenGL"
#[macro_use]
extern crate glium;

// A simple struct to hold the vertices with their texture-coordinates.
// Nothing deviating much from the tutorials/crate-documentation.
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}

implement_vertex!(Vertex, position, tex_coords);


// The vertex shader's source. Does nothing special, except passing the
// texture coordinates along to the fragment shader.
const VERTEX_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 position;
    in vec2 tex_coords;
    out vec2 preserved_tex_coords;

    void main() {
        preserved_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;

// The fragment shader. uses the texture coordinates to figure out which color to draw.
const FRAGMENT_SHADER_SOURCE: &'static str =  r#"
    #version 140

    in vec2 preserved_tex_coords;
    // FIXME: Hard-coded max number of elements. Replace by uniform buffer object
    uniform uvec4 uniform_data;
    out vec4 color;

    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        uint offset_in_vec = tex_x / 32u;
        uint uint_to_sample_from = uniform_data[offset_in_vec];
        // Note: the shift amount must be reduced mod 32; shifting a uint by 32
        // or more is undefined in GLSL (many GPUs happen to mask it mod 32).
        bool the_bit = bool((uint_to_sample_from >> (tex_x % 32u)) & 1u);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;

// Logic deciding whether a certain index corresponds with a 'set' bit or an 'unset' one.
// In this case, for the alternating stripes, a trivial odd/even test.
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

fn main() {
    use glium::DisplayBuild;
    let display = glium::glutin::WindowBuilder::new().build_glium().unwrap();

    // Sets up the vertices for a rectangle from -0.9 till 0.9 in both dimensions.
    // Texture coordinates go from 0.0 till 128.0 horizontally, and from 0.0 till
    // 1.0 vertically.
    let vertices_buffer = glium::VertexBuffer::new(
        &display,
        &vec![Vertex { position: [ 0.9, -0.9], tex_coords: [  0.0, 0.0] },
              Vertex { position: [ 0.9,  0.9], tex_coords: [  0.0, 1.0] },
              Vertex { position: [-0.9, -0.9], tex_coords: [128.0, 0.0] },
              Vertex { position: [-0.9,  0.9], tex_coords: [128.0, 1.0] }]).unwrap();
    // The rectangle will be drawn as a simple triangle strip using the vertices above.
    let indices_buffer = glium::IndexBuffer::new(&display,
                                                 glium::index::PrimitiveType::TriangleStrip,
                                                 &vec![0u8, 1u8, 2u8, 3u8]).unwrap();
    // Compiling the shaders defined statically above.
    let shader_program = glium::Program::from_source(&display,
                                                     VERTEX_SHADER_SOURCE,
                                                     FRAGMENT_SHADER_SOURCE,
                                                     None).unwrap();

    // Some hacky bit-shifting to get the 128 alternating bits set up, in four u32's,
    // which glium manages to send across as a uvec4.
    let mut uniform_data = [0u32; 4];
    for idx in 0..128 {
        let single_u32 = &mut uniform_data[idx / 32];
        *single_u32 = *single_u32 >> 1;
        if bit_should_be_set_at(idx) {
            *single_u32 = *single_u32 | (1 << 31);
        }
    }

    // Trivial main loop repeatedly clearing, drawing rectangle, listening for close event.
    loop {
        use glium::Surface;
        let mut frame = display.draw();
        frame.clear_color(0.0, 0.0, 0.0, 1.0);
        frame.draw(&vertices_buffer, &indices_buffer, &shader_program,
                   &uniform! { uniform_data: uniform_data },
                   &Default::default()).unwrap();
        frame.finish().unwrap();

        for e in display.poll_events() { if let glium::glutin::Event::Closed = e { return; } }
    }
}
I expected this to produce the same striped rectangle (or to give some error, or crash, if I had done something wrong). Instead, it shows the rectangle with the rightmost fourth in pure red (i.e. "the bit looked set when the fragment shader read it") and the other three fourths in darker red (i.e. "the bit looked unset when the fragment shader read it").
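For what it's worth, the CPU-side packing loop can be checked in isolation, with no glium involved. A minimal sketch of the same shift-and-set logic, verifying that alternating bits really do end up as 0x55555555 in each word:

```rust
// Same odd/even rule as bit_should_be_set_at in the program above.
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

// The same packing loop as in main(): shift each word down by one, then
// (maybe) set the top bit, so that after 32 iterations bit k of a word
// equals bit_should_be_set_at for index k.
fn pack_bits() -> [u32; 4] {
    let mut uniform_data = [0u32; 4];
    for idx in 0..128 {
        let single_u32 = &mut uniform_data[idx / 32];
        *single_u32 >>= 1;
        if bit_should_be_set_at(idx) {
            *single_u32 |= 1 << 31;
        }
    }
    uniform_data
}

fn main() {
    let packed = pack_bits();
    // Alternating bits starting with bit 0 set -> 0x5555_5555 per word.
    assert!(packed.iter().all(|&w| w == 0x5555_5555));
    println!("{:08x?}", packed);
}
```

This at least rules out the packing itself and points the finger at how the data crosses over to the shader.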

Update since the original posting

I'm really stabbing in the dark here, so I thought this might be a low-level mistake involving memory ordering, endianness, buffer over-/underruns, and so on. I tried various ways of filling 'neighbouring' memory locations with easily recognizable bit patterns (e.g. one bit set in every group of three, one in every four, two unset bits after two set ones, etc.). This did not change the output.

One obvious way to get memory 'near' the uint values[128] is to put it inside the Data struct, right in front of values (putting it behind is not allowed, since Data's values: [u32] is dynamically sized). As stated above, this did not change the output. However, putting a properly filled uvec4 inside the uniform_data buffer, together with a main function similar to the first example's, did produce the original result. This shows that glium::uniforms::UniformBuffer does work per se.

I have therefore updated the title to reflect that the problem seems to lie elsewhere.

Update after Eli's answer

@Eli Friedman's answer helped me progress toward a solution, but I'm not quite there yet.

Allocating and filling a buffer four times as large did change the output, from a quarter-filled rectangle to a fully filled one. Oops, that's not what I wanted. My shader is now reading from the right words of memory, though, and all of those words should be filled with the right bit patterns. Still, no part of the rectangle is striped. Since bit_should_be_set_at should set every other bit, I came up with a hypothesis, illustrated after the updated code below:

// Nothing changed here...
#[macro_use]
extern crate glium;

#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}

implement_vertex!(Vertex, position, tex_coords);


const VERTEX_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 position;
    in vec2 tex_coords;
    out vec2 preserved_tex_coords;

    void main() {
        preserved_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;
// ... up to here.

// The updated fragment shader. This one uses an entire uint per stripe, even though only one
// boolean value is stored in each.
const FRAGMENT_SHADER_SOURCE: &'static str =  r#"
    #version 140
    // examples/gpgpu.rs uses
    //     #version 430
    //     buffer layout(std140);
    // but that shader version is not supported by my machine, and the second line is
    // a syntax error in `#version 140`

    in vec2 preserved_tex_coords;

    // Judging from the GLSL standard, this is what I have to write:
    layout(std140) uniform;
    uniform uniform_data {
        // TODO: Still hard-coded max number of elements, but now arbitrary at compile-time.
        uint values[128];
    };
    out vec4 color;

    // This one now becomes much simpler: get the coordinate, clamp to uint, index into
    // uniform using tex_x, cast to bool, choose color.
    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        bool the_bit = bool(values[tex_x]);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;


// Mostly copy-paste from glium documentation: define a Data type, which stores u32s,
// make it implement the right traits
struct Data {
    values: [u32],
}

implement_buffer_content!(Data);
implement_uniform_block!(Data, values);


// Same as before
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

// Mostly the same as before
fn main() {
    use glium::DisplayBuild;
    let display = glium::glutin::WindowBuilder::new().build_glium().unwrap();

    let vertices_buffer = glium::VertexBuffer::new(
        &display,
        &vec![Vertex { position: [ 0.9, -0.9], tex_coords: [  0.0, 0.0] },
              Vertex { position: [ 0.9,  0.9], tex_coords: [  0.0, 1.0] },
              Vertex { position: [-0.9, -0.9], tex_coords: [128.0, 0.0] },
              Vertex { position: [-0.9,  0.9], tex_coords: [128.0, 1.0] }]).unwrap();
    let indices_buffer = glium::IndexBuffer::new(&display,
                                                 glium::index::PrimitiveType::TriangleStrip,
                                                 &vec![0u8, 1u8, 2u8, 3u8]).unwrap();
    let shader_program = glium::Program::from_source(&display,
                                                     VERTEX_SHADER_SOURCE,
                                                     FRAGMENT_SHADER_SOURCE,
                                                     None).unwrap();


    // Making the UniformBuffer, with room for 128 4-byte objects (which u32s are).
    let mut buffer: glium::uniforms::UniformBuffer<Data> =
              glium::uniforms::UniformBuffer::empty_unsized(&display, 4 * 128).unwrap();
    {
        // Loop over all elements in the buffer, setting the 'bit'
        let mut mapping = buffer.map();
        for (idx, val) in mapping.values.iter_mut().enumerate() {
            *val = bit_should_be_set_at(idx) as u32;
            // This _is_ actually executed 128 times, as expected.
        }
    }

    // Iterating again, reading the buffer, reveals the alternating 'bits' are really
    // written to the buffer.

    // This loop is similar to the original one, except that it passes the buffer
    // instead of a [u32; 4].
    loop {
        use glium::Surface;
        let mut frame = display.draw();
        frame.clear_color(0.0, 0.0, 0.0, 1.0);
        frame.draw(&vertices_buffer, &indices_buffer, &shader_program,
                   &uniform! { uniform_data: &buffer },
                   &Default::default()).unwrap();
        frame.finish().unwrap();

        for e in display.poll_events() { if let glium::glutin::Event::Closed = e { return; } }
    }
}
Bits: 1010101010101010101010101010101010101
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: all bits set
To verify this hypothesis, I changed bit_should_be_set_at to return true at multiples of 3, 4, 5, 6, 7 and 8. The results are consistent with my hypothesis:

Bits: 1001001001001001001001001001001001001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating two unset, one set.

Bits: 1000100010001000100010001000100010001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: all bits set

Bits: 1000010000100001000010000100001000010
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating four unset, one set.

Bits: 1000001000001000001000001000001000001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating two unset, one set.

Bits: 1000000100000010000001000000100000010
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating six unset, one set.

Bits: 1000000010000000100000001000000010000
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then every other bit set.
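The observations above can be reproduced arithmetically, without any GPU, under the hypothesis that shader index i actually samples the u32 at CPU index 4 * i. A sketch (the helper names are mine):

```rust
// CPU-side fill rule: bit i is set at multiples of d.
fn cpu_bit(i: usize, d: usize) -> bool {
    i % d == 0
}

// Hypothesis: the shader's values[i] reads the u32 at CPU index 4 * i.
fn seen_bit(i: usize, d: usize) -> bool {
    cpu_bit(4 * i, d)
}

fn main() {
    // d = 4: 4*i is always a multiple of 4 -> "all bits set".
    assert!((0..32).all(|i| seen_bit(i, 4)));
    // d = 8: set exactly at even i -> "every other bit set".
    assert!((0..32).all(|i| seen_bit(i, 8) == (i % 2 == 0)));
    // d = 3: gcd(4, 3) = 1, so set exactly at multiples of 3
    // -> "repeating two unset, one set".
    assert!((0..32).all(|i| seen_bit(i, 3) == (i % 3 == 0)));
    // d = 5: likewise, set exactly at multiples of 5
    // -> "repeating four unset, one set".
    assert!((0..32).all(|i| seen_bit(i, 5) == (i % 5 == 0)));
    println!("all patterns match the hypothesis");
}
```

Every "What it looks like" row above matches this mapping, which is consistent with only every fourth u32 being visible to the shader.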

Does this hypothesis make sense? And either way: is the problem in how the data is set up (on the Rust side) or in how it is read back (on the GLSL side)?

The issue you're running into has to do with how uniforms are allocated. uint values[128]; doesn't have the memory layout you think it does: under std140 it actually has the same memory layout as uvec4 values[128]. See subsection 2.15.3.1.2 of the OpenGL specification.
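Concretely: under std140, the array stride of uint values[128] is rounded up to the base alignment of a vec4, so element i sits at byte offset 16 * i rather than 4 * i. One workaround is to apply that padding on the CPU side, so each logical value occupies a full 16-byte slot. A sketch of the idea, not glium-specific (the helper names are mine):

```rust
// Number of logical elements in the uniform array.
const N: usize = 128;

fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

// Builds a CPU-side buffer with std140 array padding: four u32s per
// element, with the payload in lane 0 of each 16-byte slot. The shader's
// values[i] then reads byte offset 16 * i, i.e. CPU index 4 * i.
fn std140_padded() -> Vec<u32> {
    let mut buf = vec![0u32; 4 * N];
    for i in 0..N {
        buf[4 * i] = bit_should_be_set_at(i) as u32;
    }
    buf
}

fn main() {
    let buf = std140_padded();
    assert_eq!(buf.len(), 4 * N);
    // values[2] in the shader reads CPU index 8: index 2 is even -> set.
    assert_eq!(buf[8], 1);
    // values[3] reads CPU index 12: index 3 is odd -> unset.
    assert_eq!(buf[12], 0);
}
```

This wastes three quarters of the buffer; alternatives are packing four values per uvec4 and indexing with values[i / 4][i % 4] in the shader, or (where supported) using std430/SSBOs, which do not pad scalar arrays this way.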

Thanks, this got me started on solving it. I've updated the question to reflect my progress.