Javascript WebGL basic shader confusion
I'm learning WebGL shaders, but they confuse me. This is what I have so far:
<script type="x-shader/x-vertex" id="vertexshader">
    #ifdef GL_ES
    precision highp float;
    #endif
    void main()
    {
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
    #ifdef GL_ES
    precision highp float;
    #endif
    void main()
    {
        gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
    }
</script>
So far so good; it compiles and I get a pink cube.
Now for the confusion. As far as I understand it, the fragment shader is for modifying colors and the vertex shader is for modifying shape.
What I don't get is whether gl_FragColor sets the color for the whole object, or whether things are drawn in some order and I can manipulate coordinates in the shader to color them.
If so, how does it know what shape to draw and in what order to color it?
Also, if I only want to use the fragment shader, why do I need to define a vertex shader at all? What does the default gl_Position line do, and why is it needed?
Every GLSL tutorial I've tried so far has code that won't run; three.js fails to compile it. Any suggestions on where to start?

This question is rather broad. Suppose you do something like this:
var myRenderer = new THREE.WebGLRenderer();
var myScene = new THREE.Scene();
var myTexture = new THREE.Texture();
var myColor = new THREE.Color();
var myMaterial = new THREE.MeshBasicMaterial({color: myColor, map: myTexture});
var myColoredAndTexturedCube = new THREE.Mesh(new THREE.CubeGeometry(1, 1, 1), myMaterial);
var myCamera = new THREE.PerspectiveCamera();
Wire all of this up and a cube will render on screen; if you provide both a color and a texture, you get both (the texture tinted by the color).
But a lot happens behind the scenes. Three.js issues instructions to the GPU through the WebGL API. These are very low-level calls, like "take this chunk of memory and get it ready for drawing", "prepare this shader to process that chunk of memory", "set this blend mode for this draw call".
What I don't get is whether gl_FragColor sets the color for the whole object, or whether things are drawn in some order and I can manipulate coordinates in the shader to color them.
If so, how does it know what shape to draw and in what order to color it?
You should read up a bit on the rendering pipeline. You may not fully understand it at first, but it will definitely clear some things up.
gl_FragColor sets the color of a pixel in a buffer (which can be your screen, or an off-screen texture). Yes, it sets the color for the "entire object", but that entire object can be a particle cloud (which you could think of as many objects). You could have a grid of 10x10 cubes, each a different color, yet still rendered with a single draw call (one object).
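To make the per-pixel idea concrete, here is a rough plain-JavaScript sketch (all names made up for illustration, not real WebGL API) of what the GPU conceptually does with that fragment shader: it calls it once for every pixel the shape covers, and whatever it returns is written into the buffer:

```javascript
// A stand-in for the fragment shader above: it uses no inputs, so every
// pixel gets the same color, just like gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
function fragmentShader() {
  return [1.0, 0.0, 1.0, 1.0]; // RGBA, the pink/magenta from the question
}

// A tiny "framebuffer": 4x4 pixels, each an RGBA array.
const width = 4, height = 4;
const framebuffer = [];

// Conceptually what rasterization does: for every covered pixel
// (here: all of them), run the fragment shader and store the result.
for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    framebuffer.push(fragmentShader());
  }
}

console.log(framebuffer[0]); // [1, 0, 1, 1] — every pixel ends up the same color
```

If the fragment shader instead read a varying (like the vUv example below), each pixel would receive a different interpolated input and could return a different color, which is exactly how one draw call can produce many colors.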
So, back to your question:
//you don't see this, but three.js injects these lines for you; try intentionally
//adding a mistake to your shader, and when the debugger complains you'll see
//the entire generated shader, including these lines
uniform mat4 projectionMatrix; //one mat4 shared across all vertices/pixels
uniform mat4 modelViewMatrix;  //one mat4 shared across all vertices/pixels
attribute vec3 position;       //the actual vertex; this value is different for each vertex

//try adding this
varying vec2 vUv;

void main()
{
    vUv = uv; //uv, just like position, is another attribute that gets created for you
              //automatically; assigning it to the varying vec2 vUv sends it on to the fragment shader

    //this is the transformation:
    //projectionMatrix is what transforms space into perspective (vanishing points,
    //things get smaller as they get further away from the camera)
    //modelViewMatrix is actually two matrices: viewMatrix, which is part of the camera
    //(how the camera is rotated and moved relative to the rest of the world),
    //and modelMatrix (how big the object is, where it stands, and how it's rotated)
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4( position, 1.0 ); //does the same thing
}
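The claim that those two gl_Position lines do the same thing boils down to modelViewMatrix being viewMatrix * modelMatrix. A minimal plain-JavaScript sketch (row-major 4x4 helpers written just for this illustration) can check that both orderings transform a point identically:

```javascript
// Multiply two 4x4 matrices (row-major, flat arrays of 16 numbers).
function mat4Multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let row = 0; row < 4; row++)
    for (let col = 0; col < 4; col++)
      for (let k = 0; k < 4; k++)
        out[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
  return out;
}

// Transform a vec4 [x, y, z, w] by a 4x4 matrix.
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++)
    for (let k = 0; k < 4; k++)
      out[row] += m[row * 4 + k] * v[k];
  return out;
}

// A translation by (2, 0, 0) standing in for modelMatrix,
// and a translation by (0, 3, 0) standing in for viewMatrix.
const modelMatrix = [1,0,0,2,  0,1,0,0,  0,0,1,0,  0,0,0,1];
const viewMatrix  = [1,0,0,0,  0,1,0,3,  0,0,1,0,  0,0,0,1];
const modelViewMatrix = mat4Multiply(viewMatrix, modelMatrix);

const position = [1, 1, 1, 1]; // vec4( position, 1.0 )

// modelViewMatrix * v  ===  viewMatrix * (modelMatrix * v)
const a = transform(modelViewMatrix, position);
const b = transform(viewMatrix, transform(modelMatrix, position));
console.log(a, b); // both [3, 4, 1, 1]
```

(The projectionMatrix step would then be applied to either result the same way; it is left out here to keep the sketch short.)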
Every material three.js generates for you contains this section of shader. It wouldn't be enough for lighting, for example, because it has no normals.
Try the following fragment shader:
varying vec2 vUv; //coming in from the vertex shader
void main(){
gl_FragColor = vec4( vUv , 0.0 , 1.0);
}
Or better yet, let's show the object's world position as a color.
Vertex shader:
varying vec3 vertexWorldPosition;
void main(){
    vec4 worldPosition = modelMatrix * vec4( position, 1.0 ); //compute the world position and remember it;
    //modelMatrix is a mat4 that transforms the object from object space to world space,
    //and vec4( vec3, 1.0 ) creates a point rather than a direction in "homogeneous coordinates"
    //since we only need a vec4 for transformations with mat4, we save the vec3 portion into the varying
    vertexWorldPosition = worldPosition.xyz; //we don't need .w
    //do the rest of the transformation: what does this world space look like from the camera's point of view
    gl_Position = viewMatrix * worldPosition;
    //we used gl_Position to hold the previous result; we could have used a new vec4 cameraSpace
    //(or eyeSpace, or viewSpace), but writing to gl_Position works too
    gl_Position = projectionMatrix * gl_Position; //apply perspective distortion
}
Fragment shader:
varying vec3 vertexWorldPosition; //this comes in from the vertex shader
void main(){
gl_FragColor = vec4( vertexWorldPosition , 1.0 );
}
If you create a sphere at (0,0,0) and don't move it, half of it will be black and the other half colored. Depending on the scale, it may look white: with a radius of 100, you'd see a gradient from 0 to 1 near the origin and the rest would be white (r, g, b clamped to 1.0). Then try something like this:
gl_FragColor = vec4( vec3( sin( vertexWorldPosition.x ) ), 1.0 );
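That clamping is why large world coordinates wash out to white: each channel of gl_FragColor ends up clamped to [0, 1] when written to a standard framebuffer. A small plain-JavaScript sketch (functions invented just for this illustration) of what a pixel gets for various interpolated x positions:

```javascript
// GLSL-style clamp: each channel ends up in [0, 1] in the framebuffer.
function clamp01(v) {
  return Math.min(1.0, Math.max(0.0, v));
}

// What gl_FragColor = vec4( vec3( x ), 1.0 ) amounts to for a pixel
// whose interpolated vertexWorldPosition.x is x.
function shade(x) {
  const c = clamp01(x);
  return [c, c, c, 1.0];
}

console.log(shade(-5));  // [0, 0, 0, 1]       black: negative coordinates clamp to 0
console.log(shade(0.5)); // [0.5, 0.5, 0.5, 1] gray: inside the visible 0..1 gradient
console.log(shade(100)); // [1, 1, 1, 1]       white: large coordinates clamp to 1

// sin() keeps the value oscillating in [-1, 1], so instead of washing out
// to white you get repeating dark/light bands across the object:
function shadeSin(x) {
  const c = clamp01(Math.sin(x));
  return [c, c, c, 1.0];
}
```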
I'm no expert, so I won't even try to explain the little that I know, but this might help you. YouTube also helped me a lot. @2pha, thanks a lot, your examples were easy to follow! Which tutorials have you read? That gman series is really sweet!