JavaScript: three.js fragment shader with recycled frame buffers
I'm trying to make an app that simulates long-exposure photography. The idea is to grab the current frame from the webcam and composite it onto a canvas. Over time, the picture will "expose", getting brighter and brighter. (See attachment.)

I have a shader that works perfectly. It's just like the "add" blend mode in Photoshop. The problem is that I can't get it to recycle the previous frame.

I figured it would be something trivial like renderer.autoClear = false, but that option seems to have no effect in this context.
Here is the code that applies the shader using THREE.EffectComposer:
onWebcamInit: function () {
    var $stream = $("#user-stream"),
        width = $stream.width(),
        height = $stream.height(),
        near = .1,
        far = 10000;

    this.renderer = new THREE.WebGLRenderer();
    this.renderer.setSize(width, height);
    this.renderer.autoClear = false;

    this.scene = new THREE.Scene();
    this.camera = new THREE.OrthographicCamera(width / -2, width / 2, height / 2, height / -2, near, far);
    this.scene.add(this.camera);
    this.$el.append(this.renderer.domElement);

    this.frameTexture = new THREE.Texture(document.querySelector("#webcam"));
    this.compositeTexture = new THREE.Texture(this.renderer.domElement);

    this.composer = new THREE.EffectComposer(this.renderer);

    // same effect with or without this line
    // this.composer.addPass(new THREE.RenderPass(this.scene, this.camera));

    var addEffect = new THREE.ShaderPass(addShader);
    addEffect.uniforms['exposure'].value = .5;
    addEffect.uniforms['frameTexture'].value = this.frameTexture;
    addEffect.renderToScreen = true;
    this.composer.addPass(addEffect);

    this.plane = new THREE.Mesh(new THREE.PlaneGeometry(width, height, 1, 1), new THREE.MeshBasicMaterial({ map: this.compositeTexture }));
    this.scene.add(this.plane);

    this.frameTexture.needsUpdate = true;
    this.compositeTexture.needsUpdate = true;

    new FrameImpulse(this.renderFrame);
},

renderFrame: function () {
    this.frameTexture.needsUpdate = true;
    this.compositeTexture.needsUpdate = true;
    this.composer.render();
}
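FrameImpulse isn't defined in the question; presumably it's a small helper that invokes its callback once per animation frame. A hypothetical stand-in built on requestAnimationFrame (the name comes from the code above, but this implementation is an assumption, not the author's actual helper):

```javascript
// Hypothetical stand-in for FrameImpulse (not the author's actual helper):
// invoke the supplied callback once per animation frame, forever.
function FrameImpulse(callback) {
  function tick() {
    callback();
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```

One thing worth checking in the code above: new FrameImpulse(this.renderFrame) passes the method unbound, so this inside renderFrame only works if FrameImpulse (or something else) binds it; this.renderFrame.bind(this) would make that explicit.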
Here is the shader. Nothing fancy:
uniforms: {
    "tDiffuse": { type: "t", value: null },
    "frameTexture": { type: "t", value: null },
    "exposure": { type: "f", value: 1.0 }
},
vertexShader: [
    "varying vec2 vUv;",
    "void main() {",
    "    vUv = uv;",
    "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
    "}"
].join("\n"),
fragmentShader: [
    "uniform sampler2D frameTexture;",
    "uniform sampler2D tDiffuse;",
    "uniform float exposure;",
    "varying vec2 vUv;",
    "void main() {",
    "    vec4 n = texture2D(frameTexture, vUv);",
    "    vec4 o = texture2D(tDiffuse, vUv);",
    "    vec3 sum = n.rgb + o.rgb;",
    "    gl_FragColor = vec4(mix(o.rgb, sum.rgb, exposure), 1.0);",
    "}"
].join("\n")
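Arithmetically, the fragment shader reduces to o + exposure * n per channel, since mix(o, o + n, e) = o*(1-e) + (o+n)*e = o + n*e. A one-channel model in plain JavaScript (mix mirrors the GLSL built-in; the clamp stands in for gl_FragColor's implicit [0, 1] clamp):

```javascript
// GLSL-style linear interpolation: mix(a, b, t) = a*(1-t) + b*t
function mix(a, b, t) {
  return a * (1 - t) + b * t;
}

// One color channel of the shader: blend the previous composite (o)
// toward the additive sum with the new webcam frame (n), weighted by exposure.
function composite(o, n, exposure) {
  var sum = o + n; // "add" blend mode, as in Photoshop
  return Math.min(mix(o, sum, exposure), 1.0); // output is clamped to [0, 1]
}
```

Each pass brightens the channel by exposure * n until it saturates at 1.0, which is the long-exposure build-up described in the question.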
To achieve this feedback effect, you have to alternate between writing to two separate instances of WebGLRenderTarget. Otherwise the frame buffer gets overwritten. I'm not entirely sure why that happens... but here is the solution.
Initialize:
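The snippet for this step isn't reproduced above, but judging from the render code that follows, it creates the two WebGLRenderTarget instances being swapped. A minimal sketch - the inline THREE stub only stands in for three.js so the sketch runs on its own, and the 512x512 size is a placeholder:

```javascript
// Stand-in for three.js's THREE namespace so this sketch is self-contained;
// in the real app, THREE comes from the library itself.
var THREE = {
  WebGLRenderTarget: function (width, height) {
    this.width = width;
    this.height = height;
  }
};

// Two targets to ping-pong between: each frame, one is read (bound as
// tDiffuse) while the other is written, and then the roles are swapped.
var app = {};
app.rt1 = new THREE.WebGLRenderTarget(512, 512);
app.rt2 = new THREE.WebGLRenderTarget(512, 512);
```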
Render:
this.renderer.render(this.scene, this.camera);
this.renderer.render(this.scene, this.camera, this.rt1, false);
// swap buffers
var a = this.rt2;
this.rt2 = this.rt1;
this.rt1 = a;
this.shaders.add.uniforms.tDiffuse.value = this.rt2;
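The three-line swap above is the classic ping-pong pattern: each frame samples one target while rendering into the other, then the roles flip, so a pass never reads the buffer it is currently writing. The bookkeeping in isolation (plain objects stand in for render targets; makePingPong is an illustrative helper, not part of three.js):

```javascript
// Ping-pong bookkeeping: "read" is the target sampled by the shader
// (e.g. bound as tDiffuse), "write" is the target rendered into;
// swap() flips the roles for the next frame.
function makePingPong(first, second) {
  return {
    read: first,
    write: second,
    swap: function () {
      var tmp = this.read;
      this.read = this.write;
      this.write = tmp;
    }
  };
}
```

In the answer above, the same flip is done inline with rt1 and rt2, and the freshly written target is then assigned to the tDiffuse uniform for the next pass.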
Try this:
this.renderer = new THREE.WebGLRenderer( { preserveDrawingBuffer: true } );
This is essentially equivalent to posit labs' answer, but I've had success with a leaner solution: I create an EffectComposer with only the ShaderPass I want to recycle, then swap renderTargets for that composer on each render.
Initialize:
THREE.EffectComposer.prototype.swapTargets = function () {
    var tmp = this.renderTarget2;
    this.renderTarget2 = this.renderTarget1;
    this.renderTarget1 = tmp;
};

...

composer = new THREE.EffectComposer(renderer,
    new THREE.WebGLRenderTarget(512, 512, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat })
);

var addEffect = new THREE.ShaderPass(addShader, 'frameTexture');
addEffect.renderToScreen = true;
this.composer.addPass(addEffect);
Render:
composer.render();
composer.swapTargets();
A secondary EffectComposer can then take either of the two render targets and push it to the screen, or transform it further.
Also note that when initializing the shader class, I declare 'frameTexture' as the textureID. This lets the ShaderPass know to update the frameTexture uniform with the result of the previous pass.

The suggestion seems to be that you should create the WebGLRenderer like so: new WebGLRenderer({ preserveDrawingBuffer: true }), and set renderer.autoClearColor to false. That works for creating geometry effects (like motion blur), but it has no effect on the actual texture applied to the geometry, which is exactly what I'm trying to target. Thanks for the response, doob - I'm afraid that doesn't work for me, though. The "render to texture" technique I posted as an answer works great... but I'm still not sure why it works. From the source, it looks like render passes can be flagged 'needsSwap' to do this automatically.