Sunday, July 8, 2012

Using WEBGL_depth_texture

Gregg Tavares mentioned on the WebGL mailing list recently that the WEBGL_depth_texture extension was available in Chrome Canary, but I've yet to see anyone talking about how to use it, so I figured I'd throw together a quick demo of one of the most popular uses for depth textures: shadow mapping!
As a quick aside: this is actually the first time I've implemented shadows via a depth texture, so forgive me if I've made any silly mistakes or missed any edge cases. Pretty much all of the interesting code is in light.js or the HTML file source, so have a look. I should mention that the code for this demo is by no means the optimal way of achieving the effect in question, nor is it the cleanest (sorry!); I was primarily shooting for straightforward shader code.

So first let's talk about what the extension actually does. I'll assume that you're familiar with the basics of render-to-texture. If not, read up here. Before this extension was available you could render to a color texture, but the depth component was provided by a non-readable renderbuffer rather than a texture, like so:
var size = 256;

// Create a color texture
var colorTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

// Create the depth buffer
var depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, size, size);

// Create the framebuffer
var framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTexture, 0);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depthBuffer);
This creates a framebuffer that, when bound, redirects all WebGL draw commands into colorTexture instead of the WebGL canvas. That texture can then in turn be used like a normal texture in other draw commands, which is primarily a means of creating certain types of special effects, like post-process effects or "security camera" style video screens.
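To make that concrete, here's a minimal sketch of that flow; drawScene(), offscreenProgram, and screenProgram are placeholders for your own drawing code and shader programs:
// Render into the framebuffer (and therefore into colorTexture)
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.viewport(0, 0, size, size);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
drawScene(offscreenProgram); // Placeholder for your own draw calls

// Switch back to the canvas and use colorTexture like any other texture
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, colorTexture);
drawScene(screenProgram); // Placeholder for your own draw calls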

There are also many effects that utilize the depth values of the scene instead of (or along with) the color output. Antialiasing techniques, depth-of-field blurring, and (of course) shadow mapping are all more concerned with how close a point is to the screen than with its color.

The problem with this is that depthBuffer cannot be used as a texture. Renderbuffers do not provide a mechanism for reading back the values written into them, so while WebGL would use the buffer to correctly depth-test the scene, the information stored within it was effectively "lost".

This is, of course, a limitation that can be worked around. After all, mine is certainly not the first shadow mapping demo for WebGL. The way these demos work is by using a specialized shader to "pack" a 24- or 32-bit depth value into the RGB(A) channels of the color texture and unpack it again when evaluating the shadow. This works quite well, but it has a few drawbacks, the primary one being the need for those specialized shaders, which can slow the lookup down a bit.
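For reference, the packing trick usually looks something like the following. This is just a sketch of the general idea (written as GLSL in a JavaScript source string, the way WebGL shaders are often embedded), not the exact shaders from any particular demo:
// Typical depth pack/unpack helpers, spliced into the relevant shaders
var depthPackGLSL = [
  "vec4 packDepth(float depth) {",
  "  const vec4 bitShift = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);",
  "  const vec4 bitMask = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);",
  "  vec4 comp = fract(depth * bitShift);",
  "  comp -= comp.xxyz * bitMask;",
  "  return comp;",
  "}",
  "float unpackDepth(vec4 comp) {",
  "  const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);",
  "  return dot(comp, bitShift);",
  "}"
].join("\n");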

Enter depth textures!

Depth textures are exactly what they sound like: textures that store depth values. For the most part they work as a simple replacement for the renderbuffer from the previous code.
// Query the extension
var depthTextureExt = gl.getExtension("WEBKIT_WEBGL_depth_texture"); // Or browser-appropriate prefix
if(!depthTextureExt) { doSomeFallbackInstead(); return; }
var size = 256;

// Create a color texture
var colorTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

// Create the depth texture
var depthTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, size, size, 0, gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);

var framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTexture, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0);
The framebuffer is used exactly the same way as the previous version, but now when we're done rendering to it we can treat depthTexture just like any other texture! Sampling it in a shader will yield a greyscale value (r, g, and b will all have the same value) between 0 and 1 that represents the depth of the scene at that point.
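For example, a fragment shader that simply visualizes the depth texture could be as short as this sketch; uDepthTexture, vTexCoord, and depthTextureUniform are assumptions standing in for your own shader and program setup:
// Fragment shader (as a JS source string) that displays the sampled depth
var depthVisualizeFS = [
  "precision mediump float;",
  "uniform sampler2D uDepthTexture;",
  "varying vec2 vTexCoord;",
  "void main() {",
  "  float depth = texture2D(uDepthTexture, vTexCoord).r; // 0.0 near, 1.0 far",
  "  gl_FragColor = vec4(depth, depth, depth, 1.0);",
  "}"
].join("\n");

// The depth texture binds like any other texture
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.uniform1i(depthTextureUniform, 0); // Assumed uniform location from your program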

There are a few things I want to point out about the previous code snippet. You may have noticed that I queried the extension but never used the returned object aside from checking that it was non-null. That's because all of the required symbols and functions are already part of the base WebGL context. You still have to query the extension for the functionality to be activated, though, and you do need to hold on to the returned object! In my code I found that if I allowed the returned object to be garbage collected, my depth textures would stop working.

Also, it should theoretically be possible to create a framebuffer that has nothing but a depth attachment and no color attachment at all. I've heard reports that that's a buggy case in many drivers, however, so we create an unused color texture anyway for compatibility. It's a shame that we have to tie up the resources just to appease the drivers, but at least we can avoid the computational overhead by disabling color writes with:
gl.colorMask(false, false, false, false);
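In context, a depth-only pass (like the one a shadow map needs) might look roughly like this; drawScene() and depthPassProgram are again placeholders for your own code:
// Render only depth into the framebuffer, skipping color writes entirely
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.viewport(0, 0, size, size);
gl.colorMask(false, false, false, false);
gl.clear(gl.DEPTH_BUFFER_BIT);
drawScene(depthPassProgram); // Placeholder for your own draw calls

// Re-enable color writes before drawing the visible scene
gl.colorMask(true, true, true, true);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);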
As for applying this to a technique like shadow mapping? Honestly, I'll leave that to more experienced bloggers than I. I highly recommend looking at this great WebGL shadow mapping tutorial, which goes very in-depth about the technique and does a great job of showing you the shaders involved. The biggest difference is that if you're using depth textures you can ignore all the bits about packing and unpacking the depth. Yay!
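Just to illustrate that last point, with a depth texture the shadow lookup boils down to a single sample and compare. Here's a sketch (again GLSL in a JavaScript string); vShadowCoord is assumed to be the fragment's position in the light's clip space, passed down from the vertex shader:
var shadowLookupGLSL = [
  "uniform sampler2D uShadowMap;",
  "varying vec4 vShadowCoord;",
  "float getShadowFactor() {",
  "  vec3 coord = (vShadowCoord.xyz / vShadowCoord.w) * 0.5 + 0.5;",
  "  float storedDepth = texture2D(uShadowMap, coord.xy).r; // No unpacking needed",
  "  return (coord.z - 0.005) > storedDepth ? 0.5 : 1.0; // Small bias to fight shadow acne",
  "}"
].join("\n");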

Just as a reminder: this is a fairly new extension, and as such browser support is extremely limited right now (as in only recent Chrome builds, as far as I know). It's also my understanding that even when the extension sees more widespread adoption, mobile support for the underlying feature may be spotty, so use it with caution for the time being. It's a great extension, though, one that streamlines a commonly used bit of the effects pipeline, and as such I'm thrilled to see it live and usable!
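If you want to be defensive about the vendor prefixes in the meantime, a small helper like this can probe the likely names (the MOZ_ name is my guess at the usual prefix pattern, not something I've tested):
function getDepthTextureExtension(gl) {
  // getExtension returns null for unsupported names, so this falls through safely
  return gl.getExtension("WEBGL_depth_texture") ||
         gl.getExtension("WEBKIT_WEBGL_depth_texture") ||
         gl.getExtension("MOZ_WEBGL_depth_texture");
}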

Now how about we work on getting Multiple Render Target support next, yes? :)