Sunday, July 8, 2012

Using WEBGL_depth_texture

Gregg Tavares mentioned on the WebGL mailing list recently that the WEBGL_depth_texture extension was available in Chrome Canary, but I've yet to see anyone talk about how to use it, so I figured I'd throw together a quick demo of one of the most popular uses for depth textures: shadow mapping!
As a quick aside: this is actually the first time I've implemented shadows via depth textures, so forgive me if I've made any silly mistakes or missed any edge cases. Pretty much all of the interesting code is in light.js or the HTML file source, so have a look. I should mention that the code for this demo is by no means the optimal way of achieving the effect in question, nor is it the cleanest (sorry!). I was primarily shooting for straightforward shader code.

So first let's talk about what the extension actually does. I'll assume that you're familiar with the basics of render-to-texture. If not, read up here. Before this extension was available you could render to a color texture, but the depth component was provided by a non-readable render buffer rather than a texture, like so:
var size = 256;

// Create a color texture
var colorTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

// Create the depth buffer
var depthBuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, size, size);

// Create the framebuffer
var framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTexture, 0);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depthBuffer);
This created a framebuffer that, when bound, would render all WebGL draw commands into colorTexture instead of the WebGL canvas. That texture could then in turn be used as a normal texture with other draw commands, which was primarily a means of creating certain types of special effects, like post-processing effects or "security camera"-style video screens.

There are also many effects that utilize the depth values of the scene instead of (or along with) the color output. Antialiasing techniques, depth-of-field blurring, and (of course) shadow mapping are all more concerned with how close a point is to the screen than with its color.

The problem with this is that depthBuffer cannot be used as a texture. Renderbuffers provide no mechanism for reading back the values written into them, so while WebGL would use the buffer to correctly depth-test the scene, the information stored within it was effectively "lost".

This is, of course, a limitation that can be worked around. After all, mine is certainly not the first shadow mapping demo for WebGL. The way these demos work is by using a specialized shader to "pack" a 24- or 32-bit depth value into the RGB(A) channels of the color texture and unpack it again when evaluating the shadow. This works quite well but has a few drawbacks, the primary one being the need for specialized shaders, which can slow the lookup down a bit.
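To make the packing trick concrete, here's a sketch of the arithmetic in plain JavaScript. In real demos this lives in the shaders, and the function names here are illustrative, not taken from any particular demo. Each channel holds successively less significant bits of the depth value:

```javascript
// Sketch of the classic RGBA depth-packing arithmetic. In practice this
// runs in GLSL; the texture then quantizes each channel to 8 bits, which
// this CPU-side illustration skips.
function packDepth(d) {
  // Take the fractional part of the depth scaled by increasing powers
  // of 256, then strip out the bits that belong to the next channel.
  var v = [
    d % 1,
    (d * 256) % 1,
    (d * 65536) % 1,
    (d * 16777216) % 1
  ];
  v[0] -= v[1] / 256;
  v[1] -= v[2] / 256;
  v[2] -= v[3] / 256;
  return v; // four values in [0, 1], one per color channel
}

function unpackDepth(v) {
  // Recombine the channels with matching weights.
  return v[0] + v[1] / 256 + v[2] / 65536 + v[3] / 16777216;
}
```

The round trip is exact up to floating-point error; the real precision cost comes from the 8-bit quantization in the texture, which the shader-based version pays on every write.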

Enter depth textures!

Depth textures are exactly what they sound like: textures that store depth values. For the most part they work as a drop-in replacement for the renderbuffer from the previous code.
// Query the extension
var depthTextureExt = gl.getExtension("WEBKIT_WEBGL_depth_texture"); // Or browser-appropriate prefix
if(!depthTextureExt) { doSomeFallbackInstead(); return; }
var size = 256;

// Create a color texture
var colorTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

// Create the depth texture
var depthTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, size, size, 0, gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);

var framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTexture, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0);
The framebuffer is used exactly the same as the previous version, but now, when we're done rendering to the framebuffer, we can treat depthTexture just like any other texture! Sampling it in a shader will yield a greyscale value (r, g, and b will all have the same value) between 0 and 1 that represents the depth of the scene at that point.
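One caveat worth knowing: that value isn't linear distance, it's post-projection depth, heavily biased toward the near plane. If you want eye-space distance back (for fog, depth-of-field, and the like), the standard inversion for a perspective projection looks like the sketch below. This is a general-purpose illustration, not code from the demo:

```javascript
// Convert a depth-buffer value in [0, 1] back to eye-space distance,
// assuming a standard perspective projection with the given near and
// far clip planes (OpenGL-style NDC z in [-1, 1]).
function linearizeDepth(d, near, far) {
  var zNdc = d * 2 - 1; // back to normalized device coordinates
  return (2 * near * far) / (far + near - zNdc * (far - near));
}
```

Sanity checks: a stored depth of 0 maps back to the near plane and 1 maps to the far plane, and values in between bunch up near the camera, which is why a raw depth texture tends to look mostly white when drawn to the screen.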

There are a few things I want to point out about the previous code snippet. You may have noticed that I queried the extension but never used the returned object aside from checking that it was non-null. That's because all of the required enums and functions are already part of the base WebGL context. You still have to query the extension for the functionality to be activated, though, and you do need to hold on to the returned object! In my code I found that if I allowed the returned object to be garbage collected, my depth textures would stop working.

Also, it should theoretically be possible to create a framebuffer that has nothing but a depth attachment and no color attachment at all. I've heard reports that that's a buggy case in many drivers, however, so we create an unused color texture anyway for compatibility. It's a shame that we have to tie up the resources just to appease the drivers, but at least we can avoid the computational overhead by disabling color writes with:
gl.colorMask(false, false, false, false);
As for applying this to a technique like shadow mapping? Honestly, I'll leave that to more experienced bloggers than I am. I highly recommend looking at this great WebGL shadow mapping tutorial, which goes very in-depth on the technique and does a great job of showing you the shaders involved. The biggest difference is that if you're using depth textures you can skip all the bits about packing and unpacking the depth. Yay!
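For reference, the heart of the technique is just a depth comparison: transform the fragment into light space, sample the shadow map, and compare, subtracting a small bias to fight acne. Here's a CPU-side sketch of that compare (plus a simple 3x3 percentage-closer filter for softer edges); in a real demo this is fragment shader code, and all the names here are illustrative:

```javascript
// Shadow test with 3x3 percentage-closer filtering, sketched on the CPU.
// shadowMap is a flat array of depth values (size * size), (u, v) are the
// fragment's shadow-map coordinates in [0, 1], and fragDepth is its depth
// as seen from the light, in the same [0, 1] range.
function shadowFactor(shadowMap, size, u, v, fragDepth, bias) {
  var cx = Math.round(u * (size - 1));
  var cy = Math.round(v * (size - 1));
  var lit = 0;
  for (var dy = -1; dy <= 1; dy++) {
    for (var dx = -1; dx <= 1; dx++) {
      // Clamp to the map edges, mirroring CLAMP_TO_EDGE sampling.
      var x = Math.min(size - 1, Math.max(0, cx + dx));
      var y = Math.min(size - 1, Math.max(0, cy + dy));
      // The fragment is lit if it isn't further from the light than the
      // closest occluder recorded in the map (minus the bias).
      if (fragDepth - bias <= shadowMap[y * size + x]) lit++;
    }
  }
  return lit / 9; // 1.0 = fully lit, 0.0 = fully in shadow
}
```

The bias value is the usual acne-vs-peter-panning tradeoff; 0.005 or so is a common starting point, tuned per scene.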

Just as a reminder: this is a fairly new extension, so browser support is extremely limited right now (only recent Chrome builds, as far as I know), and it's my understanding that even once the extension sees wider adoption, mobile support for the underlying feature may be spotty, so use it with caution for the time being. It's a great extension, though, one that streamlines a commonly used bit of the effects pipeline, and I'm thrilled to see it live and usable!

Now how about we work on getting Multiple Render Target support next, yes? :)

4 comments:

  1. Thanks for this write-up. My code currently uses RGBA packing/unpacking, so I'm looking forward to playing with depth textures as a replacement and then falling back to using packing/unpacking where depth textures aren't supported.

    Do you think using depth textures will provide more accurate results than RGBA? In the specific case of shadow mapping, are the issues with bias and shadow acne as pronounced as they are with RGBA? Or, are depth textures more helpful with regards to performance than accuracy?

  2. I don't think that there are going to be any precision benefits, outside of maybe preventing some minor floating point error during the packing/unpacking. The big benefits are going to be:

    Minor Speedups - It reduces the shader code required and drivers/hardware can be optimized for it on both read and write.

    Code Simplicity - With the packing method you need a separate shader (or more likely shaders) specifically for writing out the depth buffer. In my demo, however, I was able to use the same shaders for rendering both the depth and the final color scene. I don't really recommend that, mind you, as there's always the potential for the fragment shader to be doing a lot of unnecessary work. Hopefully the drivers skip most of it when color writes are disabled, though.

    You're still going to run into issues with shadow acne, though, as that's inherent to the shadow mapping technique and not really related to the buffer format behind it. (You can see this in my demo around the shoulders of the runner as the light moves, especially with animation paused.)

  3. What is the best way to get linearized depth? Using this new extension, my output in the texture always looks very shiny. For now I implemented http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/ to get linearized depth out of my depth texture generated with the extension.

    Alternatively, I can pass the vertex position to the fragment shader, calculate vdepth = (-vPosition.z-near)/(far-near); there, and simply write vdepth out as the color.

    Comparing the results: the first texture looks a bit brighter than the second one.
    What is your opinion on the best method to get linearized depth?
