Saturday, June 30, 2012

WebGL Quake 3: 2 years later

Got a great question in the comments on my Quake 3 Tech Talk post today from Daniel P.
It's been almost 2 years since you did this. My question is: how much could you improve the rendering with your knowledge today?
Indeed, both WebGL and I have improved since I initially did the Quake 3 demo, so it's interesting to look back and see what could be done better today, given more expertise with WebGL and the evolution of the API.

First off, let me clarify something: I wouldn't change much about how the level actually looks. My goal back then was to faithfully reproduce the original look and feel of the game, and that's what I would do again if I were to take another stab at it. Even though it would be perfectly possible to add things like dynamic lights and whatnot, I wouldn't want to, simply because that's not what Quake 3 looked like.

What I would be tempted to change, visually, is fixing up the few effects that weren't quite right the first time around. The lack of fog in the pits has always bothered me, as has the fact that using a cube for the sky results in ugly seams at the corners (the level geometry hides this pretty well). There were also some specular effects that I just faked for the demo and would be nice to get right, but those play a pretty minor part in the original scene.

It could also be fun to load and display the appropriate weapons and items around the level, but that would involve writing a whole new parser for the mesh format.

On the technical side, there's one huge change I would make right away that would spill out into everything else: I would not load the level from its original file format as I did in the demo. Quake 3 was built around a very different hardware environment, and as such most of its formats are aimed squarely at the fixed-function hardware of the day. This means that my code does a lot of pre-processing before it can finally hand the scene off to the graphics card and say "go!": compiling vertex buffers, calculating curved surfaces, translating Q3 shaders into GLSL, etc. It was a fun exercise back then, but if I were looking at improving the experience, it would pretty much be a given that I would do this work once during an export step and serve users data that is already in a WebGL-friendly format. That could massively cut down on load times through smaller files and easier parsing.
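To give a feel for how much the loading path collapses with pre-baked data, here's a sketch. The header fields and format are purely illustrative assumptions, not anything the demo actually uses — the point is that loading becomes a fetch plus a couple of `bufferData` calls:

```javascript
// Sketch of loading a hypothetical pre-baked mesh: a small header describing
// offsets into one binary blob, uploaded straight to the GPU. All field names
// here are made up for illustration.
function uploadPrebakedMesh(gl, header, blob) {
  // Views straight into the downloaded ArrayBuffer; no parsing pass needed.
  const vertices = new Float32Array(blob, header.vertexOffset, header.vertexCount);
  const indices = new Uint16Array(blob, header.indexOffset, header.indexCount);

  const vertexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

  const indexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

  return { vertexBuffer: vertexBuffer, indexBuffer: indexBuffer, indexCount: header.indexCount };
}
```

Compare that to the current demo, which has to walk the BSP lumps, tessellate patches, and build these buffers at load time.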

(Fun fact: I actually started on something like this, with the original target being Three.js. The exporter code, a Node.js utility, is on GitHub under the "exporter" branch of my Quake 3 repo if anyone wants to play with it. I doubt I'm going to take it any further.)

Once the geometry has been processed, I honestly feel the current demo renders it about as efficiently as is practical, so there aren't any significant speedups to be had there. The one area that comes to mind that could be improved in terms of rendering efficiency is the way multipass effects are handled. Most of the level geometry doesn't have any special rendering effects applied, so it's drawn in a single pass where the lightmap and diffuse textures are combined in the shader. Any surfaces with their own materials, however (anything that spins, flickers, glows, flows, shimmers, or shines), are defined as multipass materials, with each pass typically using only a single texture. During my conversion to GLSL I kept these all as individual passes, so we're not taking full advantage of the number of texture units available to us.
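For reference, the common single-pass case boils down to a fragment shader along these lines. This is a sketch; the uniform and varying names are mine, not the demo's actual shader code:

```javascript
// Sketch of a "common case" fragment shader: diffuse and lightmap bound to
// two texture units and combined in one draw call. Names are illustrative.
const singlePassFragSrc = `
  precision mediump float;
  uniform sampler2D diffuse;
  uniform sampler2D lightmap;
  varying vec2 vTexCoord;
  varying vec2 vLightCoord;
  void main() {
    vec4 albedo = texture2D(diffuse, vTexCoord);
    vec4 light = texture2D(lightmap, vLightCoord);
    // Modulate the diffuse color by the baked lighting.
    gl_FragColor = vec4(albedo.rgb * light.rgb, albedo.a);
  }
`;
```

Collapsing a two-pass animated material works the same way: sample both textures in one shader and apply the blend math inline, rather than relying on the framebuffer blend between passes.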

Some of these passes simply couldn't be combined due to the unique blend modes they use, but many could easily be boiled down into a single shader which would make the rendering of those surfaces a bit cleaner. In the exporter code linked to earlier I am condensing the shaders as much as possible (since Three.js doesn't nicely support multipass materials), but it's an imperfect conversion and there are still some blending issues to be had depending on the level you export.

In the same vein as pre-processing the level geometry, I would also try to take advantage of the fact that WebGL has started supporting compressed textures in the time since the demo first went online. Texture loading still makes up the bulk of the demo's load time, and the ability to load crunched DXT files instead of PNGs for everything would cut down on load times immensely, plus be kinder to video memory to boot! Unfortunately, support for compressed textures is spotty right now, especially on mobile, so I would probably have to leave in a code branch that uses raw JPEGs and PNGs, but at least the JPEG textures would offer savings over the current demo, which normalized everything to PNG for the sake of removing guesswork from the load process.
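That code branch could hinge on a single extension query up front. A minimal sketch, where the file extensions are my own assumption about how the exported assets might be laid out:

```javascript
// Sketch: branch the texture path on S3TC support. If the extension is
// present we can upload DXT data via gl.compressedTexImage2D; otherwise
// fall back to plain images. The ".dds"/".png" naming is an assumption.
function pickTextureUrl(gl, baseUrl) {
  const s3tc = gl.getExtension('WEBGL_compressed_texture_s3tc');
  if (s3tc) {
    return baseUrl + '.dds'; // compressed DXT path
  }
  return baseUrl + '.png'; // uncompressed fallback for unsupported browsers
}
```

Doing the check once at startup keeps the per-texture loading code identical either way; only the URL (and the upload call) differs.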

I mentioned at the time that I was unable to find an implementation of visibility culling that actually improved performance, but I think that was more a symptom of where JavaScript engines were two years ago. I've since implemented culling very successfully in the Team Fortress demo, and would likely add it to this demo as well. For desktop users it probably wouldn't make any difference whatsoever, but it may be of some benefit to mobile devices.
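The nice thing is that Quake 3's BSP already ships the data needed for this: the vis lump is a bit vector per cluster saying which other clusters are potentially visible. Assuming that lump has been parsed into a flat Uint8Array, the lookup is just bit math:

```javascript
// Sketch of a PVS (potentially visible set) test against Quake 3's vis lump:
// one bit vector per cluster, bytesPerCluster bytes each. Assumes visData
// holds the raw bit vectors from the BSP.
function clusterVisible(visData, bytesPerCluster, fromCluster, toCluster) {
  if (fromCluster < 0) { return true; } // camera outside the map: draw everything
  const bits = visData[fromCluster * bytesPerCluster + (toCluster >> 3)];
  return (bits & (1 << (toCluster % 8))) !== 0;
}
```

Each frame you find the camera's cluster, then skip any surfaces whose cluster fails this test before they ever reach the draw list.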

Sound is an area where browsers have made great strides in the last two years. At the time I had tried to use the audio tag to play ambient noises as you walked around the level (torches crackling, electronics humming, footsteps, etc.), but the audio capabilities simply were not up to snuff. For example, I couldn't get a single audio sample to loop without a gap! Now we have the Web Audio API, which allows much tighter control over audio playback (though widespread browser support is still a work in progress) and would let me get all of those lovely sound effects into the walkthrough as I had originally wanted.
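The gapless looping that defeated the audio tag is nearly a one-liner with Web Audio. A minimal sketch, assuming the sample has already been decoded into an AudioBuffer:

```javascript
// Sketch: a gapless ambient loop via the Web Audio API. An
// AudioBufferSourceNode with loop = true restarts sample-accurately,
// which the <audio> tag of the day couldn't manage.
function playAmbientLoop(ctx, audioBuffer) {
  const source = ctx.createBufferSource();
  source.buffer = audioBuffer;
  source.loop = true; // no gap at the loop point
  source.connect(ctx.destination);
  source.start(0);
  return source;
}
```

For positional effects like a crackling torch, you'd route the source through a PannerNode before the destination so the sound tracks the player's movement.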

Anisotropic filtering has landed in WebGL relatively recently; it could improve the look of many textures in the demo and wouldn't be a burden to any modern GPU, so it would be a no-brainer to throw it in as well. Antialiasing has also become more common, so if your browser/OS/GPU combo supports it, the demo will just magically look better. Gotta love easy visual upgrades!
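Enabling it really is just a couple of lines per texture, degrading gracefully to plain trilinear filtering where the extension is missing:

```javascript
// Sketch: crank anisotropic filtering to the hardware max on one texture,
// checking the vendor-prefixed extension names browsers shipped early on.
function applyMaxAnisotropy(gl, texture) {
  const ext = gl.getExtension('EXT_texture_filter_anisotropic') ||
              gl.getExtension('MOZ_EXT_texture_filter_anisotropic') ||
              gl.getExtension('WEBKIT_EXT_texture_filter_anisotropic');
  if (!ext) { return 1; } // unsupported: leave filtering as-is
  const max = gl.getParameter(ext.MAX_TEXTURE_MAX_ANISOTROPY_EXT);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, max);
  return max;
}
```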

One thing the original demo got right that I would want to keep taking advantage of is that all of the level geometry was loaded in a web worker to prevent the window from freezing up. It worked wonderfully then, and still does today! I would unquestionably do the same in the future, with the addition of transferables to make the data passing faster. It's also worth keeping an ear to the ground for the possibility of making WebGL calls directly in a worker, something that's been discussed on the mailing lists and would make the loading process even faster should it ever be implemented.
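Using transferables just means listing the underlying ArrayBuffers as a second argument to postMessage, so the buffers are moved to the main thread rather than structured-cloned. A sketch of what the worker side might look like:

```javascript
// Sketch: posting parsed geometry back from a worker with transferables.
// Listing the buffers in the transfer list moves ownership to the main
// thread instead of copying the data.
function postGeometry(workerScope, vertices, indices) {
  workerScope.postMessage(
    { vertices: vertices, indices: indices },
    [vertices.buffer, indices.buffer] // transfer, don't clone
  );
}
```

After the call the worker's copies are neutered (zero length), which is fine here since the worker is done with them once parsing finishes.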

One big improvement to workers since the original demo is worth noting: when I first put this online, web workers could not use typed arrays. That meant all of the parsing for the demo was done with strings and charCodeAt()! (Kill me now!) Since then workers have gained support for typed arrays and data views, which makes the binary parsing much, MUCH nicer, and the demo code has already been updated to use them.
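To show the difference, here's what reading the BSP header looks like with a DataView instead of string math. ("IBSP" and version 46 are Quake 3's actual magic/version pair; the function name is mine.)

```javascript
// Sketch: parsing a Quake 3 BSP header with a DataView instead of
// charCodeAt() on a string. Q3 data is little-endian.
function parseBspHeader(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  let magic = '';
  for (let i = 0; i < 4; ++i) {
    magic += String.fromCharCode(view.getUint8(i)); // 4-char magic, e.g. "IBSP"
  }
  const version = view.getInt32(4, true); // true = little-endian
  return { magic: magic, version: version };
}
```

The same DataView pattern extends naturally to the lump directory and vertex data that follow the header.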

Several other advances from the last two years have already made their way into the demo: fullscreen support, mouse lock, and gamepad support all greatly improve the experience where available. Mouse lock in particular was a feature I thought we'd never see in a browser at the time the original demo was put together. It's crazy how fast these things move!

So that's my little brain dump on the things I would be looking at were I to rebuild the demo for the web of today. Please don't expect any of these things to actually make it into the demo, since it's difficult for me to justify putting more work into it at this point. It's fun to look at how things have evolved in the last couple of years, though!