Wednesday, October 26, 2011

Building the Game: Part 2 - Model Format

See the code for this post, or all posts in this series.
See the live demo.



So, after the previous BtG post we had a bare-bones pile of boilerplate code and nothing terribly interesting to show with it. Obviously it's impractical to hand-code all of our meshes into our game (although that would make it fast!), so the next order of business should be creating a way to get meshes out of the various different modeling tools and into our renderer in a format that's efficient and flexible.

In layman's terms, we need a model format!


This is actually going to involve two parts: designing the format, and creating an exporter that writes to our new format from an existing tool (because I have absolutely no desire to write 3D modeling software!). And since we can't write an exporter for a format that doesn't exist yet, let's look at the proposed format first.

Most of the existing code out there will take one of two routes when it comes to getting meshes into their renderers:

  1. Use an existing format like OBJ, COLLADA, or an existing game format. This is traditionally the route that I've taken.
  2. Create a format of their own, typically represented as a JSON object or JavaScript code. This is the route that J3D and three.js have gone.
Both routes have their good points and bad, but it's hard to claim that either is completely optimal. The problem with existing formats is that they are typically not designed with the web in mind, which can make them difficult and slow to parse, and can mean that you're potentially downloading a lot of information that will never be applicable to your game. Wasting bandwidth is a big no-no when it comes to browser-based games, so this doesn't really suit our needs.

The JSON route is typically a far more sensible one. After all, browsers have very good JSON parsers built right in, and the end result is a ready-to-use JavaScript object that can often be used to start rendering with only minimal modifications to the data. ASCII data might not be the most compact in the world, but we're going to be gzipping the files as they go out to the user anyway, and the parsing code will be far cleaner and faster than if we tried to do the whole thing in binary. Win-win all around, right?

Close, but there is one thing that we can probably improve on:

"vertices": [-0.4375,0.164062,0.765625,0.4375,0.164062,...

That's pretty ugly. Not only does having a massive array of floats in the middle of your JSON make it difficult to read, it also takes up more space than the binary equivalent and will be slower to parse. (No matter how fast your JSON parser is, turning the string "0.164062" into a float is gonna be ugly.) Not to mention that many existing formats store positions, normals, texcoords, and other attributes in their own separate arrays. This means that we're either taking a hit at load time to combine them into an interleaved array or taking a hit at render time to bind four or five buffers when we could have just done one. Neither of those is going to kill us in the performance department, but at the same time why do something sub-optimal if we can figure out a better way?
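To give a sense of what that load-time hit looks like, here's a minimal sketch of the interleaving step. The function name and the 3/3/2-float layout (position, normal, texcoord) are just assumptions for illustration:

```javascript
// Combine separate attribute arrays into a single interleaved Float32Array.
// Assumes 3 floats of position, 3 of normal, and 2 of texcoord per vertex.
function interleave(positions, normals, texcoords, vertexCount) {
    var stride = 8; // floats per vertex: 3 + 3 + 2
    var out = new Float32Array(vertexCount * stride);
    for (var i = 0; i < vertexCount; ++i) {
        var o = i * stride;
        out[o]     = positions[i * 3];
        out[o + 1] = positions[i * 3 + 1];
        out[o + 2] = positions[i * 3 + 2];
        out[o + 3] = normals[i * 3];
        out[o + 4] = normals[i * 3 + 1];
        out[o + 5] = normals[i * 3 + 2];
        out[o + 6] = texcoords[i * 2];
        out[o + 7] = texcoords[i * 2 + 1];
    }
    return out;
}
```

It's not much code, but it's a full pass over every vertex on every load, which is exactly the kind of work we'd rather do once at export time.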

Well, as it turns out there's a ridiculously efficient way to handle these large arrays. Any web browser that supports WebGL now also supports the ability to return AJAX responses as an ArrayBuffer, which is great for parsing binary data!

var vertXhr = new XMLHttpRequest();
vertXhr.open("GET", url, true);
vertXhr.responseType = "arraybuffer";
vertXhr.onload = function() {
    parseBinaryArray(this.response);
};
vertXhr.send(null);


This is especially true if our data consists of large arrays of floats or integers, which happens to be exactly what we need it for! If we are provided the offsets and lengths for the vertex and index buffers in the binary file, we can pretty much just pass them straight to the GPU, like so:

var vertexArray = new Uint8Array(xhrBuffer, vertexByteOffset, vertexByteLength);
var indexArray = new Uint16Array(xhrBuffer, indexByteOffset, indexByteLength / 2); 
// Indices are shorts, so 2 bytes per index


var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, vertexArray, gl.STATIC_DRAW);


var indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indexArray, gl.STATIC_DRAW);

Slick, huh?

Of course, the binary data should probably have a little bit more than that in it. We'll need some simple header information and a few concessions to expansion in the future. In my case, I'm taking a page from some of the old Quake BSP files and using a simple "lump" scheme. That is, the beginning of the file contains an arbitrary number of offset and length pairs that describe where each chunk of information is found. Each "lump" can then have its own read logic, and it's easy to tack on new lumps to the header if you find out that you need more data. My format looks like this (forgive the pseudo-C-ish descriptions):

struct Header { // Read at byte offset 0
    char[4] magic; // File identifier. In this case: "wglv"
    uint32 version; // File version. 1 for now
    uint32 lumpCount; // How many lumps this file contains
    Lump[lumpCount] lumps;
};


struct Lump {
    char[4] lumpId; // Four char id that tells us what this lump contains
    uint32 lumpOffset; // Byte offset of the lump data from start of file
    uint32 lumpLength; // Length of the lump in bytes
};
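As a rough illustration, parsing a header like this out of an ArrayBuffer is straightforward with a DataView. This is just a sketch (not the actual model.js code), and it assumes the file is written little-endian:

```javascript
// Parse the wglv header and lump table out of an ArrayBuffer.
function parseHeader(buffer) {
    var view = new DataView(buffer);

    // Read a four-character id at the given byte offset
    function fourCC(offset) {
        var b = new Uint8Array(buffer, offset, 4);
        return String.fromCharCode(b[0], b[1], b[2], b[3]);
    }

    if (fourCC(0) !== "wglv") { throw new Error("Bad magic number"); }

    var header = {
        version: view.getUint32(4, true),   // true = little-endian
        lumpCount: view.getUint32(8, true),
        lumps: {}
    };

    // Lump table starts right after the fixed 12-byte header
    var offset = 12;
    for (var i = 0; i < header.lumpCount; ++i) {
        header.lumps[fourCC(offset)] = {
            offset: view.getUint32(offset + 4, true),
            length: view.getUint32(offset + 8, true)
        };
        offset += 12; // each lump entry is 4 + 4 + 4 bytes
    }
    return header;
}
```

Keying the lumps by their four-character id makes it trivial to look up "vert" and "indx" later, and any lump id we don't recognize simply sits in the table unused.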

For now we only have two lump types:

struct VertexLump { // lumpId = "vert"
    uint32 vertexFormat; // Bit flags to describe the vertex elements
    uint32 vertexStride; // Stride of a single vertex in bytes.
    byte[lumpLength - 8] vertexBuffer; // The raw vertex data
};



struct IndexLump { // lumpId = "indx"
    uint16[lumpLength / 2] indexBuffer; // The raw index data
};

Pretty simple, but with plenty of wiggle room for expansion later.
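Reading those two lumps back out might look something like this. Again, this is a sketch rather than the real loader; it assumes the lump offset/length pairs have already been pulled from the header, and that the data is little-endian:

```javascript
// Read the "vert" lump: two uint32 fields, then the raw vertex data.
function parseVertexLump(buffer, lump) {
    var view = new DataView(buffer);
    return {
        vertexFormat: view.getUint32(lump.offset, true),
        vertexStride: view.getUint32(lump.offset + 4, true),
        // The rest of the lump is raw bytes, ready to hand to gl.bufferData
        vertexArray: new Uint8Array(buffer, lump.offset + 8, lump.length - 8)
    };
}

// Read the "indx" lump: indices are 16-bit, so element count is length / 2.
function parseIndexLump(buffer, lump) {
    return new Uint16Array(buffer, lump.offset, lump.length / 2);
}
```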

Now that's all well and good, but models are more than triangle strips alone! We need material information for a start. If the model is animated we'll need bone data. We'll probably want things like collision hulls and physical properties too. And frankly, a lot of that would be a pain to work with in binary form. So we'll leave those bits to a JSON file! Like so:

{
    "modelVersion": 1,
    "name": "the_crate",
    "meshes": [ 
        {
            "material": "root/materials/crate",
            "defaultTexture": "root/texture/crate.png",
            "submeshes": [
                { 
                    "indexOffset": 0,
                    "indexCount": 36
                }
            ]
        }
     ]
}

This is obviously far easier to read, so I won't spend as much time on it, but let's touch on a couple of things really quickly: The indexOffset and indexCount will be offsets into the index buffer of the binary file (in indices, not bytes). The binary file only has one index buffer and one vertex buffer, which all of the meshes and submeshes share. This means we only have to bind buffers once for the model (yay!), but it also means that we are limited to 65536 verts in a single model. That's not as bad as it sounds, though; after all, we ARE designing a game for the web. How detailed were you planning on making your models, exactly?

Then we've got arrays of meshes and submeshes. A mesh is simply all of the triangles that share a single material. The submeshes within a mesh are a little more obscure, because they're primarily there in anticipation of a future feature: Skinning. We're going to want to use GPU skinning down the line and the GPU can only accept so many bones at a time. Submeshes will describe the triangles that share a group of bones, so we can render them in the biggest batches possible. (We could also theoretically use the submeshes for tri-stripping, but I'll leave that as an exercise for the reader.)

This structure allows us to be very efficient about rendering, changing state only when we must. The basic pattern for drawing a model becomes:
  • Bind the vertex and index buffers for the model
    • Loop through the meshes
      • Bind the material for the mesh
        • Loop through the submeshes
          • If we're skinned, bind the bones for the submesh
          • Draw the submesh
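The pattern above can be sketched in code like so. Here bindMaterial and bindBones are hypothetical hooks standing in for whatever the real material and skinning systems end up being, and the model layout is assumed to mirror the JSON structure:

```javascript
// Draw every mesh and submesh of a model with minimal state changes.
function drawModel(gl, model, bindMaterial, bindBones) {
    // Bind the shared vertex and index buffers once for the whole model
    gl.bindBuffer(gl.ARRAY_BUFFER, model.vertexBuffer);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, model.indexBuffer);

    for (var i = 0; i < model.meshes.length; ++i) {
        var mesh = model.meshes[i];
        bindMaterial(mesh.material); // once per mesh

        for (var j = 0; j < mesh.submeshes.length; ++j) {
            var submesh = mesh.submeshes[j];
            if (submesh.boneIndices) {
                bindBones(submesh.boneIndices); // only if skinned
            }
            // drawElements takes a byte offset, hence the * 2 for 16-bit indices
            gl.drawElements(gl.TRIANGLES, submesh.indexCount,
                            gl.UNSIGNED_SHORT, submesh.indexOffset * 2);
        }
    }
}
```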
Next, the material and defaultTexture. We're actually going to completely ignore materials for the time being, because that's a big, complex topic that's best reserved for another blog post down the line. We know we'll need them at some point though, so stubbing in a simple path will do nicely. We do want to be able to show something sensible on screen, though, so we'll provide a default texture to use for the time being. This will usually just be a path to the diffuse texture for our mesh, or in the case of some more complicated effects might be a simple colored texture with the word "Water/Fire/Whatever" printed on it. It can be used in a level editor when we don't want to be rendering everything at full detail, or it can serve as a fallback if the real material is missing or corrupt. The point is it gives us something to display.

Having the default texture there also reinforces a principle that I like to encourage: it allows people to elegantly grow their support for the format. If you're a third party that wants to use my format in your own project, you don't HAVE to support my material system from day one. In fact, you don't have to support it at all. But you can implement the defaultTexture and still have something reasonable show up. The same goes for the lump system in the binary file: if you see a lump you don't understand you can usually just skip it! As long as you support the vertex and index lumps the rest can be built up over time. This is a theme that you will see crop up again and again as this series goes on: don't force a monolithic implementation, and always give developers "checkpoints" that they can hit as they're building up support.

In any case, we now have the two pieces of our model format laid out and ready to go! We'll give them extensions of .wglvert for the binary and .wglmodel for the JSON, and we'll make it a part of the format definition that these two must always share the same name with the exception of the extension. That way we can start the load for both of them simultaneously and let the browser get them to us that much faster.
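A sketch of that simultaneous load might look like the following. The helper names here are made up for illustration, and the completion-counting is about as simple as it gets:

```javascript
// Derive the two sibling file names from one extensionless base path
function modelUrls(basePath) {
    return { vert: basePath + ".wglvert", model: basePath + ".wglmodel" };
}

// Kick off both requests at once; invoke the callback when the pair arrives
function loadModelPair(basePath, callback) {
    var urls = modelUrls(basePath);
    var pending = 2, vertBuffer = null, modelJson = null;
    function complete() { if (--pending === 0) { callback(vertBuffer, modelJson); } }

    var vertXhr = new XMLHttpRequest();
    vertXhr.open("GET", urls.vert, true);
    vertXhr.responseType = "arraybuffer";
    vertXhr.onload = function() { vertBuffer = this.response; complete(); };
    vertXhr.send(null);

    var modelXhr = new XMLHttpRequest();
    modelXhr.open("GET", urls.model, true);
    modelXhr.onload = function() { modelJson = JSON.parse(this.responseText); complete(); };
    modelXhr.send(null);
}
```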

You can see my code for parsing the format I just described in the model.js file.

Alright, so now we have a nice format all ready to go and exactly no content that uses said format. That simply won't do! We need a way to take models from popular modeling packages and write them out in our own custom format. We need... an exporter!


The first question to be asked is: what tools do we want to support? And the best answer would be "All of them!" After all, all of the good modelers have well documented systems for scripts and plugins. If our new format catches on we should theoretically be able to write an exporter for any format that you can think of. But to do so at this stage would be a daunting task and for our needs right now we only really need one good one to work.

In this regard Blender is a very tempting choice, in that it's free, fairly popular in the open source community, and has a very nice Python scripting system. But after a bit of fiddling around with a few different options, and after seeing Bartek Drozdz give an excellent presentation on J3D at onGameStart, I was inspired to give Unity a try as my initial exporter target.

Unity has a lot going for it. The basic version is free, but it's got the support of a commercial product behind it. The community is reasonably large and fairly active, and the engine has proven itself everywhere from the desktop to the iPhone. The scripting systems are well documented and robust. It's also got a very nice level editing system, something that will be very beneficial to us later on. Plus, it has methods for getting pretty much every major model format into it, so it's an effective bridge for us. Oh, and it comes with some very nice sample resources, which is great for us as we start working on our exporter!

This doesn't mean that we'll support every feature that Unity does, of course, nor does it mean that we're going to do anything that ties us to Unity specifically. It's just a good tool to use as a starting point.

This post is already long enough, so I'll avoid going over the exporter script in too much detail. You can see the full code in WebGLExporter.cs and WebGLModel.st. I do want to hit on a couple of points, though.

First off, to give credit where it's due: The C# script is based on the OBJ Exporter example included with Unity. The technique for using ANTLR templates to build the JSON files was borrowed from Bartek's own J3D Unity exporter. (He's really done a great job with J3D, by the way! You should go check it out!)

The other thing to note is that the exporter does (or will do) quite a bit of caching, mapping, condensing, and transforming as it preps the data to be written out to the file. This is because the data layout Unity exposes is not what we want to use in our renderer, so we have a choice of doing the transformation once on export or every single time we load the file. The choice there should be pretty obvious! No matter how ugly our exporter gets, if it creates files that can be loaded efficiently it's totally worth it!

To use the exporter with, for example, the sample AngryBots Unity project you'll need to do the following:
  • Open up the "Unity/Assets/Editor" folder in the code base
  • Copy all of the files, including the DLLs
  • Paste them into the "AngryBots/Assets/Editor" folder (on my Mac this is at "MacHD/Users/Shared/Unity")
  • Unity should automatically load the exporter the next time you switch to it, which will give you a menu item of "WebGL"
  • Select a mesh or meshes from the scene, then go to the "WebGL" menu and click "Export selected Meshes"
  • A message should appear in the status bar reading "Exported n meshes"
  • Your exported .wglvert and .wglmodel files, as well as the exported textures, should now be under the "AngryBots/WebGLExport" folder (We'll make that customizable later)
You should be able to copy the folder structure that the export produces directly into the "public/root" folder of our WebGL project. 

Once you've got the exported resources in your root folder, adding them to our renderer is a fairly simple matter. In game-renderer.js, in the constructor we add:

this.model = new model.Model("root/model/model_name"); // No extension!

and in the drawFrame function we add:

this.model.draw(gl, viewMat, projectionMat);


And suddenly... Spinning Vat! Soooo much better than Spinning Crate! :)


The draw will automatically handle things like waiting till the mesh is loaded to draw, so we don't need to worry about synchronization much just yet. Now, this method isn't the best for drawing multiple instances of the mesh, or for drawing scenes where multiple meshes share the same materials, but we'll cross those bridges when we come to them. For now the important bit is that we can verify the export is working correctly.


So that's it for this post! Coming up next, we'll look at mesh skinning as we work our way up to an animated character.


I should mention that while these first three posts have come out in pretty quick succession, that's only because I wanted to start getting to the useful bits fairly quickly. After the next post I would expect things to slow down a bit. Posts will come as I hit various feature milestones.