Sunday, November 27, 2011

Building the Game: Part 5 - Static Level Geometry

See the code for this post, or all posts in this series.
See the live demo.




(WARNING! The live demo for this post will probably take a while to load and will most likely not run at 60 FPS! Optimizations will come in later posts.)


Sorry about the long gap in posts, but this has proven to be one of the more challenging things I've attempted so far. That's primarily because I wasn't too familiar with how Unity handles things behind the scenes until I started working on this particular post. I've found some surprising (and occasionally disappointing) things about how Unity handles its levels, and it's forced me to rethink a couple of aspects of this project, but I think I've got a decent handle on it now, and at this point the order of the day is progress in small increments.


To that end, we're going to start talking about exporting, displaying, and interacting with levels from Unity, but we're going to do so one step at a time. Today's step is simply getting the level geometry and lighting information exported and rendered brute force. We're not going to worry about collision detection, visibility culling, or anything else right now; the goal is just to get those triangles out of Unity and into our browser!



Most of the level formats that I have worked with in the past have contained most of the geometry needed to display them in a large internal mesh that's been broken up by visibility (the traditional "BSP tree"), plus occasional instances of common meshes (like crates or lights) that need to be placed throughout the level. (The Source engine uses these and refers to them as "props".) Typically the tools you use to build these levels start by having you block out the large geometry with "brushes": convex volumes that typically make up the floor and walls of your level. Then you go and place high-detail meshes, usually imported from an external tool, to give the level some more detail and texture.


As such, I was surprised to learn that in Unity everything in the level is treated as, essentially, a "detail prop". What I mean by this is that even large-scale geometry such as the floor or walls is defined by placing instances of a mesh throughout the level. Those instances may simply be of a textured cube that you created in-editor, or of a mesh that was imported from Blender or Maya, but Unity treats them all the same.


This approach has some upsides and some downsides, which I'm not going to dissect here, but it does give us one big advantage: it's easier to get the basic rendering in place because everything is going to be rendered the same way! All we really need to do is put together a format that lets us define the position, rotation, and scale of each mesh instance, and then we'll render them using the instancing technique described in the last BtG post. We'll be sticking to JSON for this part of our format, and it's going to be dead simple for right now:



{
    "levelVersion": 1,
    "name": "Sample Level",
    "props": [ 
        {
            "model": "root/model/Barrel_Large",
            "instances": [
                { 
                    "pos": [28, 3, -30],
                    "rot": [-1.2, -0.38, -0.05, 0.9],
                    "scale": 1
                },
            
                { 
                    "pos": [7, 2.14, -62],
                    "rot": [0, 0, 0, 1],
                    "scale": 1.2
                }
            ]
        },
        {
            "model": "root/model/Barrel_Small",
            "instances": [ ... ]
        }
    ]
}
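For completeness, loading a file in this format is just a fetch-and-parse; a minimal sketch (parseLevel is a hypothetical helper name, not part of the actual code):

```javascript
// Hypothetical helper: parse the level JSON and sanity-check the
// version field before handing it to the renderer.
function parseLevel(jsonText) {
    var level = JSON.parse(jsonText);
    if (level.levelVersion !== 1) {
        throw new Error("Unsupported level version: " + level.levelVersion);
    }
    return level;
}
```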

As you can see, all that our "level" really contains is paths to mesh files and then information about the transform of each mesh instance. Utilizing our previous post's instancing code, it's super easy to load the appropriate data into memory, as we show in level.js:



for (var i in this.props) {
    var prop = this.props[i];
    var url = prop.model;
    prop.model = new model.Model();
    prop.model.load(gl, url);
    
    for (var j in prop.instances) {
        var instance = prop.instances[j];
        instance.modelInstance = prop.model.createInstance();


        // Set up the instance transform
        instance.modelInstance.matrix = mat4.fromRotationTranslation(instance.rot, instance.pos);
        mat4.scale(instance.modelInstance.matrix, [instance.scale, instance.scale, instance.scale]);
    }
}


And drawing those instances is equally simple:



for (var i in this.props) {
    var prop = this.props[i];
    prop.model.drawInstances(gl, viewMat, projectionMat);
}

Assuming that we have some way of exporting to this new format, we can now render an entire level! Check it out!




This, of course, is just a small segment of a level (in this case, the "AngryBots" sample level included with Unity). And it's really cool to look at that and say "Hey! That actually looks like something!" We've made the jump from random floating crates to a full-blown environment in just a few lines of code! (Mostly, anyway. We're ignoring the export for the moment.)


A key element is missing from our scene, however, and that's lighting. There are several different options for lighting. We could, for example, go with nothing but dynamic lights, a technique famously employed in Doom 3 (which is open source now! Sweet!). But the most common technique, and what we'll be using here, is static lighting via lightmaps. Lightmapping has been in use since the original Quake, is alive and kicking in modern marvels like Modern Warfare 3, and it should serve us well in our game too. And, fortunately for us, Unity will build the lightmaps we want!


Unity's lightmaps work a bit differently than most others I've seen, but I kinda like the way they're set up, so for the moment I'm going to stick to how they do it in my own code too. Typically, anything that gets lightmapped in a level will have a unique set of lightmap coordinates, even if that mesh is instanced several times throughout the level. This can mean you end up storing the same mesh with different lighting UVs over and over again. In Unity, and subsequently in our renderer, each mesh gets a single set of lightmap UVs, as if it were just another normal texture. Each instance of the mesh within the level contains a lightmap index, a UV offset, and a UV scale. The mesh's lightmap UVs are transformed in a shader by that offset and scale to line up with the rect on the lightmap that has been reserved for this mesh instance. The lighting information for the mesh is laid out the same way for each instance, just shifted around the texture.
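That offset-and-scale transform is simple enough to sketch in JavaScript (in practice it happens in the vertex shader, and the function name here is my own):

```javascript
// Maps a mesh's shared lightmap UV into the atlas rect reserved for one
// particular instance: atlasUV = meshUV * instanceScale + instanceOffset.
function transformLightmapUV(uv, scale, offset) {
    return [
        uv[0] * scale[0] + offset[0],
        uv[1] * scale[1] + offset[1]
    ];
}
```

A UV of [0, 0] lands exactly at the instance's offset and [1, 1] lands at offset plus scale, so every instance reads the same relative layout from its own corner of the atlas.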


The lightmap's color information is also a bit different, and it took me a moment to figure out how to display it correctly. The lightmap itself looks like this, but it will be hard to see much because there's a lot of alpha:


If you try to simply multiply your texture colors by the lightmap RGB values, like you might expect to, you'll just get an over-bright mess. What you actually need to do is modulate the RGB values by the alpha value multiplied by a brightness factor (I've found that 9 seems to match the Unity rendering reasonably closely). I'm not 100% sure why they do this, aside from maybe gaining some higher precision on brighter lights. It does give you some nice effects where the lighting can be really strong and starts to wash out the diffuse colors, though. Kind of a "pseudo-HDR". The shader code is pretty simple:



void main(void) {
    vec4 color = texture2D(diffuse, vTexCoord);
    vec4 lightValue = texture2D(lightmap, vLightCoord);
    float brightness = 9.0;
    gl_FragColor = vec4(color.rgb * lightValue.rgb * (lightValue.a * brightness), 1.0);
}
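For reference, the same decode is easy to mirror on the CPU side if you ever want to inspect lightmap texels in JavaScript; this is just a sketch of the shader math above (the default 9.0 is my eyeballed factor, and the function name is my own):

```javascript
// Mirrors the fragment shader: rgb modulated by (alpha * brightness).
function decodeLightmap(rgba, brightness) {
    if (brightness === undefined) { brightness = 9.0; }
    var factor = rgba[3] * brightness;
    return [rgba[0] * factor, rgba[1] * factor, rgba[2] * factor];
}
```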



We also have to add the appropriate lighting info to our level format. This means an array of lightmap paths at the beginning of the file:


"lightmaps": ["root/texture/level1/light0.png","root/texture/level1/light1.png"]


And the lightmap index, offset, and scale for each instance:



{
    "pos": [0, 0, 0],
    "rot": [0, 0, 0, 1],
    "scale": 1,
    "lightmap": {
         "id": 1,
         "scale": [0.0126953, 0.0126953],
         "offset": [0.566791, 0.587325]
    }
}

We'll also tweak our instance rendering to give us a way to render lightmapped instances (see drawLightmappedInstances in model.js). The result? Not too shabby looking!




What a difference a few textures can make, huh? Now we have a nicely lit scene that we can fly around in and build the rest of our level information (collision, item positions, etc.) on top of. The entire scene weighs in at about 32 MB, which isn't spectacular but is far better than the 200 MB I was seeing for the Source Engine demo.


There are a couple of other implementation details that may be of interest here. I've implemented a simple texture manager (in texture.js) that checks whether a requested texture has already been loaded and, if so, returns a reference to the existing one. This helps improve performance quite a bit, since this particular level tends to share textures between a lot of different meshes.
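The core of a cache like that is tiny; here's a sketch of the idea (not the actual texture.js implementation, and loadTextureFromUrl is a stand-in for whatever really creates the WebGL texture):

```javascript
// URL-keyed texture cache: the loader callback is only invoked the first
// time a given URL is requested; later requests get the cached object.
function TextureManager(loadTextureFromUrl) {
    this.load = loadTextureFromUrl;
    this.cache = {};
}

TextureManager.prototype.getTexture = function(gl, url) {
    if (!this.cache[url]) {
        this.cache[url] = this.load(gl, url);
    }
    return this.cache[url];
};
```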


I've also gone and fixed some issues with the flying camera (camera.js), since this is the first demo to really use it. I had made a few really stupid mistakes in earlier versions that caused the camera to lock up if you were looking straight up or down (doh!). That's fixed here, though you'll want to avoid the flying camera code from earlier posts.
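For the curious, the usual fix for that kind of lock-up is simply clamping the camera's pitch just short of vertical, so the view direction never becomes parallel with the world up axis; a sketch of the idea (camera.js's actual variable names may differ):

```javascript
// Keep pitch strictly inside (-90, 90) degrees so the view vector never
// lines up exactly with the world up axis.
function clampPitch(pitch) {
    var limit = (Math.PI / 2) - 0.001;
    return Math.min(Math.max(pitch, -limit), limit);
}
```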


Of course, I've avoided talking about the export process this whole time and, um... I'm mostly going to keep avoiding it. Fact is, it's kinda ugly and I'm still working out whether or not I want to keep doing things the way I'm doing them now. If you really want to see all the gory details, feel free to dive into WebGLExport.cs and look at the logic in ExportLevel. There are a few quirks that deserve special mention at this point, though:


I'm basically just looping through every mesh in the scene and exporting any that are visible and static. Anything that has a dynamic component to it is simply being skipped right now. This does leave a couple of gaps if you export a complicated scene like the one above, but handling dynamic scene components is a subject for another day.


Also, I'm kinda cheating on mesh scale at the moment. Unity stores instance scales as a three-dimensional vector, and I'm boiling it down to a scalar value. Most of the time this will work fine, since scaling unevenly leads to squashed-looking meshes, but if anyone is doing that intentionally in a Unity scene we'll lose it during the export. I'm doing it this way to keep my transform matrices orthogonal, which has various nice properties in terms of inverting and so on, but I may change my mind on this restriction later if a compelling reason presents itself.
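The boil-down itself is a one-liner; which component to keep is a judgment call, and this sketch just averages the three (my own choice for illustration, not necessarily the exact logic in WebGLExport.cs):

```javascript
// Collapse a non-uniform (x, y, z) scale into a single scalar by
// averaging. Any intentional non-uniform squash is lost, as noted above.
function scalarScale(scale3) {
    return (scale3[0] + scale3[1] + scale3[2]) / 3;
}
```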


Another thing worth mentioning about the export is a leak that I've run into and am really not sure how to fix. If you try to export a large scene with a lot of textures (like AngryBots), the export will die partway through and spit "Too many files open" out onto the console, at which point Unity basically needs to be restarted. Preventing texture exports allows the entire level to be output without issue. Obviously I've got a file handle leak somewhere, but I can't see where, and nobody on Unity's Stack Overflow imposter seems to know (or care) either. Interestingly, it seems to behave better on OSX Lion, but I can still get it to crash after a few exports. I'd appreciate suggestions from anyone who cares to weigh in here.


Oh, and finally I should mention that Unity apparently hates the way the rest of the world orients their X axis, so they've flipped it. Yeah, I was happy about that too. I'm still deciding whether or not I want to fix this, but the immediate consequence is that everything we export is mirrored for now. The easiest thing to do, if this gets to be a problem, would be to invert X in our projection matrix, but I'm gonna just leave it be for now.
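For what it's worth, inverting X in the projection matrix only takes negating its first row, which in the column-major layout glMatrix uses means elements 0, 4, 8, and 12; a sketch:

```javascript
// Negate the first row of a column-major 4x4 projection matrix,
// flipping X in clip space (mirroring the scene horizontally).
function flipProjectionX(m) {
    m[0] = -m[0];
    m[4] = -m[4];
    m[8] = -m[8];
    m[12] = -m[12];
    return m;
}
```

Note that flipping X also reverses triangle winding, so the face-culling mode would have to be swapped at the same time.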


So, where does this leave us? Well, we can render all the static, lightmapped geometry in a level just fine, but we are rendering ALL of it. There's no visibility culling yet. I was able to get away with that in Quake 3, but it probably won't fly here. (Note that on my MacBook I can render the whole AngryBots level at 60fps, but without much wiggle room. We need better performance if we want to start adding game logic.)


Beyond simple visibility culling, however, I'd like to try to optimize how some of the geometry is handled. There are some meshes that appear in the levels that don't really need to be "instanced", since they only appear once and won't be applicable to other levels. (Many of the floors and walls will meet this criterion.) It would be more efficient to pack this geometry into a level-specific buffer and do some additional state sorting on it. That will be a nice future optimization at some point.


We also don't have any collision information in our export, and that will obviously be critical moving forward, as will be adding things like triggers for doors and items like health packs or weapons. And we're still sidestepping the whole material issue by rendering everything with a single shader. So... yeah. Long ways to go yet. This is a decent step in the right direction, though!


I'm hoping that the next post won't take as long as this one did, but sadly with Christmas just around the bend, a new project starting up at my work, and a WebGL Camp to prepare for I think that it's best to brace for a bit of a delay. Sorry in advance!

17 comments:

  1. Hi Brandon,

    I'm very impressed with your work.
    Can I reach you by email?

    Thanks!

    Ken

  2. Aw Snap! Crashes my tab a lot. 32mb for the initial bootup is quite large. I peeked at how you load resources, and it's a lot of little pieces of stuff, many individual textures, sometimes containing very little actual pixels.

    1) split geometry in two groups, those that are smaller to transfer indexed, and the others.
    2) Run a triangle strip calculator over your geometry, where it saves vertices, use it.
    3) Collate meshes into a binary buffer, load it with XHR2 arraybuffer responseType
    4) Analyze textures and cut away unneeded pieces
    5) create a texture atlas that packs textures as tight as possible
    6) where possible use JPGs instead of PNGs
    7) where possible use procedural geometry and textures
    8) implement a loader that tracks progress of loading the resources and displays a loading bar

  3. Florian, have you been peeking at my notes? Your list basically reads like an outline of my next few posts! :)

    As I stated at the beginning of this post, this is very unoptimized currently. There's plenty of work to do before this could be considered "game ready", the idea is simply to give us a starting point.

  4. Brandon, no peeking at notes, scouts honor ^^.

    I'm currently also going about implementing a game (quite different though than what yours looks to be).

  5. Following with great interest!! Keep up the good work.
    /E

  6. Great update as usual! Well worth the wait, as I am sure the next one will be!

  7. On the subject of the one-dimensional scaling, it could be an issue if the level designers use kit bashing (manipulating and combining existing assets to make new-looking assets). It's a special case, but it's not totally uncommon and has some benefits otherwise.

  8. Yeah, I was thinking about that too, actually (the scaling issue). I think I may backpedal on that a bit and go with a three-component scale after all. The biggest issue it introduces is that I can't pull off some of the matrix invert tricks I'm doing in a shader any more, but maybe that's for the best.

  9. Very cool stuff indeed! Seeing your Unity exporter i wonder how far along you are with the vertex skinning format.

    I've been struggling to write exporters for XSI and Blender, but it seems every artist uses a different workflow and rig setup that ultimately breaks the exporters. Unity seems to do reasonably well with FBX imports. Abusing it for a normalized export as in your case looks like a very nice solution to me.

  10. I really like the way you achieve impressive creative results with ease :)

  11. Actually Unity lightmaps should be multiplied by 8 (not by 9). That is all.

  12. Really? That's interesting! I'll have to update my code. (I was just eyeballing it anyway.)

    Might I ask where you got the multiplier from? Is it documented somewhere?

  13. @Brandon: Yes, it's documented next to DecodeLightmap() in Unity\Editor\Data\CGIncludes\UnityCG.cginc.

  14. Congrats for this awesome work!
    Regarding the way that Unity exports the static level geometry: wouldn't it be easier to create a scene manager this way (octree, BSP tree, ...)? Because you have the level already split.

    Álvaro

  15. This comment has been removed by the author.

  16. hey brandon,

    brandon iam unable to export textures from unity3d to webgl......please tell how to load textures of our drawing into webgl...

    regards,

    Faiz Masroor
