Friday, July 27, 2012

The WebGL Guide to reading OpenGL shaders, Part 1

When discussing graphical techniques in WebGL, it's not uncommon to hear people say "Here's some shader code from an OpenGL desktop app! Use that!" And for the most part that's an entirely reasonable thing to say. Typically it's only when you start looking at OpenGL 3+ (DirectX 10+) level hardware and applications that we run into things that are simply out of reach of WebGL as it stands today.

There are, however, some oddities from the desktop realm that have (wisely) been excised from OpenGL ES 2.0, OpenGL 3.1+, and WebGL. These can make a simple "Oh just use this legacy shader" recommendation an exercise in frustration for someone who isn't intimately familiar with OpenGL 2.0 development on the desktop. If that sounds like you, then this is your guide!

(For a quick glance at everything available to OpenGL 2.0 shaders, check out this handy cheat sheet)

ftransform()

I felt this deserved its own section, as it's one of the few functions provided explicitly to mimic fixed function pipeline behavior in a shader. Occasionally you will see vertex shaders that look like this:
void main() {
    // Other logic here, setting up varyings, etc.
    gl_Position = ftransform();
}
In essence all that this does is instruct the shader "Run my vertex through the standard fixed function transform logic." It's a nice shortcut for developers coming from an older style of OpenGL development, but it's also not doing nearly as much work as you might think. You can replace it entirely with a couple of matrix multiplies:
attribute vec3 position;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

void main() {
    // Other logic here, setting up varyings, etc.
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Simple stuff, so you shouldn't have any trouble finding a reasonable way to replace ftransform() should you encounter it.

Attributes

In OpenGL 2.0 there are a number of special-cased variants of glVertexAttribPointer that imply specific semantics in the fixed function pipeline, for example: glVertexPointer, glNormalPointer, and glTexCoordPointer. Rather than exposing vertex streams to arbitrary attributes in the shader, they had pre-defined names they were associated with:
  • vec4 gl_Vertex;
  • vec3 gl_Normal;
  • vec4 gl_Color;
  • vec4 gl_SecondaryColor;
  • vec4 gl_MultiTexCoord0;
  • vec4 gl_MultiTexCoord1;
  • vec4 gl_MultiTexCoord2;
  • vec4 gl_MultiTexCoord3;
  • vec4 gl_MultiTexCoord4;
  • vec4 gl_MultiTexCoord5;
  • vec4 gl_MultiTexCoord6;
  • vec4 gl_MultiTexCoord7;
  • float gl_FogCoord;
Almost all of these are very straightforward attributes that don't require much guesswork, and they can all be replaced directly with attribute names of your choice. You'll see many desktop shaders that use the gl_MultiTexCoordX attribs as arbitrary data streams, often packing things like tangents or skinning data into them. I recommend that you pick attribute names that are contextually relevant, rather than trying to mimic these names.
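For example, here's a rough sketch of how the built-in names might map onto attributes you declare yourself (these names are just suggestions, not anything WebGL requires):
attribute vec3 position;    // replaces gl_Vertex
attribute vec3 normal;      // replaces gl_Normal
attribute vec2 texCoord;    // replaces gl_MultiTexCoord0
attribute vec4 boneWeights; // replaces, say, gl_MultiTexCoord1 being used for skinning data
On the JavaScript side you bind your vertex buffers to these with gl.getAttribLocation, gl.enableVertexAttribArray, and gl.vertexAttribPointer instead of the old special-purpose pointer functions.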

The one funny outlier of the group is gl_FogCoord. It's intended to be used with the fixed function glFog behavior, and was used to specify the distance of the vertex from the viewpoint if you wanted to override the default behavior of using the fragment depth. Confused? Yeah, me too. It's sufficient to say that this has very little use in a shader-driven environment, and you'll probably rarely if ever see it used.

Simple Uniforms

Under OpenGL 2.0 GLSL contains many "magic" uniforms that are derived from various leftovers of the fixed function pipeline, such as lights and built-in matrices. Since WebGL has no fixed function rendering path these uniforms aren't present, but you'll frequently encounter them when reading other people's shaders. Here's a quick reference for what they mean and how you can emulate them if needed:

mat4 gl_ModelViewMatrix

Probably the most commonly used built-in uniform, this was the Model/View matrix as built up by the fixed function matrix stack calls (glLoadIdentity, glTranslate, glRotate, glMultMatrix, and friends) while glMatrixMode was GL_MODELVIEW. This is easily replaced by simply providing your own model/view matrix as a uniform, and you're probably already doing that in your code right now! Otherwise it's likely that nothing would show up.
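For reference, a minimal sketch of uploading it from JavaScript (assuming a uniform named modelViewMatrix in your vertex shader and an already-linked program object):
// Look up the uniform once after linking, then upload it each frame/draw
var modelViewMatrixLocation = gl.getUniformLocation(program, "modelViewMatrix");
gl.uniformMatrix4fv(modelViewMatrixLocation, false, modelViewMatrix);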

mat4 gl_ProjectionMatrix

Same as gl_ModelViewMatrix, this is the projection matrix set when glMatrixMode is GL_PROJECTION, and once again you're almost certainly already using a replacement for this.

mat4 gl_ModelViewProjectionMatrix

As the name suggests, this is a convenience uniform that contains the Projection matrix multiplied by the Model/View matrix. You can replace this inside your shader code with:
mat4 modelViewProjectionMatrix = projectionMatrix * modelViewMatrix;
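Better yet, since that product is the same for every vertex, you can compute it once per draw call in JavaScript and upload the result as a single uniform. A rough glMatrix sketch (the uniform location variable name here is just an assumption):
var modelViewProjectionMatrix = mat4.create();
// Projection * Model/View, same order as in the shader above
mat4.multiply(projectionMatrix, modelViewMatrix, modelViewProjectionMatrix);
gl.uniformMatrix4fv(modelViewProjectionMatrixLocation, false, modelViewProjectionMatrix);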

mat4 gl_TextureMatrix[]

I personally haven't seen this used very often, but it's simply an array of matrices intended to transform your texture coordinates. They could be set to anything you wanted, and as such, when this appears in a shader it's usually just a convenient container for an arbitrary matrix. (I've seen them used for GPU skinning, for example.) No need for emulation here: figure out what the author intended the matrix to mean and use a better uniform name.
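As a purely hypothetical sketch, if the author was using gl_TextureMatrix[0] to scroll or warp texture coordinates, the WebGL vertex shader might become something like:
attribute vec3 position;
attribute vec2 texCoord;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 texCoordMatrix; // named for what it actually does
varying vec2 vTexCoord;

void main() {
    vTexCoord = (texCoordMatrix * vec4(texCoord, 0.0, 1.0)).xy;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}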

mat4 gl_ModelViewMatrixInverse
mat4 gl_ProjectionMatrixInverse
mat4 gl_ModelViewProjectionMatrixInverse
mat4 gl_TextureMatrixInverse[]

These are all the inverse versions of the matrices listed above, and it was actually massively convenient to have them made automatically available for certain shaders. Unfortunately WebGL's shading language has no built-in way to compute a matrix inverse, so if you find yourself needing an inverse matrix of any kind you'll need to pass it in yourself.

Your matrix library of choice should provide a function to compute the inverse of any matrix (and if it doesn't, get a new matrix library!) In glMatrix you do it like this:
var modelViewMatrixInverse = mat4.create();
mat4.inverse(modelViewMatrix, modelViewMatrixInverse);

var projectionMatrixInverse = mat4.create();
mat4.inverse(projectionMatrix, projectionMatrixInverse);

var modelViewProjectionMatrixInverse = mat4.create();
// Multiply the Projection and Model/View matrices together
mat4.multiply(projectionMatrix, modelViewMatrix, modelViewProjectionMatrixInverse);
// Invert the resulting matrix 
mat4.inverse(modelViewProjectionMatrixInverse, modelViewProjectionMatrixInverse);
(Remember, though, matrix reuse is your friend!)

mat4 gl_ModelViewMatrixTranspose
mat4 gl_ProjectionMatrixTranspose
mat4 gl_ModelViewProjectionMatrixTranspose
mat4 gl_TextureMatrixTranspose[]

As you may have guessed, like the Inverse matrices above these are transposed versions of the same matrices, and are calculated much the same way:
var modelViewMatrixTranspose = mat4.create();
mat4.transpose(modelViewMatrix, modelViewMatrixTranspose);
Transposing a matrix, being a much simpler operation, CAN be done in a shader without too much trouble, but it's probably cleaner to just do it in your JS code.

mat4 gl_ModelViewMatrixInverseTranspose
mat4 gl_ProjectionMatrixInverseTranspose
mat4 gl_ModelViewProjectionMatrixInverseTranspose
mat4 gl_TextureMatrixInverseTranspose[]

Sensing a pattern yet? These are the transpose of the inverse of the original matrix. Calculating these matrices is just a matter of combining the two steps:
var modelViewMatrixInverseTranspose = mat4.create();
mat4.inverse(modelViewMatrix, modelViewMatrixInverseTranspose);
mat4.transpose(modelViewMatrixInverseTranspose, modelViewMatrixInverseTranspose);

mat4 gl_NormalMatrix

This one is a bit more interesting. The Normal matrix, which is used all the time in shaders that calculate lighting of any sort, is the inverse transpose of the upper 3x3 portion (the rotation and scale) of the modelView matrix. You multiply your normals by it so that they still point in the right direction after the object has been rotated, and the object is lit properly.
var normalMatrix = mat3.create();
mat4.toInverseMat3(modelViewMatrix, normalMatrix);
mat3.transpose(normalMatrix, normalMatrix);
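And then in the vertex shader, a minimal sketch of how it gets used (reusing the attribute and uniform names from the earlier examples):
attribute vec3 position;
attribute vec3 normal;

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;

varying vec3 vNormal;

void main() {
    // Rotate the normal into view space so the lighting math works out
    vNormal = normalMatrix * normal;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Since it's a mat3 rather than a mat4, upload it with gl.uniformMatrix3fv.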

float gl_NormalScale

Having never heard of this uniform before writing this post, I'll defer to the description from the OpenGL Red Book Appendix:
Vertex normals can be normalized within a shader by using the normalize() function. However, if normals were uniformly scaled (all calls to glScale() had the same values for x, y, and z), they could be normalized by a simple scaling operation vec3 normal = gl_NormalScale * gl_Normal;
So in essence gl_NormalScale is a special-cased way of making your normals unit length. That's great, but since WebGL won't do the grunt work of determining whether you can use this or not, just use normalize().
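In other words, where a desktop shader reads vec3 normal = gl_NormalScale * gl_Normal; in WebGL you can simply write:
vec3 n = normalize(normal); // "normal" being your own attribute, as in the earlier examples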

Coming up next

There are quite a few more uniforms left to discuss, mostly involving lighting, but I don't want this single post to get gigantic, so I'll break those out into a separate entry.

I'd love to get some feedback on whether or not this is a useful resource, or if anyone feels that one of the variables could be explained better. I'm totally up for editing this entry to clarify things and make it more accurate, so please speak up!