The nice thing is, however, that since the entire goal of the spec is to bring OpenGL ES 3.0 capabilities to the browser, the chances of things deviating too much from that spec are pretty minimal, and it's probably safe to assume that most ES 3.0 features will make their way into your browser soon. (With a few notable exceptions that I'll discuss in a bit.)
So what is new in 2.0? I realize that reading specs is rarely fun, so allow me to summarize for you. While this won't be a complete rundown of the upcoming features, it should provide a good sense of where things are headed.
(Note that links to the OpenGL Wiki may refer to desktop OpenGL features that are not available in ES 3.0.)
[UPDATE: Apparently the words "This won't be a complete rundown" weren't explicit enough, so allow me to point out that the following are not the only features in WebGL 2.0! Array textures, depth textures, NPOT textures, floating point textures, new compression formats, and all the rest of your favorite ES 3.0 features are coming unless we find serious compatibility issues with them. Don't freak out because something isn't explicitly listed below. (Unless it's listed in the "What didn't make the cut" section, in which case feel free to complain to the mailing list.)]
Promoted Extensions
Some of the new features are already available today as extensions, but will be part of the core spec in WebGL 2.0. The big benefit of being part of the core is, of course, that support is guaranteed. No longer will you have to code fallbacks for Multiple Render Targets! If your device supports WebGL 2.0 then you have these features, period!
Multiple Render Targets
Currently exposed through the WEBGL_draw_buffers extension. Allows a single draw call to write out to multiple targets (textures or renderbuffers). This is "the big one" in a lot of developers' eyes because it makes many of the deferred rendering techniques that have become such a core part of modern realtime 3D practical for WebGL.
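Here's a minimal sketch of what this looks like with the WebGL 2.0 API. (The colorTex0, colorTex1, and vertexCount names are assumptions for the sake of illustration.)

```javascript
// Attach two textures to one framebuffer as separate color attachments.
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex0, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, colorTex1, 0);

// Declare which attachments the fragment shader outputs map to.
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);

// A single draw call now writes to both textures at once.
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
```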
Instancing
Currently exposed through the ANGLE_instanced_arrays extension. Instancing allows you to render multiple copies of a mesh with a single draw call. This is a great performance booster for certain types of geometry, especially when rendering highly repetitive elements like grass.
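A minimal sketch of the WebGL 2.0 calls involved, assuming a mesh, an offsetAttribLocation, and an instanceOffsetBuffer of per-instance positions already exist:

```javascript
// Feed per-instance data through a vertex attribute.
gl.bindBuffer(gl.ARRAY_BUFFER, instanceOffsetBuffer);
gl.enableVertexAttribArray(offsetAttribLocation);
gl.vertexAttribPointer(offsetAttribLocation, 3, gl.FLOAT, false, 0, 0);

// A divisor of 1 advances this attribute once per instance
// rather than once per vertex.
gl.vertexAttribDivisor(offsetAttribLocation, 1);

// Draw 1000 copies of the mesh with a single call.
gl.drawArraysInstanced(gl.TRIANGLES, 0, vertexCount, 1000);
```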
Fragment Depth
Currently exposed through the EXT_frag_depth extension. Lets you manipulate the depth of a fragment from the fragment shader. This can be expensive because it forces the GPU to bypass a lot of its normal fragment discard behavior, but it can also allow for some interesting effects that would be difficult to accomplish without incredibly high poly geometry.
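In GLSL ES 3.00 (the shading language WebGL 2.0 uses) this is done by writing to the built-in gl_FragDepth output. A minimal sketch of such a fragment shader, stored as a JavaScript string:

```javascript
var fragmentSrc = [
  "#version 300 es",
  "precision mediump float;",
  "out vec4 fragColor;",
  "void main() {",
  "  fragColor = vec4(1.0, 0.0, 0.0, 1.0);",
  "  // Push this fragment slightly deeper than its rasterized depth.",
  "  gl_FragDepth = gl_FragCoord.z + 0.01;",
  "}"
].join("\n");
```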
New Features
These features are brand new to WebGL, and bring us to near-feature-parity with OpenGL ES 3.0.
Multisampled Renderbuffers
Previously if you wanted your WebGL content to be antialiased you would either have to render it to the default backbuffer or perform your own post-process AA (like FXAA) on content you've rendered to a texture. That's not a terribly happy situation if you're concerned about the quality of your rendered output. With this new functionality you can render to a multisampled renderbuffer and then blit that multisampled content to a texture. It's not as simple or fast as rendering directly to a texture, but it's still a lot better than not having multisampling at all for non-default framebuffers.
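A minimal sketch of the render-then-resolve flow, assuming width, height, and a texture-backed resolveFB framebuffer are already set up:

```javascript
// Create a 4x multisampled color renderbuffer and attach it.
var msaaRB = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, msaaRB);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, width, height);

var msaaFB = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFB);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, msaaRB);

// ... draw the scene into msaaFB here ...

// Blit (resolve) the multisampled content into the texture-backed framebuffer.
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFB);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFB);
gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                   gl.COLOR_BUFFER_BIT, gl.NEAREST);
```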
3D Textures
This feature is pretty self-explanatory. A 3D texture is essentially just a stack of 2D textures that can be sampled with X, Y, and Z coords in the shader. This is useful for visualizing volumetric data (like medical scans), 3D effects like smoke, storing lookup tables, etc.
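Uploading one is nearly identical to uploading a 2D texture, just with a depth dimension. A minimal sketch, assuming volumeData is a Uint8Array holding size × size × size RGBA texels:

```javascript
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_3D, tex);
gl.texImage3D(gl.TEXTURE_3D, 0, gl.RGBA8,
              size, size, size, // width, height, depth
              0, gl.RGBA, gl.UNSIGNED_BYTE, volumeData);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
// In the shader this is declared as a sampler3D and sampled with a vec3.
```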
Sampler Objects
Currently when you create a texture it contains two distinct types of data. First is the image data: the array of bytes that makes up the actual pixel data at each mipmap level. Then there's the sampling information, which tells the GPU how to read that image data: things like filtering and wrap modes. It's often useful to separate those two concepts, however. For example: today if you want to read from the same texture twice in one shader, one instance with linear filtering and the other with nearest filtering, your only option is to duplicate the texture! Less critical but still annoying: currently you have to set the sampling information on every single texture you create, even if they'll all use the same settings.
Sampler objects allow you to store all the information about how to sample data from a texture separately from the texture itself, which becomes nothing but a container for the image data. During your draw loop you can pair textures and samplers as needed. It's a simple thing, but surprisingly useful!
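A minimal sketch of the pattern described above, reading the same texture with two different filter modes (the texture variable is assumed to exist):

```javascript
// Two samplers with different filtering; no texture data is duplicated.
var linearSampler = gl.createSampler();
gl.samplerParameteri(linearSampler, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.samplerParameteri(linearSampler, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

var nearestSampler = gl.createSampler();
gl.samplerParameteri(nearestSampler, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.samplerParameteri(nearestSampler, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

// Bind the same texture to two units, each paired with a different sampler.
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindSampler(0, linearSampler);

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindSampler(1, nearestSampler);
```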
Uniform Buffer Objects
Setting shader program uniforms is a huge part of almost any WebGL/OpenGL draw loop. This can make your draw calls fairly chatty as they make hundreds or thousands of gl.uniform____ calls. Uniform Buffer Objects attempt to streamline this process by allowing you to store blocks of uniforms in buffers stored on the GPU (like vertex/index buffers). This can make switching between sets of uniforms faster, and can allow more uniform data to be stored. Additionally, uniform buffers can be bound to multiple programs at the same time, so it's possible to update global data (like projection or view matrices) once and all programs that use them will automatically see the changed values.
This functionality is reasonably close to Direct3D 11 Constant Buffers.
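A minimal sketch of sharing a uniform block across programs. The "Globals" block name and the globalsData Float32Array are assumptions for illustration:

```javascript
// Store the shared uniforms (e.g. projection/view matrices) in a buffer.
var globalsBuffer = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, globalsBuffer);
gl.bufferData(gl.UNIFORM_BUFFER, globalsData, gl.DYNAMIC_DRAW);

// Point each program's "Globals" block at binding point 0.
var blockIndex = gl.getUniformBlockIndex(program, "Globals");
gl.uniformBlockBinding(program, blockIndex, 0);

// Attach the buffer to binding point 0; every program whose block is
// bound to that point now reads from the same data.
gl.bindBufferBase(gl.UNIFORM_BUFFER, 0, globalsBuffer);

// Later, updating the buffer once updates it for all of those programs.
gl.bufferSubData(gl.UNIFORM_BUFFER, 0, globalsData);
```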
Sync Objects
With WebGL today the path from Javascript to GPU to screen is fairly opaque to developers. You dispatch draw commands and at some undefined point in the future the results (probably) show up on the screen. Sync objects allow the developer to gain a little more insight into when the GPU has completed its work. Using gl.fenceSync, you can place a marker at some point in the GPU command stream and then later call gl.clientWaitSync to pause Javascript execution until the GPU has completed all commands up to the fence. Obviously blocking execution isn't desirable for applications that want to render fast, but this can be very beneficial for getting accurate benchmarks. It may also be used in the future for synchronizing between workers.
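A minimal sketch of the fence/wait pattern described above; vertexCount is assumed, and implementations may cap how long the timeout (in nanoseconds) can be:

```javascript
gl.drawArrays(gl.TRIANGLES, 0, vertexCount); // the work to measure

// Place a fence after the commands we care about.
var sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);

// Block JavaScript until the GPU passes the fence (or the timeout expires).
var status = gl.clientWaitSync(sync, gl.SYNC_FLUSH_COMMANDS_BIT, 1000000);
if (status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED) {
  // All commands up to the fence have completed.
}
gl.deleteSync(sync);
```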
Query Objects
Query objects give developers another, more explicit way to peek at the inner workings of the GPU. A query wraps a set of GL commands and asks the GPU to asynchronously report some sort of statistic about them. For example, occlusion queries are done this way: performing a gl.ANY_SAMPLES_PASSED query around a set of draw calls will let you detect whether any of the geometry passed the depth test. If not, you know the object wasn't visible and may choose not to draw that geometry in future frames until something happens (the object moved, the camera moved, etc.) that indicates the geometry might have become visible again.
It should be noted that these queries are asynchronous, which means a query's results may not be ready until many frames after the query was originally issued! This makes them tricky to use, but it can be worth it in the right circumstances.
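A minimal sketch of an occlusion query; drawBoundingBox is a hypothetical function that draws cheap proxy geometry:

```javascript
// Issue the query around the draws we want occlusion info for.
var query = gl.createQuery();
gl.beginQuery(gl.ANY_SAMPLES_PASSED, query);
drawBoundingBox();
gl.endQuery(gl.ANY_SAMPLES_PASSED);

// Some frames later, only read the result once it's actually available.
if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
  var anySamplesPassed = gl.getQueryParameter(query, gl.QUERY_RESULT);
  if (!anySamplesPassed) {
    // Nothing passed the depth test; skip drawing the full mesh for now.
  }
}
```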
Transform Feedback
Transform feedback allows you to run geometry through the vertex shader and write the resulting vertices into a buffer. That buffer can then be used to re-submit the draw calls without going through the full vertex transforms again. This could be used to capture the positions of a GPU-driven particle system, or to write out the results of a mesh that was skinned on the GPU.
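A minimal sketch of the capture step, assuming the vertex shader declares a v_position output and that capturedBuffer and particleCount exist:

```javascript
// Before linking, declare which vertex shader outputs to capture.
gl.transformFeedbackVaryings(program, ["v_position"], gl.SEPARATE_ATTRIBS);
gl.linkProgram(program);

var tf = gl.createTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, capturedBuffer);

// Run the vertex shader and record its output. RASTERIZER_DISCARD
// skips fragment processing, since only the vertices are wanted here.
gl.enable(gl.RASTERIZER_DISCARD);
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, particleCount);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);

// capturedBuffer now holds the transformed vertices for later draws.
```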
What didn't make the cut
The nature of browsers and Javascript means that WebGL can't simply be a dumb wrapper over the OpenGL C interface. Most of the time this doesn't pose a big problem, and can actually yield nicer interfaces in some cases (like gl.texImage2D accepting img tags), but in some cases the realities of the environment simply don't allow for some functionality to be exposed in a reasonable manner.
Please understand that nobody likes dropping features, but these decisions have been well thought out (and in some cases fought over). That said, if you feel very strongly that the wrong decision was made, now, while the spec is in draft, is the time to voice your opinion!
Mapped Buffers
Javascript VMs simply cannot, for many reasons, expose a raw unchecked chunk of memory. This means that any attempt to expose a mapped buffer interface would involve WebGL allocating a managed typed array, copying the desired data into it, and then going through some extreme contortions to detect changes and copy them back to the actual mapped memory. This would completely destroy any of the performance benefits of using this API, and since the whole point of the API is to achieve better performance it would be disingenuous at best to try and expose something that looked like mapped buffers but didn't actually act like it.
In place of the mapping functions, however, WebGL 2.0 will provide a getBufferSubData function to allow you to query back portions of a buffer. It's not a complete replacement, but it will help cover at least one of the use cases.
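A minimal sketch of reading data back this way, assuming positionBuffer and particleCount exist (and keeping in mind the draft API could still shift):

```javascript
// Copy the buffer's contents back into a typed array.
var result = new Float32Array(particleCount * 3);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.getBufferSubData(gl.ARRAY_BUFFER, 0, result);
// result now holds a snapshot of the buffer data.
```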
Program Binaries
This is a feature that sounds really useful when you first hear about it, but the shine wears off quickly when you learn about the realities of how it works. The fact is that even if we exposed this to Javascript it would be extremely difficult for developers to use in a meaningful fashion.
Instead WebGL implementations can attempt to cache program binaries after they are first built and will re-use that cache for subsequent runs. This is behavior that is already in place, and will continue to function with WebGL 2.0.
drawRangeElements
Similar to mapped buffers, there doesn't appear to be a good way to expose this function and deliver the performance benefits that are assumed to come with its use. (The WebGL implementation would have to re-validate all of its indices to ensure that they all fall within the range specified.) As such, drawRangeElements probably won't make it into the final spec, but we'd appreciate external feedback on how widely used/necessary this function is in ES 3.0 or desktop GL content.
What does this mean for you?
For the time being, not much. This is a draft spec, not an announcement of implementation. Full implementations of the spec won't be appearing in browsers for a little while, and even when they do there will be a period where it will likely be hidden behind a flag. As such it's going to be a little while before the average user will be able to see your web content.
As I mentioned earlier, however, some of the more exciting pieces of functionality are available right now as WebGL 1.0 extensions. If you'd like to start prepping your content for WebGL 2.0, feel free to start playing with WEBGL_draw_buffers and ANGLE_instanced_arrays now and you can feel confident that those features will "just work" when WebGL 2.0 lands. As for the rest of WebGL, nothing is being removed or deprecated and WebGL 2.0 is fully backwards compatible. This means that all it should take to "upgrade" your existing content is to add a "version: 2" to the context attributes. That won't magically yield any performance or quality gains, of course, but it's nice to know that the code you are writing today won't be obsolete a year from now.
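Based on the draft behavior described here, that upgrade might look like the following sketch (the exact creation mechanism could still change before the spec is finalized):

```javascript
// Request a WebGL 2.0 context via the draft's context attributes.
var canvas = document.getElementById("webgl-canvas");
var gl = canvas.getContext("webgl", { version: 2 });
```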