If you're not familiar with the situation already, Chrome recently switched rendering engines from WebKit to Blink, which is based on the WebKit source. Because we're so early in the life of Blink, the two rendering engines haven't diverged much yet, aside from dead-code cleanup on both ends, but even this early there are a few changes in how Chrome handles WebGL.
Before we get into that, though, let's talk about what exactly happens when you make a gl.whatever call in your JavaScript. If you were to ask most people, even many WebGL developers, how the browser handles WebGL commands, they would probably give you a fuzzy description of a model that looks like this:
This is based on a couple of commonly understood facts: For security purposes, WebGL commands must be validated to ensure that malicious web pages can't crash the user's graphics driver or OS. Secondly, on Windows ANGLE is often used to translate GL commands into DirectX commands to take advantage of what have traditionally been more stable and better-optimized DirectX drivers.
That's all well and good, but there's an awful lot that could go wrong here. Graphics drivers, especially on older systems, can be pretty unreliable. The fact that they are hooked so deeply into the OS means that there's a lot of opportunity for bad behavior. Since the browser wants to protect the user from both unexpected driver behavior and malicious code that attempts to provoke bad behavior from the driver, Chrome sandboxes all driver interaction in a locked-down GPU process. This lets us carefully control which commands we send to the GPU, and should something go wrong, a crash in the GPU process won't take down the entire browser.
So updating our model to take this into account, we get a diagram that looks a bit more like this:
So... that's pretty close to the real thing. Right?
The reality is, as usual, quite a bit more complex. Let's take a look at a quick-and-dirty diagram of the actual classes/files involved in processing a WebGL call in Chrome's original WebKit backend:
So why so many layers of code? Isn't this a bit excessive?
Well, since WebKit is used by many different browsers, not just Chrome, a certain level of abstraction and separation is required for any interface like this. A lot of code is handled by the WebKit core, but anything platform- or browser-specific is deferred to port-specific implementation code (denoted here by the Chromium logo). This separation is necessary to achieve the "runs on anything" flexibility that WebKit enjoys, but it creates complexity.
Let's run through a quick overview of what each of these steps does:
V8WebGLRenderingContext is automatically generated from the WebGL interface IDL. It marshals data to and from the JavaScript VM (V8) into more user-friendly formats, which are then passed to...
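To make the marshalling step concrete, here's a rough sketch of the kind of normalization the generated bindings perform. The function name is hypothetical, and the real code is auto-generated C++ operating on V8 handles, not page JavaScript:

```javascript
// Hypothetical sketch: a call like gl.uniform4fv accepts either a plain
// JS array or a Float32Array. The binding layer checks the shape of the
// incoming value and normalizes it into one typed representation before
// handing it to the WebGL logic. (marshalFloatArray is a made-up name.)
function marshalFloatArray(value, expectedLength) {
  // Reject anything that isn't array-like, as the bindings would.
  if (value == null || typeof value.length !== 'number') {
    throw new TypeError('expected an array of numbers');
  }
  if (value.length !== expectedLength) {
    throw new RangeError('expected ' + expectedLength + ' elements');
  }
  // Copy into a typed array so later layers only ever see raw floats.
  return value instanceof Float32Array ? value : Float32Array.from(value);
}
```

The point is that by the time a value leaves this layer, the rest of the pipeline never has to reason about arbitrary JavaScript objects again.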
WebGLRenderingContext. This is where most of the WebGL-specific logic happens. The WebGL spec has many additional restrictions above and beyond the OpenGL ES 2.0 spec, and those validations are all done here. Once the data is squeaky clean and WebGL-approved, we hand it off to...
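As an illustration of one such WebGL-only restriction: plain OpenGL ES 2.0 would happily let a drawElements call read past the end of the bound index buffer, while WebGL requires the call to be rejected. A simplified sketch of that check (the real validation lives in C++ inside WebGLRenderingContext; the function and return values here are illustrative):

```javascript
// Hypothetical sketch of WebGL's index-range validation for
// drawElements. WebGL must refuse reads past the bound
// ELEMENT_ARRAY_BUFFER, where desktop GL would just read garbage.
function validateDrawElements(count, type, offset, boundBufferSize) {
  const bytesPerIndex = (type === 'UNSIGNED_SHORT') ? 2 : 1; // else UNSIGNED_BYTE
  const bytesNeeded = count * bytesPerIndex;
  // Offsets must be aligned to the index size...
  if (offset % bytesPerIndex !== 0) return 'INVALID_OPERATION';
  // ...and the read must stay within the buffer.
  if (offset + bytesNeeded > boundBufferSize) return 'INVALID_OPERATION';
  return 'NO_ERROR';
}
```

Checks like this are why WebGL can promise memory safety even on top of drivers that make no such promise themselves.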
GraphicsContext3D. The browser uses OpenGL for more than just WebGL. A lot of work goes into making the browser utilize your GPU as effectively as possible on every page to make scrolling smoother, rendering faster, and interactions snappier. We don't want all of the browser's internal code to be subject to the same restrictions as WebGL, however. This class exposes an API very close to OpenGL ES 2.0 to any portions of the WebKit codebase that need to perform GPU work (with occasional detours through Extensions3D to handle, well, extensions.) But since WebKit must work on multiple platforms, this class is fairly abstract. It has different backends on different platforms, but in Chrome's case it is implemented by...
GraphicsContext3DChromium/GraphicsContext3DPrivate. These two classes form Chrome's implementation of WebKit's GPU interface. On some platforms this might be where the actual GPU calls are made, but in Chrome's case we want to take advantage of the sandboxed GPU process. As such, this class primarily feeds GL commands directly to...
WebGraphicsContext3D. This class facilitates communication with the GPU process via a "Command Buffer" that packs the GL commands into safe little memory packets, which are pushed across process boundaries via shared memory. Actually, that's a lie. This class doesn't do that directly, but do you really want the chart to be more complicated? It's accurate enough for our needs.
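To give a feel for the command-buffer idea, here's a toy sketch of packing GL calls into a flat buffer that could live in shared memory. The command IDs and record format are entirely made up; Chrome's real command buffer is an internal C++ format:

```javascript
// Hypothetical sketch: each GL call becomes a small fixed-format
// record (id, arg count, args) appended to a buffer. The GPU process
// would later walk this buffer and replay the commands.
const CMD_CLEAR = 1;    // made-up command ids
const CMD_VIEWPORT = 2;

function packCommand(buffer, offset, id, args) {
  const view = new DataView(buffer);
  view.setUint32(offset, id, true);              // command id
  view.setUint32(offset + 4, args.length, true); // argument count
  args.forEach((a, i) => view.setFloat32(offset + 8 + i * 4, a, true));
  return offset + 8 + args.length * 4;           // next free offset
}

// Pack two calls the way the renderer side would before signalling
// the GPU process that new commands are ready.
const shared = new ArrayBuffer(64);
let end = packCommand(shared, 0, CMD_VIEWPORT, [0, 0, 640, 480]);
end = packCommand(shared, end, CMD_CLEAR, [0x4000]);
```

Because the buffer is just bytes in shared memory, the renderer never holds a real GL context; it only describes work for the GPU process to do.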
Most of the non-WebKit Chrome code will use this class directly instead of GraphicsContext3D, since there's no need for the abstraction once you're in Chrome-specific code.
Finally, everything ends up in the GPU process, which is quite complex by itself but which I won't be covering in depth here. After unpacking the GL commands from shared memory, it does another round of validation to ensure that it's been fed valid instructions (it's the GPU process's policy not to trust external processes that communicate with it.) Once it's satisfied that everything is safe, it finally makes the actual driver call, which in some cases actually becomes an ANGLE call to redirect through DirectX instead.
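That "don't trust the other process" attitude can be sketched like so. The command table and function are hypothetical stand-ins for the GPU process's C++ decoder:

```javascript
// Hypothetical sketch: even though the renderer already validated the
// call, the GPU process re-checks every incoming command on arrival,
// because a compromised renderer could send it anything.
const KNOWN_COMMANDS = { 1: 1, 2: 4 }; // made-up ids -> expected arg counts

function validateIncoming(id, argCount) {
  // Unknown ids or malformed argument counts are dropped outright
  // rather than forwarded to the driver.
  if (!(id in KNOWN_COMMANDS)) return false;
  return KNOWN_COMMANDS[id] === argCount;
}
```

This double validation is redundant in the honest case, but it means a renderer exploit still can't feed the driver arbitrary commands.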
Whew! Bet you never realized how much your WebGL calls got tossed around before hitting your graphics card! But all of that is how Chrome used to work when it was based on WebKit. Coming around to the original question: How has Blink changed this process?
Let's look at the same data flow in Blink as it stands today:
Still complex, but noticeably more straightforward. Specifically what has happened here is that we've taken the places that WebKit required to be split up for the sake of abstraction and merged them into a single class. The code from GraphicsContext3DChromium and GraphicsContext3DPrivate are now part of GraphicsContext3D. Likewise for the Extensions3DChromium, and as a result a whole layer of abstraction melts away.
Something to take note of: the code that gets executed is almost identical between the Blink and WebKit versions. The only thing that has really changed is that we've merged what used to be port-specific code into the base classes and taken the time to remove dead code paths that Chrome never would have hit anyway. This means fewer files to juggle and fewer abstractions to work around for Chrome developers, but it will probably yield very little in terms of performance (we've eliminated a couple of virtual function calls, but that's about it.)
In the end, changes like this are about speeding up development and will have almost no impact on end users, but that's very much in line with Blink's goals of code simplification and quicker iteration. Eventually we may find optimization opportunities that result from this refactoring, but for now it's all about cleanup.
Other than implementation simplification, though, I would expect the switch to Blink to have very little effect on the evolution of WebGL. Apple wasn't blocking WebGL features or anything like that, and pretty much all of the WebGL implementors are on the same page when it comes to what should and shouldn't go into the API. We're already pretty happy with the direction WebGL is going, so don't expect any drastic changes to be pushed because of Blink.
There is one user-facing, Blink-specific change that's on its way, though: in keeping with Blink's policy regarding CSS and JavaScript prefixes, we won't be prefixing new extensions as we expose them (so no more "WEBKIT_" in front of extension names). Instead, we will hide extensions that have not yet been approved by the community (known as "draft" extensions) behind a flag. This will allow developers to continue experimenting with upcoming extensions without the worry of production sites becoming dependent on them, which creates problems if the spec changes before the extension is approved.
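The effect of that policy, from a page's point of view, can be sketched like this. The extension names and gating function are illustrative (the draft name is made up), and the real gating happens inside Chrome behind a browser flag, not in page JavaScript:

```javascript
// Hypothetical sketch of the draft-extension policy: approved
// extensions are always exposed unprefixed; draft extensions only
// appear when the user has opted in via a flag.
const APPROVED = new Set(['OES_texture_float']);        // real, approved
const DRAFT = new Set(['EXT_some_draft_extension']);    // made-up name

function getExtension(name, draftExtensionsEnabled) {
  if (APPROVED.has(name)) return {};                      // always exposed
  if (DRAFT.has(name) && draftExtensionsEnabled) return {}; // flag-gated
  return null;                                            // hidden otherwise
}
```

A production site can't accidentally ship code that depends on a draft extension, because without the flag gl.getExtension simply returns null.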
So there you have it: Blink has allowed us to simplify WebGL's implementation a bit, and its prefix policy will change how developers try out new extensions, but otherwise it will continue to be the WebGL you know and love.
Hope you enjoyed the anatomy lesson!
[EDIT: Want more? Check out Part 2]