Monday, February 29, 2016

Moving towards WebVR 1.0

Consumer VR is at our doorstep, and it’s sparking the imagination of developers and content creators everywhere. As such, it’s no surprise that interest in WebVR is booming. Publications like the LA Times have used WebVR to explore the landscape of Mars, and a doctor was able to save the life of a little girl by taking advantage of Sketchfab’s VR features. The creativity and passion of the WebVR community have been incredible!


Those of us who have been defining and implementing the API feel a responsibility to make sure that it keeps pace with the current state of VR hardware and software. That’s not always easy given the breakneck speed at which the field has been evolving, and as we look at the API as it exists today there are some significant disconnects from the realities of modern VR.


A quick refresher on how we arrived at the point we’re at now: When WebVR was first conceived by Vlad Vukicevic (April 2014), Oculus had only just announced the DK2, and the only VR headset most people could get was the DK1. The Vive, 6DoF controllers, HoloLens, and GearVR were still behind closed doors at this point. Cardboard had only just been announced when we first started making builds available. The APIs used to interact with the hardware that was available looked very different than they do today. That’s why in my initial blog post about the API I said “Keep in mind that these interfaces absolutely, without question, WILL change”.


We’re taking that sentiment to heart, and in the interest of keeping WebVR relevant and (hopefully) a bit more future-proof we’re proposing some major, backwards-compatibility-breaking changes. You can see the new proposed spec here, but I wanted to cover some of the changes in a bit more detail and go into the rationale behind them.

Proposed changes

Renewed focus on WebGL content for WebVR 1.0

The idea of displaying DOM elements in VR is an attractive one, and on the surface it feels like it should work fairly naturally with existing features like 3D CSS transforms. The devil is in the details, though, and there are a lot of subtle issues that make the jump from a 2D screen to true 3D space difficult. We would still like to see DOM content in VR someday, but in the interest of delivering a stable, useful API as soon as possible we’re putting that functionality on hold and focusing exclusively on presenting WebGL content for WebVR 1.0. This will allow us to rally our efforts around supporting one type of content really well rather than multiple (and completely different) types of content poorly.


If you are more comfortable with declarative syntax instead of raw WebGL or Three.js, there are great libraries like A-Frame available to help you build that kind of content.


Combine VRPositionSensorDevice and VRHMDDevice into one object

We always wanted the API to be flexible so that as hardware gained new capabilities we could easily expose them. The original idea was that each VRDevice variant (currently PositionSensor and HMD) would represent a feature of the hardware, and all the features associated with a piece of hardware could be identified by a shared hardwareUnitId. This required users to iterate over the enumerated devices multiple times to figure out what a single piece of physical hardware actually looked like.


In practice it’s exactly as awkward as that sounds.

This is one of the areas that we’ve most frequently seen confuse newcomers, and even more experienced devs get it wrong sometimes. As such, we’re discarding this loose association approach in favor of getting back a single object for each piece of VR hardware, now referred to as a VRDisplay.
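
To illustrate the difference, here’s a rough sketch. The old interfaces (named HMDVRDevice and PositionSensorVRDevice in the experimental builds) had to be matched up by hand, while the proposal exposes a single VRDisplay per headset. Treat the exact shapes here as provisional:

    // Old API: enumerate everything, then pair devices up via hardwareUnitId.
    navigator.getVRDevices().then(function (devices) {
      var hmd = devices.filter(function (d) {
        return d instanceof HMDVRDevice;
      })[0];
      var sensor = devices.filter(function (d) {
        return d instanceof PositionSensorVRDevice &&
               d.hardwareUnitId === hmd.hardwareUnitId;
      })[0];
      // ...and hope you paired them up correctly.
    });

    // Proposed API: one object per physical device.
    navigator.getVRDisplays().then(function (displays) {
      if (displays.length) {
        var vrDisplay = displays[0]; // pose and presentation live in one place
      }
    });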


Remove reliance on requestFullscreen for presentation

The way content is presented to HMDs is probably the area that has evolved the most since WebVR started. When we first started releasing WebVR-enabled builds the DK1 was only recognized as an external monitor. In early experiments people would set up their displays in mirror mode in order to present their content. Given that environment the idea of latching VR presentation to fullscreen made a lot of sense, and that idea was somewhat validated by Cardboard's emergence. So we ran with it.


That approach has since proven to limit what type of content can be displayed and how developers can present it, and so we’re making some significant alterations to the VR presentation method.


With the new API you will now get a compositor object from the VRDisplay. In order to prevent abuse of the API and unexpected transitions on Cardboard-style platforms, the compositor will only be able to be retrieved during a user gesture (the same limitation exists for the Fullscreen API). On the compositor you’ll be able to set the source of the VR frames you’ll be rendering (currently limited to either an HTMLCanvasElement or OffscreenCanvas) and some options, such as the viewport to be used for each eye.


The compositor will have its own requestAnimationFrame function, which will fire at the refresh rate of the VRDisplay rather than the normal refresh rate of the system. Developers will call requestPresent to begin displaying on the HMD and must call submitFrame to explicitly indicate to the compositor when the canvas contents are ready to be presented.
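
Put together, the flow might look something like the following sketch. The getCompositor accessor name is hypothetical, vrDisplay and drawScene are assumed to already exist, and the exact shape of these calls is still being worked out:

    var compositor = null;

    presentButton.addEventListener('click', function () {
      // Compositor retrieval must happen inside a user gesture.
      compositor = vrDisplay.getCompositor(); // hypothetical accessor name
      compositor.setSource(canvas); // HTMLCanvasElement or OffscreenCanvas
      vrDisplay.requestPresent().then(function () {
        compositor.requestAnimationFrame(onVRFrame);
      });
    });

    function onVRFrame() {
      compositor.requestAnimationFrame(onVRFrame); // fires at the HMD's refresh rate
      drawScene(vrDisplay.getPose()); // render both eyes into the canvas
      compositor.submitFrame(); // tell the compositor the frame is ready
    }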


This approach allows for some important improvements:
  • On desktops, developers can now control how content is mirrored to external displays. (If you want to mirror it at all.) You can control where it is displayed on the page and how large it is, and decide if you want to show both eyes, one eye, or even a completely different view of the scene, depending on your site’s needs and performance budget.
  • Relatedly, it now allows interaction with the page from external displays while the VR experience is being presented, which is great for guided demos.
  • It allows for more efficient rendering of monoscopic scenes, which can be a great performance enhancement for things like photospheres and 2D videos, especially on mobile devices.
  • Because the new presentation method isn’t subject to the same security restrictions as Fullscreen we can allow the browser to stay in VR mode between sites, enabling linking from one page to another without leaving VR.


Replace DOMPoint with TypedArrays

WebVR has until now been using elements of the Geometry Interfaces spec to represent vectors and quaternions, but depending on them is probably not wise moving forward. The spec doesn’t yet have universal support from browsers, and areas of it have proven contentious, slowing adoption. Additionally, there’s overhead associated with creating a number of these objects each frame that we’d prefer to avoid in code that will definitely be on performance-sensitive paths. As such, we’re changing WebVR to represent vectors, quaternions, and matrices with Float32Arrays. These offer a number of performance benefits, but the most compelling is that they allow for easy use of upcoming SIMD APIs for hardware-accelerated vector math.


This may seem like a pain, but in reality the changes required to account for it are likely to be minimal. For example, if you were using WebVR with Three.js, where you used to call Vector3.copy() to get the values from a DOMPoint into a Three.js vector you’ll now call Vector3.fromArray() instead.
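
A minimal before/after, assuming a pose object from the headset and a Three.js Object3D:

    // Before: pose values were DOMPoints.
    // object.position.copy(pose.position);
    // object.quaternion.copy(pose.orientation);

    // After: pose values are Float32Arrays.
    object.position.fromArray(pose.position);
    object.quaternion.fromArray(pose.orientation);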


Enable “Room Scale” experiences

Some VR devices, like the HTC Vive, are designed to allow for VR experiences that let you walk freely around a large area, commonly known as “Room Scale VR”. One important factor in enabling these types of experiences is dealing with a coordinate system that is centered around the user’s physical space. By placing the origin at the center of the user’s room on the floor, you can easily build your virtual space around it (by placing your virtual floors at y=0, for example). This helps the user feel grounded in the virtual world.


Since not all VR devices have the necessary information about their environment to enable this mode, WebVR will report position and orientation centered around the headset’s position when resetPose was last called. (OpenVR refers to this as “Sitting space”.) On systems that support room scale VR, though, the VRDisplay’s stageParameters object will provide the information needed to transform those values into “Standing space”, with an origin at the physical center of the floor of the user’s play space. This includes a transform matrix and the play area dimensions, so that content can be scaled accordingly.
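
As a sketch of how that transform might be applied (using the glMatrix library, with stageParameters property names that follow the proposed spec and may still change):

    var standingPoseMat = mat4.create();

    function getStandingPoseMatrix(vrDisplay, pose) {
      // Build a matrix from the sitting-space pose.
      // (Assumes pose.position and pose.orientation are non-null.)
      mat4.fromRotationTranslation(standingPoseMat, pose.orientation, pose.position);
      if (vrDisplay.stageParameters) {
        // sittingToStandingTransform is a 16-element Float32Array.
        mat4.multiply(standingPoseMat,
                      vrDisplay.stageParameters.sittingToStandingTransform,
                      standingPoseMat);
      }
      return standingPoseMat; // origin is now at the center of the floor
    }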


Clearly report capabilities

A newly added capabilities property on the VRDisplay indicates whether or not the device can report positional and/or orientation data, and whether or not the VRDisplay has an external display (such as a desktop monitor) attached that allows for content mirroring.
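
A quick sketch of how a page might branch on those flags (the property names here follow the current proposal):

    navigator.getVRDisplays().then(function (displays) {
      if (!displays.length) return;
      var capabilities = displays[0].capabilities;
      if (!capabilities.hasPosition) {
        // Orientation-only device (e.g. Cardboard): skip positional effects.
      }
      if (capabilities.hasExternalDisplay) {
        // Desktop-style HMD: mirroring content to the page is an option.
      }
    });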


Renaming and reorganizing for clarity

We’re also taking the opportunity to revise naming of various objects and concepts in the API to better match the underlying APIs and conform to more common verbiage. For example: Previously we referred to the orientation/position data returned by the headset as “PositionState”, which is an obvious misnomer because position was only part of the information returned. We now refer to it as “Pose”, a name which describes the data better and is common to most underlying APIs.


Removing lesser-used and poorly supported features

In order to simplify use of the API and make sure that it covers as wide a range of hardware as possible we’ve removed some minor features of the old API. Most notably, we’ve taken out the ability to set the HMD’s field of view, and in conjunction have removed the max/min/recommended/current field of view parameters in favor of a single fieldOfView property per eye. This makes the API simpler to use and implement, and reflects the reality that APIs like OpenVR don’t allow for user specified FoV values.
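
Reading the now-singular per-eye value might look like this sketch (getEyeParameters and the degree properties follow the proposed spec and may still change):

    var leftEye = vrDisplay.getEyeParameters('left');
    // A single field of view per eye replaces the old
    // min/max/recommended/current set.
    var fov = leftEye.fieldOfView;
    console.log(fov.upDegrees, fov.downDegrees, fov.leftDegrees, fov.rightDegrees);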


Controller support

With both the Oculus Rift and HTC Vive coming out with 6DoF controllers, interest in accessing them on the web is naturally high. While these controllers are tightly coupled with VR devices, we feel strongly that they should be exposed through the existing gamepad API and not as VR devices. As such, controller support is, strictly speaking, not part of WebVR 1.0.


That said, we are still very interested in seeing these devices be supported and will be working with the appropriate people on the gamepad spec to try and extend it to provide the functionality needed. We simply won’t be blocking WebVR on getting that support in place.
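
In the meantime the standard gamepad polling pattern applies as-is; anything VR-specific (like controller pose) would come from the future extensions mentioned above:

    function pollControllers() {
      var gamepads = navigator.getGamepads();
      for (var i = 0; i < gamepads.length; ++i) {
        var pad = gamepads[i];
        if (pad) {
          // Buttons and axes work today: pad.buttons[0].pressed, pad.axes[0], etc.
          // Pose data will require a spec extension.
        }
      }
      window.requestAnimationFrame(pollControllers);
    }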


Trying it out today

I’ve uploaded new Chrome builds (Windows only) that implement the majority of the new API so developers can begin working with it. Support isn’t perfect yet, but I already feel like the new API allows for much better handling of the underlying devices. And, as always, the code for these builds is available in a Chromium experimental branch for those that are interested. The API changes will be making their way into Top-of-Tree Chromium soon and will become available on Android Canary and Dev builds shortly after that, but the Oculus and OpenVR backends are still a little ways out from making their way into the Chrome trunk.

(A note on compatibility: At this time I am only going to be supporting Windows and Android through the experimental binaries. I know that this is disappointing to many developers, but if you've been following VR development lately you know this is more of an ecosystem issue than a choice on my part.)

We’ve also been working to ensure that some of the more popular WebVR libraries are updated to work with the new spec. In some cases the only change developers may need to make is to download and deploy the updated version of the library. Three.js and a branch of webvr-polyfill have been updated to support the new API, and support from other libraries is incoming. Also worth noting: the updated webvr-polyfill can wrap the new API with the old interface when provided the ‘ENABLE_DEPRECATED_API’ config flag, which may be helpful for backwards compatibility while you update your code to the new API.
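
For instance, opting into the deprecated interface might look like the following. (The WebVRConfig global here is an assumption about the polyfill’s configuration mechanism; check the webvr-polyfill README for the exact details.)

    // Must run before the webvr-polyfill script is loaded.
    window.WebVRConfig = { ENABLE_DEPRECATED_API: true }; // assumed config global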


To help demonstrate the version 1 API and provide a reference point for developers we’ve also been working on a new set of WebVR samples. (See the live versions here.) These samples are all single page applications that demonstrate a specific aspect of the API. They use raw WebGL instead of a framework or engine, and keep support libraries to a minimum in order to focus on the API calls in question instead of boilerplate. The samples are a work in progress, so expect to see the list of them grow over time. (And if you have suggestions for a topic you’d like to see covered let us know via the issue tracker!)

Seeking community feedback

While we feel confident that these changes will allow WebVR to keep up with the ever-changing VR landscape, we also can’t anticipate every possible use case. We would love to hear from you, the WebVR community, about how these changes will affect your content, and especially if they make the content you hope to produce unreasonably difficult or impossible. We know that these are breaking changes and that existing content will require updates, and we’re very sorry for the inconvenience! We do feel strongly, though, that it’s worth breaking some content that was built while the API was still labeled “experimental” for the sake of future-proofing.

We’re excited to get WebVR to a point that the API is stable and widely accessible, and this is the first step along that path. Thank you for your patience and support, and keep building amazing VR content!