HTC Vive vs. Oculus Crescent Bay: My 10 VR Takeaways

By Norman Chan

Lessons learned from hands-on time with Oculus and SteamVR’s latest public prototypes.

Palmer did it. Virtual reality isn’t vaporware. It’s going to change the way we think about home gaming and media, and consumer-ready products are coming out before the end of the year. That’s super exciting, but also a little bit scary. Holiday 2015 is still nine months away, and there’s a lot we still don’t know when it comes to VR. There’s a language that we, even as enthusiasts, have to learn when talking about virtual reality products and evaluating them.

At Tested, Will and I have been privileged to be among the first people to use and test hardware in this latest iteration of VR, all the way back to the first Oculus dev kit shown at CES 2013. Since then, we’ve used every public prototype from Oculus and other manufacturers, leading up to the HTC Vive at last week’s GDC. And with every demo, we’re not just thinking about how hardware and software have iterated or how new technologies will feed into a consumer product--we’re learning a new lexicon for virtual reality that we never had to seriously consider before. Display persistence, chromatic fringing, Fresnel diffusion, foveated rendering, positional tracking, etc. Walking out of the HTC Vive demo of Valve’s SteamVR system, the important thing I learned wasn’t whether it was better than Oculus’ Crescent Bay prototype, but how the systems differ, and what those different approaches tell us about the potential of virtual reality experiences.

Coming out of GDC, here are my big takeaways about the state and near future of virtual reality.

Don't Dismiss Vertical FOV

When we talk about the specs of VR systems, one of the simplest to comprehend is field of view. And generally speaking, the assumption is that a wider field of view correlates with a more immersive experience. Enthusiasts claim that the sweet spot for FOV is around 120 degrees: wide enough that you don’t feel like you’re wearing ski goggles or peering through binoculars, but not so wide that GPU cycles are wasted rendering graphics at the edge of the frame. We assumed that the HTC Vive would use its two 1200x1080 panels for a wider FOV than Crescent Bay. As it turns out, the horizontal fields of view of the HTC Vive and Crescent Bay are quite similar--perhaps the only noticeable difference is in how the corners are rounded (Vive’s are rounder). Valve’s design actually arranges the two screens in portrait orientation, extending the vertical FOV. Simply put, the image is taller.

That’s something we never really considered before, and it made a noticeable difference in the demo. And if you think about it, while your eyes do dart back and forth across your field of view, you also glance up and down pretty often. It seems like Valve made this decision early in their HMD design--you can see that even in prototypes from 2013, the dual-display systems had screens oriented vertically. I want to believe that their own testing shows there’s a good reason for this. The use of dual screens is also something we aren’t seeing from Oculus and Sony--there could be hardware limitations on Sony’s side, since Project Morpheus takes a single video feed from the PlayStation 4, and Oculus’ optics are somewhat tied to Samsung’s product lines and capabilities. But the use of two screens may have advantages, such as better support for dual-GPU rendering and the use of optics that don’t have to account for the space between your eyes.
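
To see why panel orientation matters, here’s a toy calculation (my own placeholder numbers, not published specs for either headset). With a simple magnifier-style lens, FOV along each axis scales with the panel’s extent along that axis, so rotating a panel to portrait trades horizontal FOV for vertical:

```python
import math

def fov_degrees(panel_extent_mm, focal_length_mm):
    """Approximate FOV for a simple magnifier lens: the panel sits
    roughly one focal length behind the lens, so
    FOV ~= 2 * atan(extent / (2 * f)). Real HMD optics are far more
    complex, but the proportions hold."""
    return math.degrees(2 * math.atan(panel_extent_mm / (2 * focal_length_mm)))

# Placeholder numbers: a 65mm x 60mm active area behind a lens with
# a 40mm effective focal length.
width, height, f = 65.0, 60.0, 40.0

# Landscape puts the long edge horizontal; portrait puts it vertical.
print(f"landscape: {fov_degrees(width, f):.0f}(h) x {fov_degrees(height, f):.0f}(v)")
print(f"portrait:  {fov_degrees(height, f):.0f}(h) x {fov_degrees(width, f):.0f}(v)")
```

Same panel, same lens--the portrait arrangement just moves those extra degrees from the horizontal axis to the vertical one.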

If Presence is Binary, We Probably Haven’t Experienced It Yet

Many people talked about experiencing the sensation of Presence for the first time with the HTC Vive demo. Will and I definitely did. But if you think about the impressions of attendees at Oculus Connect last September, and our own Crescent Bay demo at CES earlier this year, we also claimed to have experienced some form of presence in those demos. My point is that presence, however you want to define it, may not be as simple as a binary trigger--it’s not that you either experience it or don’t. I believe that the sensation of presence is very much a sliding scale of immersion--a moving goalpost that can always get better until virtual reality is indistinguishable from reality. The HTC Vive demo offered the best sensation of presence yet, for sure, but my take is that it’s more interesting to analyze why that demo was special than to say we’ve finally reached some kind of VR milestone.

Looking back to September and the Oculus Connect event, Oculus’ claim that Crescent Bay was their first prototype to achieve presence gives us some clues as to their own criteria and benchmarks. At his keynote, Oculus CEO Brendan Iribe specifically listed technical requirements: sub-millimeter tracking accuracy, sub-20ms latency, a 90+Hz refresh rate, at least 1Kx1K per-eye resolution, and a highly calibrated, wide-FOV eyebox. I thought those factors were good enough to sustain a “place illusion” in their demos. Wearing Crescent Bay, my brain was tricked into believing that the virtual place around me had volume--had space.
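
To put those numbers in context, here’s a back-of-envelope motion-to-photon budget at 90Hz. The sub-20ms target comes from Iribe’s keynote, but the stage breakdown below is my own rough assumption, not Oculus’ published pipeline:

```python
# Back-of-envelope motion-to-photon budget at 90Hz. The 20ms target
# is from Iribe's keynote; the per-stage numbers are assumptions.
refresh_hz = 90
frame_ms = 1000 / refresh_hz  # ~11.1ms per refresh

budget_ms = {
    "sensor sampling + fusion": 1.0,           # assumed
    "game + render (CPU/GPU)":  frame_ms,      # one frame of work
    "scanout to photons":       frame_ms / 2,  # assumed mid-screen average
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:>26}: {ms:4.1f} ms")
print(f"{'total':>26}: {total:4.1f} ms (target: under 20 ms)")
```

At 90Hz the total lands around 17.7ms, just under the wire; run the same numbers at 60Hz and the budget blows past 20ms, which is one reason the refresh-rate requirement exists at all.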

But the place illusion, as explained by VR engineer and researcher Sebastien Kuntz, is only half of the equation for presence. The other required component is the "Plausibility Illusion"--that your actions directly have an impact in that virtual space. Being able to look around and passively experience a place isn’t enough--true presence requires an element of agency. If the place illusion is the benchmark for immersion, then the plausibility illusion is the benchmark for interaction.

Presence = convincing immersion + interaction.

The HTC Vive demo was the first VR system to give us both, which is a large part of why it was so much more compelling than Crescent Bay.

Extra Space to Walk around is a Presence Multiplier

An interesting note: almost all of the VR demos we saw at GDC had a standing component. Back at Oculus Connect, I asked Palmer why Crescent Bay was a standing demo, even though Oculus has maintained that the Rift will be best when sitting. In subsequent discussions, we were told that they wanted to show off the prototype with a standing demo because standing is a presence multiplier. I think that’s a really important concept, and one we had taken for granted. There are many technical factors that VR headset makers can iterate on to inch immersion ahead. But there are a few switches that by themselves have a significant impact. Oculus calls those presence multipliers, and it totally makes sense that standing is one of them.

(This is also why, for example, the Virtuix Omni has been viable at all. Despite a finicky motion tracking system and clunky setup, it gets points in your brain just for allowing you to use VR while standing up.)

But my takeaway from GDC isn’t just that standing is a presence multiplier--we knew that from last September. It’s that any extra space to walk around in VR dramatically amplifies that multiplier. The SteamVR demo, with room to walk around a 15ft x 15ft space, was much more compelling than the near-stationary Crescent Bay demos, which limit your movement to a 3ft x 3ft pad--225 square feet of floor space versus nine. Going back to the Crescent Bay “Thief in the Shadows” demo the day after my HTC Vive demo, I immediately noticed how restricted and stationary the experience was. I felt like I was standing in a pit, and on uneven ground.

Extra space in VR for walking around is a presence multiplier, but there is obviously a limit to how much space is practical for software and games that have to accommodate a lowest-common-denominator experience. Not everyone is going to be able to devote a 15ft x 15ft room to their personal holodeck. And even for those people who have the space, VR designers need to figure out solutions for conveying long-distance travel in games. Ideas like perspective-shifting your environment or teleporting players are being tested, and we’re eager to try those solutions out.

Room/Environment Mapping will be Important

This idea actually came to us during our Sixense STEM demo, and we were delighted to see it actually implemented in the HTC Vive demo later that day. In the STEM lightsaber demo, the UI flashed red on screen when we walked near a predetermined border around the demo station. Sixense’s CEO Amir Rubin told us this boundary system--kind of like a forcefield--actually came about from users testing early Kickstarter versions of STEM. Sixense lets users use the motion controller to “draw” a boundary around their play space before they enter the game.

That kind of user-directed room mapping will be important when positional tracking takes off in VR experiences. There are many ways to do this--user-defined dimensions, Kinect-style room scanning, or positionally tracked boundary markers. I believe that the SteamVR boundary system--which looks very much like a holodeck wall grid, albeit blue--is defined by the user typing in the room’s dimensions. I would prefer a system where I could use the SteamVR controllers to manually map out my play area, to take into account irregular obstacles like desks, beds, and other furniture. It’s good to hear from developers that the boundary system is part of the SDK--it’s software that will be made available to anyone making a Lighthouse-based game.
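
For the curious, here’s a minimal sketch of how such a user-drawn boundary could work (my own reconstruction, not Sixense’s or Valve’s actual code): the player traces corner points with a tracked controller, and at runtime the system warns whenever the headset drifts within half a meter of any wall segment.

```python
import math

def dist_point_to_segment(p, a, b):
    """Distance from point p to line segment ab, on the 2D floor plane."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def boundary_warning(head_xy, corners, threshold_m=0.5):
    """True if the headset is within threshold of the traced boundary."""
    walls = zip(corners, corners[1:] + corners[:1])  # close the loop
    return any(dist_point_to_segment(head_xy, a, b) < threshold_m for a, b in walls)

# Corners traced by walking a controller around an irregular room, in
# meters--a desk in one corner is simply left outside the play space.
room = [(0, 0), (4.5, 0), (4.5, 3.0), (1.5, 3.0), (1.5, 4.5), (0, 4.5)]
print(boundary_warning((4.2, 1.0), room))  # True: 0.3m from the right wall
print(boundary_warning((2.0, 2.0), room))  # False: safely inside
```

A polygon like this handles exactly the irregular layouts that a type-in-the-dimensions system can’t.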

Every Additional Point of Positional Tracking is a Presence Multiplier

This is the biggie. Precise and low-latency positional tracking may be the key differentiator between VR systems, and SteamVR’s Lighthouse is looking really strong. If standing and walking around in VR is a presence multiplier for the place illusion, I believe that every additional point of positional tracking outside of head movement is also a presence multiplier for the plausibility illusion.

Sixense’s STEM showed us this with their lightsaber VR demo, which was based on a modified Oculus Development Kit 2 (using STEM for head-tracking). Head-tracking alone isn’t enough for agency--you simply can’t do much to affect the virtual world just by glancing and bobbing around. Crytek’s “Dinosaur Island” demo, being shown at GDC, fell victim to this. In that demo, you could nudge virtual dinosaur egg shells around with your head, but that felt awkward.

The feeling of presence taps into your mind’s innate understanding of your own body--how your head, arms, hands, hips, legs, and feet move in relation to one another. It makes sense that the more points of your body are tracked and positionally represented in the VR space, the more convincing the feeling of presence in that space becomes. Even if that tracking isn’t 100% perfect, your brain does a lot of the work, as we saw in last year’s ControlVR demo at E3. We were blown away by that demo then, but now we know why. Positionally tracking your hands with STEM controllers and the SteamVR controllers taps into the same trick, and we can expect that additional tracking of hips and feet would further enhance the sensation.

Positionally-Tracked Controller is Essential

A positionally tracked controller doesn’t just reinforce the place illusion; it gives VR users the much-needed feeling of agency that we haven’t experienced using gamepads (the joystick-and-throttle setup in Elite: Dangerous comes close). I mentioned before that the head-butting dinosaur demo using Crescent Bay was clumsy, but it’s indicative of a larger problem with the existing demos we’ve seen on the Oculus side. In Gear VR, where control is dictated either by a gamepad accessory or by head movement, there are too many “head laser” games that become a strain after a while.

Head-tracking as a primary control mechanism for gaming isn’t just ergonomically unsustainable; it squanders head-tracking’s real potential in VR. I get the use of head-tracking to activate gameplay cues or push along an interactive VR story, but primary gameplay control should be the responsibility of dedicated controllers, whether that’s your hands or something your hand is holding. That’s how our bodies work in the real world--evolution has made us tool users, not head-butters. And when your head is allowed to naturally and passively be part of the observational experience, the content of a virtual space gets to shine.

The SteamVR Controller Solves Many Problems

One of the brilliant things about the SteamVR controller is that it takes advantage of the 1:1 positional touchpad that was already in development for the Steam Controller. That 1:1 control essentially acts as positional tracking for your thumbs. For example, when scrolling around a circular color wheel in the balloon demo, the marker on the virtual wheel was exactly where my thumb was on the controller pad--once again, a presence multiplier. That’s a clever design move--abstracting finger positional tracking using a skeletal model without placing Lighthouse sensors on your thumb. And this will be key in the future of VR: the combination of different types of positional tracking technologies to track your entire body and the objects around you.
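
Here’s roughly how that 1:1 mapping works in principle (my reconstruction, not Valve’s code): the pad reports an absolute thumb position, which converts directly to polar coordinates on the wheel--there’s no relative “scrolling” involved at all.

```python
import math

def thumb_to_wheel(x, y):
    """Map an absolute touchpad position, with x and y in [-1, 1],
    to a hue angle and saturation radius on a virtual color wheel."""
    hue_deg = math.degrees(math.atan2(y, x)) % 360  # angle around the wheel
    radius = min(1.0, math.hypot(x, y))             # clamped to the rim
    return hue_deg, radius

print(thumb_to_wheel(1.0, 0.0))   # (0.0, 1.0): thumb at 3 o'clock, on the rim
print(thumb_to_wheel(0.0, -0.7))  # (270.0, 0.7): thumb low, mid-saturation
```

Because the marker is always exactly under your thumb, your proprioception does the rest--the same trick that makes positional head-tracking convincing.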

The controller’s haptic feedback is also a big deal. It’s not just vibration--the linear actuator motors developed for the Steam Controller can be precise enough to simulate the friction of pulling open a cabinet drawer or the relative tension of a bowstring. This kind of feedback will be very useful when VR software integrates physics.
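
As a toy illustration of the idea (the send_pulse callback here is hypothetical, not Valve’s actual SDK), friction can be faked by firing short actuator pulses at “detents” spaced along an object’s travel--drag faster and the ticks pack closer together in time, which your hand reads as resistance:

```python
# Toy friction-style haptics. The send_pulse callback is hypothetical,
# standing in for whatever the real controller SDK exposes.
DETENT_SPACING_M = 0.01  # one "click" per centimeter of drawer travel

def drawer_haptics(prev_pos_m, new_pos_m, send_pulse):
    """Fire one short pulse per detent crossed between two hand positions."""
    lo, hi = sorted((prev_pos_m, new_pos_m))
    first = int(lo / DETENT_SPACING_M) + 1
    last = int(hi / DETENT_SPACING_M)
    for _ in range(first, last + 1):
        send_pulse(duration_us=500)  # short, crisp tick

# Dragging 3.5cm in one frame crosses three detents -> three ticks.
drawer_haptics(0.000, 0.035, lambda duration_us: print(f"tick ({duration_us}us)"))
```

Vary the pulse length or the detent spacing with load and you get something closer to the sustained tension of a bowstring than the clicks of a drawer.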

We were really surprised that Oculus didn’t have a controller solution to show at GDC, since our assumption was that they would need to get one into developers’ hands if it was going to be part of a consumer release. My fear is that a controller won’t be an essential part of the Oculus ecosystem at launch. That would be a big disadvantage--with Move, even Sony is closer to SteamVR on this front than Crescent Bay is.

Lighthouse May Pay Off in Long Run

The most underrated part of SteamVR may be the Lighthouse positional tracking system that Valve is allowing third-party hardware companies to adopt. It’s an expensive hardware platform--the optical sensors aren’t cheap, and you need at least five on every peripheral you want to track positionally. That’s definitely more expensive than the infrared LEDs and HD camera used in Oculus’ current public prototype. But the bet may really pay off in the long run, especially when you think of it like a GPS-style system. Tracking isn’t limited to the conical space observed by a camera--it can scale with the addition of more Lighthouse laser modules. Valve has even said that we’ve only seen one of Lighthouse’s modes in the HTC Vive demo, and that it can eventually track non-rigid objects.
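
Based on public descriptions of the swept-laser design (the timings below are invented for illustration), the core math is strikingly simple: the base station flashes a sync pulse, a laser line then sweeps the room at a fixed rotation rate, and each photodiode’s hit time converts directly into an angle from the base station:

```python
SWEEP_HZ = 60  # assumed rotor speed: one revolution every ~16.7ms

def hit_time_to_angle_deg(sync_time_s, hit_time_s):
    """Convert a photodiode's hit time, measured from the sync flash,
    into the sweep angle at which the laser crossed that sensor."""
    period_s = 1.0 / SWEEP_HZ
    fraction = ((hit_time_s - sync_time_s) % period_s) / period_s
    return fraction * 360.0

# Two sensors on a rigid headset, hit ~46 microseconds apart:
sync = 0.0
for name, t_hit in [("left sensor", 0.004120), ("right sensor", 0.004166)]:
    print(f"{name}: {hit_time_to_angle_deg(sync, t_hit):.2f} deg")
```

With angles to several known points on a rigid body (from two sweep axes per base station), a standard pose solver can recover position and orientation--no camera image, and no cone-shaped tracking volume to stay inside.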

A Glimpse of Content Portals

Another interesting note: the SteamVR demo gave us a first glimpse of how Valve might present a VR content library to the user. The interface was a white room that you could walk around in, as opposed to the flat wall of content that is Oculus Home. Even in what are essentially menus, Valve is volumizing the interface, placing it in virtual rooms to emphasize the place illusion. Even though you don’t have a virtual representation of a body or feet, you never feel like a disembodied head floating in nothingness.

Gameplay is the Big Sell

Finally, the HTC Vive demo shone because it actually offered interactive content that gave us a sense of what video gaming will be like in VR. Valve being a game developer themselves makes this one of their strong suits--the interactive experiences started off as passive ones to acclimate you to the system, then included short demos from partner devs given early access to SteamVR (like Owlchemy Labs), and culminated in a demo set in the Portal game universe. The familiarity of that game world literally put into perspective the potential of VR for gaming--seeing Portal’s Atlas and GLaDOS characters at “life-size” shows that first-person in VR is not the same as playing an FPS on a computer monitor. You could stand and look these characters in the eye. It’s going to change how developers think about character and level design for VR. And it’s all new ground.

I left my virtual reality demos at GDC with more questions than ever--which is a great thing. There are the technical questions for new technologies: how will Lighthouse handle reflective surfaces? How would it work with multiple people being tracked in one room? Exciting possibilities start to open up. There are the questions about the games themselves: what kind of experiences will the first VR games be? How will physical room spaces be taken into account? And there are the obvious ecosystem questions: how will Oculus’ controller solution be similar or different? How will the platforms’ respective walled gardens of VR content differ?

And there’s not a whole lot of time to get answers to those questions. Developers are getting access to the SteamVR dev kit in the next few months. E3 is just a breath away. Holiday 2015 looms. My body is ready.