I've had a few days now to digest all the information that came out of this past weekend's Oculus Connect conference. It may have only been a two-day developer conference, but the keynotes alone had enough information to expand the imaginations (and lexicon) of virtual reality enthusiasts. There was of course the big Crescent Bay prototype announcement and demo, which, unfortunately, Oculus said it has no plans to release or show anywhere else. It was also my first time trying the Samsung Gear VR and Oculus' current VR UI solution in Oculus Home and the Cinema application. My mind's been buzzing since I got back from LA, and I wanted to distill some of my personal takeaways from the experience.
Presence is NOT the same as reality
More so than at any past Oculus event or meeting I had attended, the Oculus team emphasized the idea of presence--a significant milestone in virtual reality technology. It's the threshold past which your brain's subconscious computing starts to take over and makes you believe that you're in a separate space within a VR headset. Presence was emphasized because the team thinks they've achieved it for most people in the Crescent Bay prototype. The 10-minute demo I had with Crescent Bay was leaps and bounds better than the DK2 experience, but I'm going to hold off on giving them the sustained presence checkbox until I can get more time with it. More importantly, we now know Oculus' definition of presence, and the specific technical requirements they're targeting for a consumer release (sub-millimeter tracking accuracy, sub-20ms latency, 90+Hz refresh, at least 1Kx1K per eye resolution, highly calibrated and wide FOV eyebox).
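Those latency and refresh targets are tightly coupled. To illustrate (with illustrative numbers of my own, not Oculus' actual pipeline figures), here's a rough sketch of why a 90Hz panel makes a sub-20ms motion-to-photon budget plausible:

```python
# Rough motion-to-photon latency budget for a 90 Hz headset.
# All component costs below are illustrative assumptions, not
# measurements of Oculus' actual pipeline.
refresh_hz = 90
frame_time_ms = 1000.0 / refresh_hz  # ~11.1 ms per frame

sensor_read_ms = 1.0            # assumed IMU sampling/fusion cost
render_ms = frame_time_ms       # worst case: one full frame of rendering
scanout_ms = frame_time_ms / 2  # average wait for the panel to light up

total_ms = sensor_read_ms + render_ms + scanout_ms
print(f"Frame time: {frame_time_ms:.1f} ms")
print(f"Estimated motion-to-photon latency: {total_ms:.1f} ms")
```

Even with a full frame of render time budgeted, the estimate lands under the 20ms target--at 60Hz, the same arithmetic would blow past it.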
The reason I'm a little hesitant to say that I achieved full presence in Crescent Bay is that I really have no appropriate point of comparison for that sensation. The feeling of presence in a virtual space should not be confused with the feeling of reality. I think a lot of people will expect that once they put on something like Crescent Bay, what they see inside the headset will feel exactly like the real world. That's not the case at all. It still looks very much like rendered game graphics, with aliased edges and a surreal feeling of disembodiment. To me, presence is about the feeling of space inside of the headset--a sense that the virtual objects and environments you're looking at have volume and a distance from your eyes that's not just two inches away on a screen. Stereoscopy and proper mapping of your head movements are a huge part of that. Presence in these VR demos never takes away the awareness of the virtual nature of that space, but you do feel more a part of it.
Standing in VR opens up possibilities
The biggest question for me coming out of Oculus Connect was whether the consumer version of the Rift would be a sit-down-only experience. I know that Palmer told everyone in interviews that the Rift is meant to be used sitting down, but I agree with commenters that it may just be them working out a legally and ergonomically acceptable solution for a stand-up design. At least that's fun to think about. Regardless, the Crescent Bay demo confirmed that standing up in VR is technically possible with what Oculus has made so far, and that walking around isn't necessary for a stand-up VR experience (i.e. we don't need VR treadmills). The square mat we were allowed to walk around on in the demo was sufficient to show how effective positional tracking could be in a stand-up experience. Even the ability to shift your full body and weight around was extremely meaningful--being able to physically crouch and duck in the virtual space felt liberating in a way that I think will have a profound impact on VR game design. Spinning around in a full 360 degrees was less important, or at least emphasized less in these demos.
Of course, this setup would require more hardware, including a way to mount the positional tracking camera above the standing user, and a cable management system to keep the headset cable out of the way.
Disembodiment vs. First-Person
The Crescent Bay demos were split between two types of experiences: first-person scenes where you were placed in a scene scaled 1:1 to your body, and abstract scenes where you were not supposed to get a sense of having a physical body. In the former, the feeling of presence was mitigated whenever I looked down at my feet--there weren't any. That makes sense, because your exact limb movements aren't being tracked by the camera. So it was more the feeling of walking around as the invisible man or a ghost--either way, it was a little disconcerting. The one Alice-in-Wonderland themed demo where you could look into a mirror and see a floating mask was maybe the best of the first-person scenes. The scenes that didn't try to simulate a physical place, where you were just a disembodied avatar looking at a 3D model, were more engaging, in my opinion. In particular, I loved one scene where I could walk around a floating miniature of a city, examining all of its animated details up close and from any angle. I feel like game developers should veer toward those types of games--like Lucky's Tale and Epic Games' Strategy VR demo--instead of trying to immediately tap into first-person games.
What's up with Valve's VR room
In describing the benchmark for VR presence in his keynote, Oculus CEO Brendan Iribe mentioned the "Valve room" demo that convinced him that consumer VR could work. That's the infamous room that Valve Software set up to test its virtual reality hardware, back when Oculus' Chief Scientist Michael Abrash was still there. Palmer Luckey told me that he thinks the Crescent Bay prototype in many ways surpasses the technology of the Valve room (which has obvious limitations that prohibit it from being a consumer product). I think there's an assumption that once Abrash left Valve to join Oculus, Valve's VR efforts were halted to let Oculus lead the way. Several conversations I had with attendees at Oculus Connect led me to believe that that's not the case; that Valve is still researching VR and improving its room hardware (which may not even use a room, eventually). And I don't think that it's R&D for the sake of R&D--that just doesn't feel like Valve's MO, especially after its cancellation of Jeri Ellsworth's CastAR project when she was still there. If Valve is still working on VR--and had been ahead of Oculus before Crescent Bay--it could be the dark horse that gives Oculus some competition in the CV1 era. Kind of amusing to think of Valve as an underdog, too.
Gear VR has a future
I think a lot of VR fans dismissed Gear VR as a gimmick for Samsung to tap into the VR buzz and upsell its Note 4 phones. At best, maybe Oculus lent its name to the product in trade for access to Samsung's display development and production chain. After all, the restrictions of mobile phone hardware made adapting it for VR a technical challenge--which is why John Carmack has spent so much of his time at Oculus working solely on this project. But my demo time with Gear VR left me optimistic. Even though it doesn't deliver presence, the responsiveness of the head tracking (minus positional tracking) and the fidelity of the virtual spaces are impressive for something computed on a battery-powered mobile device. This is definitely not a poor experience, and the software Oculus developed for it lays the foundation for the desktop interface. If I were a Galaxy Note 4 owner, I think Gear VR would be a compelling accessory, even at the rumored price of $200. Just the sheer amount of finished content you'll be able to run on it makes it more viable than the DK2 if you need your VR fix this year. I would still recommend that enthusiasts be patient.
The questions raised by and lessons learned from Gear VR will be meaningful for the desktop version of Oculus, as well. Head-look as a control mechanism is serviceable as a UX scheme for now, but I think Oculus is going to have to figure out a scheme that won't risk neck strain. My big takeaway from the Gear VR demo is that VR user interfaces have a long way to go before the code is cracked.
Virtual Cinema will be a killer app
As I talked about in my video impressions, one of Gear VR's biggest shortcomings is its lack of positional tracking. But not all VR applications need positional tracking to be compelling. The best app I used at Oculus Connect was actually the Virtual Cinema in Gear VR. No head translation is needed if you're sitting back, reclined in a movie theater seat (read: any comfy office chair), and I could immediately see the benefits of adding head tracking to an HMD media player. The simple act of adjusting the theater screen to compensate for your subtle head movements made Virtual Cinema infinitely more usable than the HMDs of old (like Sony's line of headsets). I could easily imagine Virtual Cinema becoming a networked social experience (good thing the Note 4 is a phone, right?), and the way the cinema is rendered allows video content to be displayed using as much of the 2560x1440 panel as possible. No graphics upscaling on the actual content, just on the environment render.
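The reason head tracking makes the virtual screen feel stable is simple: the screen is fixed in the world, so its position in view space is just its world position counter-rotated by your current head orientation. Here's a toy yaw-only sketch of that idea (a real renderer uses a full orientation quaternion, and none of this reflects Oculus' actual code):

```python
import math

# Toy model of a head-tracked virtual screen: the screen lives at a
# fixed world position, and each frame we transform it into view space
# by counter-rotating by the head's current yaw. Yaw-only for clarity.
def world_to_view_yaw(point_xz, head_yaw_rad):
    x, z = point_xz
    c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    return (c * x + s * z, -s * x + c * z)

# Virtual theater screen 2 m straight ahead in world space.
screen_world = (0.0, 2.0)

# Head at rest: the screen is dead ahead in view space.
print(world_to_view_yaw(screen_world, 0.0))  # (0.0, 2.0)

# Turn your head 10 degrees to the right: the screen shifts left in view
# space, so it appears anchored in the theater rather than glued to your face.
vx, vz = world_to_view_yaw(screen_world, math.radians(10))
print(f"({vx:.2f}, {vz:.2f})")  # (-0.35, 1.97)
```

The HMDs of old skipped this transform entirely--the screen was effectively glued to your face--which is exactly why they felt so unnatural.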
Panoramic video needs positional tracking
On the flip side, I think that panoramic video experiences will need some way to support positional tracking, either in the way the video is actually recorded, or in the processing of that video. It won't be an easy technical hurdle to overcome, but the good thing is that it can be offset with smart directing and the use of cues to push viewers toward a curated 360-degree video experience. This is a new filmmaking medium, and I expect that it'll be years before directors and cinematographers figure out virtual reality filmmaking, if ever. But that's a very exciting prospect.
Spatial audio isn't easy
Everyone is thrilled that Oculus is taking 3D audio seriously, but the demos shown with the Crescent Bay prototype aren't enough to convince me that they've cracked the code for positional audio. Oculus has licensed software from the University of Maryland--algorithms that can process any piece of audio to make it sound like it's coming from wherever a game developer places it, or wherever the game computes it to be in relation to your head. The demo of this technology in the lobby of Oculus Connect was promising, but I really need to hear it in a game setting where I have more freedom to move around a scene than within the confines of a scripted experience.
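To give a sense of what these algorithms are modeling: real spatializers convolve audio with head-related transfer functions (HRTFs), but the two simplest cues they reproduce--interaural time and level difference--can be sketched directly. The toy model below uses Woodworth's classic ITD approximation; the constants and the 6 dB level cap are rough assumptions of mine, not anything from the licensed software:

```python
import math

# Toy interaural cue model for positional audio. Real spatializers (like
# the HRTF-based algorithms Oculus licensed) do far more; this computes
# only the two simplest cues for a source at azimuth `azimuth_deg`
# (0 = straight ahead, positive = to the listener's right).
HEAD_RADIUS_M = 0.0875   # assumed average head radius
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def interaural_cues(azimuth_deg):
    theta = math.radians(azimuth_deg)
    # Woodworth's approximation: extra path length around the head
    # determines the interaural time difference (ITD).
    itd_s = (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))
    # Crude interaural level difference (ILD): louder in the nearer ear,
    # with an assumed 6 dB maximum at 90 degrees.
    ild_db = 6.0 * math.sin(theta)
    return itd_s * 1e6, ild_db  # ITD in microseconds

# A source 90 degrees to the right arrives ~650 us earlier (and louder)
# at the right ear--cues your brain uses to localize it instantly.
itd_us, ild_db = interaural_cues(90)
print(f"ITD: {itd_us:.0f} us, ILD: {ild_db:.1f} dB")
```

The hard part--and the reason I want to hear it in a free-roaming game--is recomputing these cues (and the full HRTF filtering) convincingly every frame as your head and the source both move.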
Oculus only has so much say over display tech
If the assumption among VR fans was that Oculus partnered with Samsung to get better access to panel technology and the engineering process, John Carmack's keynote at Oculus Connect dispelled that theory. It became very clear that Carmack--and by extension, Oculus--has pretty limited say as to how Samsung engineers its panels, display controllers, and low-level software. Carmack talked at length about the battles he had to fight (and the battles he's looking to fight) to get Samsung engineers to sway his way. I don't think that Oculus has partnered with Samsung to make a panel completely from scratch that's optimized for CV1, but I do think that the partnership has given Oculus and Carmack unique insight into the panel hardware so they can optimize all the peripheral components (optics, rendering) to reach their targets. It's cheaper and more effective for Oculus to engineer a custom lens that mitigates the undesirable effects of Samsung's OLED panels (pixel fill, pentile sub-pixel arrangement) while taking advantage of their assets (flickering for low persistence, overclocking for high refresh rates, etc.).
John Carmack is a goddamn genius
If anything, Oculus Connect left attendees (and viewers at home) feeling much more optimistic about the future of virtual reality. These guys don't just get it (Abrash is a brilliant pragmatist when it comes to VR tech); they can make the mental leaps necessary to hack their way past current technical limitations. John Carmack's talk was full of explanations of the tricks he used to get Gear VR and the DK2 to where they are. You don't get a 75Hz or 90Hz panel out of nowhere--every iteration is a huge technical win. We used to think of John Carmack as the guy who could program game engines in his sleep, but it turns out that he's a pretty brilliant hardware hacker too. And he's savvy enough to know that what he's figuring out for VR can have applications elsewhere in consumer technology. By working intimately with Samsung's panels and tweaking them, he's effectively part of their extended engineering team. And if Samsung is smart enough to adopt any of his ideas as features for future phones, the technical fruits of VR development could end up being a competitive advantage in the lucrative smartphone business.
Photos courtesy Oculus VR