In an interview with Charlie Rose, Google X founder and department head Sebastian Thrun gave a brief demo of Google's Project Glass headset. This was the first time the project had been shown to journalists, though Thrun didn't reveal any new concrete details about the system's hardware or capabilities. Outside of Google's official press photos, we've seen working prototypes of Project Glass in the wild, most notably worn by Google co-founder Sergey Brin. But this is the first time we've seen the headset on video, actually being operated by a user. Thrun took a photo of Rose using the headset, which was uploaded to his Google Plus account later that day. From that interaction, we can infer a little about how Project Glass works.
Thrun snapped a photo by pressing a button on the right side of the visor. He then waited a second, staring at Rose and keeping his head still, before saying that he had just taken a photo, which suggests there's a shutter delay between pressing the button and capturing an image. That makes sense: you don't want the camera to shake while pushing a button on the presumably lightweight glasses. There was no indication, though, of whether the display shows a live viewfinder feed from the camera or just a countdown to shutter release.
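If it really is a fixed countdown rather than a live viewfinder, the logic would be simple enough. Here's a minimal sketch of that idea in Python, assuming a roughly one-second delay; the camera.capture() and display.show_countdown() calls are hypothetical stand-ins, since Google hasn't published anything about Glass's actual software:

```python
import time

SHUTTER_DELAY_S = 1.0  # assumed ~1 s delay, based on the pause seen in the demo

def capture_with_delay(camera, display, delay_s=SHUTTER_DELAY_S):
    """Fire the shutter only after a short countdown, so the
    physical button press doesn't shake the frame."""
    deadline = time.monotonic() + delay_s
    while (remaining := deadline - time.monotonic()) > 0:
        display.show_countdown(remaining)  # hypothetical HUD overlay
        time.sleep(0.1)                    # don't spin the CPU on a wearable
    return camera.capture()                # hypothetical camera call
```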
After taking the photo, Thrun indicated that he was being shown a list of his Google Plus friends and groups. He rotated his head left to make a selection along the horizontal axis, then nodded twice to approve sharing the photo. It looks like head turning and nodding, detected by accelerometers, will be the primary way of interfacing with the headset's display, though voice control will also be possible. I don't think eye tracking is built into the design, so prepare to give your neck a workout when using Project Glass. I can't imagine complicated head-tilting gestures catching on, though. Thrun kept his head pretty still during the entire process, and you could see his eyes focusing on the display rather than on Rose while the interview was still going on. It didn't seem to affect his ability to multitask, but it was very obvious and could be a little off-putting in person. The final uploaded photo is very low-res and blown out, but small camera sensors will only improve over time.
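To give a sense of how that double-nod confirmation might work under the hood, here's a minimal Python sketch, assuming the headset exposes a stream of timestamped pitch-axis accelerometer readings; the threshold and timing values are guesses for illustration, not anything Google has confirmed:

```python
from collections import deque

NOD_THRESHOLD = 3.0        # m/s^2 spike on the pitch axis (a guess)
NOD_REFRACTORY_S = 0.25    # ignore samples this soon after a nod, so one nod counts once
DOUBLE_NOD_WINDOW_S = 1.0  # max seconds between the two nods

def detect_double_nod(accel_stream):
    """Return True once two distinct downward nods arrive within
    DOUBLE_NOD_WINDOW_S seconds of each other, False if the stream ends.

    accel_stream yields (timestamp_seconds, pitch_accel) tuples from a
    hypothetical head-mounted accelerometer.
    """
    nod_times = deque(maxlen=2)
    for t, pitch_accel in accel_stream:
        if pitch_accel <= NOD_THRESHOLD:
            continue
        if nod_times and t - nod_times[-1] < NOD_REFRACTORY_S:
            continue  # still part of the same nod
        nod_times.append(t)
        if len(nod_times) == 2 and nod_times[1] - nod_times[0] <= DOUBLE_NOD_WINDOW_S:
            return True
    return False

# Synthetic example: two spikes half a second apart read as a double nod.
samples = [(0.0, 0.2), (0.1, 4.1), (0.15, 3.8), (0.6, 4.0), (1.0, 0.1)]
print(detect_double_nod(iter(samples)))  # True
```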
When asked what Google envisions Project Glass being used for, Thrun pointed to obvious use cases like reading and composing email, receiving ambient notifications, and finding information about the world around the user. He also noted that in Google's testing, augmented reality isn't the most compelling scenario for Project Glass; instead, testers are using it to share their experiences with friends in real time through photos or video. Since photos will automatically be uploaded to Google Plus, Project Glass is one way for the company to generate more interest in its social network. Much like photos taken with Android phones can be automatically backed up to Google Plus, Project Glass photos can stream directly into your Plus feed.
I think that's a potentially powerful feature, and one that's psychologically different from the way we shoot photos with smartphones. In my experience, I primarily shoot photos with a DSLR or cameraphone to archive memories, some of which I may choose to share later on Instagram, Twitter, or Facebook. Archiving comes first, with sharing as a secondary benefit. With Project Glass, sharing seems to be the camera's primary function, and as long as that process is fast and convenient, it could really change the way we interact with the world. Instead of absorbing visual information, we'd be broadcasting it, which is exactly what data-centric Google would want.
In addition to Project Glass, Thrun also discussed Google's self-driving car project, as well as his Udacity program, which gives students free online access to higher-education courses. You can watch the full interview here.