We strap on an Oculus Development Kit and mount Birdly, a full-motion virtual reality rig that simulates flying. It's one of the most awesome and intuitive VR experiences we've ever had, and we chat with Birdly's creators to learn how it works.
The physical design and internal mechanics of the periscope have changed quite a bit over the years, but one thing still remains the same: in order to see what’s going on above the water, even the most high-tech modern periscope still has to poke its little head out above the surface. And when you’re a military machine whose main goal is stealth, that isn’t exactly a smart move. That’s why, for at least a decade, scientists and engineers have been trying to figure out how to build a virtual periscope--one that can see what’s happening all around without having to come up for air. And they’re starting to make some significant and exciting progress.
According to the US Navy, the first periscope was designed in 1854 by a French chemist named Edme Hippolyte Marie-Davie. It was simply a long tube with mirrors set at 45-degree angles at each opening. There were several attempts to perfect the design through the following decades--among them a 65-foot, 130-ton tube set with eight prisms, designed by American John Holland in 1900, which gave the viewer a very dim 360-degree view of the horizon and could actually be rotated.
The modern periscope, or, at least, the one we all remember from Looney Tunes, was a perfected version of Holland’s design. Patented in 1911 by Dr. Frederick O. Kollmorgen, the new version used two telescopes instead of a series of lenses (or prisms). Because it didn’t need prisms at the opening or a series of lenses throughout, the new periscope could be built at a variety of lengths and its opening above the surface could be much smaller. Kollmorgen started a company to develop and update his telescope design and, in fact, the company he created (called Kollmorgen) still exists today.
Kollmorgen’s original design went through several upgrades over the years--adding night vision, star pattern recognition systems, optical magnification, and antennas for satellite communication--but the overall concept mostly remained the same. Then, in the 1960s, the US Navy created the Type 18 periscope, which added television cameras that allowed its images to be displayed anywhere on the submarine and also recorded.
In modern US submarines, beginning around 2004 on all Virginia-class attack subs, the periscopes were replaced by photonics masts. These are telescoping arms with visible-light and infrared digital cameras at the top. Since they don’t use mirrors or telescopes, there is no need for the control room to be located directly below the masts anymore. Because of this, the Navy has relocated these subs’ operations areas away from the hull and down one deck, where there is a lot more space.
The past few weeks have witnessed developments that could spell the end of radio-control aeromodeling as we know it. In short, the Federal Aviation Administration (FAA) has claimed jurisdiction over certain RC activities. This move comes as part of the FAA’s attempt to grasp control of the rapidly expanding presence of Unmanned Aerial Systems (UAS) in the national airspace. What was once a relaxing pastime could soon be a punishable offense. Here's how that could affect you and your FPV multi-rotor flying friends (like us!).
The FAA’s recent actions have put them sideways with the bulk of the model airplane community. The group on the front lines defending the interests of modelers is the Academy of Model Aeronautics (AMA). To fully understand the situation, a short history lesson is in order.
In February of 2012, the FAA Modernization and Reform Act became law. Among many other things, this law instructed the FAA to integrate UAS activities into the national airspace. At that time, the FAA had no specific regulations governing the use of these machines, or even a firm definition of what constitutes a UAS.
Anticipating that the law would give birth to blanket policies that could negatively impact aeromodeling, the AMA fought for provisions to exclude hobbyists. At the time, the FAA stated no ill will towards RC modelers and Congress had no intention to impose any regulations on the hobby. The win-win provision that emerged is Section 336 of the FAA act – Special Rule for Model Aircraft. It prohibits the FAA from introducing any new rules to regulate “hobby or recreational” use of model aircraft.
Mood Check: AMA – Relieved, FAA – Overwhelmed
Since that time, the FAA and AMA have met regularly to ensure that both parties were on the same page as the FAA moved forward with its obligations under the new law. Although the FAA’s progress was glacial and milestone dates continually slipped to the right, the agency frequently reassured the AMA that modelers had nothing to worry about.
Mood Check: AMA – Cautiously Optimistic, FAA – “What was that due date again?”
In June of this year, the FAA released a memo indicating its interpretation of Section 336. Not only was this memo produced without any coordination with the AMA, its wording contradicts previous statements made by the FAA. Taken at face value, the FAA’s memo-defined stance on aeromodeling would have drastic and far-reaching implications for the hobby as well as the small industry that it supports.
Mood Check: AMA – Deceived, FAA – “You mad bro?”
European phone handset company liGo produced this interactive guide to the 120-year history of headphones. It ends on a weird note showcasing the current state of fashion headphones (i.e. Beats), but the stuff about the development of copper headphones from home to military use is fascinating. liGo did a good job pairing each section of this web guide with era-appropriate music and media. Included in the retrospective is a brief video of John C. Koss (founder of the Koss headphone company) talking about bringing the first stereo headphones designed for listening to music to consumers. I've embedded it below:
Two things struck me while testing the Microsoft Surface Pro 3 and Nvidia's Shield Tablet, devices I ended up really liking. Both are ostensibly tablets, but the way I used each of them differed from how I used my iPad Mini. First, I rarely held either of them like a notepad, with one or two hands gripping the sides. Most of the time, I had the Surface propped up in its "canvas" position using its kickstand on a flat surface, and kept the Shield Tablet propped up on a small makeshift kickstand as well. They were tabletop computers, not handheld ones. Second, I was surprised by how much I enjoyed using the stylus on each of these devices, and not necessarily as a writing instrument. For both the Surface and the Shield Tablet, the stylus actually became a second navigational tool, used to swipe through the home screen and browse the web. These use cases became as intuitive as touch pointing and gestures--still the primary physical interface for iPads. And it made me think about how much Apple is limiting the potential of its iPads by staunchly sticking to touch.
Let's start with the Surface Pro 3, which has an active stylus. As I said in our video, my limited digital drawing abilities don't allow me to discern the difference between the Wacom-based digitizer used in the last Surface Pro and the N-Trig one used here. What matters to me isn't degrees of pressure sensitivity, it's accuracy and latency. And the Surface Pro 3's stylus was completely sufficient for note-taking in OneNote--my chicken scratch handwriting looked on-screen like it would have on paper. The ability to manipulate those scribbles as vectors and use the stylus to crop/copy/paste images with annotations made those notes more useful than the ones in my paper notebook long after I'd jotted them down.
But my favorite way to use the Surface Pro 3's stylus was actually as an extension of my fingers on the touchscreen. On the Windows desktop, the stylus became a proxy for my mouse cursor. Even with Windows' improved touch tracking for tapping small buttons, the one thing that touch can't facilitate is a cursor hover. With the active stylus, I could hover the tip over the screen and see where the cursor was before making a pinpointed tap. Even when I had a mouse connected to the Surface, I would use the stylus in combination with my fingers to browse the web--tapping Chrome's UI and scrolling with the pen while still pinching to zoom on pages with my fingers. That complementary use of fingers and stylus felt completely natural. Much like how I've found touchscreens to be a delightful complement to the primary keyboard and mouse interface on a laptop, I've found the stylus to be an intuitive complementary input method to finger touch on tablets. You can have your cake and eat it too.
The only thing I wish is that Microsoft could have found a better way to store the stylus on the Surface Pro 3. In past versions of the Surface Pro, the stylus stuck magnetically to the side of the device--attached to its charging port, actually. It wasn't particularly secure, and meant that you had to remove the stylus to charge the Surface. On the Surface Pro 3, the stylus has no docking port--only a sleeve on the Type Cover keyboard accessory to slip into. I realize that given the thickness of the stylus and the densely packed design of the tablet's guts, there's no space for a recessed stylus dock. It's the problem that Steve Jobs bemoaned when mandating a touch-only interface on the iPad, but not an impossible one to solve. Lenovo's ThinkPad Tablet 2, for example, is a hybrid device with a built-in stylus dock.
Media archiving is a noble yet laborious pursuit, as archivists struggle to find and adopt new technologies and mediums that won't go obsolete. We've previously discussed the US Library of Congress's approach to archiving millions of pieces of video. Back in the 90s, all sorts of analog media were being transferred to what was then thought to be an enduring platform: the compact disc. NPR's All Things Considered recently interviewed the head of the LoC's Preservation, Research, and Testing Division to learn about how those CDs have held up in the two decades since, and what surprising deterioration has occurred on the now dated format.
Our friend Sean Charlesworth stops by the office for this week's Show and Tell to share another cool piece of technology history from his personal collection: a Nagra SN portable audio recorder. This series of miniaturized reel-to-reel recorders was built in the 1970s and used during the Cold War by the CIA as covert spy recorders.
Very cool work from Microsoft Research: "We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyperlapse videos: time-lapse videos with a smoothly moving camera." A more technical explanation video here.
"Since the days of Copernicus, man has dreamed of flight. On this historic day, we remember the Wright brothers, Orville and Redenbacher. Whose dreams and visions inspired generations. And now, again, one man's vision ushers in a new era of aerial travel. Proving the power of Imagination, and Intellect. The magic... of Flight." - Eric Cartman
One of the hurdles that this current wave of virtual reality has to overcome is finding control mechanisms for virtual spaces. Whether that means gamepads, prop weapons for shooting games, accessories like steering wheels and flight sticks, or full-on hand and arm tracking, these systems will have to be appropriate and intuitive enough to match the software you're seeing through a head-mounted display. If you're playing a racing game from the perspective of a driver behind the steering wheel, you want the control system to match what your brain knows about steering and driving from real-world experiences. But interestingly enough, one of the most immersive virtual reality demos I've used employs a novel control scheme to simulate something that most people have never actually experienced before: the act of flying. And the sensation is incredible.
Birdly is a research project being conducted at the Zurich University of the Arts. Lecturer Max Rheiner and a small team of students began experimenting with a virtual reality rig last November, culminating in the Birdly system that Max and his team are now taking on tour. We visited Max at the swissnex offices in downtown San Francisco last week to try out Birdly before it went to the Exploratorium and then on to this week's SIGGRAPH conference.
Rheiner told me that the goal of Birdly was simple: to embody the experience of flying like a bird through a full-motion simulator. But getting to that goal with a motion-control rig built from scratch, and then tuning the experience to match what users intuitively understand as a bird's flight, was a bit of a challenge. Over six months, Max's team fabricated and tested several prototype rigs (documented in videos here) before coming up with the Birdly system we used. And surprisingly, the current setup looks very polished--more like beautifully crafted modern furniture than a homemade exercise machine. The rig looks like a futuristic massage table, with users lying flat on their belly atop the padded frame. Users put on an Oculus HMD (the first development kit) along with headphones, before stretching their arms out on what are essentially wings. A fan mounted on the front of the rig simulates wind being blown in the user's face.
After I mounted the table and strapped on all the VR gear, the software booted up and dropped me into a virtual model of San Francisco, placing me a mile above where my body actually was in downtown SF. Birdly uses aerial imagery and building models provided by mapping companies--Pictometry International and PLW Modelworks--and the city looked like a high-resolution version of Google Earth. Then I started flapping for dear life.
In my continued testing of the Oculus Development Kit 2, one thing I'm sure of now is that a 1080p display for the Oculus will be insufficient for games that require reading text on screen. That includes cockpit-based space sims like Elite: Dangerous, where your in-game HUD is part of the cockpit model and not just floating in space in front of your face. At 1080p (and with the game's current font), I have to seriously struggle and squint to make out text that's even remotely in my periphery--it's why many people believe that Oculus won't release a consumer HMD until they have a display that's higher resolution than 1080p. One of the problems is that those high-density 1440p displays--used in smartphones like the LG G3--aren't cheap. But late last month, Nvidia's engineers released a research paper that proposes a display solution that effectively quadruples the number of pixels on screen using cheap LCD parts. The idea is called "Cascaded Displays," in which two 1280x800 LCD panels are stacked on top of each other, offset by a quarter pixel, and with a special quarter-wave film in between them. The stacked displays, each with a unique (and synced) video feed, combine with a single backlight to effectively double the resolution in each dimension. The setup creates some image distortion, decreased brightness, and narrower viewing angles, but Nvidia believes that these side effects can be corrected or are suitable for use in virtual reality HMDs. Check out the video below for Nvidia's explanation of the system:
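The core trick is easy to sketch in code. Below is a toy 1-D illustration of the stacking idea--not Nvidia's actual factorization algorithm--showing how two low-resolution panels, one offset relative to the other, multiply their transmittances to produce more addressable subpixels than either panel has alone. The function name and panel values are made up for illustration.

```python
# Toy sketch of the cascaded-display idea (not Nvidia's algorithm):
# two stacked LCD panels multiply their transmittances, and an offset
# between them creates a finer effective pixel grid.

def cascaded_row(panel_a, panel_b, subpixels=2):
    """Combine two 1-D 'panels', the rear one offset by half a pixel.
    Each output subpixel's brightness is the product of the two
    panel pixels covering it."""
    n = len(panel_a) * subpixels
    out = []
    for i in range(n):
        a = panel_a[i // subpixels]                    # front panel, aligned
        j = min((i + 1) // subpixels, len(panel_b) - 1)
        b = panel_b[j]                                 # rear panel, offset
        out.append(a * b)
    return out

# Two 2-pixel panels yield a 4-subpixel output row.
row = cascaded_row([1.0, 0.5], [0.8, 0.2])
print(row)  # [0.8, 0.2, 0.1, 0.1]
```

In the real system, the hard part is the inverse problem: given a target high-resolution frame, computing the two low-resolution panel images whose product best approximates it.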
"MIT engineers have fabricated a new elastic material coated with microscopic, hairlike structures that tilt in response to a magnetic field. Depending on the field's orientation, the microhairs can tilt to form a path through which fluid can flow; the material can even direct water upward, against gravity." More information here.
We have the Oculus VR Development Kit 2 in the office (two of them!) and have been testing them for over a week. We sit down to discuss the new hardware, compare it to our first development kit, and then run through as many game demos as we can get working. Couch Knights multiplayer! Elite: Dangerous with a HOTAS setup!
Your cat is stuck in a burning building too dangerous for rescue crews to go inside, so off go the drones instead – five little unmanned aerial models that hover and flit through fiery beams and door frames without any human control. They know to spread out to cover more ground, and know how to adjust their search patterns when the communication links with the other drones go down. Their algorithms find and retrieve your cat in what rescue crews tell you is record time.
Or that's the dream anyhow, to one day build artificially intelligent, self-organizing robot systems that can collaborate on complex tasks – or, at the very least, rescue imperiled cats. We're not there yet, but researchers have been getting closer, thanks in part to what we're learning from the collective behavior of ants.
Look back through artificial intelligence literature from the past few decades and you'll find that ant-inspired algorithms are a popular topic of study. Of note, Swiss artificial intelligence researcher Marco Dorigo was the first to algorithmically model ant colony behavior in the early 1990s, and Stanford University biologist Deborah Gordon published her own study on the expandable search networks of ants a few years later. Today, both have different but related ideas on how we might implement so-called ant-inspired swarm intelligence in robots--and perhaps soon, drones--outside of the lab.
Consider, for example, how ants explore and search. Ants change the way they scour for things such as food and water depending on the number of ants nearby. According to Gordon, if there is a high density of ants in an area, the ants search more thoroughly in small, random circles. If there are fewer ants, the ants adjust their paths to be straighter and longer, allowing them to cover more ground.
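Gordon's density rule is simple enough to capture in a toy simulation. The sketch below--my own illustrative model, not Gordon's actual one--makes an ant's maximum turning angle grow with local density: crowded ants loop in tight circles and stay local, while sparse ants walk straighter and range farther. All names and parameters are made up.

```python
import math
import random

# Toy model of density-dependent ant search (illustrative, not
# Gordon's actual model): crowded ants turn sharply and search
# locally; sparse ants walk straighter and cover more ground.

def search_path(steps, density, seed=0):
    """Random walk whose maximum turning angle grows with local
    density (density in [0, 1]). Returns final distance from start."""
    rng = random.Random(seed)
    max_turn = math.pi * density
    x = y = heading = 0.0
    for _ in range(steps):
        heading += rng.uniform(-max_turn, max_turn)
        x += math.cos(heading)
        y += math.sin(heading)
    return math.hypot(x, y)

# Sparse ants (low density, straighter paths) should end up much
# farther from the start than crowded ants looping in tight circles.
crowded = search_path(200, density=0.9)
sparse = search_path(200, density=0.1)
print(f"crowded: {crowded:.1f}, sparse: {sparse:.1f}")
```

The point of the toy is that one scalar signal (local density) is enough to switch a swarm between thorough local search and wide-ranging exploration, with no central coordinator.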
This is all well and good in typical ant environments – but how do the ants adapt when interference is introduced, and their communication with other ants interrupted? To find out, Gordon sent over 600 small, black pavement crawlers to the International Space Station in January, and believes that studying how they react to the unfamiliar microgravity of space could help build better robots. Her research is especially prescient in the age of the drone.
In a Stanford news release, Gordon described the interference introduced by microgravity as "analogous to the radio disruption that robots might experience in a blazing building." Depending on how Gordon's space ants adapt, she thinks the results, when applied to robotics and artificial intelligence, could help us program more efficient algorithms for search and exploration--especially when our robots are faced with unfamiliar environments, and with little to no human control.
I just got back from a two-week trip to France to see my wife's extended family. This is only my fourth time leaving the country and I've been working on paring down my travel gear to the essentials. The only thing worse than not having what you need is having a bunch of stuff you don't. This year I tried to travel as light as possible. I knew I should spend most of my time visiting family, not staring at a screen, but I also knew that two weeks without doing any sort of writing would drive me nuts.
Even trying to bring the bare minimum, I brought a bunch of stuff I didn't end up using. One Bag Travel people would laugh at me. But I did manage to travel without a laptop for the first time. If you can manage, I highly recommend it. You'll save a lot of weight and volume and most of the things you use a laptop for can now be done with a smartphone or tablet.
Before getting into the specific gear I brought (and what I'd leave behind next year), let's talk about what I consider to be the travel essentials: power and data.
Inventern champ Sean Charlesworth joins us in the Tested office this week to share one of his prized possessions: a Curta mechanical calculator. Designed in the 1940s, before electronic calculators, this hand-cranked device was considered the most precise pocket calculator available, and was used by rally car drivers and aviators.
At next week's Black Hat security conference, researchers Karsten Nohl and Jakob Lell plan on presenting a demo of malicious software that shows just how fundamentally the USB protocol puts unprotected computers at risk. Their software, called BadUSB, lives in the firmware of a USB key, not its flash memory. The researchers say that reprogrammed firmware used as malicious code can't be detected by current anti-virus software. And the scariest part may be that the BadUSB firmware can be installed on any USB device, not just memory sticks.
The 1956 composition "Illiac Suite for String Quartet" is a pleasant enough sounding piece of music--for the first three movements, that is. It's when you get to the fourth and final movement that things get...weird. The notes sound random and dissonant. It doesn't sound much like music at all. But the peculiarity of "Illiac Suite" makes a little more sense when you realize how it was composed. This was the first song ever algorithmically generated by a computer.
Programmed in binary by Lejaren A. Hiller, assistant professor of music at the University of Illinois, and Leonard M. Isaacson, a former research associate on the school's Illiac computer, "Illiac Suite" was nevertheless a revelation. That a computer might one day compose music indistinguishable from that of a human artist became an irresistible pop culture trope – for better and for ill. In his New York Times obituary, Hiller is said to have joked that "he would have computers compose all possible rock songs, then copyright them and refuse to let anyone perform them."
Luckily for us, computers are nowhere close to realizing that humorous albeit dystopian vision. And yet "Illiac Suite" remains an impressive feat, even today.
We can actually trace the beginnings of "Illiac Suite" back to none other than the British mathematician and computing pioneer Alan Turing. In 1951, Turing published a book on programming for an early computer known then as the Ferranti Mark I*. The machine had a loudspeaker, sometimes called a "hooter," that was used primarily to issue warnings or during debugging. But Turing found that the loudspeaker could also be used to produce solid tones – notes, if you prefer.
It didn't take long before programmers began to exploit this functionality to play back simple melodies and songs. But two programmers, David Caplin and Dietrich Prinz, decided to take things a step further.
Lloyd Alter, the editor of Treehugger, wrote this insightful feature about the history and design of the typical household bathroom. It traces the origins of the modern plumbing system that weaves through our cities, and explains the many design defects of the current standard bathroom setup. For one, the ergonomics are poor--toilets are too tall for a comfortable squat, and sinks are too low. But more importantly, the modern bathroom is extremely wasteful. Alter suggests alternatives like composting systems that split off greywater from blackwater, and a shower setup that only dispenses water when you need it. Of course, this doesn't take into consideration the other activities that currently happen in many bathrooms; the water closet is now a place where many people get their work done. Smartphones and tablets in the bathroom are still gross, by the way.
Adam shared this awesome story yesterday: an explanation for why it's so difficult to choose the shortest line at the supermarket. The answer lies in queueing theory, the mathematical study of how people wait in lines, used to optimize and predict wait times. According to queueing theorists, simple probability explains why your chances of choosing the fastest line in a scenario with lots of line options are small. In a perfect world, a single long line at the supermarket that funnels into the next available checkout counter would be optimal (like a bank or post office line), but human psychology rejects that. We would prefer to take the gamble of trying to find the fastest of multiple lines at the store--it gives us the illusion of control and the hope that we can beat the system.
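The probability argument is easy to check with a quick simulation. This sketch (my own illustration, with made-up names and simplified random waits rather than a full queueing model) estimates the chance that a blind choice among N lines happens to be the fastest one--which comes out to roughly 1/N.

```python
import random

# Quick sketch of the queueing-theory point: picking one of N
# supermarket lines blindly gives you about a 1/N chance of
# landing on the single fastest line.

def pick_fastest_prob(n_lines, trials=100_000, seed=42):
    """Estimate the probability that a blind choice among n_lines
    happens to be the line with the shortest wait."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        waits = [rng.random() for _ in range(n_lines)]  # random wait per line
        choice = rng.randrange(n_lines)                 # shopper picks blindly
        if waits[choice] == min(waits):
            wins += 1
    return wins / trials

print(pick_fastest_prob(5))  # ≈ 0.2, i.e. roughly 1/5
```

With five open registers, you'll guess right only about 20% of the time--which is why the single serpentine line wins mathematically, even if it loses psychologically.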
From Microsoft Research: "Project Adam is a new deep-learning system modeled after the human brain that has greater image classification accuracy and is 50 times faster than other systems in the industry." Wired has an in-depth story about how this new approach to running neural networks--using a technique called asynchrony--allows the deep learning system to train computers to do things like recognize images. Skynet jokes aside, advances in machine intelligence are something we can get behind.