MIT Camera Outsmarts Kinect at 3D Imaging

By Wesley Fenlon

MIT's camera uses some intelligent calculations to judge the location of objects even when they're transparent or obscured by environmental factors.

The latest amazing invention out of MIT's research labs is a 3D imaging nanocamera in the vein of Microsoft's Kinect, with one key difference: it's smarter. In 2011, MIT researchers invented a $500,000 camera that could track the movement of light through a scene at approximately 1 trillion frames per second. The new nanocamera costs only about $500 but achieves a similar effect. Its advantage over Kinect comes from the types of objects it can accurately scan. Movement or transparency can throw off the Kinect's readings, but they won't affect the nanocamera.

"The camera is based on 'Time of Flight' technology like that used in Microsoft’s recently launched second-generation Kinect device, in which the location of objects is calculated by how long it takes a light signal to reflect off a surface and return to the sensor," writes MIT News. Because we know the speed of light, Time of Flight makes it simple to calculate location. Unless something interferes--like smoke or a transparent or translucent object, for example. MIT News explains "Changing environmental conditions, semitransparent surfaces, edges, or motion all create multiple reflections that mix with the original signal and return to the camera, making it difficult to determine which is the correct measurement."

Photo credit: BRYCE VICKMARK/MIT

The MIT team figured out how to perform calculations that account for those issues. Associate professor Ramesh Raskar explains, "We use a new method that allows us to encode information in time. So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal."
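
MIT's exact math isn't spelled out here, but the flavor of the idea can be shown with a toy example: if the emitted light is stamped with a known code, a standard matched filter from telecommunications can pull the individual round-trip delays back out of a single, mixed-up return signal. The code, delays, and signal strengths below are all invented for illustration and are not the camera's actual parameters.

```python
import numpy as np

# Barker-13 code: a classic sequence with near-ideal autocorrelation.
code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# One emitted, time-coded signal bouncing off two surfaces: a strong
# reflection arriving at sample 40 and a weaker one (say, through glass) at 55.
received = np.zeros(128)
for delay, strength in [(40, 1.0), (55, 0.4)]:
    received[delay:delay + len(code)] += strength * code

# Cross-correlate the single returned signal with the known code; each
# peak marks the round-trip delay of one light path.
correlation = np.correlate(received, code, mode="valid")
print(np.sort(np.argsort(correlation)[-2:]))  # [40 55] -- both paths recovered
```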

Another team member compared the technique to the algorithms used to unblur photographs smeared by shaky hands, a feature Adobe first showed off for Photoshop in 2011. In the nanocamera, it's the paths light takes between the camera and the objects it encounters that are deblurred. The code applied to the returning data effectively isolates the different paths of light, so an image that would otherwise be blurred or confused by a translucent object comes out clean and clear.
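
As a rough, hypothetical sketch of that deblurring analogy (again our own toy, not Adobe's or MIT's code): a signal smeared by a known blur can be recovered by dividing the blur out in the frequency domain, which is the same spirit in which the camera's code untangles the mixed-together light paths.

```python
import numpy as np

sharp = np.zeros(64)
sharp[[20, 35]] = [1.0, 0.5]           # two distinct "light paths"
kernel = np.ones(5) / 5.0              # a blur that smears them together
blurred = np.convolve(sharp, kernel)[:64]

# Undo the blur by dividing in the Fourier domain (tiny constant for stability).
K = np.fft.rfft(kernel, 64)
recovered = np.fft.irfft(np.fft.rfft(blurred) / (K + 1e-9), 64)
print(np.sort(np.argsort(recovered)[-2:]))  # [20 35] -- the paths reappear
```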

As for why it's so cheap compared to MIT's last supercamera: because the code is the major breakthrough, the hardware used in the nanocamera is pretty much off the shelf. Simple LEDs emit the constant light pulses that survey the scene.

Check out MIT's video below for a look at the camera in action.