Computational Photography Chip Aims to Improve HDR and Low-Light

By Wesley Fenlon

A chip developed at MIT aims to perform functions often handled in software in less time, while using less power.

As smartphone photography becomes ubiquitous, the technology behind it has to evolve. For a few years, that just meant more megapixels--sensors weren't getting much bigger, but pixels were getting smaller, often leading to higher resolution (but hardly better quality) photographs. Lately camera makers have taken care to improve low light performance in these sensors, and HTC's One, announced earlier this week, sacrifices its pixel count for larger pixels and superior light absorption. But that's just one part of the equation; going forward, we're also going to see more phones use computational photography to improve their photos.

Nvidia's Tegra 4, for example, can shoot HDR video and uses an image signal processor to process HDR photos in a snap. Developers at MIT's Microsystems Technology Laboratory have built a chip that specializes in computational photography, performing in hardware tasks that are usually handled by software running on smartphones and computers.

According to one of the chip's developers, it was built to perform operations normally done in software while consuming less power. That includes processing HDR photographs "in a few hundred milliseconds on a 10-megapixel image," the chip's creators told MIT News.
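MIT News doesn't detail the chip's HDR algorithm, but the general idea of merging bracketed exposures into one well-exposed result can be sketched in a few lines. The weighting function and parameters below are illustrative assumptions, not the chip's actual method:

```python
import numpy as np

def merge_exposures(images):
    """Merge bracketed exposures by weighting each pixel by its
    'well-exposedness' (closeness to mid-gray), a simplified
    exposure-fusion approach. Values are assumed to be in [0, 1]."""
    stack = np.stack([img.astype(np.float64) for img in images])
    # Weight pixels near 0.5 (well exposed) more than near 0 or 1.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0) + 1e-12   # normalize across exposures
    return (weights * stack).sum(axis=0)

# Three simulated exposures of the same scene (values in [0, 1]).
under = np.linspace(0.0, 0.4, 16).reshape(4, 4)
normal = np.clip(under * 2.0, 0, 1)
over = np.clip(under * 4.0, 0, 1)
fused = merge_exposures([under, normal, over])
```

Each output pixel is a weighted average of the three inputs, so dark regions borrow from the overexposed frame and bright regions from the underexposed one.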

The chip is also built to improve low-light photography:

"...the processor takes two images, one with a flash and one without. It then splits both into a base layer, containing just the large-scale features within the shot, and a detailed layer. Finally, it merges the two images, preserving the natural ambience from the base layer of the nonflash shot, while extracting the details from the picture taken with the flash.

To reduce noise, certain pixels are blurred with surrounding pixels, and the chip uses bilateral filtering to avoid, in theory, over-blurring edges. "To perform each of these tasks, the chip’s processing unit uses a method of organizing and storing data called a bilateral grid," writes MIT News. "The image is first divided into smaller blocks. For each block, a histogram is then created. This results in a 3-D representation of the image, with the x and y axes representing the position of the block, and the brightness histogram representing the third dimension.

This makes it easy for the filter to avoid blurring across edges, since pixels with different brightness levels are separated in this third axis in the grid structure, no matter how close together they are in the image itself."
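A toy version of the bilateral grid described above -- dividing the image into blocks and building a per-block brightness histogram -- might look like this (the block size and bin count are illustrative assumptions):

```python
import numpy as np

def build_bilateral_grid(img, block=4, bins=8):
    """Build a simple bilateral grid: tile the image into blocks and
    accumulate a brightness histogram per block, giving a 3-D
    (block_y, block_x, brightness) structure. Smoothing within this
    grid only mixes nearby pixels of similar brightness, so pixels on
    opposite sides of an edge, which land in different brightness
    bins, are kept apart. Pixel values are assumed to be in [0, 1)."""
    h, w = img.shape
    gy, gx = h // block, w // block
    grid = np.zeros((gy, gx, bins))
    levels = np.minimum((img * bins).astype(int), bins - 1)
    for y in range(gy * block):
        for x in range(gx * block):
            grid[y // block, x // block, levels[y, x]] += 1
    return grid

img = np.tile(np.linspace(0.0, 0.99, 8), (8, 1))  # simple gradient image
grid = build_bilateral_grid(img)
```

Filtering then operates on this coarse 3-D grid rather than on every pixel, which is part of what makes the approach cheap enough for dedicated hardware.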

Even if this particular chip doesn't find its way into smartphones, its features likely will. Computational manipulation will become a more important tool in digital photography as phones grow more powerful; hopefully that trend is accompanied by better image sensors and an industry-wide shift away from the focus on megapixels.