Over the past week, a series of unrelated events has prompted me to ponder the concept of bokeh, and more broadly, the role and influence of software technology on photography. First, I've been testing the HTC One (our In-Depth review goes up tomorrow), which features two cameras on its back. It's not the first HTC phone to do so--the ill-fated HTC Evo 3D used a dual-camera system to shoot stereoscopic 3D photos and videos--but it is the first to use the secondary camera as a way to ostensibly enhance the quality of a 2D photo. HTC's camera app can use visual information from the second camera to artificially render the background out of focus, simulating the shallow depth of field that you would get from a wide-aperture lens or a large-format camera sensor. The primary rear camera captures the base image, while the secondary camera, which sits about an inch away, snaps a photo from a slightly different angle. Software then compares those two photos to determine which parts of the scene are foreground and which are background, and you can choose which to blur out of focus.
It's a neat trick, which is why it wasn't surprising that Google released a similar feature in its new stand-alone Android camera app, one that approximates the same shallow depth-of-field effect using a single camera. With Google's technique, you take a photo and then slowly move the phone up while the camera takes note of the displacement of foreground and background subjects--called parallax--to approximate the depth of the scene. Then, just like with HTC's app, you can choose which parts of the scene to keep in focus, and even accentuate the blurring effects. Google engineer Carlos Hernandez's blog post on the subject cites the heady triangulation and computational-modeling algorithms that go into creating this defocus effect, as if to boast about the technical complexity of the feature. Behold the power of software, Google seems to be saying: it can do things that previously required expensive (and, in the case of smartphones, prohibitively bulky) hardware.
And scoff as DSLR-wielding photographers might at that claim, given the novelty and finickiness of HTC's and Google's defocusing features, it's exactly the sentiment expressed by the camera technologists at Lytro, who today announced their second-generation light-field camera. The Verge's feature on the Illum camera posited as much: "Lytro's ultimate, simplest goal is to turn the physical parts of the camera — the lens, the aperture, the shutter — into software." And the concept of light-field photography, broadly speaking, is not unlike what HTC and Google have done with their camera apps. But instead of using one or two cameras to compute the depth of a scene, light-field cameras use thousands of microlenses in an array to capture light and depth information from every part of the scene. Lytro's software is the secret sauce that analyzes that raw data to produce what looks like a typical shallow depth-of-field photo--just one whose focus you can adjust after the fact.
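Strip away the hardware differences and all three approaches converge on the same final step: once software has a per-pixel depth estimate (from a second camera, from parallax, or from a microlens array), it blends each pixel between the sharp capture and a blurred copy according to that pixel's distance from a chosen focal plane. Here's a minimal sketch of that blend in Python; the depth map, the parameters, and the `synthetic_defocus` name are my own illustrative assumptions, not the actual pipeline of HTC, Google, or Lytro:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_defocus(image, depth, focus_depth, tolerance=0.1, max_sigma=5.0):
    """Fake shallow depth of field by blending a sharp image with a
    blurred copy, weighted by each pixel's distance from the focal plane."""
    # Blur each color channel of the image with a Gaussian kernel.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=max_sigma) for c in range(image.shape[-1])],
        axis=-1,
    )
    # Per-pixel blend weight: 0 at or near the focal plane (stay sharp),
    # rising to 1 for pixels far from it (fully blurred).
    weight = np.clip(np.abs(depth - focus_depth) / tolerance - 1.0, 0.0, 1.0)
    return image * (1.0 - weight[..., None]) + blurred * weight[..., None]

# Toy scene: a foreground square (depth 0.2) against a distant background (depth 0.9).
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
depth = np.full((64, 64), 0.9)
depth[16:48, 16:48] = 0.2  # foreground square

# Focus on the foreground; the background gets blurred away.
out = synthetic_defocus(img, depth, focus_depth=0.2)
```

Refocusing "after the fact" then just means re-running the blend with a different `focus_depth`--the hard part, which each company solves differently, is getting a usable depth map in the first place.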
I'm not quite sure how I feel about this. It's analogous to the use of software filters in Instagram or even Lightroom to approximate the look of vintage film cameras in smartphone or DSLR photos. On the one hand, software filters only simulate the gestalt of old camera technology; they lack the nuance and serendipity that lomography enthusiasts claim cannot be replicated with digital photography. On the other hand, Instagram filters can be genuinely fun to use, and have emerged as a distinct visual language for modern digital photography. Can that same line of thinking be applied to simulated bokeh? Appropriate use of depth of field is one of the hallmarks of professional photography, and the combination of HTC's, Google's, and Lytro's bokeh ventures may feel like new technology intruding on an exclusive playground, dumbing down the effect for anyone to use or misuse.