
Living with Photography: We Have the Technology to Fake It

By Norman Chan


Over the past week, a series of unrelated events has prompted me to ponder the concept of bokeh, and more broadly, the role and influence of software technology on photography. First, I've been testing the HTC One (our In-Depth review goes up tomorrow), which features two cameras on its back. It's not the first HTC phone to do so--the ill-fated HTC Evo 3D used a dual-camera system to shoot stereoscopic 3D photos and videos--but it is the first to use the secondary camera as a way to ostensibly enhance the quality of a 2D photo. HTC's camera app can use visual information from the second camera to artificially render the background out of focus, simulating the shallow depth-of-field that you would get from using a wide-aperture lens or large-format camera sensor. The primary rear camera captures the base image, while the secondary camera that sits about an inch away snaps a photo from a slightly different angle. Software then compares those two photos to determine which parts of the scene are foreground and which are background, and you can choose which to blur out of focus.
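HTC doesn't document exactly what its software does under the hood, but the underlying idea is classic two-view stereo: measure how far each pixel shifts between the two views, and infer depth from that shift--near subjects shift more than distant ones. Here's a minimal sketch of that principle using OpenCV's block matcher, assuming the two images are already rectified; the filenames and tuning values are made up for illustration:

```python
import cv2

# The two photos taken about an inch apart (assumed already
# rectified so that matching points lie on the same pixel row).
left = cv2.imread("primary.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("secondary.jpg", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, how far it shifted
# between the two views. Large disparity = close subject,
# small disparity = distant background.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Normalize to 0-255 so the disparity map can be viewed or
# thresholded into foreground/background masks.
depth_map = cv2.normalize(disparity, None, 0, 255,
                          cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth_map.png", depth_map)
```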

It's a neat trick, which is why it wasn't surprising that Google released a similar feature in its new stand-alone Android camera app, one that approximates the same shallow depth-of-field effect using a single camera. With Google's technique, you take a photo and then slowly move the phone up while the camera takes note of the displacement of foreground and background subjects--called parallax--to approximate the depth of the scene. Then, just like with HTC's app, you can choose which parts of the scene to keep in focus, and even accentuate the blurring effects. Google engineer Carlos Hernandez's blog post on the subject cites heady triangulation and computational modeling algorithms that go into creating this defocus effect, as if to boast about the technical complexity of the feature. Behold the power of software, Google seems to be saying: it can do things that previously required expensive (and in the case of smartphones, prohibitively bulky) hardware.
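Google's actual pipeline involves the structure-from-motion machinery Hernandez describes, but you can get the flavor of the parallax idea with something far cruder: treat the apparent motion between two frames of the upward sweep as a depth proxy, since nearby pixels shift more than distant ones. A toy sketch--emphatically not Google's method; frame names and parameters are illustrative:

```python
import cv2
import numpy as np

# Two frames from the slow upward sweep of the phone.
frame1 = cv2.imread("sweep_0.jpg", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("sweep_1.jpg", cv2.IMREAD_GRAYSCALE)

# Dense optical flow estimates how far each pixel moved between
# frames. Under a small camera translation, that displacement is
# the parallax: large for near objects, small for far ones.
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
parallax = np.linalg.norm(flow, axis=2)  # per-pixel shift in pixels

# Use the inverse of the shift as a rough relative depth map.
depth_proxy = 1.0 / (parallax + 1e-6)
```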

HTC's defocusing feature on the right, with the original photo on the left.

And scoff as DSLR-wielding photographers might at this claim, given the novelty and fickleness of HTC and Google's defocusing camera features, it's the exact sentiment being expressed by the camera technologists at Lytro, which today announced its second-generation light-field camera. The Verge's feature on the Illum camera posited as much: "Lytro's ultimate, simplest goal is to turn the physical parts of the camera — the lens, the aperture, the shutter — into software." And the concept of light-field photography, broadly speaking, is not unlike what HTC and Google have done with their camera apps. But instead of using one or two cameras to compute the depth of a scene, light-field cameras use thousands of microlenses in an array to capture light and depth information from every part of the scene. Lytro's software is the secret sauce that analyzes that raw data to produce what looks like a typical shallow depth-of-field photo--just one whose focus you can adjust after the fact.
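Lytro's raw format and processing are proprietary, but the textbook version of light-field refocusing is surprisingly compact: shift each sub-aperture view in proportion to its position within the aperture, then average. A rough sketch, assuming a 4-D array of grayscale sub-aperture images has already been decoded from the microlens data:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing over a 4-D light field.

    lightfield: array of shape (U, V, H, W) holding the
        sub-aperture images decoded from the microlens array.
    alpha: focus parameter; 0 keeps the original focal plane,
        other values pull focus closer or push it deeper.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view by an amount proportional to its
            # offset from the aperture center, then accumulate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```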

I'm not quite sure how I feel about this. It's analogous to the use of software filters in Instagram or even Lightroom to approximate the look of vintage film cameras in smartphone or DSLR photos. On the one hand, software filters only simulate the gestalt of old camera technology; they lack the nuance and serendipity that lomography enthusiasts claim cannot be replicated with digital photography. On the other hand, Instagram filters can be genuinely fun to use, and have emerged as a distinct visual language for modern digital photography. Can that same line of thinking be applied to simulated bokeh? Appropriate use of depth-of-field is one of the hallmarks of professional photography, and the combination of HTC, Google, and Lytro's bokeh ventures may feel like new technology intruding into an exclusive playground, dumbing down the effect for anyone to use or misuse.

For now, there's nothing to fear from HTC and Google's fake bokeh effects. They are a novelty at best, given how they work. Just check out my sample photos below to see what I mean. In both HTC's and Google's faux-bokeh implementations, the effect is a combination of just two tricks: depth-mapping and blurring. Neither is done particularly well. Depth-mapping is the process of distinguishing the foreground from the background. HTC's dual-camera system does a better job of this, since it snaps both reference photos at the same time. In Google's single-camera technique, moving objects in the background confuse the software, so its depth map may lump parts of the background in with the foreground subject and keep both sharp. The upshot is that with the software as it is today, and with just two small smartphone camera sensors, the depth maps created by HTC's and Google's apps are inconsistent and imperfect.

HTC One photo before Unfocus tweaking.
HTC One photo after Unfocus tweaking.
Defocus with Google's new camera app.

And once the depth map is computed, the software masks the foreground as a layer and applies blurring to the area of the image that you want to look out of focus. The masking process itself in these apps is poorly executed; the blending between the masked and unmasked areas is too abrupt. If you look at the defocus attempt in the photo above, half of my hand is in focus and the other half is suddenly blurred, even though my whole hand was on the same focal plane.
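That abruptness is what you get from a hard, binary mask. The kind of fix I'd hope to see is simple in principle: feather the mask before blending, so focus falls off gradually. A quick illustration of the idea; the filenames, threshold, and kernel sizes here are arbitrary:

```python
import cv2
import numpy as np

image = cv2.imread("photo.jpg").astype(np.float64)
depth_map = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE)

# Hard mask: foreground where the depth map says "near".
mask = (depth_map > 128).astype(np.float64)

# Feather the mask so focus falls off gradually instead of
# snapping from sharp to blurred mid-hand.
mask = cv2.GaussianBlur(mask, (31, 31), 0)
mask = mask[..., np.newaxis]  # broadcast over color channels

blurred = cv2.GaussianBlur(image, (25, 25), 0)
composite = mask * image + (1.0 - mask) * blurred
cv2.imwrite("defocused.jpg", composite.astype("uint8"))
```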

But it's the blurring filters themselves that need the most improvement. Bokeh isn't a binary quality of photos--backgrounds aren't just in focus or out of focus. There are degrees of bokeh that reflect the size of the image sensor, the aperture of the lens, and the way the lens is constructed. Software--even Lytro's own light-field system--doesn't account for that. It's simulated-lens-wide-open or nothing.
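That's a real loss, because in an actual lens the amount of blur varies continuously with aperture and distance. Under the standard thin-lens approximation, a point at distance D, with the lens focused at distance s, blurs into a circle of diameter f^2 * |D - s| / (N * D * (s - f)), where f is the focal length and N the f-number. A quick calculation shows how much the f-stop alone changes things:

```python
def blur_circle_mm(f_mm, N, focus_m, subject_m):
    """Thin-lens blur-circle diameter on the sensor, in mm.

    f_mm: focal length; N: f-number (aperture);
    focus_m: distance the lens is focused at, in meters;
    subject_m: distance of the point being blurred, in meters.
    """
    f = f_mm
    s = focus_m * 1000.0   # convert to mm
    D = subject_m * 1000.0
    return f * f * abs(D - s) / (N * D * (s - f))

# Same scene, 50mm lens focused at 2m, background at 10m:
print(blur_circle_mm(50, 1.4, 2, 10))  # ~0.73 mm: strong blur
print(blur_circle_mm(50, 5.6, 2, 10))  # ~0.18 mm: gentle blur
```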

These are shortcomings that I believe software, combined with more processing power, will eventually be able to address. I could easily imagine a powerful software defocusing feature that lets you calibrate the kind of aperture you want to simulate, like a dramatic f/1.4 vs. a gentle f/5.6. One that cross-references your photos with others in a database to simulate both background and even extreme foreground defocusing. And even blurring that smartly analyzes the scene to pick out light sources and simulate the hard-edged bokeh circles caused by a lens's overlapping aperture blades. Software tricks may not perfectly replicate the effects of physical lenses and large camera sensors, but they'll be good enough.
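To make that last idea concrete: one way to get those hard-edged polygonal highlights in software is to convolve with an aperture-shaped kernel instead of a Gaussian. A sketch using a hypothetical six-blade aperture; a real implementation would apply this only to the masked background, ideally in linear light:

```python
import cv2
import numpy as np

def aperture_kernel(radius, blades=6):
    """Build a filled regular-polygon kernel approximating the
    shape formed by a lens's overlapping aperture blades."""
    size = 2 * radius + 1
    kernel = np.zeros((size, size), dtype=np.float32)
    angles = np.arange(blades) * 2 * np.pi / blades
    pts = np.stack([radius + radius * np.cos(angles),
                    radius + radius * np.sin(angles)], axis=1)
    cv2.fillConvexPoly(kernel, pts.astype(np.int32), 1.0)
    return kernel / kernel.sum()

image = cv2.imread("photo.jpg").astype(np.float32)
# Convolving with the polygon renders bright points as
# six-sided highlights, unlike a Gaussian's soft falloff.
bokeh = cv2.filter2D(image, -1, aperture_kernel(radius=15))
cv2.imwrite("hexagon_bokeh.jpg", bokeh.astype("uint8"))
```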

The threshold for "good enough" is what I'm worried about. I really don't mind the use of software to simulate hardware effects--we use them all the time without thinking about them. Cameras have built-in noise-reduction algorithms that use image databases to artificially boost the clarity of images. Lightroom is chock-full of software filters that simulate film grain and lens vignetting--both analog traits. There's a very valid place for image post-processing, as long as you have a grasp of what you're trying to simulate and where those effects originated. Bokeh as an automated post-processing effect doesn't scare me; it's the lack of understanding of how and when bokeh should be used that could be a slippery slope. Instead of a tool used to appropriately draw attention to a subject, it may just become something that's used for the sake of looking "artsy". We're already on that road--how do you think most people use Instagram's radial blur tool?

Instead of simply touting algorithms and gee-whiz wizardry in its blog post, Google Research should be educating users about the nuances of bokeh--the whys and hows of photographic technique as a complement to its new software tools. And while these tools may not impress me right now, they are at least very interesting as yet another example of how new photography technology is slowly finding its place alongside technique.