    How RAW Photography Will Change Smartphone Cameras

    One of the things we think Apple does better than other smartphone manufacturers is build great cameras into its phones. It's one of the reasons the iPhone is one of the most popular cameras in the world, period. Based on our experience, the iPhone 5S' camera produces better-looking photos than those on high-end Android phones like the Nexus 5 and HTC One M8, and it's a safe bet that the next iPhone will have yet another camera upgrade. Sony currently supplies the small CMOS camera sensor in the iPhone, and it's also the supplier of camera sensors for a variety of Android phones. The difference in photo quality between those devices, then, can partially be attributed to the lens system used. But photo quality is also tied to the imaging software built into the phone's OS. And on that front, Android may take a leap over iOS later this year.

    While Apple is opening up manual camera controls to developers in iOS 8, one feature that's sorely lacking is support for RAW photo capture. And coincidentally, that's one feature that Google is bringing to Android L--support for camera apps to write raw pixel data from the camera sensor as a DNG (digital negative) file. While this may not sound like a big deal for most smartphone users, it is in fact a huge deal for photographers who are doing more than just taking photos to immediately share on social networks. As I've said before, the post-processing of a photo is just as important to the whole of the photography process as the act of snapping the shutter. The ability to save smartphone photos as RAW files instead of just JPEGs is the equivalent of an immediate and free upgrade to the camera, regardless of the sensor make.

    Photo credit: iFixit

    To understand the benefits (and costs) of RAW, let's quickly go over the limitations of JPEG images. JPEG is a lossy file format, using image compression algorithms to reduce the file size of an image while retaining as much detail as possible. Standard JPEG settings allow for a compression ratio of 10:1 from the original image without noticeable reduction in detail, especially on small screens. JPEGs are also most commonly saved with 8-bit color profiles. That means that each of its RGB color channels tops out at 256 gradations. 256 levels of brightness for each color channel is plenty for a photo, but camera sensors can actually record much more detail than that. Digital imaging chips can process light coming into sensors at 12 or 14 bits--light data that is lost when converting a photo to a JPEG. That extra data, when run through a RAW image processor, allows for more flexibility when editing and helps avoid image artifacts like light banding.
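    To put those numbers in perspective, here's a minimal NumPy sketch of the arithmetic: it counts the tonal levels at each bit depth, then quantizes a smooth 14-bit gradient down to 8 bits the way a JPEG conversion would--which is exactly where banding comes from.

    ```python
    import numpy as np

    # Tonal levels available per color channel at each bit depth
    for bits in (8, 12, 14):
        print(f"{bits}-bit: {2 ** bits:,} levels")

    # Quantize a smooth 14-bit ramp down to 8 bits, JPEG-style
    ramp_14bit = np.arange(2 ** 14)                         # 16,384 distinct tones
    ramp_8bit = np.round(ramp_14bit / (2 ** 14 - 1) * 255)  # squeezed into 0..255
    print("distinct tones after 8-bit conversion:", len(np.unique(ramp_8bit)))  # 256
    ```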

    Another limitation of JPEGs is the inconsistency of compression engines between smartphones. The amount of compression used to save a photo on the iPhone is different than that of an Android phone, and can vary between camera apps. For example, a 13-megapixel photo taken on the new OnePlus One Android phone is compressed to a file between 1MB and 2MB at the highest quality setting. The iPhone 5S, using an 8MP sensor also made by Sony, saves JPEG photos that are also around 1.5MB each. (By comparison, the 14MP camera on the Phantom 2 saves 4-6MB JPEGs.) So where did that extra megapixel data go? While some camera apps have JPEG quality settings, the amount of compression isn't always transparent, so you don't know if you're getting the best possible photo you can from your phone.
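    You can see how much the quality setting alone swings file size with a few lines of Pillow; the filename here is just a placeholder for any photo you have on hand.

    ```python
    from io import BytesIO
    from PIL import Image

    img = Image.open("sample_photo.jpg")  # placeholder: any test image
    for quality in (95, 85, 75, 60):
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=quality)  # re-encode in memory
        print(f"quality={quality}: {buf.tell() / 1_000_000:.2f} MB")
    ```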

    Photo credit: DPReview

    RAW image files eliminate that ambiguity, because they just store the unfiltered image data taken from the camera sensor. And the best part is that saving in RAW isn't a hardware limitation. All digital camera sensors have to pass that raw information through the phone's image processors--it's up to the OS and camera software to give users a way to save that data before it gets lost in the JPEG output. The high-end Nokia Lumia phones have RAW photo capability, and previous phones like the Lumia 1020 were granted RAW file saving with a software update. DPReview ran a comparison of RAW and processed JPEG photos with the Lumia, and I ran my own tests with a small-sensor camera to show you the image detail differences.
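    If you want a feel for what "developing" a DNG yourself looks like, here's a minimal sketch using rawpy (Python bindings for LibRaw) and imageio; the filename is hypothetical, and the settings are just one reasonable starting point, not the only way to do it.

    ```python
    import rawpy    # LibRaw bindings: reads DNG and other RAW formats
    import imageio

    # Develop the raw sensor data ourselves instead of trusting the
    # camera's built-in JPEG engine.
    with rawpy.imread("photo.dng") as raw:        # hypothetical DNG file
        rgb = raw.postprocess(
            use_camera_wb=True,    # start from the camera's white balance
            no_auto_bright=True,   # keep manual control over exposure
            output_bps=16,         # 16 bits per channel instead of 8
        )
    imageio.imwrite("photo_developed.tiff", rgb)
    ```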

    Tested: Google Camera vs. Best Android Camera Apps

    So you've picked up a spiffy new Android phone, but the camera interface isn't to your liking. Even if you don't have any strong feelings either way, you may still wonder if there isn't something better out there. The Play Store has plenty to choose from, but most aren't doing anything particularly impressive. A few might be worth your time, though, and of course Google has thrown its hat into the ring recently with a stand-alone photography app. Let's see how the Google Camera app stacks up against the best third-party camera options.

    Photo credit: Flickr user janitors via Creative Commons

    Google Camera

    If you have a Nexus or Google Play Edition device, this is the stock camera app. For everyone else, it's an alternative downloaded from the Play Store. It's a complete redesign of the old default app from AOSP that fixes many of the issues people have been complaining about in the camera UI for years.

    This is by no means a unique feature among your camera options, but Google's camera app finally shows the full sensor frame. Previously, it would crop the top and bottom of 4:3 images in the viewfinder, making it hard to frame the shot. It now gives you the option of taking wide or square shots (the crop depends on the device). This alone makes it a better app for Nexus users.

    Some of the "advanced" features we used to see in the stock camera are gone with this new version, which might make it a deal breaker for some people. There's no timer mode, no white balance, and no color control. The user base for these features is probably smaller than the complaints online would make you think, and you DO still have manual exposure control. The rest of the features will probably trickle back in over time.

    In Brief: iOS 8's Manual Camera Controls

    We talked about this on last week's podcast, but I wanted to talk more about the changes coming to the iPhone in Apple's upcoming iOS 8. The camera app in the new iOS will presumably work on current iPhones, in addition to whatever iPhone models Apple releases this year. New features mentioned in the WWDC keynote include a built-in time-lapse recording mode and shutter delay timer, but the more exciting capabilities for photographers are the manual ISO, shutter speed, exposure compensation, and white balance settings. According to AnandTech, only exposure bias will make it into the default camera app, while the rest of the manual controls are exposed through the API for third-party apps to surface and exploit.

    Unfortunately, while these new controls are welcome additions to the iPhone, there's been no mention of any RAW shooting capability. The updated iCloud will support RAW file syncing for photos imported from external sources into iPhoto, which gives hope that Apple may announce RAW photo capabilities in the Fall with new hardware. RAW photography is already available on some smartphones like the Nokia Lumia 1020, and has been rumored to be an upcoming Android feature. While smartphone camera sensors are still limited by their physical size and lens, lossless RAW photography will avoid JPEG compression and let you make important post-processing adjustments like white balance. DPReview's evaluation of RAW processing of Lumia 1020 photos was promising, and I've had success using RAW processing to improve the photos taken by our DJI Phantom 2 Vision+ quad.
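    White balance is the canonical example of why RAW matters: with a DNG, the color multipliers are applied at develop time rather than baked into the pixels. Here's a hedged rawpy sketch of that idea, with a hypothetical filename and purely illustrative multipliers:

    ```python
    import rawpy

    # Choose white balance after the fact; a JPEG would already have the
    # camera's choice burned into its pixels.
    with rawpy.imread("phone_shot.dng") as raw:   # hypothetical capture
        rgb = raw.postprocess(user_wb=[2.0, 1.0, 1.5, 1.0])  # R, G, B, G gains
    ```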

    Norman
    Why Adobe Lightroom Mobile for iPhone is a Killer App

    My favorite program to use on my desktop PC at home and MacBook Air is Adobe Lightroom. Other than the Chrome browser, it's the most essential app I rely on for day-to-day computing. I could even argue that it's one of the few reasons I still need an x86 PC--to process and develop my RAW digital photos. That's something that smartphones and tablets just aren't good at yet, even with the numerous photo tweaking apps available and the great displays on devices like the iPad Air. Adobe's goal for Lightroom is to convince photographers of all skill levels that the post-processing of digital photos--the bits-and-bytes equivalent of analog film development--is just as important to photography as the act of setting up and snapping the shutter button. It's a sentiment I agree with.

    That's why I was a little disappointed by the release of Lightroom Mobile for iPad earlier this year. The way the app was designed--and the constraints imposed by the iPad and iOS--made it difficult to incorporate the app into my photo developing workflow. The iPad app was a way to load synced Smart Previews of photos from your desktop to the tablet and do light tweaking or flagging. You could send JPEGs from your iPad's camera roll back to Lightroom, but not RAW files. There was no way to use the iPad as a funnel to ingest RAW photos from your DSLR on the go and have them pop up back on the desktop.

    Today's release of Lightroom Mobile for iPhone doesn't change much, but the shift from tablet to smartphone is quite a big deal (I'll get to why in a moment). Adobe did address some feedback from users of Lightroom for iPad, adding the ability to give star ratings to photos in synced library collections, as well as custom sort orders. Functionally, the iPhone version has feature parity with the iPad version, just rescaled for the iPhone's aspect ratio and screen. The app is still free and connects to the updated Lightroom 5.5 through a Creative Cloud subscription.

    Conventional thinking would make it seem like Lightroom Mobile is a better fit for iPad than iPhone, given the tablet's better screen. But I think smartphones are actually the more natural fit for this application, because they're the devices with the better cameras. The iPhone is the most popular single camera platform in the world, and the photos taken with it are rarely processed the same way you would a RAW photo from a DSLR. With Lightroom Mobile and Cloud syncing, all the JPEG photos taken on the iPhone can be piped (at full resolution) back to Lightroom on the desktop for post-processing. That makes it much more useful than Apple's own PhotoStream for organizing and making use of your smartphone photos instead of letting them sit idle on the phone. In turn, the smartphone becomes much more useful as a complementary camera to my DSLR.

    Living with Photography: Testing the Sony a6000

    For the video we shot with The Wirecutter's Tim Barribeau discussing what makes a great entry-level DSLR camera, I rented Sony's new a6000 mirrorless interchangeable lens camera. It wasn't part of my quest to find a great companion camera for my DSLR, but I wanted to use it on camera as an example of a mirrorless alternative to a relatively cheap DSLR. I've only had the a6000 for a few weeks, and haven't tested it as extensively as the RX100 II or Fuji X100s. While those and the other cameras I've tested so far this year are technically mirrorless cameras (in that they don't have the flipping mirrors and pentaprism optical viewfinders), the a6000 is the first mirrorless interchangeable lens camera I've really used since my NEX-5R. I was curious about how MILCs have progressed since I last regularly used one. Since the NEX-5R came out in 2012, Sony has launched several mid-range follow-ups, and even nixed the NEX moniker with the a5000 and a6000 cameras.

    So the following are some testing notes from my relatively limited time with the a6000. It never served as my primary camera as a daily carry or the main camera for any major events; I just put it in my bag alongside the 6D and used it when I had spare moments. The lens I paired with it was the Zeiss 35mm f/2.8--an $800 piece of glass that costs more than the a6000 body itself. It's roughly the equivalent of a 50mm lens on a full-frame camera, which is a focal length I've grown quite fond of. Long enough for flat portraits, wide enough to capture small scenes by taking a few steps backward. Together, this is quite a nice kit--definitely not something I would consider entry-level or a camera to learn manual shooting with. It's not a high-end camera, either, lacking a full range of physical controls or a full-frame sensor. It occupies an interesting middle ground--a sub-$1000 camera that's aimed at experienced shooters who are either switching to their first MILC or upgrading from something like a first-gen NEX or older Micro Four Thirds body. It's for people who are considering something like an Olympus OM-D E-M1 or a Canon 60D. In other words, I'm not the target user for this camera.

    Tim from The Wirecutter recently ranked the a6000 as his second favorite MILC under $1000, placing it behind the Olympus OM-D E-M10. Having not used the Olympus, I can't speak to it, but I trust his judgment. I can't recommend one over the other, so consider my testing notes extra experiential anecdotes that may help you make a more informed purchasing decision. Let's start with the size of the camera.

    Testing: Sony RX100 MK II Compact Camera

    The search for a pocketable companion camera for my DSLR continues. For the past month, I've been shooting with the Sony RX100 II, the second of their three point-and-shoot cameras with 1" sensors (correction: while the sensor is called 1-inch, its diagonal is actually only about 16mm). This is the same sensor size as that found in Nikon's CX line of interchangeable lens cameras, and it's far larger than the sensors you'd find in cheap point-and-shoots and smartphones. For reference, the iPhone 5S has a 1/3.2" (5.68mm) sensor, while Canon's PowerShot G12 has a 1/1.7" (9.5mm) sensor. A 1" sensor is roughly the same size as a Super 16mm film frame, but still considerably smaller than the Micro Four Thirds and APS-C sensors found in higher-end mirrorless cameras.
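    Those naming conventions make sensor sizes hard to compare at a glance, so here's a quick sketch that turns commonly quoted sensor dimensions into diagonals and full-frame crop factors (the dimensions are the usual published figures, not manufacturer-verified):

    ```python
    import math

    # Commonly quoted sensor dimensions in mm (width, height)
    sensors = {
        "full frame":         (36.0, 24.0),
        "APS-C (Sony)":       (23.5, 15.6),
        '1"-type (RX100 II)': (13.2, 8.8),
        '1/1.7" (G12)':       (7.6, 5.7),
        '1/3.2" (iPhone 5S)': (4.54, 3.42),
    }

    full_diag = math.hypot(*sensors["full frame"])
    for name, (w, h) in sensors.items():
        diag = math.hypot(w, h)  # sensor diagonal from width and height
        print(f"{name}: diagonal {diag:.1f}mm, crop factor {full_diag / diag:.1f}x")
    ```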

    While the RX100 II has already been succeeded by the new Mark III model being released this month, that $800 camera is actually considered a step up, and Sony will keep selling the Mark II at its $650 price. I previously tested the first RX100 when it was released in 2012, and the Mark II's upgrades are more about features than image quality. An improved back-illuminated CMOS sensor claims better low-light sensitivity, but it has the same zoom lens as the first release (still available for $500). It also adds new features like a tilting LCD, hot shoe, 24p 1080p video recording, and Wi-Fi.

    In my testing of this camera, I wanted to figure out whether it could be a sufficient alternative to my DSLR in places where carrying around a large camera would be impractical. So I took it along to a friend's wedding, on a day trip wandering around the Bay Area, and to an evening event at the local science museum. Each of these were places where I could have taken the DSLR, but chose not to in order to 1) try to enjoy the event more and not make it a photography trip, and 2) challenge the RX100 II in situations where people would normally take smartphone photos. Here's what I found.

    In Brief: Photographer Dan Winters' Man Cave

    Even if you've never heard the name, it's very likely that you've seen Dan Winters' work. The famed editorial photographer has taken cover photos for just about every prestigious publication (his Wired magazine photos are a favorite of mine), and his subjects have ranged from actors to musicians to entrepreneurs (team Oculus included!) to heads of state. But Winters' interests expand beyond photography; this Wired gallery of his studio and workspace shows obsessions (in the best sense of the word) with military history, mid-century ephemera, and entomology. The artifacts and handmade props that permeate his creative space are as beautiful as they are varied, but project a sense of belonging--together forming a portrait of the man. And yes, that does look like a jetpack on the floor of his shooting gallery. I'd love to be able to see him at work. Winters' friend, the actor Nick Offerman, also wrote this lovely tribute to the photographer for Time back in 2012.

    Norman
    Living with Photography: Eyes Up Here

    Going to run through a simple but practical tip today--something I've been trying out lately that has been working well. It's about being able to properly focus on your subject when you're using your camera's autofocus system. As I've mentioned before, one of the reasons I prefer using a DSLR over a mirrorless camera is the optical viewfinder. The clarity of the real-time image you see through a DSLR's lens, by way of a mirror, cannot be matched by an LCD or EVF, even though those systems let you see exactly what the camera sensor is registering. It doesn't matter if your camera's LCD or EVF has 3.8 million dots; the resolution of reality is only limited by the rods and cones in your eyes. That degree of clarity is a tremendous help with finding focus, with the trade-off being that you don't get the same kind of digital overlay that you would get on an LCD, like focus peaking.

    My method for finding focus is a pretty standard one--using the center focus point in one-shot AF mode to find my focus and then reframing the shot to get the composition I want. There are a few reasons I choose center focus instead of full-on autofocus, which may apply to your shooting style. The first is that DSLRs have a limited number of autofocus points. On the Canon 6D, that number is on the low side, at 11. A high-end model like the 5D Mark III has many more (61 points), which makes autofocus much more useful and accurate as a guiding tool. But even with many autofocus points, not all of them have the same abilities. The autofocus points on a DSLR are phase-detect sensors, but some of them can only detect contrast along one dimension (either vertical or horizontal), while others are cross-type, meaning that they can detect details across two axes. The center autofocus point is also typically the best at doing its job, with the center point on the 6D rated to detect detail at a full stop of light lower than its siblings.

    Additionally, the array of autofocus points in a DSLR is usually grouped toward the center of the frame, in a cross-like pattern. You're not going to get autofocusing ability near the edges of the frame, much less the corners. So while you can manually set which specific autofocus point (or grouping of points) to use in a specific shot, that's only going to work if the subject you want in focus is covered by that array. In the simulated viewfinder image below, the auto-AF system wouldn't be able to focus on either of the players' faces, or even the basketball. Even if your subject is within the AF grid, manually selecting that point can take precious seconds away from the shot, and compromise your ideal composition. By default, DSLRs require that you press the AF point selection button before tapping the directional pad to select your AF point. But even when you turn this requirement off in the camera's menus, I find that it's not as fast as using center focus and then readjusting the framing.

    Image credit: DPReview

    The problem with using the center-point autofocus method, though, is that you leave yourself susceptible to losing focus on your subject through the simple act of reframing.

    Borrowing Cameras: How to Test Before You Buy?

    For the past week, I've been testing Sony's RX100 MK II compact camera, as part of my quest to find the perfect companion camera for my DSLR. Along with the Fuji X100s and Sony's A7, it's a camera that I've borrowed from BorrowLenses, who have graciously lent me cameras and lenses for trying out and experiential testing. In each of these cases, I've used the cameras for a solid month, which has been the baseline for how long we want to live with products to get a feel for how they can fit into our daily lives and workflow.

    With cameras, I've found that getting used to a new model isn't like swapping to a new laptop or smartphone; while there are plenty of quantitative attributes to evaluate, the qualitative elements are oftentimes the most important to consider. Image quality as a technical benchmark is of course paramount, but photographers also need to know how the camera feels when they're shooting with it. Ergonomics, button placement, menu design, and playback responsiveness are qualities that affect every photo you take. The fact that the zoom and focus rings on Nikon cameras rotate in the opposite direction of those on Canon cameras is a completely arbitrary design difference, but it could be a meaningful one if you're switching ecosystems. Those qualitative traits don't just vary from manufacturer to manufacturer; they're different for every single camera.

    With my testing of the RX100 MK II, another interesting thing happened--Sony announced the impending release of the RX100 MK III, a successor that improves on the MK II's processor and optics while bringing the price up by over $100. That doesn't make my testing of the MK II moot; in fact, it underscores an important idea that we too easily forget: you don't necessarily need the latest gear. My current use of the MK II won't just provide context for judging whether the MK III's improvements are worth it; it will also show whether last year's camera is good enough for my needs.

    How To Make Your Own Giant Papercraft Head

    If you saw us at Maker Faire this year, you may have caught us bumbling about wearing big versions of our heads, giving high-fives and confusing children. A bunch of people we met asked what these masks were made of, and how we built them to look so much like our own heads. These paper heads were constructed using a combination of cool techniques and software: photogrammetry with PhotoScan, modeling in Maya, and papercrafting with Pepakura. The heads took about two weeks to build from start to finish, including several long nights of paper cutting and gluing, right up until the morning of Maker Faire. They were laborious to make, but it's technically a project that anyone can do without previous papercrafting experience. Of course, knowing what I know now about the process, I would've done several things differently to streamline the build. Having just a few more days for putting the heads together would've gone a long way toward making them look even better, but I think they still turned out great for a first attempt.

    So for anyone who wants to build their own giant papercraft heads, I'm going to recap the steps of the project from start to finish, guiding you with tips that I learned from my first time around. Photogrammetry expert Brandon Blizard, who did the modeling work on this project, also has some tips for how to image your visage for use in Pepakura. And for those with more experience with this kind of Pepakura papercrafting, please share your own tips in the comments below.

    Step 1: Capture Your Head with Photogrammetry

    The first thing we needed to do was get a digital copy of our heads. This could've been done a few different ways, ranging from a completely manual process to a fully-automated one. On one end, we could have sculpted a model from scratch in a 3D modeling program like Maya or SketchUp, which requires 3D sculpting skills we didn't have. This method also doesn't give us an image texture for our faces. On the other end, we could have generated a computer model of our heads using a 3D imager, like a hand-held laser or optical scanner. 3D Systems makes one, but these tools are generally pretty expensive.

    We went with a middle ground--somewhat automating the modeling with photogrammetry. That's the process of generating a 3D model from numerous photographs, all taken around one subject from a multitude of angles. Photogrammetry apps like 123D Catch would've worked for our needs, but we invited Brandon to our office to take high-resolution photos using a DSLR rig to import into a piece of software called PhotoScan.
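    Under the hood, the first step these tools automate is finding matching feature points between overlapping photos; the camera positions and the 3D point cloud are then triangulated from those correspondences. Here's a rough sketch of that first step using OpenCV's ORB detector (the filenames are placeholders):

    ```python
    import cv2

    # Find candidate feature correspondences between two overlapping photos;
    # photogrammetry software does this across hundreds of images.
    img1 = cv2.imread("head_angle_01.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("head_angle_02.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force match binary descriptors, keeping only mutual best matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} candidate correspondences between the two views")
    ```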

    We've previously detailed the basics of photogrammetry software and hardware, along with some best practices for taking your photogrammetry pictures, in these guides.

    The Best Cheap Camera Under $200

    If you want to buy a decent, basic point-and-shoot for less than $200, you’ll probably be best off buying the $179 Canon 340 HS (the IXUS 265 HS elsewhere). It combines extreme ease of use with sharp photo quality, complete with vibrant colors and low noise levels. This means you’ll be able to just flip this thing on and take a good photo without having to fiddle with manual controls (good for situations where you won’t have time or patience to focus much on your camera, like outings with friends or kids’ birthday parties). The 12x zoom capabilities and built-in Wi-Fi that can sync photos to your computer are nice features to have as well.

    This is a replacement for our pick from last year, the Canon 330 HS, which manages to fix some of the more frustrating problems with that model. Notably, the old 330 HS had a major battery life bug that made it just about unusable for recording videos.

    The Canon 340 HS can fit in your pocket, has a 12x optical zoom, and the automatic image quality to ensure that, really, all you have to do is point and shoot.

    But the 340 HS isn’t perfect: Though better than its predecessor’s, its battery life leaves something to be desired. Photos taken while zoomed in all the way at 12x can be fuzzy due to the nature of shooting at full zoom. And its high-burst mode, while useful, can’t shoot at full resolution, meaning you’re sacrificing speed for image quality if you shoot in that mode.

    Maker Faire 2014: Zooming into GigaPixel Macro Photos

    We've seen gigapixel panoramas stitched together from thousands of individual photos for landscape photography, but the same process can also be used to take microscopic images. We chat with the makers at GigaMacro, who've built a rig to take incredible macro photos of insects, found objects, and even electronics.

    Sony's RX100 Mark III Compact Camera

    I love my full-frame DSLR, but for the past six months, I've been on a quest for the perfect pocket companion camera to pair with it. I've tested compact full-frame cameras like Sony's A7, rangefinder lookalikes like the Fuji X100s, and even given smartphone cameras another chance. Sony's newly announced RX100 MK3 may be what I'm looking for. I was a fan of the first RX100 when it was released two years ago, and actually just got an RX100 MK2 in the office to test. I'll now be using that to take reference photos for comparison when I eventually test the new model in June. The Mark III's most notable new features are a faster image processor that allows for higher-quality 1080p video recording, and a new 24-70mm zoom lens that can open up to f/2.8 when extended to 70mm. The previous RX100s opened up to f/1.8 at 28mm, but quickly closed down to f/4.9 when zooming in just a bit. There's also Wi-Fi and a built-in EVF that pops up for bright daytime shooting.
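    That long-end aperture difference is bigger than it sounds. Exposure scales with the area of the aperture, so here's a quick back-of-the-envelope comparison using the quoted f-numbers:

    ```python
    import math

    # Light gathered scales with aperture area, i.e. with (1 / f-number)^2
    old_f, new_f = 4.9, 2.8
    ratio = (old_f / new_f) ** 2
    print(f"f/{new_f} gathers {ratio:.1f}x the light of f/{old_f}, "
          f"about {math.log2(ratio):.1f} stops more")   # ~3.1x, ~1.6 stops
    ```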

    The technical specs look good, but three trends worry me. The first is Sony's now apparent annual refresh cycle for this camera line; I really hope they don't rush these updates to market. Second is pricing--the MK2 was more expensive than the MK1, and at $800, the new MK3 gets a price bump. And finally, weight and size--these RX100 models have slowly gotten both larger and heavier with each revision. The new processor and features also consume more power, so battery life has gone down. At some point, these features take away from the original appeal of the RX100. The new model should push the prices of the MK2 and first RX100 down even further, so getting a used MK2 may be worth considering. Can't wait to test this.

    FireFLY6: A Different Kind of RC Multi-Rotor

    With the exploding popularity of multi-rotor RC aircraft, the market is becoming flooded with models in all sizes and rotor counts. While there are definite variations in quality and performance, nearly all of these models are constrained by the limitations of a pure multi-rotor platform. An upstart company in New Hampshire has created a hybrid multi-rotor design that separates itself from the herd. Its creators believe that this machine retains all of the things that we love about multi-rotors, while adding some useful new capabilities.

    The aircraft of note is the FireFLY6 from BirdsEyeView Aerobotics. It is a Y-configuration tri-rotor that has some unique attributes. The machine’s most obvious feature is its wing--a useless appendage on standard multi-rotors. On the FireFLY6, however, the forward two rotors can tilt 90 degrees to transition the ship from a hovering tri-rotor to a forward-flying airplane (where the wing proves quite useful). To equate this capability in full-scale aircraft terms, think Harrier Jump Jet or V-22 Osprey.

    What's the Benefit?

    Despite their beloved camera-toting abilities, most multi-rotors are only capable of rather pedestrian forward speeds. Flyers that want to cover a lot of ground with each flight are typically forced to abandon the rotors and adopt conventional fixed-wing models. By doing so, they forfeit the ability to hover and operate from confined spaces. These are the same constraints that spurred development of the Harrier and Osprey: the need to take off and land with little (or no) runway and then get where you’re going in a hurry.

    BirdsEyeView’s founder Adam Sloan says, “I’ve long believed that the ultimate flying robot is one that can take off and land vertically, but also transition to efficient forward flight.” As off-the-shelf multi-rotor electronics matured to the point that hands-off hovering was feasible, Sloan began piecing together the elements of his dream machine. By February of 2012, his prototype aircraft was able to execute in-flight transitions from hovering to forward flight.

    The FireFLY6 is a morph of a tri-rotor and a flying wing. The idea is to be able to take off and land without a runway and still be able to fly somewhere quickly.

    As you might expect, it is the transition phase that has given Sloan’s team the most grief. He explains, “We had to learn the best techniques for piloting into and out of hover situations through trial and error. Learning how to do so reliably (both in terms of materials used and method of piloting), and getting that dialed-in to the point where it is now, was probably the biggest development hurdle.”

    Aside from the potential performance benefits of an airplane/multi-rotor hybrid, Sloan points out that there are longevity perks as well. He notes, “I think those who are familiar with foamies (model aircraft constructed of foam) will find themselves doing a lot less repair work. Vertical takeoff and landing is a much less violent process than your average foamie skid landing (sliding on the ground without landing gear).”

    But as with any “Jack of all trades” endeavor, you have to make some sacrifices.

    Hands-On with Parrot's Bebop Drone Quadcopter

    Last week, we attended the unveiling of Parrot's newest RC quadrotor, the Bebop Drone. Though it was founded as an audio and car accessories company, Parrot has been moving into the consumer RC toy market, starting with the AR.Drone quadrotor that it released three years ago. The Bebop will be the company's fourth RC quad, after the AR.Drone 2 and the MiniDrone that was announced at CES. The AR.Drone--sold for $300 online and at retail stores like Brookstone--was successful for the company, and Parrot clearly wants to take advantage of rising quadrotor interest with the Bebop, which will be its high-end model. Like the AR.Drone, it'll be positioned as a ready-to-fly package for consumers, competing with camera-mounted quadrotor kits like DJI's Phantom 2 but presumably at a lower cost (pricing wasn't announced).

    The Bebop won't be released until much later this year, but Parrot CEO Henri Seydoux confidently demoed early pre-production prototypes for a crowd of attending press in the courtyard of San Francisco's Old Mint museum. We were allowed to fly the prototypes briefly and even demo its FPV feature using an Oculus Rift development kit. Here's what we took away.

    Similarities and Differences with AR.Drone

    Parrot is calling the Bebop a drone, but we all know that it's technically an RC quadrotor. It has autonomous flight capabilities that the AR.Drone didn't have, but this is a very different type of quad that won't replace the AR.Drone line. Parrot designed the Bebop with weight and safety in mind; the quadrotor is lifted by four brushless outrunner motors, spinning at much lower RPM than what you'd find on something like the Phantom. That's because the Bebop weighs much less--380 grams without its foam bumpers--and uses a three-blade design for its propellers. The motors are linked so that if one fails or is blocked, the others immediately stop spinning so the quad doesn't fly out of control.

    Bebop does inherit some design elements from the AR.Drone, though. It's stabilized with the use of accelerometers and gyros, and has a vertical camera underneath its body to calculate flight speed and direction (much like the optical sensor on the bottom of a mouse). An ultrasound sensor monitors altitude up to eight meters, and the onboard GPS is used for flight tracking, but not flight stabilization (e.g. it won't use GPS to compensate for wind at high altitudes). Flight time is estimated to be 12 minutes on the 1200mAh LiPo battery, which takes two hours to recharge. While that's lower than the 20+ minutes offered by the new Phantoms, we've found that we seldom sustain a single flight for more than 10 minutes without landing anyway. Flying and tracking these quads for extended durations can be mentally exhausting. One thing that's nice is that the battery is attached via a standard power connector, so third-party batteries may be usable.
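    Those quoted specs imply a fairly modest average current draw, which fits the low-RPM, lightweight design. A rough sanity check (assuming the full rated capacity is usable, which real LiPo packs aren't quite):

    ```python
    # Average current draw implied by a 1200mAh pack lasting 12 minutes
    capacity_ah = 1200 / 1000          # battery capacity in amp-hours
    flight_hours = 12 / 60             # claimed flight time in hours
    avg_draw = capacity_ah / flight_hours
    print(f"Average draw: {avg_draw:.0f}A across four motors")   # ~6A
    ```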

    Living with Photography: Angling the Plane of Focus

    The lure of bokeh in photography is strong. To the untrained eye, an out-of-focus background is correlated with a better photo, or at least the use of more expensive camera equipment, than a "flat" photo. That's why there's software to artificially add bokeh to photos by strategically blurring the background. And that's why many new photographers use wide-aperture lenses and shoot with the widest f-stop available to them. It's not something I recommend, but there's also technically nothing wrong with shooting wide open. You just have to know what you're getting into, and whether or not it's giving you the kind of photos you really want.

    Photo credit: Flickr user janitors via Creative Commons.

    Case in point: ever since getting the Canon 24-70mm f/2.8 zoom lens, I've shot the vast majority of my photos at the maximum aperture. A glance at my Lightroom Analytics from WonderCon, for example, shows that all but four of my photos were taken at f/2.8. Part of this is because I know what kind of depth of field that aperture gives me with a full-frame sensor, and part of it is because I'm not attuned to subtle DOF changes in the moment, when I'm thinking about shifting ISO and shutter speeds. Optical viewfinders don't do a very good job of approximating the depth of field of a resulting photo. I'm sticking with f/2.8 for now while I try to master the camera's other attributes, and taking advantage of the amount of light it gives me.
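    If you want to see concretely how thin f/2.8 gets on a full-frame body, here's a small sketch using the standard hyperfocal-distance approximation and the conventional 0.03mm circle of confusion for full frame (the shooting distance is just an example):

    ```python
    def dof(focal_mm, f_number, subject_mm, coc_mm=0.03):
        """Near/far limits of acceptable focus, in mm."""
        h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance
        near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
        far = (subject_mm * (h - focal_mm) / (h - subject_mm)
               if subject_mm < h else float("inf"))
        return near, far

    # A portrait at 70mm f/2.8, subject 2 meters away
    near, far = dof(focal_mm=70, f_number=2.8, subject_mm=2000)
    print(f"In focus from {near / 1000:.2f}m to {far / 1000:.2f}m "
          f"(only {(far - near) / 10:.0f}cm deep)")   # roughly 13cm
    ```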

    But when shooting at a fixed and relatively wide aperture, there are smarter things you can do with your composition that dramatically affect how a photo turns out. That's something I've recently become much more aware of: while shooting, my brain tries to visualize the plane of focus where the parts of my subjects are sharp. By adjusting the camera angle and composition, I can manipulate that plane of focus to put more of the subject in focus without having to change the aperture.

    The plane of focus (not to be confused with a camera's focal plane) is an imaginary two-dimensional plane that "slices" through your scene. Everything lying on that plane is in focus, and objects in front of or behind it are out of focus, to varying degrees depending on the camera and optics. The plane lies parallel to the camera sensor, so as you tilt your camera, the plane moves along with it. For some of the photos below, I've Photoshopped an approximation of the plane of focus as it intersects with the subjects.

    Let's take a look at the photo above, taken when we visited Frank Ippolito for the painting of the Zoidberg Project. As Frank was working, I was maneuvering around him snapping photos of the painting process. I wanted to capture the detail not only of the paint job, but of the fine wrinkles and creases in the mask sculpt. So using autofocus, I pinpointed the focus on Zoidberg's tentacles--the equivalent of his nose. And while I got those tentacles in focus, the result was an unflattering photo, because the rest of the mask was lost in the bokeh. When taking portraits, one of the most important parts of a subject to get in focus is the eyes--it's what viewers draw their own eyes toward--and focusing on the nose usually means losing focus on the eyes. And yes, while this Zoidberg mask didn't technically have eyes yet, I wanted to find a way to get both his eye sockets and tentacles in focus at the same time. The solution was simple.

    In Brief: Yosemite Drone Ban and FAA Regulations

    Yep, we've heard. The National Park Service made waves over the weekend with the announcement that it is banning the use of drones (what the FAA technically classifies as Unmanned Aircraft Systems) in Yosemite National Park. The reasoning for the ban isn't, well, unreasonable: drone usage in the park has increased over the past few years, and can impact the wilderness experience for visitors. It's similar to the logic Yosemite uses to arbitrarily limit the number of people who can book camp sites in the park every season--overcrowding can disturb the natural environment. But the legal basis for instituting the drone ban--which carries a fine and jail time--may not apply to the use of drones, and it's being challenged (rightfully) by legal experts. The problem is that existing regulations aren't sufficient to cover this new ground, with the FAA still a year away from its mandated deadline to figure out new guidelines for drone safety (a deadline it may not even meet). And self-regulation is not likely a viable option. NPR's All Things Considered today interviewed FAA Administrator Michael Huerta about the challenges of writing these new rules, without which the use of drones for sensible commercial practices like reporting the news is prohibited. Time is not on the FAA's side, as consumer quadcopters get cheaper and more powerful, even though other countries already have unmanned aircraft-specific operational guidelines in place. We'll have more on the current practices and self-imposed etiquette standards of drone-flying hobbyists this week.

    Norman
    Quadcopter Fun Flight Fridays #1

    Welcome to a new series we're calling Fun Flight Fridays. We're having a lot of fun learning to fly the DJI Phantom 2 Vision+ quadcopter, and want to share our test flights with you as we take the drone to new heights around the Bay Area and on our travels. This week, Jeremy and Norm explore the airspace around San Francisco's AT&T baseball park.