Latest Stories: Photography
    The Best Entry-Level DSLR Today

    The Nikon D3300 is, simply put, the best low-end DSLR on the market. It combines some of the best image quality we’ve ever seen at this price with excellent battery life, easy-to-use controls, and a guide mode to help you learn to use it—all for the extremely reasonable price of $650. Mirrorless cameras are still more portable, but if image quality is your focus, you can’t beat the D3300 for the price.

    Photo credit: Flickr user hrns via Creative Commons.

    Last year, when we put together a previous version of this recommendation, we begrudgingly picked the Canon SL1. But honestly, none of the entry-level DSLRs we considered were really worth it: they were all too expensive, lacked image quality, or didn’t have the features we wanted. That has now changed thanks to the D3300. Just look at these comparison photos—it’s not even close.


    But, for a lot of people, a mirrorless camera will do just as well as a DSLR. If you’re looking for something smaller, lighter, and more affordable, an entry-level mirrorless camera will provide you with the same sharp, bright images as this camera. You just won’t have an optical viewfinder or quite as many lenses to choose from.

    10 Uniquely Interesting Places To Mount A GoPro Camera

    As human progress marches on, things that were once enormous and expensive get tiny and cheap. Case in point: cameras. What used to be a family heirloom that would break if you looked at it funny is now a powerful, essentially disposable recording device like the GoPro camera. People do all kinds of things with these little buggers – here are ten of the most interesting.

    Testing: Adobe Lightroom Mobile for iPad

    You could hear my cheers resonate throughout my house last night when I read that Adobe had finally released a mobile version of its Lightroom photo processing application for iOS. This wasn't an unexpected move--there were leaked mentions and details of this program back in January--but it's still exciting to finally see and use it in person. Lightroom mobile is currently an iPad-only app (an iPhone version is coming soon, but there's no word on Android) that's available to download right now from the App Store. I've been testing it since it became available last night and all this morning, and wanted to run through its features and share my initial thoughts.

    Before last night's release, I had been looking for a good way to incorporate my iPad into my RAW photo workflow. Back when I was shooting JPEGs, the iPad was a great device to import photos, using the SD card accessory to transfer full-res JPEGs onto the tablet and Apple's Photo Stream to get those on my desktop PC. When I started saving hefty RAW files, the iPad became much less useful. Yes, you can import RAW photos onto the iPad using the camera card adapter, but the native Photos app isn't smart enough to differentiate between JPEG and RAW duplicates, so you end up with two copies of every photo (I still save JPEGs for fast reviewing purposes). iPhoto for iOS could ingest RAW files, but editing was slow, even on the new iPad Air. Plus, there was no easy way to get those RAW files back to my desktop.
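    For what it's worth, weeding out those JPEG/RAW twins is trivial once the files are on a desktop. Here's a minimal Python sketch of the idea; the folder path and extension list are assumptions, not anything Apple or Adobe ships:

```python
import os

# RAW extensions to look for (an assumed list; extend as needed).
RAW_EXTS = {".nef", ".cr2", ".arw", ".dng"}

def find_jpeg_duplicates(folder):
    """Return JPEGs that share a base name with a RAW file,
    e.g. DSC_0042.JPG sitting next to DSC_0042.NEF."""
    files = os.listdir(folder)
    raw_bases = {os.path.splitext(f)[0] for f in files
                 if os.path.splitext(f)[1].lower() in RAW_EXTS}
    return [f for f in files
            if os.path.splitext(f)[1].lower() in {".jpg", ".jpeg"}
            and os.path.splitext(f)[0] in raw_bases]

# Hypothetical card path, for illustration only.
for dupe in find_jpeg_duplicates("/Volumes/SD_CARD/DCIM/100NIKON"):
    print("duplicate JPEG:", dupe)
```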

    My photo processing workflow then became desktop-oriented, using Adobe's Lightroom to manage all my photos, pushing the ones I wanted to share to Flickr, and manually downloading some to my iPad to review in full resolution. For the purpose of scanning through my photo library, I had been using Mosaic Archive, a paid web service that works as a Lightroom plug-in, uploading your library to its servers so you can review photos and make metadata edits in an iOS app. It isn't for photo editing; it's just for photo reviewing and tagging. Mosaic has a free option that syncs previews of your latest 2,000 photos to review online or through its app--I used it as a Photo Stream substitute.

    But now there's Lightroom mobile, which does offer RAW photo editing capabilities. Well, sort of. Lightroom mobile uses the same Smart Preview system that I love about Lightroom 5. Basically, whenever you import a RAW photo into the desktop version of Lightroom, you have the option to automatically create a small 2.5MB DNG file--a digital negative--that's a resized version of the original photo. It's limited to 2560 pixels wide, but you can edit it just as you would the original RAW file, and Lightroom will sync those edits. Smart Previews are how I sync my Lightroom library between multiple computers via Dropbox, so edits made on my MacBook Air appear on my desktop library, where the originals are saved. Lightroom mobile works in a similar way, but instead of using Dropbox to sync those Smart Previews, it uses Adobe's Creative Cloud storage system.
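    To illustrate just the Smart Preview idea (Adobe's real previews are lossy DNGs generated inside Lightroom, not JPEGs), here's a rough Python sketch that renders a RAW file and caps it at 2560 pixels on the long edge:

```python
import rawpy               # pip install rawpy
from PIL import Image      # pip install Pillow

MAX_EDGE = 2560            # Smart Previews top out at 2560px

def make_preview(raw_path, out_path):
    # Demosaic the RAW file into an RGB array.
    with rawpy.imread(raw_path) as raw:
        rgb = raw.postprocess()
    img = Image.fromarray(rgb)
    # Resize in place, preserving aspect ratio.
    img.thumbnail((MAX_EDGE, MAX_EDGE))
    img.save(out_path, quality=90)

# File names are hypothetical.
make_preview("DSC_0042.NEF", "DSC_0042_preview.jpg")
```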

    Yes, Lightroom mobile requires that you have a Creative Cloud subscription.

    Living with Photography: Testing the Fujifilm X100s Digital Camera

    It's been a while since I've done one of these columns, and that's not because I haven't been taking photos or thinking about photography. On the contrary, I've taken about twice as many photos in the past month as in an average month (just past 22K photos in Lightroom!), both from going to week-long events like SXSW and from testing a bunch of new camera gear. Some of that gear includes new smartphones, like the HTC One M8 that I've been testing, but also new cameras and lenses that I've been lucky enough to get on loan from BorrowLenses. I sent back the Sony A7 full-frame mirrorless camera that I brought to Austin, an experience that made me want to continue testing compact cameras with respectable image quality.

    That led me to the Fuji X100S, the successor to the X100 mirrorless camera that Matt Braga reviewed for us back in 2011. Matt had a lot of good things to say about that camera, including its use of an APS-C sensor in a very compact body (at the time), which, when paired with Fuji's 23mm lens (35mm equivalent), produced really great photos. The X100S, which was released last year, supposedly addresses some of the problems users had with the X100, including slow autofocus speed and a finicky focus ring. It's still $1300, which is a steep price for a mirrorless camera when Sony has cameras like the A6000 and RX1 that also combine a large image sensor with a compact body.

    But three years after the X100 was released, there are still features that make Fuji's rangefinder tribute unique, such as its hybrid viewfinder and full manual controls. It's those features that got me curious about this camera, since I had never shot with a Leica or true rangefinder before. This felt like it could be a good stepping stone to go from DSLR to rangefinder, when most veteran photographers migrate in the other direction. But my interest in the X100S was cemented when Adam relayed an anecdote from a professional photographer friend of his--a portrait photographer for Wired--who now swears by the X100S as his go-to camera. The BorrowLenses rental order was soon on its way, and I've been shooting with this camera for the past week.

    This won't be a review of the X100S, however. You'd do better to find image quality comparisons on DPReview or dedicated photography sites. For this column, I wanted to talk about my new experience shooting with the manual controls of the X100S, and my attempts to frame shots using its off-center optical viewfinder. (Yes, I've included some photo samples too.)

    Shooting Amazing 360 Degree Spherical Panos

    Like a real-life rendition of Super Mario Galaxy, Jonas Ginter has built a rig that lets him capture 360-degree spherical panoramas using mostly off-the-shelf parts. You can find out a bit more about the process on his site (it's in German) or download a variation of the mount he's using from Thingiverse. I hope he posts more about the process, so we can see how he manages to remove the tripod from the shots. (h/t to )

    Tested In-Depth: Sony a7 Full-Frame Mirrorless Camera

    Will and Norm sit down and chat about testing the Sony a7 mirrorless camera. It's the smallest interchangeable lens camera with a full-frame sensor, which means it's comparable to high-end Canon and Nikon DSLRs, but is much more portable. Find test and sample photos from this review here.

    Testing: Sony a7 Full-Frame Mirrorless Camera

    For the past three weeks, I've been testing the Sony a7 camera. It's not a camera I bought, nor one provided by Sony--BorrowLenses was kind enough to give me a free rental to test both their service and the camera. And during testing, I've been carrying it around in my camera bag along with the Canon 6D. They're both full-frame cameras, but they couldn't be more different. While the 6D is a traditional DSLR, Sony's a7 is a mirrorless camera. It's more akin to Sony's Alpha NEX lineup of compacts--cameras that are physically the size of a large point-and-shoot (at least one from around 10 years ago), but have the large image sensors of DSLRs. And in using the Sony a7, I've been brought back to the fun of shooting with my old NEX-C3.

    We've previously talked at length about the differences between mirrorless cameras and DSLRs, so I won't go into those technical details again. The upshot is that mirrorless cameras have the potential to be much smaller and lighter than traditional DSLRs, but omit an optical viewfinder. (The a7 uses a built-in OLED viewfinder, like the one Sony put in the popular NEX-7 mirrorless camera.) This makes the camera a really interesting comparison with the 6D, which is the smallest full-frame DSLR in Canon's line-up. The 6D weighs 1.7 pounds, and when coupled with the awesome 24-70mm f/2.8 zoom lens, weighs almost three and a half pounds. The a7 only weighs 1.3 pounds, and that's with the 35mm f/2.8 Zeiss lens attached.

    Sample photo: f/2.8, ISO 1250, 1/60s

    The a7 being physically smaller than the 6D also meant I was able to take it many more places than I would the 6D--I didn't have to worry about carrying a backpack or shoulder bag to stow a camera when walking around at night. It didn't quite fit in my jacket pocket, but it was light enough to walk around with a strap wrapped around my wrist. At SXSW, I brought the a7 when going out every night to photograph the nightlife and conference events. At the Game of Thrones prop and costume exhibition, event staff specifically asked people with DSLRs not to take photos, but didn't take a second look at the a7. It's not small enough that I would feel comfortable taking it through the mosh pit of a rowdy concert, but it also doesn't scream "I'm here to take photos" like a DSLR with a big lens does.

    We cover a lot of my thoughts about the a7 in the video review (I was really pleased with it), but I wanted to flesh out some of my takeaways from using the camera, and show off some sample and test photos.

    The Art of Photogrammetry: Replicating Hellboy’s Samaritan Pistol!

    We’ve gone over the basic concepts and photography techniques for capturing ideal images for photogrammetry 3D scanning. Now let's get into the meat of the subject and start processing our data so we can see some results. The case study we're going to use is a replica prop from the movie Hellboy, which I found at the Tested office. I spent an afternoon photographing the prop, and processed it using PhotoScan software. Here's how that process went, and what you can learn from it.

    Step 1: Inspecting Your Photos

    For this photogrammetry scan, I used the turntable method to capture photos of the prop pistol, the “Samaritan” from the movie Hellboy. Since the prop is an irregular shape, I didn't put it on an actual turntable or Lazy Susan. Instead, I propped it up on the end of a C-stand pole, which allowed me to turn it a few degrees between shots. I took one "ring" of pictures from slightly above the prop, and another from slightly below. That gave me about 45 photos total. Click here for an example of how one full rotation of photos looked.

    Since the front and the back of the gun aren’t visible from the main sequence, I took another set of photos of the front, and another of the back of the pistol.
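    For a sense of the geometry, here's a toy capture plan in Python. The per-ring shot counts are assumptions chosen to land near the ~45 photos described above, not the exact angles used on the Samaritan:

```python
# Two rings around the subject, plus detail sets for the ends.
def ring(label, shots):
    step = 360.0 / shots                     # degrees of turn per shot
    return [(label, round(i * step, 1)) for i in range(shots)]

plan = ring("upper ring", 20) + ring("lower ring", 20)
plan += [("front detail", None)] * 4 + [("back detail", None)] * 4

print(len(plan), "photos planned")           # 48 photos
for label, angle in plan[:3]:
    print(label, angle)                      # upper ring at 0.0, 18.0, 36.0
```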

    Superman with a GoPro (Phantom Drone Footage)

    CorridorDigital put a GoPro on a DJI Phantom drone and composited the footage with green screen effects to make it look as if Superman was wearing a GoPro, flying around Southern California. The result is pretty breathtaking, largely due to the smooth transitions and deft drone piloting from DroneFly's Taylor Chien. (We previously interviewed Chien about the DJI Phantom 2 at this year's CES.) You can watch the behind-the-scenes video here.

    Hyperlapse From a Window Seat

    We're flying out to Austin for South by Southwest tomorrow, and this would be something neat to try out. Photographer Matthew Vandeputte created this "hyperlapse" video from stills shot on a plane ride over Australia. As he explains on his blog, a hyperlapse is similar to a time-lapse, but instead of photographing a scene from a fixed location, the sequential shots are captured from different positions with the camera aimed at the same spot. In his case, the airplane did all the moving for him. His Canon 5D Mark III's continuous drive shoots up to 6 frames per second, which is plenty fast for stitching together into 24fps video. I also liked the idea of his shutter clicking incessantly during the flight, and am curious how many shots it took to compose each short clip. Something to possibly try on my plane ride tomorrow!
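    If you want to try the same thing, stitching a numbered burst of stills into a 24fps clip is a one-liner with ffmpeg. A sketch, assuming ffmpeg is installed and using hypothetical frame names:

```python
import subprocess

# Assemble IMG_0001.jpg, IMG_0002.jpg, ... into a 24fps H.264 clip.
subprocess.run([
    "ffmpeg",
    "-framerate", "24",      # play the stills back at 24fps
    "-i", "IMG_%04d.jpg",    # numbered input frames (hypothetical names)
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",   # broad player compatibility
    "hyperlapse.mp4",
], check=True)
```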

    Tested: Grinding Peanut M&Ms at 2500 Frames Per Second

    Earlier this week, we showed you what grinding coffee looked like under the Edgertronic high-speed camera. Mesmerizing as it was, some of you weren't impressed. So here's a step up: grinding colorful peanut M&Ms under the same camera at different frame rates!

    Tested In-Depth: High-Speed Camera Technologies

    After a week of testing high-speed cameras, we sit down to discuss our findings and explain how these cameras actually work. Why is it that resolution has to be reduced to increase frame rate? Learn about the potential and pitfalls of consumer high-speed camera technologies, and our thoughts on the Edgertronic camera.
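    The short version of the resolution question: a sensor can only read out so many pixels per second, so raising the frame rate means shrinking the frame. A back-of-the-envelope model in Python; the throughput number below is an assumption for illustration, not an Edgertronic spec:

```python
# Assumed fixed readout budget: pixels the sensor can move per second.
PIXELS_PER_SECOND = 1280 * 720 * 500

def max_fps(width, height):
    return PIXELS_PER_SECOND // (width * height)

for w, h in [(1280, 720), (640, 480), (320, 240)]:
    print(f"{w}x{h}: up to ~{max_fps(w, h)} fps")
# 1280x720: ~500 fps; 640x480: ~1500 fps; 320x240: ~6000 fps
```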

    Tested: Detonating Airbags at 2500 Frames Per Second

    Time to get down to business with testing the Edgertronic high-speed camera. We set up a few driver's seat airbags and detonate them in front of various cameras recording at different frame rates. The difference in what detail you can see between 480fps and 2500fps is pretty astounding.

    Photographing All of the World's Coral Reefs

    How do you understand global change of a system that’s underwater and impossible to photograph from above? Build a giant submersible camera system controlled by expert dive photographers, of course.

    The world’s reef systems are deteriorating. Corals are going away at a rate of about 1 to 2 percent every year, and some areas are harder hit than others. In the last 27 years, the Great Barrier Reef has lost 53 percent of its corals, and the Caribbean has lost 80 percent. That’s a big deal because reef systems are basically cities for fish: one quarter of all the ocean’s life makes its home there. If the ocean’s corals disappear, then much of the life in the ocean disappears too. For humans, that means we can no longer depend on reef systems for food, protection from weather, tourism, and medicine.
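    To put those numbers side by side, here's a quick compound-decline check in Python using the figures above:

```python
# Compound a 1-2 percent annual coral loss over 27 years and compare
# it with the Great Barrier Reef's reported 53 percent loss.
years = 27
for annual_loss in (0.01, 0.02):
    remaining = (1 - annual_loss) ** years
    print(f"{annual_loss:.0%}/yr -> {1 - remaining:.0%} lost over {years} years")
# ~24% and ~42% lost: the GBR's 53 percent really is worse than the
# global average rate, as noted above.
```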

    So, we know reefs are important. And we know they’re deteriorating. What we don’t have is a visual understanding of how these reef systems are changing, or any way to compare those changes over time and between reefs. To change that, professional underwater photographers have gotten together with ocean scientists to create the Global Reef Record--a Google Maps-like photographic index of all of the world’s coral reef systems.

    “We’re creating a global baseline,” says Richard Vevers, executive director of the survey. “We’ve been travelling around the world using a standard protocol for collecting imagery, which allows us to do a global comparison.”

    In order to accurately capture every reef on earth with consistency and 360-degree panoramic views, Vevers, who has a background in professional underwater photography, had to engineer and build a special camera. “Initially it came from an understanding of underwater photography, which is very different. We looked at taking the Google Street View camera underwater, but we needed much wider angle lenses and we needed to be able to take shots in low visibility and low light. We also needed to change exposure as we were moving without having to access the camera.”

    The solution was to build the camera completely from scratch and mount it on an underwater scooter. The entire $50,000 system is controlled by a waterproofed tablet running specially designed apps; divers operate it by moving a magnetic mouse that triggers a button inside the tablet’s glass housing.

    Tested: Grinding Coffee at 2000 Frames Per Second

    We're testing high-speed cameras this week, and to kick things off, here's a test of the Edgertronic camera shooting coffee being ground at 2000 frames per second. That turns a ten-second clip into 10 minutes of awesome slow-mo goodness. So grab a cup of coffee, put on your favorite adult contemporary album, and enjoy the action.
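    The slow-mo math is easy to check. A quick sketch, assuming a standard 30fps playback rate:

```python
capture_fps = 2000        # frames recorded per real second
playback_fps = 30         # assumed playback rate
real_seconds = 10

frames = capture_fps * real_seconds
minutes = frames / playback_fps / 60
print(f"{frames} frames -> {minutes:.1f} minutes of playback")
# 20000 frames -> ~11.1 minutes, in the ballpark of the "10 minutes" above
```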

    National Geographic's Vintage Collection Archivist

    "Bill Bonner presides over eight million images as the longtime keeper of National Geographic's vintage collection. He's a keeper not only of photographs, but memories-and he treats each like it's the greatest treasure in the world." Read more about Bill Bonner and his work here.

    The Art of Photogrammetry: How To Take Your Photos

    Last week, we introduced you to the concept of photogrammetry--using a series of photo images to computationally map a 3D model or space. We discussed the current state of photogrammetry, including what software is available for consumers and what kind of hardware you need. Turns out, photogrammetry is pretty accessible, and you can do a lot even with free tools like Autodesk's 123D Catch and your smartphone camera. Of course, more advanced software, more processing power, and better camera equipment can go a long way to improving your models. But so can the simple act of taking better source photos. Today, I'm going to give you some tips about how to best take your photogrammetry photos to give that computing software the best references to output a clean(ish) 3D mesh.

    By far, the biggest impact on the final output comes in the shooting phase. In fact, it is usually easier to reshoot a new series of source photos of your subject than to try to salvage a computed capture that isn't working. Take some time, think, and try to visualize the computer aligning your photos. Try to think of angles you have missed. When you are done shooting, I recommend loading up the images and seeing how well they align as soon as possible. If certain images are off or are confusing the software, re-shoot them while you still have access to your subject. It may be necessary to reshoot multiple times for one model.

    Ideal Conditions for Photogrammetry Software

    This part isn't too complicated. Your software will want a nice, clean, sharp, evenly lit image, with every surface of your subject visible from three or more angles. The software will also like a good amount of parallax (different positions and angles) between those images to do its calculation. And it'll really be happy if the undesired parts of the image--anything that isn't the subject--are masked off, either with a green screen or in an image editor.
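    On that last point, masking doesn't have to be manual. Here's a minimal green-screen masking sketch using OpenCV; the HSV thresholds are assumptions you'd tune for your backdrop and lighting:

```python
import cv2                 # pip install opencv-python
import numpy as np

def mask_green(in_path, out_path):
    img = cv2.imread(in_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Assumed green range; tune for your screen and lighting.
    green = cv2.inRange(hsv, np.array((35, 60, 60)), np.array((85, 255, 255)))
    img[green > 0] = 0     # black out the background pixels
    cv2.imwrite(out_path, img)

# File names are hypothetical.
mask_green("shot_001.jpg", "shot_001_masked.jpg")
```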

    Sounds simple, right? Well, it's actually easier to explain what photogrammetry software likes by giving examples of the kind of imagery it doesn't like. A lot of photography flourishes--depth of field, dramatic lighting, wide-angle distortion, etc.--are actually counterproductive for photogrammetry. Below, I'll go in-depth through the image qualities that will confuse your software and produce bad models.

    The Art of Photogrammetry: Introduction to Software and Hardware

    Our brains perceive depth by comparing the images that our eyes see. If you alternately close each eye, you will notice that the objects you see seem to shift left and right. An object that is closer will seem to shift more than an object that is farther away. That's stereoscopic vision, and it's the core concept behind creating the illusion of three-dimensional objects and space from two 2D images. Your brain uses this information to subconsciously calculate how far away an object is. In a similar way, photogrammetry is a photography technique that uses software to map and reconstruct the shape of an object by comparing two or more photographs. The science of photogrammetry has been around for over 100 years. It was used in World War II by the Allies to construct invasion maps and discover the V-2 rocket program, and later by NASA to make topographical maps of the moon for the Apollo missions. This was an expensive, laborious procedure employing a ton of people and massive specialized cameras and plotting equipment.
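    To make the parallax idea concrete, here is the classic stereo relationship: depth = focal length x baseline / disparity. All the numbers below are made up for illustration:

```python
focal_px = 1200.0      # focal length expressed in pixels
baseline_m = 0.065     # distance between the two viewpoints (~eye spacing)
disparity_px = 15.0    # how far the object shifts between the two images

depth_m = focal_px * baseline_m / disparity_px
print(f"estimated depth: {depth_m:.2f} m")   # ~5.20 m
```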

    Photogrammetry has come a long way since then, and it has even come a long way since I first encountered it in my professional life years ago. Now you can create a 3D model from photos with just a smartphone and a few minutes of processing--what used to take a room of specially trained people many weeks to accomplish. Photogrammetry scanning pioneers like Lee Perry-Smith of Infinite Realities and TEN24 have turned it into an art form.

    There are a few different technologies for making 3D models like these that are becoming easier to use and cheap enough for anyone to try. The most popular include laser scanning with software like David 3D Scanner, using a Microsoft Kinect with software like ReconstructMe, and consumer photogrammetry with software like Autodesk’s 123D Catch or PhotoScan.

    The very best 3D scanning I’ve seen has been done with a laser scanner, but photogrammetry is not too far behind. Laser scanning also takes special equipment, whether you make it or buy it. Using Microsoft’s Kinect for 3D scanning is neat because it gives you real-time feedback, so you know when you’ve missed a spot. It’s pretty cheap, and millions of people already own one for their Xbox 360. However, because the camera in the Kinect is relatively low-res, it is not great for fine detail. I’m excited to see what people will be able to do with the Kinect 2 in the Xbox One, once Microsoft releases developer software for it.

    Compared to the other 3D mapping techniques, photogrammetry with a still camera and most of the work done in computation is relatively easy. Though not as streamlined as using a closed system like Kinect, photogrammetry gives much higher-fidelity results, and makes use of equipment that is available to virtually everyone. Because it employs just a regular digital camera, the quality of photogrammetry modeling scales well as camera technology gets better. Modern digital camera sensors are extremely advanced, and because there is so much demand, they are also very inexpensive for what they do.

    Today, I'm going to give you an overview of how photogrammetry works, what consumer software and hardware is available for you to try it yourself, and how to stage a lighting environment to best conduct your photogrammetry work.