Our brains perceive depth by comparing the images that our eyes see. If you alternately close each eye, you will notice that the objects you see seem to shift left and right. An object that is closer will seem to shift more than an object that is farther away. That's stereoscopic vision, and it's the core concept behind creating the illusion of three-dimensional objects and space from two 2D images. Your brain uses this information to subconsciously calculate how far away an object is. In a similar way, photogrammetry is a photography technique that uses software to map and reconstruct the shape of an object by comparing two or more photographs. The science of photogrammetry has been around for over 100 years. It was used in World War II by the Allies to construct invasion maps and to discover the V-2 rocket program, and later by NASA to make topographical maps of the moon for the Apollo missions. This was an expensive, laborious procedure that employed a ton of people and massive specialized cameras and plotting equipment.
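Photogrammetry software leans on the same relationship: how far a point appears to shift between two views (its disparity) tells you how far away it is. Here's a minimal sketch of that idea using the standard pinhole-camera relation, depth = focal length x baseline / disparity; the focal length, eye spacing, and pixel shifts below are made-up numbers purely for illustration.

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Estimate distance to a point from how far it shifts between two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (the point has to shift between views)")
    # Pinhole stereo relation: closer points produce larger shifts.
    return focal_length_px * baseline_m / disparity_px

# Illustrative setup: "eyes" about 6.5 cm apart, a notional focal length of 1000 pixels.
for shift_px in (100.0, 20.0, 5.0):
    distance = depth_from_disparity(1000.0, 0.065, shift_px)
    print(f"shift of {shift_px:5.1f} px -> about {distance:.2f} m away")
```

Running it, a 100-pixel shift works out to roughly 0.65 meters while a 5-pixel shift is about 13 meters: big shifts mean close objects, small shifts mean distant ones, which is exactly the cue your brain (and photogrammetry software) exploits.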
Photogrammetry has come a long way since then, and it has advanced even further since I first encountered it in my professional life years ago. Now you can create a 3D model from photos with just a smartphone and a few minutes of processing, something that used to take a room of specially trained people many weeks to accomplish. Photogrammetry scanning pioneers like Lee Perry-Smith from Infinite Realities and TEN24 have turned it into an art form.
There are a few different technologies for making 3D models like this, and they are becoming easier to use and cheap enough for anyone to try. The most popular include laser scanning with software like DAVID 3D Scanner, depth scanning with a Microsoft Kinect and software like ReconstructMe, and consumer photogrammetry with software like Autodesk’s 123D Catch or PhotoScan.
The very best 3D scanning I’ve seen has been done with a laser scanner, but photogrammetry is not too far behind. Laser scanning also requires special equipment, whether you build it or buy it. Using Microsoft’s Kinect for 3D scanning is neat because it gives you real-time feedback, so you know when you’ve missed a spot. It’s pretty cheap, and millions of people already own one for their Xbox 360. However, because the camera in the Kinect is relatively low-res, it is not great for fine detail. I’m excited to see what people will be able to do with the Kinect 2 in the Xbox One once Microsoft releases developer software for it.
Compared to the other 3D mapping techniques, photogrammetry is relatively easy: you shoot photos with a still camera and let computation do most of the work. Though not as streamlined as using a closed system like the Kinect, photogrammetry gives much higher-fidelity results and makes use of equipment that is available to virtually everyone. Because it employs just a regular digital camera, the quality of photogrammetry modeling scales well as camera technology gets better. Modern digital camera sensors are extremely advanced, and because there is so much demand for them, they are also very inexpensive for what they do.
Today, I'm going to give you an overview of how photogrammetry works, what consumer software and hardware are available for you to try it yourself, and how to stage a lighting environment to get the best results from your photogrammetry work.