
Multi-Image Fusion Makes Panoramic Stitching Super Easy

By Will Greenwald

Multi-Image Fusion is a photo-enhancing system that uses multiple photos or video frames to produce a single, enhanced still image.

Microsoft is hosting its fifth Silicon Valley TechFair in Mountain View, California, right now, where it's showing off the best of Microsoft Research's projects. One of these projects is Multi-Image Fusion, a photo-enhancing system that uses multiple photos or video frames to produce a single, enhanced still image. The project is being developed by Microsoft Research, Cornell, and the University of Washington.
 
Multi-Image Fusion offers several ways to produce composite photos. The first and most basic is the creation of panoramas from a large number of still images using Microsoft's Image Composite Editor (ICE). Stitched panoramas have been a popular feature of digital cameras and image editors for years, but the newest version of ICE improves on the process by supporting structured panoramas -- panoramas built from a grid of photos that extend the image both vertically and horizontally. It also speeds things up by using a thumbnail cache to rapidly generate previews of very large panoramas and by taking advantage of multi-core processors. According to Microsoft, a preview of a 300-image panorama can be generated in just 3 seconds. ICE is freely available from Microsoft Research.
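
ICE's internals aren't public, but the basic operation it automates -- registering overlapping photos and blending them into a single image -- can be illustrated with the off-the-shelf stitcher in the OpenCV library. The sketch below is a stand-in for the general technique, not Microsoft's implementation, and the file names are placeholders.

    # Minimal panorama-stitching sketch using OpenCV's built-in stitcher.
    # This illustrates the general technique ICE automates; it is not ICE itself.
    import cv2

    # Placeholder file names for a set of overlapping shots of one scene.
    paths = ["shot_01.jpg", "shot_02.jpg", "shot_03.jpg"]
    images = [cv2.imread(p) for p in paths]

    # The stitcher finds matching features across the photos, estimates how the
    # camera moved between them, then warps and blends everything into one frame.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", panorama)
    else:
        print("Stitching failed with status code:", status)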
 
Besides simple still-image composition, Multi-Image Fusion focuses on using video to produce an enhanced still image. In its simplest form, it can automatically stitch together a panorama from a video that sweeps across a scene: the algorithm selects the appropriate frames from the footage and stitches them into a panorama. This greatly streamlines the process, because the user doesn't have to frame and shoot each section individually.
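
Microsoft hasn't detailed how its algorithm chooses frames from the sweep, but the broad idea -- pull frames from the video at intervals and hand them to a stitcher -- can be sketched roughly as follows. The fixed sampling stride and the file name are assumptions made purely for illustration.

    # Sketch: build a panorama from a slow panning video by sampling frames
    # and stitching them. The frame selection here (a fixed stride) is only a
    # stand-in for whatever smarter selection Microsoft's algorithm performs.
    import cv2

    cap = cv2.VideoCapture("pan_across_scene.mp4")  # placeholder file name
    frames = []
    stride = 15  # keep roughly every half-second of a 30 fps clip

    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            frames.append(frame)
        index += 1
    cap.release()

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("video_panorama.jpg", panorama)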
 

The algorithm can also take a relatively stationary or slowly panning video and create a composite picture that's sharper than any individual frame. Once again, by drawing on many frames, it can calculate the best way to reduce blur and sharpen the image. Video frames are relatively low resolution compared to still-camera photos (even a high-definition 1080p frame is just 2 megapixels), but because video captures so many frames per second (24 to 60, depending on the mode), the system has much more information to work with. The same concept lets it produce composite action photos from a video file: for scenes of fast movement, the algorithm isolates the motion and integrates the moving components with a composite background, enhancing the video frames to produce a sharper depiction of the action.
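
As a rough illustration of the underlying idea -- registering several nearly identical frames and fusing them so noise cancels while detail reinforces -- the sketch below aligns each frame to the first with feature matching and then averages them. This is a simplified stand-in, not Microsoft's algorithm; the file name and frame count are assumptions.

    # Sketch: fuse several video frames of a (nearly) static scene into one
    # cleaner still. Each frame is aligned to the first with ORB features and a
    # homography, then the aligned frames are averaged so sensor noise cancels.
    # This is an illustrative stand-in, not Microsoft's method.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("steady_clip.mp4")  # placeholder file name
    frames = []
    for _ in range(10):  # use the first ten frames (arbitrary choice)
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    reference = frames[0]
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    h, w = ref_gray.shape
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)

    accumulator = reference.astype(np.float64)
    count = 1
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        matches = sorted(matcher.match(des_ref, des), key=lambda m: m.distance)[:200]
        src = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC)
        aligned = cv2.warpPerspective(frame, H, (w, h))
        accumulator += aligned.astype(np.float64)
        count += 1

    fused = (accumulator / count).astype(np.uint8)
    cv2.imwrite("fused_still.jpg", fused)

Plain averaging mainly suppresses noise; the research system presumably performs more sophisticated fusion to recover sharpness, but the principle of combining information from many frames is the same.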
 
While you can produce your own massive panoramas with the freely available ICE program today, the other functions of Multi-Image Fusion are still in development. The technology could eventually show up in image editors, digital cameras, and camcorders, and as the Microsoft Research team continues its work, we should hear more about consumer availability in the future.