Draw the Curtains: Gigapixel Cameras Create Highly Revealing Snapshots [Slide Show]

Researchers are developing cameras that can take digital snapshots made up of more than a billion pixels


Advances in technology tend to spoil us. PCs just a few years old have nothing on today's smart phones, and, whereas megapixel images were once the state of the art in digital photography, gigapixel images (composed of at least one billion pixels, or picture elements) are beginning to show up on the Web in vivid detail.

Gigapixel images also hold tremendous potential for providing law enforcement and the military with detailed reconnaissance and surveillance information. Long-distance images taken today by satellites and unmanned aerial vehicles (UAVs) can capture detail down to a license plate number, even when the drones fly at altitudes too high to be spotted from the ground. But these images provide only a narrow view, says Ravi Athale, a consultant to the Defense Advanced Research Projects Agency (DARPA) and a senior principal scientist at MITRE Corp. in McLean, Va. He likens UAV images to seeing a battlefield or city through a "soda straw" and satellite images to an injection needle.

"We are no longer dealing with fixed installations or army tank units or missile silo units,” Athale says. “[Fighting terrorism requires] an awareness of what's going on in a wide area the size of a medium city."

Through its Advanced Wide Field of View Architectures for Image Reconstruction and Exploitation program, DARPA has for the past year been working on ways to develop a camera that can take a gigapixel-quality image in a single snapshot. This approach is novel, given that today's gigapixel images actually consist of several megapixel-sized images pieced together digitally to provide a high level of detail over a large area. This is often done using a long-lens digital single-lens reflex (SLR) camera placed atop a motorized mount. Software controls the movement of the camera, which captures a mosaic of hundreds or even thousands of images that, when placed together, create a single, high-resolution scene that maintains its clarity even when the viewer zooms in on a specific area. DARPA plans to invest $25 million over a three-and-a-half-year period in its program, which includes a component called Maximally scalable Optical Sensor Array Imaging with Computation (MOSAIC).
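That piecing-together step is done by feature-matching software. As a rough illustration (not DARPA's pipeline), here is a minimal sketch using OpenCV's high-level Stitcher; the file names are placeholders for overlapping frames shot from a motorized mount:

```python
# Minimal panorama-stitching sketch using OpenCV's high-level Stitcher.
# Illustrative only: the file names below are placeholders, and a real
# gigapixel mosaic would involve hundreds or thousands of frames.
import cv2

# Overlapping frames shot by a camera on a motorized pan/tilt mount.
frames = [cv2.imread(p) for p in ("tile_00.jpg", "tile_01.jpg", "tile_02.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, mosaic = stitcher.stitch(frames)  # matches overlapping feature points

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.jpg", mosaic)
else:
    # Frames with too little overlap fail to match -- the kind of error
    # that, as noted below, must otherwise be corrected by hand.
    print(f"Stitching failed with status {status}")
```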

This mosaic approach to gigapixel digital photography has its drawbacks. The equipment is bulky, expensive and complicated. In addition, because it may take several minutes or even hours for the automated camera to shoot all of the individual images required to create the larger mosaic, lighting conditions may change and objects (cars, people, aircraft, etc.) can move into and out of the frames. And stitching together the individual images requires software that must match overlapping points; any errors must be corrected manually.

Such images also require special viewing software found on Google Earth, 360world.eu, Gigapan.org (created by Pittsburgh's Carnegie Mellon University, NASA and Google) and other Web sites that allow gigapixel digital photographs to be uploaded, viewed and shared across the Web.

Nor are gigapixel images conducive to being captured by a compact, inexpensive camera. The digital processors and memory used in today's cameras are ill-equipped to manage gigapixel images, which contain more than 1,000 times as much information as megapixel images. (A 10-gigapixel image would take up more than 30 gigabytes of hard drive space.) And although pixel counts are often used as shorthand for image resolution, resolution properly depends on both an image's overall dimensions and the number of pixels per inch or per centimeter. For example, an image that is 20.3 by 25.4 centimeters at 60 pixels per centimeter has the same resolution as an image that is 10.2 by 12.7 centimeters at 120 pixels per centimeter.
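To make those numbers concrete, here is a quick back-of-the-envelope calculation, assuming 3 bytes per pixel for uncompressed RGB (which is what the 30-gigabyte figure above implies):

```python
# Back-of-the-envelope figures for gigapixel images.
# Assumes 3 bytes per pixel (uncompressed 8-bit RGB), which is what the
# article's ~30-gigabyte figure for a 10-gigapixel image implies.

pixels = 10e9                # a 10-gigapixel image
size_gb = pixels * 3 / 1e9   # 3 bytes per pixel
print(f"{size_gb:.0f} GB uncompressed")  # -> 30 GB

# Resolution depends on dimensions AND pixel density together:
large = (20.3 * 60) * (25.4 * 60)    # 20.3 x 25.4 cm at 60 px/cm
small = (10.2 * 120) * (12.7 * 120)  # 10.2 x 12.7 cm at 120 px/cm
print(f"{large:,.0f} px vs. {small:,.0f} px")  # nearly identical pixel counts
```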

Computational photography
A team of Columbia University researchers in New York led by computer science professor Shree Nayar thinks a single-snapshot gigapixel camera is possible if they can reduce the complexity of such images. "Rather than thinking about it as capturing the final image, you're capturing the information you would need to compute the final image," Nayar says.

In a paper to be presented at the April IEEE International Conference on Computational Photography (ICCP) in Pittsburgh, the Columbia researchers propose three relatively compact camera designs (two of which they have actually built as prototypes) for single-shot gigapixel imaging—each design relies on a ball-shaped lens and one or more digital sensors. Such a lens is one of the simplest because it has perfect symmetry (leading to fewer aberrations) and consists of one element rather than several lenses that must be configured to work together, says Oliver Cossairt, a Columbia computer science PhD candidate who works with Nayar.
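That simplicity shows up in the textbook focal-length formula for a ball lens, EFL = nD / (4(n - 1)). A quick illustration with assumed values (BK7 glass and a 50-millimeter ball, not figures from the Columbia paper):

```python
# Effective focal length of a ball lens from the standard paraxial formula
# EFL = n * D / (4 * (n - 1)). The glass and diameter are assumed values,
# not specifications from the Columbia paper.
n = 1.5168   # refractive index of BK7, a common optical glass
D = 50.0     # ball diameter in millimeters (assumed)

efl = n * D / (4 * (n - 1))  # measured from the center of the ball
bfl = efl - D / 2            # back focal length, from the rear surface
print(f"EFL = {efl:.1f} mm, BFL = {bfl:.1f} mm")
```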

The first camera Nayar, Cossairt and their team at the Computer Vision Laboratory (part of Columbia Engineering School's Computer Science Department) created is a single-element, monocentric camera that uses a pan/tilt motor to sequentially scan a single sensor, emulating an array of tiled sensors. The second camera uses an actual array of five sensors arranged side by side to produce a contiguous field of view (FOV). Because the packaging around each sensor leaves some space between neighboring sensors, the researchers added five secondary relay lenses between the spherical lens and the sensors. This configuration enables each sensor's FOV to overlap slightly with its neighbors' so that there are no gaps in the data that might distort the final image.
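As a rough sketch of why the relay lenses close those gaps, the calculation below uses hypothetical dimensions (not the Columbia team's specifications) to check that the patch of the image surface each sensor sees is wider than the spacing between sensor packages:

```python
# Hypothetical geometry (not the Columbia team's specifications) showing
# how slight demagnification by the relay lenses lets adjacent sensors'
# fields of view overlap despite dead space between sensor packages.

sensor_width_mm = 4.0      # active area of each sensor (assumed)
package_pitch_mm = 5.5     # center-to-center spacing, incl. dead space (assumed)
relay_magnification = 0.65 # relay shrinks its patch by this factor (assumed)

# Width of the patch of the ball lens's image surface seen by one sensor.
patch_width_mm = sensor_width_mm / relay_magnification
overlap_mm = patch_width_mm - package_pitch_mm

print(f"Each sensor covers {patch_width_mm:.2f} mm of the image surface")
print(f"Overlap with its neighbor: {overlap_mm:+.2f} mm")  # positive = no gaps
```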

The third design attaches the secondary relay lenses directly to half of the ball-shaped lens (giving it a bumpy rather than a smooth look) and includes a large number of small sensors around that half of the lens. These sensors could be attached to the inside of a spherical half shell slightly larger than the lens itself. The spherical lens would then be positioned inside the half shell so that each sensor would be coupled with a relay lens. Any images viewed by the smooth part of the lens would be captured by the sensors inside the half shell.

"We want to show there is a path to getting to gigapixel cameras, video or still, using the form factor and the weight and the cost of something that would be a camera today," Nayar says. "It was deemed in the past that you could not do that without building a really complex system. What we are saying is that by using computations and simple systems, you can do it."

Athale acknowledges the potential of the work being done by Nayar, Cossairt and their team, saying, "Computational photography is crucial to providing 'persistent wide-area surveillance.'"

Other single-snapshot approaches
Microsoft Research Asia is one of a handful of other groups experimenting with single-shot gigapixel imaging. Researchers there have since 2007 been developing a prototype they call the dgCam, whose high-power accordion-style lens configuration gives the device the look of an old-time large-format camera. The dgCam takes 1.6-gigapixel images using a sensor much larger than those in Columbia's prototypes. It is neither intended to be a compact camera nor expected to be sold commercially; rather, it is designed to help museums archive, manage and research ancient paintings and drawings.

Large-format cameras—which in their early days required the use of large photographic plates and films and now rely on sensors much larger than those used by the Columbia researchers—are well-suited for taking detailed pictures of small objects, says Moshe Ben-Ezra, a researcher in Microsoft Research Asia's visual computing group who designed and built the dgCam. "The lens does not move during image capture, which is essential for archival quality imaging of any object that is not entirely flat," he says. The dgCam scans images and, like the Columbia project, uses computational algorithms to capture information about those images.

Another large-format approach to taking gigapixel snapshots is the Gigapixl Project, which physicist Graham Flint formed about a decade ago. Gigapixl's camera captures images on 23-by-46-centimeter film (the same type used in military spy planes such as the U-2), which is then scanned and digitized to create images up to four gigapixels in size.
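A quick sanity check on that figure (using assumed values, not the Gigapixl Project's published specifications) shows the scan density such a negative would require:

```python
# Sanity check with assumed values (not the Gigapixl Project's published
# specs): what scan density turns a 23 x 46 cm negative into ~4 gigapixels?
import math

film_area_cm2 = 23 * 46
target_pixels = 4e9
px_per_cm = math.sqrt(target_pixels / film_area_cm2)

print(f"{px_per_cm:,.0f} pixels per centimeter")  # ~1,944 px/cm
print(f"{px_per_cm * 2.54:,.0f} dpi equivalent")  # ~4,939 dpi
# Densities in this range are within reach of high-end film scanners,
# consistent with the project's four-gigapixel figure.
```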

Gigapixel digital imagery is still in its infancy, but demand for it will grow quickly as the technology develops. "In 1999, megapixel cameras were a dream," says Christopher Hills, a security consultant with Securitas Security Services who also runs the site gigapixel360.com. Now high-end digital cameras can take 25-megapixel images. "I absolutely believe it's going to be the next big step in the evolution of surveillance and video," he adds. "The world is always going to move toward bigger, faster, less expensive pictures and video."

Slide Show: Columbia Researchers' Prototype and Conceptual Gigapixel Cameras