One-billion-pixel snapshots offer researchers a high-resolution view of dynamic processes.
A one-gigapixel image (top) shows minute details (bottom) of the skyline in Seattle, Washington.
A camera made from off-the-shelf electronics can take snapshots of one billion pixels each — about one thousand times larger than images taken by conventional cameras.
David Brady, an engineer at Duke University in Durham, North Carolina, and his colleagues are developing the AWARE-2 camera with funding from the United States Defense Advanced Research Projects Agency. The camera’s earliest use will probably be in automated military surveillance systems, but its creators hope eventually to make the technology available to researchers, media companies and consumers.
The camera is described today in Nature¹, in a paper that includes some of its images. One snapshot shows a wide view of Pungo Lake, part of the Pocosin Lakes National Wildlife Refuge in North Carolina. In a compressed version of the entire image, no animals are visible. But zooming in reveals a group of swans; going in closer still makes it possible to count every bird on and above the lake.
Researchers including wildlife biologists and archaeologists already use image-stitching software to create similar images from lots of lower-resolution files. But the ability to take the entire picture in one instant rather than taking individual shots over a period of minutes to an hour — during which time those swans might all have flown away — will be useful for capturing dynamic processes.
With such technology, “when you’re in the field, you don’t have to decide what you’re going to study — you can capture as much information as possible and look at it for five years”, says Illah Nourbakhsh, a roboticist at Carnegie Mellon University in Pittsburgh, Pennsylvania, who developed image-stitching software called GigaPan. “That really changes your mindset.”
Bigger and better
In general, taking high-resolution images demands a large lens. Very rapidly, the optics become “the size of a bus”, says Brady. And high-resolution cameras usually have a limited field of view, meaning that they can see only a small slice of the total scene at a time. For example, each of the four 1.4-gigapixel cameras being used in the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) at the University of Hawaii’s Institute for Astronomy, which will scan the night sky for potentially dangerous near-Earth objects such as asteroids, focuses on a view of the sky only three degrees wide. And each uses a 1.8-metre mirror and a large array of light-sensing chips to accomplish the feat.
AWARE-2 sidesteps the size issue by using 98 microcameras, each with a 14-megapixel sensor, grouped around a shared spherical lens. Together, they take in a field of view 120 degrees wide and 50 degrees tall. With all the packaging, data-processing electronics and cooling systems, the entire camera measures about 0.75 × 0.75 × 0.5 metres.
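As a rough sanity check on those numbers (the figures below are back-of-the-envelope, not from the paper), the raw pixel budget of such an array can be tallied like this:

```python
# Rough pixel budget for a 98-microcamera array (illustrative arithmetic only).
n_cameras = 98
megapixels_each = 14  # one 14-megapixel sensor per microcamera

raw_pixels = n_cameras * megapixels_each * 1_000_000
print(f"raw pixels: {raw_pixels / 1e9:.2f} gigapixels")  # raw pixels: 1.37 gigapixels

# Neighbouring fields of view must overlap so the sub-images can be stitched,
# so the usable mosaic is smaller than the raw total -- consistent with the
# roughly one-gigapixel images the current camera produces.
```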
The current version of the camera can take images of about one gigapixel; by adding more microcameras, the researchers expect eventually to reach about 50 gigapixels. Each microcamera runs autofocus and exposure algorithms independently, so that every part of the image — near or far, bright or dark — is visible in the final result. Image processing is used to stitch together the 98 sub-images into a single large one at the rate of three frames per minute.
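The stitching step can be pictured with a toy example. Here each “microcamera” contributes one small tile, and the tiles are pasted into a single large mosaic; this is a deliberate simplification, since a real gigapixel pipeline must also register and blend overlapping, independently exposed sub-images:

```python
import numpy as np

# Toy mosaic: a 7 x 14 grid of sub-images (98 tiles), each 20 x 20 pixels.
# Each tile is filled with its own index so the grid layout is visible.
rows, cols, tile = 7, 14, 20
subimages = [np.full((tile, tile), i, dtype=np.uint8) for i in range(rows * cols)]

# Paste each tile into its slot in the full-size mosaic.
mosaic = np.zeros((rows * tile, cols * tile), dtype=np.uint8)
for i, img in enumerate(subimages):
    r, c = divmod(i, cols)
    mosaic[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = img

print(mosaic.shape)  # (140, 280)
```

In the real camera this assembly, plus per-tile exposure correction and blending, is what limits throughput to about three frames per minute.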
“With this design, they’re changing the game,” says Nourbakhsh.
The Duke group is now building a gigapixel camera with more sophisticated electronics, which takes ten images per second — close to video rate. It should be finished by the end of the year. The cameras can currently be made for about US$100,000 each, and large-scale manufacturing should bring costs down to about $1,000. The researchers are talking to media companies about the technology, which could for example be used to film sports: fans watching gigapixel video of a football game could follow their own interests rather than the camera operator’s.
The challenge, says Michael Cohen, head of the Interactive Visual Media group at Microsoft Research in Redmond, Washington, is dealing with the huge amount of data that these cameras will produce.
The gigapixel camera that takes ten frames per second will generate ten gigabytes of data every second — too much to store in conventional file formats, post on YouTube or e-mail to a friend. Not everything in these huge images is worth displaying or even recording, and researchers will have to write software to determine which data are worth storing and displaying, and create better interfaces for viewing and sharing gigapixel images. “The technology for capturing the world is outpacing our ability to deal with the data,” says Nourbakhsh.
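The arithmetic behind that ten-gigabytes-per-second figure is straightforward, assuming roughly one byte per pixel after readout (the exact rate depends on bit depth and any compression):

```python
# Back-of-the-envelope data rate for a 1-gigapixel, 10-frames-per-second camera.
pixels_per_frame = 1e9
frames_per_second = 10
bytes_per_pixel = 1  # assumption: 8 bits per pixel, uncompressed

rate_bytes = pixels_per_frame * frames_per_second * bytes_per_pixel
print(f"{rate_bytes / 1e9:.0f} GB per second")  # 10 GB per second

# A single hour of footage at this rate is 10 GB/s * 3600 s = 36 terabytes --
# far beyond what conventional file formats or video-sharing sites handle.
```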