By synchronizing 98 tiny cameras in a single device, electrical engineers from Duke University and the University of Arizona have developed a prototype camera that can create images with unprecedented detail.
The camera’s resolution is five times better than 20/20 human vision over a 120-degree horizontal field.
The new camera has the potential to capture up to 50 gigapixels of data, which is 50,000 megapixels. By comparison, most consumer cameras are capable of taking photographs with sizes ranging from 8 to 40 megapixels. Pixels are individual “dots” of data — the higher the number of pixels, the better the resolution of the image.
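The pixel-count comparison above is easy to check with a bit of arithmetic; the figures below are the ones quoted in the article:

```python
# Pixel-count comparison using the article's figures.
GIGAPIXEL = 1_000_000_000
MEGAPIXEL = 1_000_000

aware_pixels = 50 * GIGAPIXEL      # the prototype's potential capture
consumer_pixels = 40 * MEGAPIXEL   # a high-end consumer camera

print(aware_pixels // MEGAPIXEL)        # 50000 megapixels
print(aware_pixels // consumer_pixels)  # 1250x the data of a 40 MP camera
```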
The researchers believe that within five years, as the electronic components of the cameras become miniaturized and more efficient, the next generation of gigapixel cameras should be available to the general public.
The camera was developed by a team led by David Brady, Michael J. Fitzpatrick Professor of Electrical Engineering at Duke’s Pratt School of Engineering, along with scientists from the University of Arizona, the University of California, San Diego, and Distant Focus Corp.
“Each one of the microcameras captures information from a specific area of the field of view,” Brady said. “A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later.”
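The stitching Brady describes — each microcamera covering one tile of the field of view, with a processor assembling the tiles into a single image — can be illustrated with a minimal sketch. This is not the actual AWARE pipeline (which must also handle lens distortion and registration); it simply assumes each microcamera delivers an equally sized tile in a known grid position:

```python
import numpy as np

def stitch_tiles(tiles, grid_rows, grid_cols):
    """Assemble a row-major list of equally sized 2-D sub-images
    into one mosaic, as if each came from one microcamera."""
    th, tw = tiles[0].shape
    mosaic = np.zeros((grid_rows * th, grid_cols * tw), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, grid_cols)          # tile's grid position
        mosaic[r*th:(r+1)*th, c*tw:(c+1)*tw] = tile
    return mosaic

# A 2x3 grid of 4x4 tiles yields one 8x12 image.
tiles = [np.full((4, 4), i) for i in range(6)]
print(stitch_tiles(tiles, 2, 3).shape)  # (8, 12)
```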
“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras,” Brady said. “While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics.”
The software that combines the input from the microcameras was developed by an Arizona team led by Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona.
“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” Gehm said. “This isn’t a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive.”
“Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements,” Gehm said. “A shared objective lens gathers light and routes it to the microcameras that surround it, just as a networked computer hands out pieces of a problem to individual workstations. Each gets a different view and works on its little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
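The overlap Gehm mentions — neighboring microcameras sharing some field of view so no seam is missed — can be sketched in one dimension. The function and its parameters here are illustrative, not part of the AWARE design:

```python
def overlapping_regions(width, tile, overlap):
    """Split the span [0, width) into tiles of size `tile` whose
    edges overlap by `overlap` pixels, so every seam is covered twice."""
    step = tile - overlap
    regions = []
    start = 0
    while start + tile < width:
        regions.append((start, start + tile))
        start += step
    regions.append((max(0, width - tile), width))  # final tile flush to the edge
    return regions

# A 100-pixel field, 40-pixel tiles, 10 pixels of overlap:
print(overlapping_regions(100, 40, 10))  # [(0, 40), (30, 70), (60, 100)]
```

Each adjacent pair of regions shares a 10-pixel strip, which is what lets the stitching software register neighboring views against each other.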
via Science Daily
Comments
Teame Zazzu
The discussion of spy-blimp systems needs to address the SOFTWARE capabilities rather than focus exclusively on the camera sensor or delivery package. To that end, ARGUS, GORGON STARE, and KESTREL (cameras capable of watching cities down to the person, 36 square miles at a time) all appear to synthesize into the Persistics software and the “Pursuer” viewer. This means a single 10-gigapixel drone feed (a persistent two-day Reaper or a blimp) capable of recording and making searchable an entire downtown center. “Like Google Earth, only live and with TiVo”… BUT in US airspace, and armed with missile drones.

DHS appears to be attempting to integrate wide-area persistent surveillance (WAPS) and Persistics with systems like “Tentacle” to coordinate handoff of tracked individuals from WAPS (exteriors) to tracking inside buildings (interiors), thereby circumventing the line-of-sight limitations of UAVs in civilian domestic airspace. Analysts working at ground stations will interact with the transmitted airborne video data. For example, Persistics has been integrated into the Air Force Research Laboratory–developed Pursuer viewer to allow analysts to pan, zoom, rewind, query, and overlay maps and other metadata. Queries include triangulation of gunfire, speed, tracking of ALL cars and people, and activity recognition such as coordinated movement, brush-passes, and dead drops. Local police (PA, etc.) have already used this data to identify crimes in US cities, possibly as early as 2006. Search: persistent surveillance, or watch “Wide Area Airborne Surveillance: Opportunities and Challenges – Gerard Medioni” on YouTube for a look at the dragnet surveillance.