Forum for Science, Industry and Business



Inexpensive 'nano-camera' can operate at the speed of light

Device could be used in medical imaging, collision-avoidance detectors for cars, and interactive gaming

A $500 "nano-camera" that can operate at the speed of light has been developed by researchers in the MIT Media Lab.

The three-dimensional camera, which was presented last week at Siggraph Asia in Hong Kong, could be used in medical imaging and collision-avoidance detectors for cars, and to improve the accuracy of motion tracking and gesture-recognition devices used in interactive gaming.

The camera is based on "Time of Flight" technology like that used in Microsoft's recently launched second-generation Kinect device, in which the location of objects is calculated by how long it takes a light signal to reflect off a surface and return to the sensor. However, unlike existing devices based on this technology, the new camera is not fooled by rain, fog, or even translucent objects, says co-author Achuta Kadambi, a graduate student at MIT.

"Using the current state of the art, such as the new Kinect, you cannot capture translucent objects in 3-D," Kadambi says. "That is because the light that bounces off the transparent object and the background smear into one pixel on the camera. Using our technique you can generate 3-D models of translucent or near-transparent objects."

In a conventional Time of Flight camera, a light signal is fired at a scene, where it bounces off an object and returns to strike a pixel on the camera's sensor. Since the speed of light is known, it is then simple for the camera to calculate the distance the signal has travelled and therefore the depth of the object it has been reflected from.
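The depth calculation described above is straightforward. A minimal sketch, with a hypothetical 10-nanosecond round-trip time as example input:

```python
# Minimal sketch of the conventional time-of-flight depth calculation.
# The round-trip time below is hypothetical example data, not a figure
# from the article.

C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth of a reflecting surface from a signal's round-trip time.

    The light travels to the object and back, so the one-way depth
    is half the total path length covered in that time.
    """
    return C * round_trip_time_s / 2.0

# A pulse that returns after 10 nanoseconds implies a depth of ~1.5 m.
depth = tof_depth(10e-9)
print(round(depth, 3))  # -> 1.499
```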

However, changing environmental conditions, semitransparent surfaces, edges, and motion all create multiple reflections that mix with the original signal and return to the camera, making it difficult to determine which measurement is correct.

Instead, the new device uses an encoding technique commonly used in the telecommunications industry to calculate the distance a signal has travelled, says Ramesh Raskar, an associate professor of media arts and sciences and leader of the Camera Culture group within the Media Lab, who developed the method alongside Kadambi, Refael Whyte, Ayush Bhandari, and Christopher Barsi at MIT and Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand.

"We use a new method that allows us to encode information in time," Raskar says. "So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal."
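The article does not spell out the encoding scheme, but the principle Raskar describes — recovering several distances from one mixed return — can be illustrated with a standard telecom-style toy example. This is not the authors' actual algorithm, only a sketch: a pseudo-random probing code has a sharp autocorrelation peak, so cross-correlating the mixed echo with the known code separates each reflection's delay into its own peak. The delays (10 and 37 samples) and amplitudes are invented for the example.

```python
# Toy illustration (not the authors' method) of decoding two overlapping
# echoes from a single coded return, as is common in telecommunications.
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=256)   # pseudo-random probing code

# Mixed return: two reflections at hypothetical delays of 10 and 37
# samples (e.g. a translucent surface plus the background behind it).
echo = 1.0 * np.roll(code, 10) + 0.6 * np.roll(code, 37)

# Circular cross-correlation of the return against the known code:
# each true delay shows up as a strong peak.
corr = np.array([np.dot(echo, np.roll(code, k)) for k in range(len(code))])

delays = np.argsort(corr)[-2:]             # two strongest peaks
print(sorted(delays.tolist()))             # -> [10, 37]
```

A single-frequency sinusoid could not distinguish the two paths — their phases would smear into one measurement — which is why a broadband code is used in this sketch.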

The idea is similar to existing techniques that remove blurring from photographs, says Bhandari, a graduate student in the Media Lab. "People with shaky hands tend to take blurry photographs with their cellphones because several shifted versions of the scene smear together," Bhandari says. "By placing some assumptions on the model — for example that much of this blurring was caused by a jittery hand — the image can be unsmeared to produce a sharper picture."

The new model, which the team has dubbed nanophotography, unsmears the individual optical paths.

In 2011 Raskar's group unveiled a trillion-frame-per-second camera capable of capturing a single pulse of light as it travelled through a scene. The camera does this by probing the scene with a femtosecond impulse of light, then using fast but expensive laboratory-grade optical equipment to image the pulse at each instant of its progress. However, this "femto-camera" costs around $500,000 to build.

In contrast, the new "nano-camera" probes the scene with a continuous-wave signal that oscillates at nanosecond periods. This allows the team to use inexpensive hardware — off-the-shelf light-emitting diodes (LEDs) can strobe at nanosecond periods, for example — meaning the camera can reach a time resolution within one order of magnitude of femtophotography while costing just $500.
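In a continuous-wave system like the one described above, depth comes from the phase delay of the modulated signal rather than from timing a single pulse. A minimal sketch of that standard relationship, d = c·Δφ/(4πf); the 50 MHz modulation frequency is a hypothetical example, not a figure from the article:

```python
# Sketch of how a continuous-wave time-of-flight camera converts the
# measured phase shift of a modulated light signal into a distance.
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase delay of a continuous-wave signal.

    A round trip of length 2d delays the modulation by
    2*pi*f * (2d/c) radians, so d = c * phase / (4 * pi * f).
    Depths are only unambiguous up to c / (2f), half the
    modulation wavelength.
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# A quarter-cycle phase shift at 50 MHz corresponds to ~0.75 m.
d = cw_tof_distance(math.pi / 2, 50e6)
print(round(d, 3))  # -> 0.749
```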

"By solving the multipath problem, essentially just by changing the code, we are able to unmix the light paths and therefore visualize light moving across the scene," Kadambi says. "So we are able to get similar results to the $500,000 camera, albeit of slightly lower quality, for just $500."

Abby Abazorius | EurekAlert!