UCLA-developed artificial intelligence device identifies objects at the speed of light

03.08.2018

The 3D-printed artificial neural network can be used in medicine, robotics and security

A team of UCLA electrical and computer engineers has created a physical artificial neural network -- a device modeled on how the human brain works -- that can analyze large volumes of data and identify objects at the actual speed of light. The device was created using a 3D printer at the UCLA Samueli School of Engineering.


Schematic showing how the device identifies printed text. Credit: UCLA Samueli / Ozcan Research Group

Numerous devices in everyday life today use computerized cameras to identify objects -- think of automated teller machines that can "read" handwritten dollar amounts when you deposit a check, or internet search engines that can quickly match photos to other similar images in their databases. But those systems rely on a piece of equipment to image the object, first by "seeing" it with a camera or optical sensor, then processing what it sees into data, and finally using computing programs to figure out what it is.

The UCLA-developed device gets a head start. Called a "diffractive deep neural network," it uses the light bouncing from the object itself to identify that object in as little time as it would take for a computer to simply "see" the object. The UCLA device does not need advanced computing programs to process an image of the object and decide what the object is after its optical sensors pick it up. And no energy is consumed to run the device because it only uses diffraction of light.

New technologies based on the device could be used to speed up data-intensive tasks that involve sorting and identifying objects. For example, a driverless car using the technology could react instantaneously -- even faster than it does using current technology -- to a stop sign. With a device based on the UCLA system, the car would "read" the sign as soon as the light from the sign hits it, as opposed to having to "wait" for the car's camera to image the object and then use its computers to figure out what the object is.

Technology based on the invention could also be used in microscopic imaging and medicine, for example, to sort through millions of cells for signs of disease.

The study was published online in Science on July 26.

"This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects," said Aydogan Ozcan, the study's principal investigator and the UCLA Chancellor's Professor of Electrical and Computer Engineering. "This optical artificial neural network device is intuitively modeled on how the brain processes information. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential."

The process of creating the artificial neural network began with a computer-simulated design. Then, the researchers used a 3D printer to create very thin polymer wafers, each 8 centimeters square. Each wafer has an uneven surface, which helps diffract light coming from the object in different directions. The layers look opaque to the eye, but the submillimeter-wavelength terahertz light used in the experiments can travel through them. And each layer is composed of tens of thousands of artificial neurons -- in this case, tiny pixels that the light travels through.

Together, a series of pixelated layers functions as an "optical network" that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel that is assigned to that type of object.
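
To make this layered readout concrete, the following is a minimal, illustrative simulation of such a diffractive stack -- not the authors' published code. It assumes phase-only layers, angular-spectrum free-space propagation between them, and an arbitrary layout of ten detector regions; the wavelength, pixel pitch, layer count and spacing are placeholder values chosen only for illustration.

import numpy as np

WAVELENGTH = 0.75e-3   # metres; a terahertz-range wavelength (assumed value)
PITCH      = 0.4e-3    # size of one "neuron" (pixel) on a layer (assumed)
N          = 200       # neurons per side of each layer (assumed)
LAYERS     = 5         # number of printed diffractive layers (assumed)
GAP        = 3e-2      # free-space distance between layers (assumed)
CLASSES    = 10        # e.g. handwritten digits 0-9

def propagate(field, distance):
    # Angular-spectrum method for free-space propagation of a complex light field.
    fx = np.fft.fftfreq(N, d=PITCH)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / WAVELENGTH**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

def forward(input_amplitude, phase_masks):
    # Light leaves the object, then passes through each phase-only layer in turn.
    field = input_amplitude.astype(complex)
    for phase in phase_masks:
        field = propagate(field, GAP)         # travel to the next printed layer
        field = field * np.exp(1j * phase)    # phase delay imprinted by that layer
    return np.abs(propagate(field, GAP))**2   # intensity at the detector plane

def classify(intensity, detector_regions):
    # Predicted class = the detector region that collects the most light.
    return int(np.argmax([intensity[r].sum() for r in detector_regions]))

# Untrained, random masks and a random binary "object", just to show the data flow.
rng = np.random.default_rng(0)
masks = [rng.uniform(0.0, 2 * np.pi, (N, N)) for _ in range(LAYERS)]
obj = (rng.random((N, N)) > 0.5).astype(float)
regions = [(slice(90, 110), slice(20 * c, 20 * c + 20)) for c in range(CLASSES)]
print("predicted class:", classify(forward(obj, masks), regions))

With random masks the prediction is meaningless; the point of the sketch is only to show how stacked pixelated layers steer light toward one detector region per object class.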

The researchers then trained the network using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device. The "training" used a branch of artificial intelligence called deep learning, in which machines "learn" through repetition and over time as patterns emerge.
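
As a rough sketch of what that training step could look like in simulation -- again not the published code; PyTorch, the layer sizes, the detector layout and the learning rate are all assumptions for illustration -- the phase value of every pixel in every layer is treated as a learnable parameter and fitted by gradient descent on labeled images:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

N, LAYERS, CLASSES = 200, 5, 10                   # layer size, layer count, classes (assumed)
WAVELENGTH, PITCH, GAP = 0.75e-3, 0.4e-3, 3e-2    # metres (assumed values)

class DiffractiveNet(nn.Module):
    def __init__(self):
        super().__init__()
        # One trainable phase value per pixel ("neuron") per printed layer.
        self.phases = nn.Parameter(2 * math.pi * torch.rand(LAYERS, N, N))
        fx = torch.fft.fftfreq(N, d=PITCH)
        FX, FY = torch.meshgrid(fx, fx, indexing="ij")
        kz = 2 * math.pi * torch.sqrt(
            torch.clamp(1.0 / WAVELENGTH**2 - FX**2 - FY**2, min=0.0))
        # Fixed free-space transfer function for one layer-to-layer hop.
        self.register_buffer("H", torch.complex(torch.cos(kz * GAP), torch.sin(kz * GAP)))

    def propagate(self, field):
        return torch.fft.ifft2(torch.fft.fft2(field) * self.H)

    def forward(self, amplitude):                 # amplitude: (batch, N, N), real-valued
        field = amplitude.to(torch.complex64)
        for k in range(LAYERS):
            p = self.phases[k]
            field = self.propagate(field) * torch.complex(torch.cos(p), torch.sin(p))
        return self.propagate(field).abs() ** 2   # intensity at the detector plane

def region_scores(intensity):
    # Light collected by ten 20x20 detector regions (arbitrary layout) = class scores.
    return torch.stack([intensity[:, 90:110, 20 * c:20 * c + 20].sum(dim=(1, 2))
                        for c in range(CLASSES)], dim=1)

model = DiffractiveNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# Random stand-ins for training images (e.g. upsampled handwritten digits) and labels.
images = torch.rand(8, N, N)
labels = torch.randint(0, CLASSES, (8,))

for step in range(200):
    optimizer.zero_grad()
    loss = F.cross_entropy(region_scores(model(images)), labels)
    loss.backward()
    optimizer.step()

In this picture, the "training" happens entirely in the computer model; the learned phase patterns would then be fixed in the 3D-printed layers, so the physical device itself performs no computation beyond diffraction.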

"This is intuitively like a very complex maze of glass and mirrors," Ozcan said. "The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting."

In their experiments, the researchers demonstrated that the device could accurately identify handwritten numbers and items of clothing -- both of which are commonly used tests in artificial intelligence studies. To do that, they placed images in front of a terahertz light source and let the device "see" those images through optical diffraction.

They also trained the device to act as a lens that projects the image of an object placed in front of the optical network to the other side of it -- much like how a typical camera lens works, but using artificial intelligence instead of physics.
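
In simulation terms, that lens-like behavior would amount to a different training objective. Building on the hypothetical training sketch above, one could compare the whole output intensity pattern with a target image instead of with class labels, for instance:

# Hypothetical variant of the sketch above: train the stack to reproduce the
# input pattern at the output plane, instead of steering light to class detectors.
output = model(images)                              # intensity at the output plane
loss = F.mse_loss(output / output.amax(), images)   # pixelwise match to the target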

Because its components can be created by a 3D printer, the artificial neural network can be made with larger and additional layers, resulting in a device with hundreds of millions of artificial neurons. Those bigger devices could identify many more objects at the same time or perform more complex data analysis. And the components can be made inexpensively -- the device created by the UCLA team could be reproduced for less than $50.

While the study used light in the terahertz frequencies, Ozcan said it would also be possible to create neural networks that use visible, infrared or other frequencies of light. A network could also be made using lithography or other printing techniques, he said.

###

The study's other authors, all from UCLA Samueli, are postdoctoral scholars Xing Lin, Yair Rivenson, and Nezih Yardimci; graduate students Muhammed Veli and Yi Luo; and Mona Jarrahi, UCLA professor of electrical and computer engineering.

The research was supported by the National Science Foundation and the Howard Hughes Medical Institute. Ozcan also has UCLA faculty appointments in bioengineering and in surgery at the David Geffen School of Medicine at UCLA. He is the associate director of the UCLA California NanoSystems Institute and an HHMI professor.

Amy Akmal | EurekAlert!
Further information:
https://samueli.ucla.edu/ucla-engineers-develop-artificial-intelligence-device-that-identifies-objects-at-the-speed-of-light/
http://dx.doi.org/10.1126/science.aat8084
