New Supercomputer ‘Sees’ Well Enough to Drive a Car Someday

Visually interpreting our environment as quickly as we do is an astonishing feat that requires an enormous number of computations. That is one reason it has proven so difficult to build a computer-driven system that can mimic the human brain's ability to visually recognize objects.

Now Eugenio Culurciello of Yale's School of Engineering & Applied Science has developed a supercomputer, dubbed NeuFlow, that is modeled on the mammalian visual system, mimicking its neural network to interpret the world around it quickly and far more efficiently than previous systems. Culurciello presented the results Sept. 15 at the High Performance Embedded Computing (HPEC) workshop in Boston, Mass.

The system uses complex vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications. One application, and the one Culurciello and LeCun are focusing on, is a system that would allow cars to drive themselves. To recognize the various objects encountered on the road, such as other cars, people, stoplights, and sidewalks, not to mention the road itself, NeuFlow processes tens of megapixel images in real time.
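The neural networks in question are convolutional networks of the kind LeCun pioneered, built from repeated stages of filtering, nonlinearity, and pooling. The following minimal Python/NumPy sketch illustrates one such stage, the basic unit that hardware like NeuFlow is designed to accelerate; the function names, layer sizes, and filter values here are purely illustrative assumptions, not NeuFlow's actual configuration or code.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def pool2x2(feature_map):
    """2x2 max pooling to halve the spatial resolution."""
    h, w = feature_map.shape
    fm = feature_map[:h - h % 2, :w - w % 2]
    return fm.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# One convolution + nonlinearity + pooling stage (illustrative sizes).
rng = np.random.default_rng(0)
frame = rng.random((64, 64))           # stand-in for a camera frame
kernel = rng.standard_normal((5, 5))   # a learned filter; random here
feature = np.tanh(conv2d(frame, kernel))
pooled = pool2x2(feature)
print(pooled.shape)  # (30, 30): 64 -> 60 after 5x5 conv, -> 30 after pooling
```

A real scene-labeling network stacks many such stages with dozens of learned filters each; the appeal of dedicated hardware is that these convolutions are highly parallel and can be pipelined at low power.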

The system is also extremely efficient: it runs more than 100 billion operations per second while drawing only a few watts, less than a cell phone uses, to accomplish what takes bench-top computers with multiple graphics processors more than 300 watts to achieve.
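To put those figures in perspective, here is a quick back-of-the-envelope comparison of operations per watt, using the numbers above; the 5 W value standing in for "a few watts" is an assumption for illustration only.

```python
# Back-of-the-envelope efficiency comparison using the article's figures.
neuflow_ops = 100e9   # > 100 billion operations per second
neuflow_watts = 5     # "a few watts" (assumed value for illustration)
gpu_watts = 300       # bench-top machine with multiple graphics processors

gops_per_watt = neuflow_ops / 1e9 / neuflow_watts
print(f"NeuFlow: ~{gops_per_watt:.0f} GOPS/W")                    # ~20 GOPS/W
print(f"Power advantage: ~{gpu_watts / neuflow_watts:.0f}x less")  # ~60x
```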

“One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks,” Culurciello said.

Culurciello embedded the supercomputer on a single chip, making the system much smaller, yet more powerful and efficient, than full-scale computers. “The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places,” Culurciello said.

Beyond autonomous car navigation, the system could be used to improve robot navigation in dangerous or difficult-to-reach locations, to provide 360-degree synthetic vision for soldiers in combat situations, or in assisted-living settings, where it could monitor motion and call for help should an elderly person fall, for example.

Other collaborators include Clement Farabet (Yale University and New York University), Berin Martini, Polina Akselrod, Selcuk Talay (Yale University) and Benoit Corda (New York University).

Find out more about NeuFlow and watch a video of the system in action at http://www.eng.yale.edu/elab/research/svision/svision.html

Media Contact

Suzanne Taylor Muzzin, EurekAlert!
