Mimicking how the brain recognizes street scenes

09.02.2007
First computer model based on the brain works well for artificial vision

At last, neuroscience is having an impact on computer science and artificial intelligence (AI). For the first time, scientists in Tomaso Poggio's laboratory at the McGovern Institute for Brain Research at MIT applied a computational model of how the brain processes visual information to a complex, real-world task: recognizing the objects in a busy street scene. The researchers were pleasantly surprised at the power of this new approach.

"People have been talking about computers imitating the brain for a long time," said Poggio, who is also the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences and the co-director of the Center for Biological and Computational Learning at MIT. "That was Alan Turing's original motivation in the 1940s. But in the last 50 years, computer science and AI have developed independently of neuroscience. Our work is biologically inspired computer science."

"We developed a model of the visual system that was meant to be useful for neuroscientists in designing and interpreting experiments, but that also could be used for computer science," said Thomas Serre, a former PhD student and now a post-doctoral researcher in Poggio's lab and lead author a paper about the street scene application in the 2007 IEEE Transactions on Pattern Analysis and Machine Intelligence. "We chose street scene recognition as an example because it has a restricted set of object categories, and it has practical social applications."

Near-term applications include surveillance and automobile driver assistance; eventually the approach could support visual search engines, biomedical image analysis, and robots with realistic vision. On the neuroscience end, this research is essential for designing augmented sensory prostheses, such as one that could replicate the computations carried out by damaged nerves from the retina. "And once you have a good model of how the human brain works," Serre explained, "you can break it to mimic a brain disorder." One brain disorder that involves distortions in visual perception is schizophrenia, but nobody understands the neurobiological basis for those distortions.

"The versatility of the biological model turns computer vision from a trick into something really useful," said co-author Stanley Bileschi, a post-doctoral researcher in the Poggio lab. He and co-author Lior Wolf, a former post-doctoral associate who is now on the faculty of the Computer Science Department at Tel-Aviv University, are working with the MIT entrepreneur office, the Deshpande Center in the Sloan School. This center helps MIT students and professors bridge the gap between an intriguing idea or technology and a commercially viable concept.

Recognizing Scenes

The IEEE paper describes how the team "showed" the model randomly selected images so that it could "learn" to identify commonly occurring features in real-world objects, such as trees, cars, and people. In so-called supervised training sessions, the model used those features to label by category the varied examples of objects found in digital photographs of street scenes: buildings, cars, motorcycles, airplanes, faces, pedestrians, roads, skies, trees, and leaves. The photographs derive from a Street Scene Database compiled by Bileschi.
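To make that training procedure concrete, here is a minimal sketch of the supervised stage, assuming a feature-extraction step has already turned each labeled image patch into a fixed-length vector. The category list, feature dimensions, and the choice of a linear SVM are placeholders for illustration, not details taken from the paper.

```python
# Minimal sketch of the supervised stage described above (not the authors' code):
# feature vectors extracted from labeled image patches are used to train one
# classifier per object category. Feature dimensions and data are placeholders.
import numpy as np
from sklearn.svm import LinearSVC

CATEGORIES = ["building", "car", "pedestrian", "road", "sky", "tree"]

def train_category_classifiers(features, labels):
    """Train a one-vs-rest linear classifier for each object category.

    features: (n_patches, n_features) array of patch descriptors
    labels:   list of category names, one per patch
    """
    classifiers = {}
    labels = np.asarray(labels)
    for category in CATEGORIES:
        target = (labels == category).astype(int)   # 1 = this category, 0 = rest
        clf = LinearSVC()
        clf.fit(features, target)
        classifiers[category] = clf
    return classifiers

# Example usage with random placeholder data:
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 256))                     # stand-in patch features
y = rng.choice(CATEGORIES, size=600)                # stand-in labels
models = train_category_classifiers(X, y)
```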

Compared to traditional computer-vision systems, the biological model was surprisingly versatile. Traditional systems are engineered for specific object classes. For instance, systems engineered to detect faces or recognize textures are poor at detecting cars. In the biological model, the same algorithm can learn to detect widely different types of objects.

To test the model, the team presented full street scenes consisting of previously unseen examples from the Street Scene Database. The model scanned the scene and, based on its supervised training, recognized the objects in the scene. The upshot is that the model learned from examples, which, according to Poggio, is a hallmark of artificial intelligence.
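The test phase can be pictured as a sliding-window scan: each window of the scene is converted to a feature vector and scored against every trained category. The sketch below is a hypothetical illustration of that idea only; the window size, stride, descriptor, and classifier are all invented for the example and are not taken from the published system.

```python
# Hypothetical sliding-window scan of a street scene. Each window is turned
# into a feature vector and scored by per-category classifiers; any positive
# score is recorded as a detection. All sizes here are placeholders.
import numpy as np
from sklearn.svm import LinearSVC

def extract_features(window):
    """Placeholder descriptor: flatten and L2-normalise the grey-level window."""
    v = window.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def scan_scene(image, classifiers, window=64, stride=32):
    """Slide a window over the image and record (category, x, y, score) hits."""
    detections = []
    h, w = image.shape[:2]
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            feats = extract_features(image[y:y + window, x:x + window])
            for category, clf in classifiers.items():
                score = clf.decision_function(feats.reshape(1, -1))[0]
                if score > 0:          # positive margin counts as a detection
                    detections.append((category, x, y, score))
    return detections

# Tiny usage example with a dummy "car" classifier and a random image:
rng = np.random.default_rng(1)
train = rng.normal(size=(100, 64 * 64))
labels = rng.integers(0, 2, size=100)
classifiers = {"car": LinearSVC().fit(train, labels)}
hits = scan_scene(rng.normal(size=(256, 256)), classifiers)
```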

Modeling Object Recognition

Teaching a computer how to recognize objects has been exceedingly difficult because a computer model has two paradoxical goals. It needs to create a representation for a particular object that is very specific, such as a horse as opposed to a cow or a unicorn. At the same time, the representation must be sufficiently "invariant" to discard meaningless changes in pose, illumination, size, position, and many other variations in appearance.

Even a child's brain handles these contradictory tasks easily in rapid object recognition. Pixel-like information enters from the retina and passes in a fast, feed-forward, bottom-up sweep through the hierarchical architecture of the visual cortex. What makes the Poggio lab's model so innovative and powerful is that, computationally speaking, it mimics the brain's own hierarchy. Specifically, the "layers" within the model replicate the way neurons process their inputs and outputs, as measured by neural recordings in physiology labs. Like the brain, the model alternates several times between computations that make the object representation increasingly invariant to changes in an object's appearance in the visual field and computations that make the representation increasingly complex and selective for a particular object.
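In rough code terms, one such alternation might look like the toy sketch below: an S-type stage that gains selectivity by matching local patches against stored templates, followed by a C-type stage that gains invariance by max-pooling responses over nearby positions. This is only an illustrative sketch in the spirit of the lab's model, not the published implementation; the patch sizes, templates, and pooling ranges are invented, and the real model stacks several such pairs with many more units.

```python
# Toy sketch of the alternating computations described above (illustrative only):
# an S-type stage matches local patches against stored templates (selectivity);
# a C-type stage takes a max over nearby positions (invariance).
import numpy as np

def s_layer(feature_map, templates, patch=7):
    """Selectivity: respond strongly where a local patch resembles a template."""
    h, w = feature_map.shape
    out = np.zeros((len(templates), h - patch + 1, w - patch + 1))
    for t, template in enumerate(templates):
        for y in range(h - patch + 1):
            for x in range(w - patch + 1):
                window = feature_map[y:y + patch, x:x + patch]
                # Gaussian of the distance to the template: peak response on a match
                out[t, y, x] = np.exp(-np.sum((window - template) ** 2))
    return out

def c_layer(s_maps, pool=4):
    """Invariance: max-pool each template's response map over local positions."""
    n, h, w = s_maps.shape
    out = np.zeros((n, h // pool, w // pool))
    for t in range(n):
        for y in range(0, (h // pool) * pool, pool):
            for x in range(0, (w // pool) * pool, pool):
                out[t, y // pool, x // pool] = s_maps[t, y:y + pool, x:x + pool].max()
    return out

# One selectivity/invariance pair on a random "image"; the real hierarchy
# repeats this alternation several times with increasingly complex templates.
rng = np.random.default_rng(2)
image = rng.normal(size=(64, 64))
templates = rng.normal(size=(4, 7, 7))
c1 = c_layer(s_layer(image, templates))
```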

The model's success validates work in physiology labs that have measured the tuning properties of neurons throughout the visual cortex. By necessity, most of those experiments use simplistic artificial stimuli, such as gratings, bars, and line drawings, that bear little resemblance to real-world images. "We put together a system that mimics as closely as possible how cortical cells respond to simple stimuli like the ones that are used in the physiology lab," said Serre. "The fact that this system seems to work on realistic street scene images is a proof of concept that the activity of neurons as measured in the lab is sufficient to explain how brains can perform complex recognition tasks."

Making it More Useful

The model used in the street scene application mimics only the computations the brain uses for rapid object recognition. The lab is now elaborating the model to include the brain's feedback loops from the cognitive centers. This slower form of object recognition provides time for context and reflection, such as: if I see a car, it must be on the road, not in the sky. Giving the model the ability to recognize such semantic features will empower it for broader applications, including managing seemingly insurmountable amounts of data, work tasks, or even email. The team is also working on a model for recognizing motions and actions, such as walking or talking, which could be used to filter videos for anomalous behaviors, or for smarter movie editing.

Laurie Ledeen | EurekAlert!
Further information:
http://www.mit.edu
http://web.mit.edu/mcgovern/
