Computational model sheds light on how the brain recognizes objects

09.06.2010
Could help advance artificial-intelligence research

Researchers at MIT’s McGovern Institute for Brain Research have developed a new mathematical model to describe how the human brain visually identifies objects. The model accurately predicts human performance on certain visual-perception tasks, which suggests that it’s a good approximation of what actually happens in the brain, and it could also help improve computer object-recognition systems.

The model was designed to reflect neurological evidence that in the primate brain, object identification — deciding what an object is — and object location — deciding where it is — are handled separately. “Although what and where are processed in two separate parts of the brain, they are integrated during perception to analyze the image,” says Sharat Chikkerur, lead author on a paper appearing this week in the journal Vision Research, which describes the work. “The model that we have tries to explain how this information is integrated.”

The mechanism of integration, the researchers argue, is attention. According to their model, when the brain is confronted by a scene containing a number of different objects, it can’t keep track of all of them at once. So instead it creates a rough map of the scene that simply identifies some regions as being more visually interesting than others. If it’s then called upon to determine whether the scene contains an object of a particular type, it begins by searching — turning its attention toward — the regions of greatest interest.

Chikkerur and Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences and at the Computer Science and Artificial Intelligence Laboratory, together with graduate student Cheston Tan and former postdoc Thomas Serre, implemented the model in software, then tested its predictions against data from experiments with human subjects. The subjects were asked first to simply regard a street scene depicted on a computer screen, then to count the cars in the scene, and then to count the pedestrians, while an eye-tracking system recorded their eye movements. The software predicted with great accuracy which regions of the image the subjects would attend to during each task.

The software’s analysis of an image begins with the identification of interesting features — rudimentary shapes common to a wide variety of images. It then creates a map that depicts which features are found in which parts of the image. But thereafter, shape information and location information are processed separately, as they are in the brain.

The software creates a list of all the interesting features in the feature map, and from that, it creates another list, of all the objects that contain those features. But it doesn’t record any information about where or how frequently the features occur.

At the same time, it creates a spatial map of the image that indicates where interesting features are to be found, but not what sorts of features they are.

It does, however, interpret the “interestingness” of the features probabilistically. If a feature occurs more than once, its interestingness is spread out across all the locations at which it occurs. If another feature occurs at only one location, its interestingness is concentrated at that one location.
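
To make that separation concrete, here is a minimal Python sketch of how such a feature map and its probabilistic “interestingness” might be computed. The function name location_saliency and the dictionary-based feature map are illustrative assumptions, not the authors’ implementation.

    from collections import defaultdict

    def location_saliency(feature_map):
        """Spread each feature's weight evenly over the locations where it occurs.

        feature_map: dict mapping a feature label to a list of (x, y) locations.
        A feature seen in many places contributes only a small share to each;
        a feature seen once concentrates all of its weight at that location.
        """
        saliency = defaultdict(float)
        for feature, locations in feature_map.items():
            if not locations:
                continue
            share = 1.0 / len(locations)   # divide this feature's weight among its occurrences
            for loc in locations:
                saliency[loc] += share
        return dict(saliency)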

Mathematically, this is a natural consequence of separating information about objects’ identity and location and interpreting the results probabilistically. But it ends up predicting another aspect of human perception, a phenomenon called “pop out.” A human subject presented with an image of, say, one square and one star will attend to both objects about equally. But a human subject presented with an image of one square and a dozen stars will tend to focus on the square.
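
Under the sketch above, pop-out falls out of the arithmetic: in a hypothetical scene with one square and a dozen stars, the square’s lone feature location keeps all of its weight, while each star location receives only one twelfth.

    # Hypothetical scene, assuming the location_saliency sketch above.
    scene = {
        "square_corner": [(5, 5)],                   # occurs once
        "star_point": [(x, 0) for x in range(12)],   # occurs twelve times
    }
    saliency = location_saliency(scene)
    print(max(saliency, key=saliency.get))           # -> (5, 5), the lone square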

Like a human asked to perform a visual-perception task, the software can adjust its object and location models on the fly. If the software is asked to identify only the objects at a particular location in the image, it will cross off its list of possible objects any that don’t contain the features found at that location.
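
A rough sketch of that “what given where” step, using the same illustrative data structures (objects_at_location and object_features are assumed names, not taken from the paper):

    def objects_at_location(location, feature_map, object_features):
        """Keep only candidate objects whose feature sets cover the
        features observed at the attended location."""
        observed = {f for f, locs in feature_map.items() if location in locs}
        return [obj for obj, feats in object_features.items()
                if observed <= set(feats)]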

By the same token, if it’s asked to search the image for a particular kind of object, the interestingness of features not found in that object will go to zero, and the interestingness of features found in the object will increase proportionally. This is what allows the system to predict the eye movements of humans viewing a digital image, but it’s also the aspect of the system that could aid the design of computer object-recognition systems. A typical object-recognition system, when asked to search an image for multiple types of objects, will search through the entire image looking for features characteristic of the first object, then search through the entire image looking for features characteristic of the second object, and so on. A system like Poggio and Chikkerur’s, however, could limit successive searches to just those regions of the image that are likely to have features of interest.
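
The complementary “where given what” step can be sketched the same way: features absent from the target object get zero weight and the rest are renormalized, so later searches only need to visit the highest-scoring locations. Again, task_saliency and the feature labels are illustrative assumptions rather than the authors’ code.

    def task_saliency(feature_map, target_features):
        """Restrict saliency to features of the sought object and renormalize."""
        relevant = {f: locs for f, locs in feature_map.items()
                    if f in target_features}
        saliency = location_saliency(relevant)   # reuses the sketch above
        total = sum(saliency.values()) or 1.0
        return {loc: s / total for loc, s in saliency.items()}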

Source: “What and where: A Bayesian inference theory of attention.” Sharat S. Chikkerur, Thomas Serre, Cheston Tan, Tomaso Poggio. Vision Research. Week of 7 June, 2010.

Funding: DARPA, the Honda Research Institute USA, NEC, Sony and the Eugene McDermott Foundation

Jennifer Hirsch | EurekAlert!
Further information:
http://www.mit.edu
