What you see affects what you hear

New research from Baylor College of Medicine in Houston and the City College of New York shows that the visual information you absorb when you watch someone speak can improve your understanding of the spoken words by as much as sixfold.

Your brain uses the visual information from the speaker's face and lip movements to help you interpret what you hear, and this benefit is greatest when the sound is moderately noisy, said Dr. Wei Ji Ma, assistant professor of neuroscience at BCM and the report's lead author. The report appears online today in the open-access journal PLoS ONE.

“Most people with normal hearing lip-read very well, even though they don't think so,” said Ma. “At certain noise levels, lip-reading can increase word recognition performance from 10 to 60 percent correct.”

However, when the environment is very noisy or when the voice you are trying to understand is very faint, lip-reading is difficult.

“We find that a minimum sound level is needed for lip-reading to be most effective,” said Ma.

This research is the first to study word recognition in a natural setting, in which people freely report what they believe is being said; previous experiments used only limited lists of words for people to choose from.

The lip-reading data help scientists understand how the brain integrates two different kinds of stimuli to come to a conclusion.

Ma and his colleagues constructed a mathematical model that allowed them to predict how successful a person would be at integrating the visual and auditory information.

People actually combine the two stimuli close to optimally, Ma said. What they perceive depends on the reliability of the stimuli.

“Suppose you are a detective,” he said. “You have two witnesses to a crime. One is very precise and believable. The other one is not as believable. You take information from both and weigh the believability of each in your determination of what happened.”

In a way, lip-reading involves the same kind of integration of information in the brain, he said.
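
The paper's actual model handles open-ended word recognition, but the principle Ma describes, weighting each "witness" by its reliability, is the standard Bayesian rule for combining two noisy cues. The sketch below only illustrates that general rule, not the study's own model; the function name and all numbers are invented for the example.

```python
# Minimal sketch of reliability-weighted cue combination.
# Illustrates the general Bayesian principle, not the study's own
# word-recognition model; all values here are invented for the example.

def combine_cues(mu_audio, var_audio, mu_visual, var_visual):
    """Optimally combine two noisy Gaussian cues about the same quantity.

    Each cue is weighted by its reliability (inverse variance), so the
    more trustworthy "witness" counts for more in the final estimate.
    """
    r_audio = 1.0 / var_audio    # reliability of the auditory cue
    r_visual = 1.0 / var_visual  # reliability of the visual cue
    mu = (r_audio * mu_audio + r_visual * mu_visual) / (r_audio + r_visual)
    var = 1.0 / (r_audio + r_visual)  # combined estimate is less uncertain than either cue alone
    return mu, var

# A noisy auditory cue (variance 4.0) combined with a clearer visual cue (variance 1.0):
mu, var = combine_cues(mu_audio=0.0, var_audio=4.0, mu_visual=1.0, var_visual=1.0)
print(f"combined estimate = {mu:.2f}, variance = {var:.2f}")  # 0.80, 0.80
```

In this toy example the combined estimate leans toward the visual cue, the more reliable witness, and its uncertainty is smaller than either cue's alone, which is why adding lip-reading helps even when the voice itself is hard to make out.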

In the experiments, participants watched videos in which a person said a word. When the speaker was shown normally, the visual information provided a large benefit once integrated with the auditory information, especially under moderate background noise. Surprisingly, even when the speaker was replaced by a “cartoon” that did not truly mouth the word, the visual information still helped, though not as much.

In another study, the person mouths one word while the audio plays another, and the brain often integrates the two stimuli into an entirely different perceived word.

“The mathematical model can predict how often the person will understand the word correctly in all these contexts,” Ma said.
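
The published model is considerably more detailed, but a toy version of this kind of prediction can be built by assuming the auditory and visual evidence are independent and Gaussian, so that their sensitivities add in quadrature. Everything below (the two-alternative setup and the specific sensitivity values) is an assumption made for illustration, not a figure from the paper.

```python
from math import sqrt
from statistics import NormalDist

# Toy prediction of word-recognition accuracy for a two-alternative choice.
# An illustrative stand-in, not the paper's model (which handles freely
# reported words); the sensitivity values are invented.

def proportion_correct(d_audio, d_visual):
    """Predicted accuracy when auditory and visual evidence are combined.

    d_audio, d_visual: discriminability (signal-to-noise ratio) of each
    cue on its own. For independent Gaussian evidence, the combined
    discriminability is the quadrature sum of the two.
    """
    d_combined = sqrt(d_audio ** 2 + d_visual ** 2)
    return NormalDist().cdf(d_combined / sqrt(2))  # standard two-alternative formula

# A weak auditory cue alone versus the same cue plus lip-reading:
print(f"audio only:        {proportion_correct(0.3, 0.0):.0%}")  # ~58%
print(f"audio plus visual: {proportion_correct(0.3, 1.5):.0%}")  # ~86%
```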

An example of the visual and audio stimuli used in the experiment can be found at http://bme.engr.ccny.cuny.edu/faculty/parra/bayes-speech/.

Others who took part in this research include Xiang Zhou, Lars A. Ross, John J. Foxe and Lucas C. Parra of The City College of New York in New York City.

When the embargo lifts, the full report can be found at http://dx.plos.org/10.1371/journal.pone.0004638

Media Contact

Glenna Picton, EurekAlert!

More Information:

http://www.bcm.edu
