New research from Baylor College of Medicine in Houston and the City College of New York shows that the visual information you absorb when you watch a speaker's face can improve your understanding of spoken words by as much as sixfold.
Your brain uses visual information from the speaker's face and lip movements to help interpret what you hear, and this benefit is greatest when the background noise is moderate, said Dr. Wei Ji Ma, assistant professor of neuroscience at BCM and lead author of the report, which appears online today in the open-access journal PLoS ONE.
"Most people with normal hearing lip-read very well, even though they don't think so," said Ma. "At certain noise levels, lip-reading can increase word recognition performance from 10 to 60 percent correct."
However, when the environment is very noisy or when the voice you are trying to understand is very faint, lip-reading is difficult.
"We find that a minimum sound level is needed for lip-reading to be most effective," said Ma.
This research is the first to study word recognition in a natural setting, in which people freely report what they believe is being said. Previous experiments used only limited lists of words for people to choose from.
The lip-reading data help scientists understand how the brain integrates two different kinds of stimuli to come to a conclusion.
Ma and his colleagues constructed a mathematical model that allowed them to predict how successful a person will be at integrating the visual and auditory information.
People actually combine the two stimuli nearly optimally, Ma said. What they perceive depends on the reliability of each stimulus.
"Suppose you are a detective," he said. "You have two witnesses to a crime. One is very precise and believable. The other one is not as believable. You take information from both and weigh the believability of each in your determination of what happened."
In a way, lip-reading involves the same kind of integration of information in the brain, he said.
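To make the detective analogy concrete, here is a minimal Python sketch of reliability-weighted cue integration, the Bayes-optimal rule for combining two noisy Gaussian estimates. The cue values and noise levels are invented for illustration and are not taken from the study.

```python
import numpy as np

def integrate_cues(x_audio, sigma_audio, x_visual, sigma_visual):
    """Reliability-weighted average of two Gaussian cues (illustrative values only)."""
    w_a = 1.0 / sigma_audio**2   # reliability of the auditory cue = inverse variance
    w_v = 1.0 / sigma_visual**2  # reliability of the visual cue
    estimate = (w_a * x_audio + w_v * x_visual) / (w_a + w_v)
    sigma_combined = np.sqrt(1.0 / (w_a + w_v))  # never worse than either cue alone
    return estimate, sigma_combined

# One precise "witness" (visual) and one noisy "witness" (auditory):
est, sd = integrate_cues(x_audio=2.0, sigma_audio=3.0, x_visual=1.0, sigma_visual=1.0)
print(f"combined estimate: {est:.2f}, uncertainty: {sd:.2f}")
# The combined estimate lands close to the more reliable (visual) cue,
# and the combined uncertainty is smaller than either cue's alone.
```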
In the experiments, participants watched videos of a person saying a word. When the speaker is shown normally, the visual information provides a large benefit once integrated with the auditory information, especially in moderate background noise. Surprisingly, even when the speaker is replaced by a "cartoon" that does not truly mouth the word, the visual information still helps, though not as much.
In another study, the speaker mouths one word while the audio plays another, and often the brain fuses the two stimuli into an entirely different perceived word.
"The mathematical model can predict how often the person will understand the word correctly in all these contexts," Ma said.
An example of the visual and audio stimuli used in the experiment can be found at http://bme.engr.ccny.cuny.edu/faculty/parra/bayes-speech/.
Others who took part in this research include Xiang Zhou, Lars A. Ross, John J. Foxe and Lucas C. Parra of The City College of New York in New York City.
The full report can be found at http://dx.plos.org/10.1371/journal.pone.0004638
Glenna Picton | EurekAlert!