New research from Baylor College of Medicine in Houston and the City College of New York shows that watching a speaker's face can improve your understanding of spoken words by as much as sixfold.
Your brain uses visual information from the speaker's face and lip movements to help interpret what you hear, and this benefit is greatest at moderate noise levels, said Dr. Wei Ji Ma, assistant professor of neuroscience at BCM and lead author of the report, which appears online today in the open-access journal PLoS ONE.
"Most people with normal hearing lip-read very well, even though they don't think so," said Ma. "At certain noise levels, lip-reading can increase word recognition performance from 10 to 60 percent correct."
However, when the environment is very noisy or when the voice you are trying to understand is very faint, lip-reading is difficult.
"We find that a minimum sound level is needed for lip-reading to be most effective," said Ma.
This research is the first to study word recognition in a natural setting, where people report freely what they believe is being said. Previous experiments only used limited lists of words for people to choose from.
The lip-reading data help scientists understand how the brain integrates two different kinds of stimuli to come to a conclusion.
Ma and his colleagues constructed a mathematical model that allowed them to predict how successful a person will be at integrating the visual and auditory information.
People actually combine the two stimuli close to optimally, Ma said. What they perceive depends on the reliability of the stimuli.
"Suppose you are a detective," he said. "You have two witnesses to a crime. One is very precise and believable. The other one is not as believable. You take information from both and weigh the believability of each in your determination of what happened."
In a way, lip-reading involves the same kind of integration of information in the brain, he said.
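The weighing of two witnesses by their believability can be sketched in code. The following is a minimal, hypothetical illustration of reliability-weighted (Bayesian) cue combination in general, not the authors' actual model of audiovisual word recognition: each cue is treated as a Gaussian estimate, and the optimal combination weights each cue by its reliability (inverse variance).

```python
# Hypothetical sketch of reliability-weighted (Bayesian) cue combination.
# This illustrates the general principle of optimal integration, not the
# specific model used in the study.

def combine_cues(mean_a, var_a, mean_v, var_v):
    """Fuse an auditory and a visual estimate of the same quantity.

    Each cue is modeled as a Gaussian with its own variance. The
    minimum-variance combination weights each cue by its reliability
    (the inverse of its variance).
    """
    reliability_a = 1.0 / var_a
    reliability_v = 1.0 / var_v
    w_a = reliability_a / (reliability_a + reliability_v)
    w_v = 1.0 - w_a
    combined_mean = w_a * mean_a + w_v * mean_v
    # The fused estimate is more reliable than either cue alone.
    combined_var = 1.0 / (reliability_a + reliability_v)
    return combined_mean, combined_var

# A precise auditory cue (low variance) dominates a noisy visual one:
mean, var = combine_cues(mean_a=0.0, var_a=1.0, mean_v=4.0, var_v=3.0)
# The combined estimate lands closer to the reliable cue (mean 1.0),
# with variance 0.75 -- lower than either cue's variance alone.
```

On this view, a very noisy environment corresponds to an unreliable auditory cue, so the visual cue carries more of the weight; when the audio is clear, lip movements matter less.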
In the experiments, participants watched videos in which a person said a word. When the speaker was presented normally, the visual information provided a large benefit once integrated with the auditory information, especially under moderate background noise. Surprisingly, even when the speaker was replaced by a "cartoon" that did not truly mouth the word, the visual information still helped, though not as much.
In another study, the person mouthed one word while the audio played another, and the brain often integrated the two stimuli into an entirely different perceived word.
"The mathematical model can predict how often the person will understand the word correctly in all these contexts," Ma said.
An example of the visual and audio stimuli used in the experiment can be found at http://bme.engr.ccny.cuny.edu/faculty/parra/bayes-speech/.
Others who took part in this research include Xiang Zhou, Lars A. Ross, John J. Foxe and Lucas C. Parra of The City College of New York in New York City.
When the embargo lifts, the full report can be found at http://dx.plos.org/10.1371/journal.pone.0004638
Glenna Picton | EurekAlert!