Now, a new study, published in the August issue of Psychological Science, a journal of the Association for Psychological Science, finds that speaking and understanding speech share the same parts of the brain, with one difference: we don't need the brain regions that control the movements of lips, teeth, and so on to understand speech.
Most studies of how speech works in the brain focus on comprehension. That's mostly because it's easier to image the brains of people who are listening quietly; talking makes the head move, which is a problem when you're measuring the brain. But the Donders Institute at Radboud University Nijmegen, where the study was conducted, has developed technology that allows recording from a moving brain.
Laura Menenti, a postdoctoral research associate at the University of Glasgow, co-wrote the paper with Peter Hagoort of Radboud University Nijmegen and the Max Planck Institute for Psycholinguistics, Sarah Gierhan, and Katrien Segaert. Menenti was initially interested in how the brain produces grammatical sentences and wanted to track the process of producing a sentence in its entirety, looking not only at its grammatical structure but also at its meaning. "What made this particularly exciting to us was that no one had managed to perform such a study before, meaning that we could explore an almost completely new topic," says Menenti.
The authors used functional MRI technology to measure brain activity in people who were either listening to sentences or speaking sentences. The other problem with measuring brain activity in people who are speaking is that you have to get them to say the right kind of sentence. The authors accomplished this with a picture of an action—a man strangling a woman, say—with one person colored green and one colored red to indicate their order in the sentence. This prompted people to say either "The man is strangling the woman" or "The woman is strangled by the man." (The experiments were all carried out in Dutch.)
From this, the researchers were able to tell where in the brain three different speech tasks (computing meaning, coming up with the words, and building a grammatical sentence) were taking place. They found that the same areas were activated for each of these tasks in people who were speaking and in people who were listening to sentences. However, although some studies have suggested that while people are listening to speech, they silently articulate the words in order to understand them, the authors found no involvement of motor regions when people were listening.
According to Menenti, though the study was largely designed to answer a specific theoretical question, it also points toward useful avenues for treating people with language-related problems. It suggests that although people with comprehension problems can appear to have intact production, and vice versa, this may not actually be the case. According to Menenti, "Our data suggest that these problems would be expected to always at least partly coincide. On the other hand, our data confirm the idea that many different processes in the language system, such as understanding meaning or grammar, can at least partly be damaged independently of each other."
For more information about this study, please contact: Laura Menenti at firstname.lastname@example.org.
The APS journal Psychological Science is the highest ranked empirical journal in psychology. For a copy of the article "Shared Language: Overlap and Segregation of the Neuronal Infrastructure for Speaking and Listening Revealed by Functional MRI" and access to other Psychological Science research findings, please contact Divya Menon at 202-293-9300 or email@example.com.
Divya Menon | EurekAlert!