This new mechanism for visual cognition challenges the currently held model of sight and could change the way neuroscientists study the brain.
The new vision model is called predictive coding. It is more complex and adds an extra dimension to the standard model of sight. The prevailing model has been that neurons process incoming data from the retina through a series of hierarchical layers. In this bottom-up system, the lower neurons first detect an object's features, such as horizontal or vertical lines. The neurons send that information to the next level of brain cells that identify other specific features and feed the emerging image to the next layer of neurons, which add additional details. The image travels up the neuron ladder until it is completely formed.
But new brain imaging data from a study led by Duke researcher Tobias Egner provides "clear and direct evidence" that the standard picture of vision, called feature detection, is incomplete. The data, published Dec. 8 in the Journal of Neuroscience, show that the brain predicts what it will see and edits those predictions in a top-down mechanism, said Egner, who is an assistant professor of psychology and neuroscience.
In this system, the neurons at each level form context-sensitive predictions about what an image might be and send them to the next lower level. Those predictions are compared with the incoming sensory data. Any mismatches, or prediction errors, between what the neurons expected to see and what they actually observe are sent up the neuron ladder. Each layer then adjusts its perception of the image to eliminate the prediction error at the level below.
Finally, once all prediction error is eliminated, "the visual cortex has assigned its best guess interpretation of what an object is, and a person actually sees the object," Egner said. He noted that this happens subconsciously in a matter of milliseconds. "You never even really know you're doing it," he said.
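Stripped to its essentials, the loop described above can be sketched in a few lines of code. This is only an illustrative toy, not the authors' model: the single layer, the random stand-in for retinal data, and the simple update rule are all assumptions made for clarity. A top-down prediction is repeatedly corrected by the bottom-up error signal until the mismatch is gone.

```python
import numpy as np

# Illustrative predictive coding sketch (toy, single layer):
# a higher layer holds a prediction of the input; on each pass it
# receives the prediction error from below and nudges its prediction
# to cancel that error.
rng = np.random.default_rng(0)
sensory_input = rng.normal(size=8)     # stand-in for incoming retinal data
prediction = np.zeros(8)               # the layer's initial guess
learning_rate = 0.5

for step in range(20):
    error = sensory_input - prediction      # mismatch sent up the ladder
    prediction += learning_rate * error     # top-down adjustment

# After a few iterations the error is essentially eliminated and the
# prediction matches the input -- the "best guess interpretation".
residual = np.abs(sensory_input - prediction).max()
print(residual)
```

In this toy, the error shrinks by half on every pass, so after twenty passes the prediction and the input are indistinguishable, mirroring the way the real process settles in milliseconds.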
Egner and his colleagues wanted to capture the process almost as it happened. The team used functional magnetic resonance imaging (fMRI) to scan the fusiform face area (FFA), a brain region that deals with recognizing faces. The researchers monitored 16 subjects' brains as they viewed faces or houses framed in differently colored boxes, with the frame color signaling the likelihood of the picture being a face or a house. Participants were told to press a button when they saw an inverted image of a face or house, but the researchers were measuring something else. By varying the face-frame or house-frame color combinations, the researchers controlled and measured the FFA neural response, teasing apart responses to the stimulus itself, face expectation, and error processing.
If the feature detection model were correct, the FFA neural response should be stronger for faces than houses, irrespective of the subjects' expectations. But Egner and his colleagues found that if subjects had a high expectation of seeing a face, their neural response was nearly the same whether they were actually shown a face or a house. The study goes on to use computational modeling to show that this pattern of neural activation can only be explained by a shared contribution from face expectation and prediction error.
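The paper's actual analysis fits expectation and prediction-error regressors to the fMRI data; the toy calculation below is only a hedged illustration with invented weights and probabilities. It shows how a signal that combines a weak feature-detection term with a strong prediction-error ("surprise") term can make an expected face and an unexpected house evoke similar responses, which pure feature detection cannot.

```python
# Toy illustration (invented weights, NOT the paper's fitted model):
# the FFA signal is modeled as a weak feature-detection term plus a
# strong prediction-error term.
W_FEATURE = 0.2    # drive from a face merely being present
W_SURPRISE = 1.0   # drive from a violated face prediction

def ffa_signal(p_face: float, face_shown: bool) -> float:
    stimulus = 1.0 if face_shown else 0.0
    surprise = abs(stimulus - p_face)          # prediction error
    return W_FEATURE * stimulus + W_SURPRISE * surprise

# Under a high face expectation, an expected face and a surprising
# house produce comparable signals -- the pattern Egner observed:
expected_face = ffa_signal(0.6, face_shown=True)
surprising_house = ffa_signal(0.6, face_shown=False)
print(expected_face, surprising_house)
```

Setting `W_SURPRISE` to zero recovers pure feature detection, where the face response always dwarfs the house response regardless of expectation; only the combined model reproduces the observed pattern.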
This study provides support for a "very different view" of how the visual system works, said Scott Murray, a University of Washington neuroscientist who was not involved in the research. Instead of high neuron firing rates providing information about the presence of a particular feature, high firing rates are instead associated with a deviation from what neurons expect to see, Murray explained. "These deviation signals presumably provide useful tags for something the visual system has to process more to understand."
Egner said that theorists have been developing the predictive coding model for the past 30 years, but no previous studies have directly tested it against the feature detection model. "This paper is provocative and motions toward a change in the preconception of how vision works. In essence, more scientists may become more sympathetic to the new model," he said.
Murray also said that the findings could influence the way neuroscientists continue to study the brain. Most research assumes that if a brain region responds strongly to a particular visual image, then it is somehow responsible for, or specialized in, processing the content of that image. This research "challenges that assumption," he said, explaining that future studies will have to take into account participants' expectations about the visual images being presented.
Ashley Yeager | EurekAlert!