Why people 'never forget a face'

13.12.2006
Are you one of those people who never forgets a face?

New research from Vanderbilt University suggests that we can remember more faces than other objects and that faces “stick” the best in our short-term memory. The reason may be that our expertise in remembering faces allows us to package them better for memory.

“Our results show that we can store more faces than other objects in our visual short-term memory,” says Isabel Gauthier, associate professor of psychology at Vanderbilt and the study’s co-author. “We believe this happens because of the special way in which faces are encoded.”

Kim Curby, the study’s primary author and a post-doctoral researcher at Yale University, likens such encoding to packing a suitcase. “How much you can fit in a bag depends on how well you pack it,” she says. “In the same way, our expertise in ‘packaging’ faces means that we can remember more of them.”

The findings, part of Curby’s dissertation at Vanderbilt, are currently in press at the journal Psychonomic Bulletin &amp; Review.

Curby and Gauthier’s research has practical implications for the way we use visual short-term memory, or VSTM. “Being able to store more faces in VSTM may be very useful in complex social situations,” Gauthier says.

“This opens up the possibility of training people to develop similarly superior VSTM for other categories of objects,” Curby adds.

Short-term memory is crucial to our impression of a continuous world, serving as temporary storage for information that we are currently using. For example, in order to understand this sentence, your short-term memory holds the words at its beginning while you read through to the end. VSTM is the component of short-term memory that helps us process and briefly remember images and objects, rather than words and sounds.

VSTM allows us to remember objects for a few seconds, but its capacity is limited. Curby and Gauthier’s new research focuses on whether we can store more faces than other objects in VSTM, and on the possible mechanisms underlying this advantage.

Participants studied up to five faces on a screen for varying lengths of time (up to four seconds). A single face was later presented, and participants decided whether it had been part of the original display. For comparison, the process was repeated with other objects, such as watches or cars.

Curby and Gauthier found that when participants studied the displays for only a brief amount of time (half a second), they could store fewer faces than objects in VSTM. They believe this is because faces are more complex than watches or cars and require more time to be encoded. Surprisingly, when participants were given more time to encode the images (four seconds), an advantage for faces over objects emerged.

The researchers believe that our experience with faces explains this advantage. This theory is supported by the fact that the advantage was only obtained for faces encoded in the upright orientation, with which we are most familiar. Faces that were encoded upside-down showed no advantage over other objects.

“Our work is the first to show an advantage in capacity for faces over other objects,” Gauthier says. “Our results suggest that because experience leads you to encode upright faces in a different manner (not only using the parts, but using the whole configuration), you can store more faces in VSTM.”

“What’s striking about this is that some of the most prominent current theories suggest that the capacity of VSTM is set in stone, unalterable by experience,” Curby says. “However, our results clearly show that expert learning impacts VSTM capacity.”

Curby and Gauthier plan to continue their research on VSTM processes. Their next step will compare VSTM capacity in people who are experts in other categories of complex objects, such as cars. Later, they will use brain imaging to pinpoint the mechanisms in the brain by which faces are encoded more efficiently than other objects.

Gauthier is a member of the Vanderbilt Vision Research Center, the Center for Integrative and Cognitive Neuroscience and the Vanderbilt Kennedy Center for Research on Human Development.

This research was supported by funding from the National Institutes of Health, the National Science Foundation and the James S. McDonnell Foundation.

Melanie Moran | EurekAlert!
Further information:
http://www.vanderbilt.edu
