Computer Vision Study Links How Brain Recognizes Faces, Moods

01.07.2003


A study at Ohio State University has given new insight into how humans recognize faces and facial expressions. Participants in the study were asked to identify the expressions of faces in a photo database, such as the expressions shown here (in clockwise order: neutral, happy, angry, and screaming). Photos courtesy of Ohio State University.


The human brain combines motion and shape information to recognize faces and facial expressions, a new study suggests.

That new finding, part of an engineer’s quest to design computers that “see” faces the way humans do, provides more evidence concerning a controversy in cognitive psychology.

Were computers to become adept at recognizing faces and moods, they would be more user-friendly, said Aleix Martinez, assistant professor of electrical engineering at Ohio State University. They could also support intelligent video security systems and provide potentially hack-proof computer identification.



Martinez developed a model of how the brain recognizes the faces of people we’ve seen before, and how we discern facial expressions. These two activities take place in different areas of the brain, and some scientists believe that the mental processes involved are completely separate as well; others believe that the two processes are closely linked.

In a recent issue of the journal Vision Research, Martinez reported that the two processes are indeed linked -- indirectly, through the part of the brain that helps us understand motion. We use our knowledge of how facial muscles move when we recognize a smile, for instance, or when we recognize a familiar face regardless of that person's expression.

Martinez and his colleagues want to use this information to design a computer that recognizes people based on input from a video camera. Most such “computer vision” systems on the market today require many pictures of a person before they can make an identification, and even then the computers can be fooled if the person looks slightly different than in the pictures.

“Ideally, we want a computer that can recognize someone, even though there is only one picture of that person on file, and it was taken at a different angle, in different lighting, or they were wearing sunglasses,” Martinez said.

It’s a tall order, but then again, the goal isn’t 100 percent accuracy.

“When it comes to recognizing faces, people aren’t perfect, but computers are even worse,” said Martinez. “For a computer to interact well with humans and identify people the way we want it to, it would have to make the same errors that humans make.”

Martinez has shown that his model of this brain function -- that we combine our knowledge of motion and shape to recognize faces and facial expressions -- closely matches the test results of people he studied while he was at Purdue University.

Martinez photographed the faces of 126 volunteers to create a face database. Each volunteer was photographed with four different facial expressions -- happy, angry, screaming, and neutral -- with different lighting, and with and without different accessories including sunglasses.

For the work just published in Vision Research, Martinez showed photos from the database to two groups of volunteers. The first group was tested to see how fast they could decide if two faces -- one with a neutral expression and one happy, angry, or screaming -- belonged to the same person.

The second group of volunteers was tested to see how fast they could identify the facial expression -- either happy, angry, screaming, or neutral -- shown in a series of photos.

Martinez timed their responses, and compared the results to his computer model. Though the model depicted a much-simplified version of human visual processing, it was unique because it included a module for calculating how much the facial muscles had moved between the different expressions.

If the human brain takes the time to “calculate” movement of the face, he reasoned, then the humans and the computer model would experience similar delays when identifying faces.
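As a rough illustration of this idea -- a sketch only, not the published model -- the following Python snippet measures how far a set of hypothetical facial landmarks moves between a neutral photo and an expressive one; under this assumption, a larger displacement would mean a longer "calculation" and therefore a slower response.

    # Minimal sketch (not the study's model): estimate facial movement by
    # comparing hypothetical landmark coordinates between two photos, and
    # treat that magnitude as a rough proxy for expected recognition delay.
    import numpy as np

    def motion_magnitude(neutral_landmarks, expression_landmarks):
        """Mean displacement (in pixels) of facial landmarks between two photos.

        Both inputs are (N, 2) arrays of (x, y) landmark positions, e.g. brow
        points and mouth corners. Larger values mean more facial movement.
        """
        displacement = expression_landmarks - neutral_landmarks
        return float(np.linalg.norm(displacement, axis=1).mean())

    # Invented example coordinates: a screaming face moves its landmarks far
    # more than an angry one, so such a model would "take longer" on it.
    neutral = np.array([[30, 60], [70, 60], [50, 90], [40, 95], [60, 95]], float)
    angry   = neutral + np.array([[0, -2], [0, -2], [0, 0], [0, 1], [0, 1]], float)
    scream  = neutral + np.array([[0, -4], [0, -4], [0, 18], [-5, 14], [5, 14]], float)

    print(motion_magnitude(neutral, angry))   # small displacement -> fast match
    print(motion_magnitude(neutral, scream))  # large displacement -> slow match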

Just like the computer model, the human volunteers were quicker to recognize faces and facial expressions that involved little movement, and slower to recognize expressions that involved a lot of movement.

In the first experimental group -- the one that had to decide if two faces belonged to the same person -- volunteers most quickly matched neutral faces to neutral faces (0.8 seconds), followed by neutral to angry (just under 0.9 seconds), neutral to smiling (0.9 seconds), and neutral to screaming faces (just over 1 second).

In the second group -- the one that had to identify which of the four expressions they were looking at -- they most quickly picked out happy faces (1.3 seconds), then neutral (1.5 seconds), angry (1.9 seconds), and screaming faces (barely under 2 seconds).

Although the computer model’s “reaction time” was measured in computer cycles and iterations rather than seconds, it identified faces and expressions in the same order as the human volunteers for both tests.
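One simple way to check that kind of agreement -- again only a sketch, with invented iteration counts for illustration -- is to compare the rank order of the human reaction times with the rank order of the model's "reaction times":

    # Sketch: does a model order the expressions the same way people do?
    # The human times come from the study; the iteration counts are invented.
    from scipy.stats import spearmanr

    human_seconds    = {"happy": 1.3, "neutral": 1.5, "angry": 1.9, "screaming": 2.0}
    model_iterations = {"happy": 40, "neutral": 55, "angry": 70, "screaming": 90}  # hypothetical

    expressions = list(human_seconds)
    rho, _ = spearmanr([human_seconds[e] for e in expressions],
                       [model_iterations[e] for e in expressions])
    print(rho)  # 1.0 means the model ranks the expressions exactly as people did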

Martinez's model also explained why the human volunteers were able to match angry faces faster in the first test, but identify happy faces faster in the second test.

Except for very subtle features -- such as a furrowed brow, pursed lips, or squinting eyes -- most angry faces aren’t that different from neutral faces. So matching a neutral face to an angry face is easier.

But when the only task is to identify an expression, identifying happiness is easier because in general we need only examine whether one feature -- the mouth -- is smiling.
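A toy version of that intuition -- not the study's method -- would check only whether the mouth corners sit higher than the center of the mouth in image coordinates:

    # Toy illustration: read a smile from the mouth alone. In image
    # coordinates y grows downward, so a smaller y means "higher up".
    def looks_happy(left_corner_y, right_corner_y, mouth_center_y):
        corners_raised = (left_corner_y < mouth_center_y) and (right_corner_y < mouth_center_y)
        return corners_raised

    print(looks_happy(88, 89, 95))  # corners above the center -> likely a smile
    print(looks_happy(96, 97, 95))  # corners below the center -> not a smile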

Ultimately, this work could lead to computers that recognize the faces of authorized users -- eliminating the need for passwords, which can sometimes be guessed or obtained by unauthorized users. Computers could also take cues from a user's emotional state.

“You can imagine a computer saying, ‘you seem upset, what can I do to help?’” Martinez said.

The technology could also improve age-progression software, which is often used by law enforcement to find missing children, and help computers identify criminals who wear common disguises such as glasses or scarves.

This work was partially supported by the National Science Foundation.



Contact: Aleix Martinez, (614) 688-8225; Martinez.158@osu.edu

Written by Pam Frost Gorder, (614) 292-9475; Gorder.1@osu.edu

Further information:
http://www.osu.edu/researchnews/archive/compvisn.htm
http://sampl.eng.ohio-state.edu/%7Ealeix/
http://eewww.eng.ohio-state.edu/
