If you think having your phone identify the nearest bus stop is cool, wait until it identifies your mood.
New research by a team of engineers at the University of Rochester may soon make that possible. At the IEEE Workshop on Spoken Language Technology on Dec. 5, the researchers will describe a new computer program that gauges human feelings through speech, with substantially greater accuracy than existing approaches.
Surprisingly, the program doesn't look at the meaning of the words. "We actually used recordings of actors reading out the date of the month – it really doesn't matter what they say, it's how they're saying it that we're interested in," said Wendi Heinzelman, professor of electrical and computer engineering.
Heinzelman explained that the program analyzes 12 features of speech, such as pitch and volume, to identify one of six emotions from a sound recording. And it achieves 81 percent accuracy – a significant improvement on earlier studies that achieved only about 55 percent accuracy.
The research has already been used to develop a prototype of an app. The app displays either a happy or sad face after it records and analyzes the user's voice. It was built by one of Heinzelman's graduate students, Na Yang, during a summer internship at Microsoft Research. "The research is still in its early days," Heinzelman added, "but it is easy to envision a more complex app that could use this technology for everything from adjusting the colors displayed on your mobile to playing music that fits your mood after recording your voice."
Heinzelman and her team are collaborating with Rochester psychologists Melissa Sturge-Apple and Patrick Davies, who are currently studying the interactions between teenagers and their parents. "A reliable way of categorizing emotions could be very useful in our research," Sturge-Apple said. "It would mean that a researcher doesn't have to listen to the conversations and manually input the emotion of different people at different stages."
Teaching a computer to understand emotions begins with recognizing how humans do so.
"You might hear someone speak and think 'oh, he sounds angry!' But what is it that makes you think that?" asks Sturge-Apple. She explained that emotion affects the way people speak by altering the volume, pitch and even the harmonics of their speech. "We don't pay attention to these features individually, we have just come to learn what angry sounds like – particularly for people we know," she adds.
But for a computer to categorize emotion, it needs to work with measurable quantities. So the researchers established 12 specific features in speech that were measured in each recording at short intervals. The researchers then categorized each of the recordings and used them to teach the computer program what "angry," "sad," "happy," "fearful," "disgusted," or "neutral" sounds like.
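The idea of measuring speech features at short intervals can be sketched as follows. This is a minimal illustration, not the team's actual pipeline: the paper's 12 features are not enumerated here, so this sketch computes just two stand-ins per frame, RMS energy as a proxy for volume and zero-crossing rate as a crude proxy for pitch.

```python
import numpy as np

def frame_features(signal, sr=16000, frame_ms=25, hop_ms=10):
    """Measure simple per-frame speech features at short intervals.

    A sketch only: the Rochester system uses 12 features; here we
    compute two illustrative ones (RMS energy and zero-crossing rate).
    """
    frame = int(sr * frame_ms / 1000)   # samples per analysis window
    hop = int(sr * hop_ms / 1000)       # step between windows
    feats = []
    for start in range(0, len(signal) - frame + 1, hop):
        w = signal[start:start + frame]
        energy = np.sqrt(np.mean(w ** 2))               # volume (RMS)
        zcr = np.mean(np.abs(np.diff(np.sign(w)))) / 2  # pitch proxy
        feats.append((energy, zcr))
    return np.array(feats)
```

A classifier would then be trained on these per-frame feature vectors, each labeled with the emotion of the recording it came from.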
The system then analyzed new recordings and tried to determine whether the voice in the recording portrayed any of the known emotions. If the computer program was unable to decide between two or more emotions, it just left that recording unclassified.
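The "leave it unclassified" behavior described above is what machine-learning practitioners call a reject option. A minimal sketch, assuming the classifier exposes a per-emotion probability vector (the paper's actual decision rule and thresholds are not specified here):

```python
def classify_with_reject(probs, labels, min_conf=0.6, min_margin=0.2):
    """Return an emotion label only when the model is confident enough;
    otherwise return None, leaving the recording unclassified.

    `probs` is a per-emotion probability vector; `min_conf` and
    `min_margin` are illustrative thresholds, not values from the paper.
    """
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    best, second = probs[order[0]], probs[order[1]]
    # Reject if the top emotion is weak or barely beats the runner-up.
    if best < min_conf or best - second < min_margin:
        return None
    return labels[order[0]]
```

Rejecting ambiguous recordings trades coverage for precision, which matches the stated goal: when the system does emit a label, it should very likely be correct.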
"We want to be confident that when the computer thinks the recorded speech reflects a particular emotion, it is very likely it is indeed portraying this emotion," Heinzelman explained.
Previous research has shown that emotion classification systems are highly speaker dependent; they work much better if the system is trained by the same voice it will analyze. "This is not ideal for a situation where you want to be able to just run an experiment on a group of people talking and interacting, like the parents and teenagers we work with," Sturge-Apple explained.
Their new results confirm this finding. When the speech-based emotion classifier was tested on a voice different from the one that trained the system, accuracy dropped from 81 percent to about 30 percent. The researchers are now looking at ways of minimizing this effect, for example, by training the system with a voice in the same age group and of the same gender. As Heinzelman said, "there are still challenges to be resolved if we want to use this system in an environment resembling a real-life situation, but we do know that the algorithm we developed is more effective than previous attempts."
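The mitigation Heinzelman describes, training separate models for voices in the same age group and gender, amounts to bucketing the training data before fitting. A hypothetical sketch (the `gender` and `age_group` keys and the `train_fn` interface are assumptions for illustration, not part of the published system):

```python
def train_grouped(recordings, train_fn):
    """Train one classifier per (gender, age-group) bucket.

    `recordings` is a list of dicts with assumed `gender` and
    `age_group` keys; `train_fn` fits a model on one bucket's data.
    """
    buckets = {}
    for rec in recordings:
        key = (rec["gender"], rec["age_group"])
        buckets.setdefault(key, []).append(rec)
    # At prediction time, a new voice would be routed to the model
    # whose bucket matches its demographic group.
    return {key: train_fn(group) for key, group in buckets.items()}
```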
Na Yang was awarded a grant by the International Speech Communication Association to attend the SLT Workshop.
For more information on the project visit http://www.ece.rochester.edu/projects/wcng/project_bridge.html.
About the University of Rochester
The University of Rochester is one of the nation's leading private universities. Located in Rochester, N.Y., the University gives students exceptional opportunities for interdisciplinary study and close collaboration with faculty through its unique cluster-based curriculum. Its College, School of Arts and Sciences, and Hajim School of Engineering and Applied Sciences are complemented by its Eastman School of Music, Simon School of Business, Warner School of Education, Laboratory for Laser Energetics, School of Medicine and Dentistry, School of Nursing, Eastman Institute for Oral Health, and the Memorial Art Gallery.
Leonor Sierra | Source: EurekAlert!
Further information: www.rochester.edu