Crime fighting potential for computerised lip-reading

The three-year project, which starts next month, will collect data for lip-reading and use it to create machines that automatically convert videos of lip-motions into text.

It builds on work already carried out at UEA to develop state-of-the-art speech reading systems.

The university is teaming up with the Centre for Vision, Speech & Signal Processing at Surrey University, which has built accurate and reliable face and lip trackers, and the Home Office Scientific Development Branch, which wants to investigate the feasibility of using the technology for crime fighting.

The team also hope to carry out computerised lip-reading of other languages.

While it is known that humans can and do lip-read, little is known about exactly what visual information is needed for effective lip-reading. Human lip-reading can be unreliable, even among trained lip-readers.

Dr Richard Harvey, senior lecturer at UEA’s School of Computing Sciences, is leading the project, which has been awarded £391,814 by the Engineering and Physical Sciences Research Council.

“We all lip read, for example in noisy situations like a bar or party, but even the performance of expert lip readers can be very poor,” he said.

“It appears that the best lip-readers are the ones who learned to speak a language before they lost their hearing and who have been taught lip-reading intensively. It is a very desirable skill.”

Dr Harvey added: “The Home Office Scientific Development Branch is interested in anything that helps the police gather information about criminals or gather evidence.”

As well as crime fighting, there could be other potential uses for the technology, such as a camera built into a mobile phone, or one mounted on the dashboard for in-car speech recognition systems.

Another reason for developing computerised lip-reading is that the number of trained lip-readers is falling, mainly because people tend to be taught to sign instead.

Dr Harvey said: “To be effective the systems must accurately track the head over a variety of poses, extract numbers, or features, that describe the lips and then learn what features correspond to what text.

“To tackle the problem we will need to use information collected from audio speech. So this project will also investigate how to use the extensive information known about audio speech to recognise visual speech.

“The work will be highly experimental. We hope to have produced a system that will demonstrate the ability to lip-read in more general situations than we have done so far.”
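The three stages Dr Harvey outlines (track the face, extract numeric features describing the lips, then learn which features correspond to which text) can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the project's actual method: the landmark dictionaries stand in for a real face tracker's output, and a simple nearest-neighbour learner stands in for the statistical models such a system would use.

```python
import math

def extract_lip_features(mouth_region):
    """Stage 2: reduce a tracked mouth region to a few numbers.
    Here the 'region' is just a dict of landmark coordinates;
    a real tracker would supply these from video frames."""
    width = mouth_region["right"][0] - mouth_region["left"][0]
    height = mouth_region["bottom"][1] - mouth_region["top"][1]
    return (width, height, height / width)  # includes mouth-openness ratio

def train(examples):
    """Stage 3 (learning): store feature vectors with their labels."""
    return [(extract_lip_features(region), text) for region, text in examples]

def predict(model, mouth_region):
    """Stage 3 (decoding): pick the label of the nearest stored example."""
    features = extract_lip_features(mouth_region)
    return min(model, key=lambda m: math.dist(m[0], features))[1]

# Toy training data: a wide, nearly closed mouth vs. a rounded, open one.
closed = {"left": (0, 5), "right": (10, 5), "top": (5, 4), "bottom": (5, 6)}
rounded = {"left": (2, 5), "right": (8, 5), "top": (5, 1), "bottom": (5, 9)}
model = train([(closed, "ee"), (rounded, "oo")])

# A new rounded-ish mouth shape is matched to the closest known pattern.
query = {"left": (2, 5), "right": (8, 5), "top": (5, 2), "bottom": (5, 8)}
print(predict(model, query))  # → oo
```

In practice, as the article notes, the learning stage would draw on the extensive statistical machinery already developed for audio speech recognition rather than a simple lookup like this.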

Media Contact

Press Office alfa

Further information:

http://www.uea.ac.uk
