Computers do not see things the way we do. They can manipulate recorded images, for example, but they currently understand very little about what is inside those pictures or videos. All the interpretation work must be done by humans, and that is expensive. But one European project is making computers more like us in their ability to interpret images and their surroundings.
Individuals from all walks of life, as well as sectors such as industry, services and education, stand to reap immense benefits from semi-autonomous, more intuitive machines able to do things that were, until now, impossible, prohibitively expensive or the preserve of humans.
This has been made possible thanks to the developments in, and convergence of, methods for creating, obtaining and interpreting metadata – at its simplest level this is data about data, or facts about facts – in complex multimedia environments.
MUSCLE, a large EU-funded project that created a pan-European network of excellence involving more than 30 academic and research institutions from 14 countries, has come up not only with new paradigms but also with a range of practical applications.
The scale of the project was so vast that a special section showcasing its achievements has been set up in Second Life, the 3D internet virtual world with millions of denizens.
The Virtual MUSCLE experience inside Second Life has been created as a one-stop information centre to ensure the continuation and sustainability of the project’s achievements. Users are represented by avatars (computer representations of themselves), enabling them to experience multimedia content by literally walking through it. They are able to hold real-time conversations with other members of the community, exchange experiences, or simply browse.
After an initial two years of collaborative research across the MUSCLE network, a series of showcases were established with several institutions working together on each one to produce practical applications.
Virtual tongue twister
One of these is an articulatory talking head, developed to help people who have difficulty pronouncing words and learning vocabulary. This ‘insightful’ head models what is happening inside the human mouth, including where the tongue is positioned to make particular sounds, so that users can copy what they see on screen.
A second showcase functions as a support system for complex assembly tasks, employing a user-friendly multi-modal interface. By augmenting the written assembly instructions with audio and visual prompts much more in line with how humans communicate, the system allows users to easily assemble complex devices without having to continually refer to a written instruction manual.
In another showcase, researchers have developed multi-modal Audio-Visual Automatic Speech Recognition software which takes its cues from human speech patterns and facial structures to provide more reliable results than using audio or visual techniques in isolation.
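The article does not say how the MUSCLE recogniser combines the two streams, but one common approach in audio-visual speech recognition is "late fusion": each stream scores candidate words independently, and the scores are blended with a weight reflecting how much each stream can be trusted (less weight on audio in a noisy room, for instance). A minimal sketch, with invented scores for illustration:

```python
# Illustrative sketch of late-fusion audio-visual speech recognition.
# The scores and the fixed weight below are invented for illustration;
# real systems estimate stream reliability from the signal itself.

def fuse(audio_scores, visual_scores, audio_weight=0.7):
    """Blend per-word log-likelihoods from the audio and visual streams."""
    return {
        word: audio_weight * audio_scores[word]
              + (1 - audio_weight) * visual_scores[word]
        for word in audio_scores
    }

audio = {"yes": -1.0, "less": -1.2}    # noisy audio: nearly a tie
visual = {"yes": -0.5, "less": -3.0}   # lip shapes clearly favour "yes"

combined = fuse(audio, visual)
best = max(combined, key=combined.get)
print(best)  # "yes"
```

Here the lip-reading stream breaks the tie the noisy audio could not, which is exactly why the fused system outperforms either technique in isolation.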
Similarly, a showcase which has already attracted a lot of publicity, especially in the USA, is one that analyses human emotion using both audio and visual cues.
“It was trialled on US election candidates to see if their emotional states actually matched what they were saying and doing, and it was even tried out, visually only of course, on the enigmatic Mona Lisa,” says MUSCLE project coordinator Nozha Boujemaa.
Horse or strawberry?
To give computers a better idea of what they are seeing and what the inputs mean, another showcase developed a web-based, real-time object categorisation system able to perform searches based on image recognition, finding photos that include horses, say, or strawberries. It can also automatically categorise and index images based on the objects they contain.
In an application with anti-piracy potential, one showcase came up with copy detection software. “This is an intelligent video method of detecting and preventing piracy. There is a lot of controversy at the moment about copyright film clips being posted on YouTube and other websites. This software is able to detect copies by spotting any variation from original recordings,” Boujemaa explains.
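The article does not disclose how the MUSCLE software fingerprints video, but one widely used family of techniques is perceptual hashing: reduce each frame to a compact signature that survives re-encoding, then flag material whose signatures sit close to an original. A minimal sketch using an "average hash", with toy 2x2 frames standing in for real video:

```python
# Illustrative sketch: average-hash fingerprinting for copy detection.
# The frames below are tiny invented grayscale grids; a real system would
# hash many full-size frames and match against a large archive.

def average_hash(pixels):
    """Reduce a grayscale frame (2D list of 0-255 values) to a bit list:
    each bit records whether a pixel is brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests a copied frame."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
suspect = [[12, 198], [28, 223]]     # slight re-encoding noise
unrelated = [[200, 10], [220, 30]]

print(hamming(average_hash(original), average_hash(suspect)))    # 0: copy
print(hamming(average_hash(original), average_hash(unrelated)))  # 4: different
```

Because the hash depends on brightness relative to the frame's own mean, small encoding changes leave the signature intact, which is what lets such a system spot a copy rather than demand a bit-exact match.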
“Another application is for broadcasters to be able to detect if video from their archives is being used without royalties being paid or the source being acknowledged.
Europe’s largest video archive, the French National Audiovisual Archive, has now been able to ascertain that broadcasters are only declaring 70% of the material they are using,” she tells ICT Results.
Other types of recognition software, effectively helping computers see what we see, can remotely monitor, detect and raise the alarm in a variety of scenarios, from forest fires to elderly or ill people living alone falling over. The latter falls under the heading of “unusual behaviour”, which also has applications in video security monitoring, with “intelligent” cameras able to alert people in real time if someone’s behaviour appears suspicious.
“During the course of the project, we produced more than 600 papers for the scientific community, as well as having two books published, one on audiovisual learning techniques for multimedia and the other on the importance of using multimedia rather than just monomedia,” she says.
Although the massive project has now wound down, its legacy remains online, in print and most of all in a host of new applications that will affect the lives of people all over the world.
Christian Nielsen | alfa