Computers do not see things the way we do. They can manipulate recorded images, for example, but they currently understand very little about what is actually inside those pictures or videos. All the interpretation work must be done by humans, and that is expensive. But one European project is making computers more like us in their ability to interpret images and their surroundings.
Individuals from all walks of life, as well as sectors such as industry, services and education, stand to reap immense benefits from semi-autonomous, more intuitive machines able to do things which were, until now, impossible, prohibitively expensive or the preserve of humans.
This has been made possible thanks to the developments in, and convergence of, methods for creating, obtaining and interpreting metadata – at its simplest level this is data about data, or facts about facts – in complex multimedia environments.
MUSCLE, an EU-funded project which created a pan-European network of excellence involving more than 30 academic and research institutions from 14 countries, has come up not only with new paradigms but also with a range of practical applications.
The scale of the project was so vast that a special section showcasing its achievements has been set up in Second Life, the 3D internet virtual world with millions of residents.
The Virtual MUSCLE experience inside Second Life has been created as a one-stop information centre to ensure the continuation and sustainability of the project’s achievements. Users are represented by avatars (computer representations of themselves), enabling them to experience multimedia content by literally walking through it. They are able to hold real-time conversations with other members of the community, exchange experiences, or simply browse.
After an initial two years of collaborative research across the MUSCLE network, a series of showcases were established with several institutions working together on each one to produce practical applications.
Virtual tongue twister
One of these is an articulatory talking head, developed to help people who have difficulties in pronouncing words and learning vocabulary. This ‘insightful’ head models what is happening inside the human mouth, including where the tongue is positioned to make particular sounds, so that users can copy what they see on screen.
A second showcase functions as a support system for complex assembly tasks, employing a user-friendly multi-modal interface. By augmenting the written assembly instructions with audio and visual prompts much more in line with how humans communicate, the system allows users to easily assemble complex devices without having to continually refer to a written instruction manual.
In another showcase, researchers have developed multi-modal Audio-Visual Automatic Speech Recognition software which takes its cues from human speech patterns and facial structures to provide more reliable results than using audio or visual techniques in isolation.
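The idea of combining modalities can be illustrated with a simple late-fusion sketch. This is an assumed, minimal technique for demonstration purposes only, not the project's actual software: per-word scores from a hypothetical audio model and a hypothetical lip-reading model are combined so that when one channel is noisy, the other can still carry the decision.

```python
# Illustrative late-fusion sketch (an assumed technique, not MUSCLE's code):
# combine per-word scores from an audio model and a visual (lip-reading)
# model, so a noisy modality does not dominate the decision.

def fuse(audio_scores, visual_scores, audio_weight=0.6):
    """Weighted linear fusion of two per-word score dictionaries.

    Returns the word with the highest combined score.
    """
    words = set(audio_scores) | set(visual_scores)
    fused = {
        w: audio_weight * audio_scores.get(w, 0.0)
           + (1 - audio_weight) * visual_scores.get(w, 0.0)
        for w in words
    }
    return max(fused, key=fused.get)

# Noisy audio makes "bat" and "pat" nearly indistinguishable;
# the visual channel (lip shape) separates them.
audio = {"bat": 0.51, "pat": 0.49}
visual = {"bat": 0.15, "pat": 0.85}
print(fuse(audio, visual))  # → pat
```

In this toy example the audio scores alone would pick the wrong word by a whisker, while the fused score decides clearly, which is the intuition behind multi-modal recognition being more reliable than either channel in isolation.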
Another showcase, which has already attracted considerable publicity, especially in the USA, analyses human emotion using both audio and visual cues.
“It was trialled on US election candidates to see if their emotional states actually matched what they were saying and doing, and it was even tried out, visually only of course, on the enigmatic Mona Lisa,” says MUSCLE project coordinator Nozha Boujemaa.
Horse or strawberry?
To give computers a better idea of what they are seeing and what their inputs mean, another showcase developed a web-based, real-time object categorisation system able to perform searches based on image recognition – finding photos that include horses, say, or strawberries. It can also automatically categorise and index images based on the objects they contain.
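A toy version of content-based categorisation can be sketched as follows. This is a deliberately simplified, assumed approach (coarse colour histograms and a nearest-neighbour match), not the showcase's actual method, which is not described here.

```python
# Minimal content-based categoriser (illustrative assumption, not the
# MUSCLE system): reduce each image to a coarse colour histogram, then
# assign a query image the category of its nearest labelled example.

from collections import Counter

def histogram(pixels, bins=4):
    """Quantise (r, g, b) pixels into a coarse, normalised colour histogram."""
    counts = Counter()
    for r, g, b in pixels:
        counts[(r * bins // 256, g * bins // 256, b * bins // 256)] += 1
    total = len(pixels)
    return {k: v / total for k, v in counts.items()}

def distance(h1, h2):
    """L1 distance between two sparse histograms."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0) - h2.get(k, 0)) for k in keys)

def categorise(query_pixels, labelled):
    """Return the category label of the nearest labelled image."""
    qh = histogram(query_pixels)
    best = min(labelled, key=lambda item: distance(qh, histogram(item[1])))
    return best[0]

# Toy data: "strawberry" images are mostly red, "horse" images mostly brown.
labelled = [
    ("strawberry", [(220, 30, 40)] * 90 + [(40, 160, 60)] * 10),
    ("horse",      [(120, 80, 40)] * 80 + [(90, 140, 60)] * 20),
]
query = [(210, 25, 35)] * 85 + [(50, 150, 70)] * 15  # mostly red
print(categorise(query, labelled))  # → strawberry
```

Real systems use far richer features than colour alone, but the structure, extract a compact description of the image and compare it against labelled examples, is the same, and it also supports the automatic indexing the article mentions.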
In an application with anti-piracy potential, one showcase came up with copy detection software. “This is an intelligent video method of detecting and preventing piracy. There is a lot of controversy at the moment about copyright film clips being posted on YouTube and other websites. This software is able to detect copies by spotting any variation from original recordings,” Boujemaa explains.
“Another application is for broadcasters to be able to detect if video from their archives is being used without royalties being paid or the source being acknowledged.
Europe’s largest video archive, the French National Audiovisual Archive, has now been able to ascertain that broadcasters are only declaring 70% of the material they are using,” she tells ICT Results.
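Copy detection of this kind is commonly built on per-frame "fingerprints" that survive re-encoding and small edits. The sketch below is a hedged illustration using a simple average-hash on tiny grayscale frames; it is an assumed stand-in, not the software described above.

```python
# Hedged sketch of video copy detection (assumed technique, not the
# MUSCLE software): fingerprint each frame with an average-hash, then
# flag a clip as a copy when enough of its frame hashes match frames
# of the original.

def average_hash(frame):
    """frame: 2-D list of grayscale values. One bit per pixel:
    1 if the pixel is brighter than the frame's mean, else 0."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def is_copy(original, candidate, threshold=0.8):
    """Flag candidate as a copy if >= threshold of its frames hash-match."""
    orig_hashes = {average_hash(f) for f in original}
    matches = sum(average_hash(f) in orig_hashes for f in candidate)
    return matches / len(candidate) >= threshold

# Toy 2x2 "frames": the copy is slightly brightened (same hash pattern),
# the unrelated clip has a different bright/dark layout.
original = [[[10, 200], [220, 15]], [[30, 180], [190, 25]]]
copy_clip = [[[12, 205], [225, 18]], [[33, 184], [193, 28]]]
other = [[[200, 10], [15, 220]], [[180, 30], [25, 190]]]
print(is_copy(original, copy_clip))  # → True
print(is_copy(original, other))      # → False
```

Because the hash depends only on which pixels are brighter than the frame average, a uniformly brightened copy produces identical fingerprints, which is what lets this style of method spot re-encoded copies while ignoring unrelated footage.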
Other types of recognition software, effectively helping computers see what we see, can remotely monitor, detect and raise the alarm in a variety of scenarios, from forest fires to elderly or sick people living alone falling over. The latter falls under the heading of “unusual behaviour”, which also has applications in video security monitoring, with “intelligent” cameras able to alert people in real time if they judge somebody to be behaving suspiciously.
“During the course of the project, we produced more than 600 papers for the scientific community, as well as having two books published, one on audiovisual learning techniques for multimedia and the other on the importance of using multimedia rather than just monomedia,” she says.
Although the massive project has now wound down, its legacy remains online, in print and most of all in a host of new applications that will affect the lives of people all over the world.
Christian Nielsen | alfa