Putting pictures into words

18.09.2008
Visual images can contain a wealth of information, but they are difficult to catalogue in a searchable way. European researchers are generating and combining scraps of information to create a searchable picture.

Digital images can open our eyes to the most extraordinary detail and beauty. But there is one major drawback. The information in an image is purely visual. It tells us nothing about when or where the image was taken.

It tells us nothing about the people in the image. We do all that interpretation ourselves. That makes cataloguing and searching for particular images difficult – whether you work for an art gallery or you are updating your family album.

Scientists on a major European research project called aceMedia have taken important steps towards a solution to this problem. They are building an information layer into digital image files. Their vision is that image files will come with content information, metadata (background information for use on the internet) and an intelligence layer that automatically generates word-searchable data about the image.

An extra ‘information layer’ that adds both automatically and manually generated information to images would revolutionise image searching on the internet as well as on your home computer or mobile phone. The technologies developed in the EU-funded aceMedia project have sparked interest from a range of commercial companies looking to exploit the ideas in a host of directions.

Building a picture puzzle

The project re-used, developed and combined a series of technologies that provide greatly enriched content information about an image.

One of the technologies exploited by an aceMedia team uses software that can identify low-level visual descriptors, such as consistent areas of colour that may be sky, sea, sand or possibly snow, and information about texture, edge and shape.

Combining the low-level descriptors with sets of contextual rules held in domain ontologies (such as the fact that consistent areas of blue at the top of an image are likely to be sky, or that beach and snow are unlikely to appear in the same picture) turns data into a rich information source.
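
This combination is not reproduced from the project's software, but the principle can be sketched in a few lines of Python: a crude colour descriptor is computed for the top band of an image and mapped to concepts through simple ontology-style rules. The thresholds, concept names and consistency rule below are invented for illustration.

```python
# Minimal sketch, not aceMedia code: derive a crude low-level colour
# descriptor from the top band of an image and map it to concepts with
# simple ontology-style rules. Thresholds and concept names are invented.
import numpy as np
from PIL import Image

def classify_top_band(path: str, prior_concepts: set = frozenset()) -> set:
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    top = img[: img.shape[0] // 4]                # top quarter of the picture
    r, g, b = top.reshape(-1, 3).mean(axis=0)     # dominant-colour descriptor

    concepts = set(prior_concepts)
    # Contextual rule: a consistently blue band at the top is likely sky.
    if b > 120 and b > r + 20 and b > g + 10:
        concepts.add("sky")
    # Rule: a bright, low-saturation band may be snow.
    if min(r, g, b) > 200 and max(r, g, b) - min(r, g, b) < 25:
        concepts.add("snow")
    # Ontology-style consistency rule: beach and snow are unlikely to
    # appear in the same picture, so drop the weaker hypothesis.
    if {"beach", "snow"} <= concepts:
        concepts.discard("snow")
    return concepts

# classify_top_band("holiday_photo.jpg", {"beach"}) might return {"beach", "sky"}
```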

“Turning low-level descriptors into useful information is a very difficult step,” according to Yiannis Kompatsiaris, Head of the Multimedia Knowledge Laboratory at the Informatics and Telematics Institute in Thessaloniki, Greece and one of the lead researchers on aceMedia. His team was involved in structuring knowledge and adding it to the domain ontologies that classified and identified the information provided by the low-level descriptors.

Data from low-level descriptors was also combined with the results from specific detectors, such as the kinds of face detectors that are commercially available on some cameras today. All add further clues or searchable data for image users.
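
The project's own detectors are not reproduced here, but the principle of combining detector output with descriptor-derived concepts can be sketched with a stock face detector. The example below uses OpenCV's bundled Haar cascade purely as a stand-in, and the tag names are invented.

```python
# Sketch only: OpenCV's bundled Haar cascade stands in for the kind of
# commercially available face detector the project combined with
# low-level descriptors; it is not the aceMedia detector.
import cv2

def face_tags(path: str) -> set:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 1:
        return {"portrait"}
    if len(faces) > 1:
        return {"group_of_people"}
    return set()

# Detector output simply joins the descriptor-derived concepts:
# all_tags = classify_top_band("photo.jpg") | face_tags("photo.jpg")
```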

Another layer of information can be added by the individual user. They can add rules defining their personal preferences, profiles and policies to create a personalised filing system. ‘Inferencing’ techniques, filtering and similarity algorithms were used to make that personal filing simpler.
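
The similarity side of that personal filing can be sketched as a match between an image's tags and user-defined preference profiles; the profile names, weights and folder names below are invented and are not part of the project's actual inferencing machinery.

```python
# Illustrative sketch: route an image to the personal folder whose
# preference profile its tags match best. Profiles and weights invented.
from collections import Counter
from math import sqrt

PROFILES = {                      # user-defined preference profiles
    "Holidays": Counter(sky=1.0, beach=1.0, sea=0.8),
    "Family":   Counter(portrait=1.0, group_of_people=1.0),
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_folder(tags: set) -> str:
    """Pick the personal folder whose profile the image's tags match best."""
    tag_vector = Counter({t: 1.0 for t in tags})
    return max(PROFILES, key=lambda name: cosine(tag_vector, PROFILES[name]))

# best_folder({"sky", "beach"}) -> "Holidays"
```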

To enable easier searching, some of the aceMedia researchers incorporated natural language processing techniques into the mix, so you can use everyday language when searching for an image.
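
In its very simplest form, the idea can be sketched as matching the content words of an everyday query against the automatically generated tags. The project's actual language processing is far richer than this; the stop-word list and index below are invented example data.

```python
# Very simplified stand-in for natural-language search over generated tags.
STOPWORDS = {"a", "the", "of", "with", "my", "show", "me", "pictures", "photos"}

INDEX = {  # image file -> automatically generated tags (example data)
    "img_0412.jpg": {"sky", "beach", "sea"},
    "img_0997.jpg": {"snow", "mountain", "group_of_people"},
}

def search(query: str) -> list:
    terms = {w.strip(".,!?") for w in query.lower().split()} - STOPWORDS
    hits = [(len(terms & tags), name) for name, tags in INDEX.items()]
    return [name for score, name in sorted(hits, reverse=True) if score > 0]

# search("show me pictures of the beach") -> ["img_0412.jpg"]
```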

The ace in your hand

AceMedia researchers drew together their full range of technologies in an Autonomous Content Entity (ACE) Framework. The ACE Framework defines APIs to support networking, database management, scalable coding, content pre-processing, content visualisation, knowledge-assisted content analysis, as well as context analysis and modelling modules.

Using the framework, ACEs can be created that contain all of the rules, metadata and content information. They become a part of the image file.
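
The ACE container format itself is not described here, but the principle of carrying machine-readable metadata inside the image file can be sketched with an ordinary PNG text chunk; the key name and payload layout below are invented.

```python
# Principle only, not the ACE format: store searchable metadata inside an
# image file using a PNG text chunk via Pillow.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_metadata(src: str, dst: str, tags: set, rules: dict) -> None:
    payload = json.dumps({"tags": sorted(tags), "rules": rules})
    info = PngInfo()
    info.add_text("ace_metadata", payload)        # hypothetical key name
    Image.open(src).save(dst, "PNG", pnginfo=info)

def read_metadata(path: str) -> dict:
    with Image.open(path) as im:
        im.load()                                  # ensure text chunks are read
        return json.loads(im.text["ace_metadata"])

# embed_metadata("photo.png", "photo_ace.png", {"sky", "beach"},
#                {"share_with": "family"})
```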

For video, aceMedia researchers developed a scalable video codec, the aceSVC. A typical scalable video coding chain consists of three main modules: an encoder, an extractor and a decoder. The aceSVC enables video playback, review and analysis in the compressed domain.
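
The aceSVC itself is not available for illustration, but the three-module shape of such a chain can be sketched as follows; the layer names, resolutions and bitrates are invented.

```python
# Structural sketch only (not the aceSVC API): encoder, extractor, decoder.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str           # base layer or an enhancement layer
    resolution: tuple   # (width, height)
    kbps: int           # bitrate contribution of this layer

def encode(frames) -> list:
    """Encoder: produce a base layer plus enhancement layers."""
    return [
        Layer("base",       (320, 180),  300),
        Layer("enhance-sd", (640, 360),  700),
        Layer("enhance-hd", (1280, 720), 1800),
    ]

def extract(layers: list, max_kbps: int) -> list:
    """Extractor: keep the largest prefix of layers that fits the bitrate
    budget, working on the compressed representation without re-encoding."""
    kept, total = [], 0
    for layer in layers:
        if total + layer.kbps > max_kbps:
            break
        kept.append(layer)
        total += layer.kbps
    return kept

def decode(layers: list) -> tuple:
    """Decoder: playable resolution is that of the highest layer received."""
    return layers[-1].resolution if layers else (0, 0)

# decode(extract(encode(None), max_kbps=1200)) -> (640, 360)
```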

As part of the project, aceMedia researchers demonstrated the benefits of automated content sharing and easier content management that ACEs could provide on a series of home network devices, including PCs, mobile phones and set-top boxes.

While the vision of the aceMedia project was to combine technologies, each delivering a piece of the overall information puzzle, the technologies are not interdependent, according to Kompatsiaris.

“The tools we developed in aceMedia are scalable to many concepts and many environments,” he confirms.

Adding time and location

“In five years' time, a good number of these technologies will be in common use – combined with a number of technologies that have grown in popularity since the aceMedia project started, such as geo-tagging using GPS receivers. I think cameras in the future will know their position and be able to combine that information with content analysis to give much better results than we are capable of at the moment. For example, if the camera knows it is in a mountainous environment then it can analyse the content of the image much more efficiently.”
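
The location-aware analysis Kompatsiaris describes can be sketched as a simple re-ranking step in which a GPS-derived context biases the scores produced by content analysis; the context table and score values below are invented.

```python
# Illustrative sketch of the geo-tagging idea: a GPS-derived context biases
# the concept scores produced by visual content analysis.
CONTEXT_PRIORS = {
    "mountain_region": {"snow": 1.5, "rock": 1.3, "beach": 0.3},
    "coastal_region":  {"beach": 1.5, "sea": 1.4, "snow": 0.2},
}

def rerank(concept_scores: dict, gps_context: str) -> dict:
    """Multiply raw visual-analysis scores by location-dependent priors."""
    priors = CONTEXT_PRIORS.get(gps_context, {})
    return {c: s * priors.get(c, 1.0) for c, s in concept_scores.items()}

# Raw analysis is unsure whether a white area is snow or sand ...
raw = {"snow": 0.45, "beach": 0.40}
# ... but the camera knows it is in the mountains.
print(rerank(raw, "mountain_region"))   # snow is now clearly preferred
```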

Christian Nielsen | alfa
Further information:
http://cordis.europa.eu/ictresults
http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/90015
