Piecing together the next generation of cognitive robots

05.05.2008
Building robots with anything akin to human intelligence remains a far-off vision, but European researchers are making progress on piecing together a new generation of machines that are more aware of their environment and better able to interact with humans.

Making robots more responsive would allow them to be used in a greater variety of sophisticated tasks in the manufacturing and service sectors. Such robots could be used as home helpers and caregivers, for example.

As research into artificial cognitive systems (ACS) has progressed in recent years, it has grown into a highly fragmented field. Some researchers and teams have concentrated on machine vision, others on spatial cognition or human-robot interaction, among many other sub-disciplines.

All have made progress, but, as the EU-funded project CoSy (Cognitive Systems for Cognitive Assistants) has shown, the researchers can advance the field even further by working together.

“We have brought together one of the broadest and most varied teams of researchers in this field,” says Geert-Jan Kruijff, the CoSy project manager at the German Research Centre for Artificial Intelligence. “This has resulted in an ACS architecture that integrates multiple cognitive functions to create robots that are more self-aware, understand their environment and can better interact with humans.”

The CoSy ACS is indeed greater than the sum of its parts. It incorporates a range of technologies, from a cognitive architecture design, spatial cognition, human-robot interaction and situated dialogue processing to developmental models of visual processing.
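
One intuitive way to picture this kind of integration is a shared working memory that the individual cognitive functions read from and write to. The sketch below is purely illustrative and makes no claim to reproduce the CoSy toolkit; the component names and keys are assumptions chosen for the example.

```python
# Illustrative sketch of integrating cognitive components through a shared
# working memory (not the CoSy toolkit): a vision component posts what it
# currently sees, and a dialogue component grounds a spoken phrase against it.

from typing import Any


class WorkingMemory:
    """A deliberately simple shared store that components read from and write to."""

    def __init__(self) -> None:
        self.entries: dict[str, Any] = {}

    def write(self, key: str, value: Any) -> None:
        self.entries[key] = value

    def read(self, key: str) -> Any:
        return self.entries.get(key)


wm = WorkingMemory()

# The vision component reports the objects it has recognised...
wm.write("visual.objects", ["coffee_machine", "mug"])

# ...and the dialogue component resolves "the mug" against that visual record.
referent = "mug" if "mug" in wm.read("visual.objects") else None
print(referent)  # -> mug
```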

“We have learnt how to put the pieces of ACS together, rather than just studying them separately,” adds Jeremy Wyatt, one of the project managers at the UK’s University of Birmingham.

The researchers have made the ACS architecture toolkit they developed available under an open-source license to encourage further research, and it has already sparked several spin-off initiatives.

Overcoming the integration challenge
“The integration of different components in an ACS is one of the greatest challenges in robotics,” Kruijff says. “Getting robots to understand their environment from visual inputs and to interact with humans from spoken commands and relate what is said to their environment is enormously complex.”

Because of this complexity, most robots developed to date have tended to be reactive: they simply react to their environment rather than act in it autonomously. Like a beetle that scuttles away when prodded, many mobile robots back off when they collide with an object, but they have little self-awareness or understanding of the space around them and what they can do there.
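
To make the distinction concrete, the loop below is a minimal sketch of a purely reactive controller, assuming nothing beyond a bump sensor; it is not code from the project. The robot reacts to a collision but builds no map and no model of what the surrounding space affords.

```python
# Minimal sketch of a purely reactive controller (illustrative, not CoSy code).
# The robot backs off when its bump sensor fires; otherwise it wanders at random.
# There is no internal state, no map, and no understanding of the environment.

import random


def reactive_step(bumped: bool) -> str:
    """Choose an action from the current sensor reading alone."""
    if bumped:
        return "back_off"
    return random.choice(["forward", "turn_left", "turn_right"])


print(reactive_step(bumped=True))  # -> back_off
```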

By contrast, a demonstrator called the Explorer, developed by the CoSy team, has a more human-like understanding of its environment and can even talk about its surroundings with a human.

Rather than relying only on geometric data to map its surroundings, the Explorer also incorporates qualitative, topological information. Through interaction with humans it can then learn to recognise objects, spaces and their uses. If it sees a coffee machine, for example, it may reason that it is in a kitchen; if it sees a sofa, it may conclude that it is in a living room.
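
The snippet below sketches how that kind of object-to-place reasoning might look in its simplest form; the object-to-room associations are invented for illustration and are not the project's actual model.

```python
# Illustrative sketch (not the CoSy implementation): inferring a room category
# from the objects a robot has recognised, using hand-written associations.

from collections import Counter

# Assumed associations between recognisable objects and room types.
OBJECT_TO_ROOM = {
    "coffee_machine": "kitchen",
    "kettle": "kitchen",
    "sofa": "living_room",
    "television": "living_room",
    "desk": "office",
}


def classify_room(detected_objects: list) -> str:
    """Vote for the room type most consistent with the detected objects."""
    votes = Counter(OBJECT_TO_ROOM[obj] for obj in detected_objects if obj in OBJECT_TO_ROOM)
    return votes.most_common(1)[0][0] if votes else "unknown"


print(classify_room(["coffee_machine", "kettle"]))  # -> kitchen
```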

“The robot sees a room much as humans see it because it has a conceptual understanding of space,” Kruijff notes.

Another demonstrator, called the PlayMate, applies machine vision and spatial recognition in a different context: it uses a robotic arm to manipulate objects in response to human instructions.

In Wyatt’s view, developing machine vision and integrating it with the other ACS components remains a major obstacle to creating more advanced robots, especially if the goal is to replicate human sight and awareness.

“Don’t underestimate how sophisticated we are…,” he says. “We don’t realise how agile our brains are at interpreting what we see. You can pick out colours from a scene, look at a bottle of water, a packet of cornflakes, or a coffee mug and know what activities each of them allows. You recognise them, see where to grasp them, and how to manipulate them, and you do it all seamlessly. We are still so very, very far from doing that with robots.”

Robotic ‘gofers’
Fortunately, replicating human-like intelligence and awareness, if it is indeed possible, is not necessary when creating robots that are useful to humans.

Kruijff foresees robots akin to those developed in the CoSy project becoming an everyday sight over the coming years in what he describes as ‘gofer scenarios’. Some robots with a lower level of intelligence are already being used to bring medicines to patients in hospitals, and similar machines could transport documents around office buildings.

Robotic vacuum cleaners are becoming increasingly popular in homes, as too are toys that incorporate artificial intelligence. And the creation of robots that are able to interact with people opens the door to robotic home helpers and caregivers.

“In the future people may all be waited on by robots in their old age,” Wyatt says.

Ahmed ElAmin | alfa
Further information:
http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/89704
