Forum for Science, Industry and Business

Robotic minds think alike?

Most schoolchildren struggle to learn geometry, but they are still able to catch a ball without first calculating its parabola. Why should robots be any different? A team of European researchers has developed an artificial cognitive system that learns from experience and observation rather than relying on predefined rules and models.

Led by Linköping University in Sweden, the researchers in the COSPAL project adopted an innovative approach to making robots recognise, identify and interact with objects, particularly in random, unforeseen situations.

Traditional robotics relies on having robots carry out complex calculations, such as measuring the geometry of an object and its expected trajectory if moved. COSPAL has turned this around, making robots perform tasks based on their own experiences and observations of humans. This trial-and-error approach could lead to more autonomous robots and even improve our understanding of the human brain.

“Gösta Granlund, head of the Computer Vision Laboratory at Linköping University, came up with the concept that action precedes perception in learning. That may sound counterintuitive, but it is exactly how humans learn,” explains Michael Felsberg, coordinator of the EU-funded COSPAL project.

Children, he notes, are “always testing and trying everything” and by performing random actions – poking this object or touching that one – they come to understand cause and effect and can apply that knowledge in the future. By experimenting, they quickly find out, for example, that a ball rolls and that a hole cannot be grasped. Children also learn from observing adults and copying their actions, gaining greater understanding of the world around them.

Learning like, and from, humans
Applied in the context of an artificial cognitive system (ACS), the approach helps to create robots that learn much as humans do, and can learn from humans. This allows them to continue performing tasks even when their environment changes or when objects they have not been pre-programmed to recognise are placed in front of them.

“Most artificial intelligence-based ACS architectures are quite successful in recognising objects based on geometric calculations of visual inputs. Some people argue that humans also perform such calculations to identify something, but I don’t think so. I think humans are just very good at recognising the geometry of objects from experience,” Felsberg says.

The COSPAL team’s ACS would seem to bear that theory out. A robot with no pre-programmed geometric knowledge was able to recognise objects simply from experience, even when its surroundings and the position of the camera through which it obtained its visual information changed.
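The contrast with geometry-based recognition can be made concrete with a toy sketch. The code below is purely illustrative and is not the COSPAL system: it labels new observations by comparing their feature vectors against stored past experiences, so no geometric model of any object is ever computed.

```python
# Toy illustration (not COSPAL code): recognising objects purely from
# stored experience. Each observation is a feature vector; labelling a
# new one is just a similarity lookup against memory.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class ExperienceClassifier:
    def __init__(self):
        self.memory = []                  # (features, label) pairs

    def observe(self, features, label):
        """Store one labelled experience."""
        self.memory.append((features, label))

    def recognise(self, features):
        """Label a new observation by its nearest stored experience."""
        _, label = min(self.memory,
                       key=lambda m: distance(m[0], features))
        return label

clf = ExperienceClassifier()
clf.observe([1.0, 0.0], "ball")           # hypothetical feature vectors
clf.observe([0.0, 1.0], "cube")
print(clf.recognise([0.9, 0.1]))          # -> ball
```

The feature vectors and labels here are invented for the example; the point is only that recognition emerges from accumulated observations rather than from explicit geometric calculation.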

Getting the right peg in the right hole
A shape-sorting puzzle of the sort used to teach small children was used to test the system. Through trial and error and observation, the robot was able to place cubes in square holes and round pegs in round holes with an accuracy of 2 mm and 2 degrees. “It showed that, without knowing geometry, it can solve geometric problems,” Felsberg notes.

“In fact, I observed my 11-month-old son solving the same puzzle and the learning process you could see unfolding with both him and the robot was remarkably similar.”

Another test of the robot’s ability to learn from observation involved the use of a robotic arm that copied the movement of a human arm. With as few as 20 to 60 observations, the robotic arm was able to trace the movement of the human arm through a constrained space, avoiding obstacles on the way. In subsequent trials with the same robot, the learning period was greatly reduced, suggesting that the ACS was indeed drawing on memories of past observations.
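Why a few dozen observations suffice can be sketched in a few lines. The following is a hypothetical simplification, not the COSPAL implementation: each demonstration of the human movement is noisy, but averaging 20 to 60 of them recovers the underlying trajectory much more accurately than any single one.

```python
import random

# Toy sketch (not the COSPAL system): learning a movement from repeated
# noisy observations of a human arm. Averaging many demonstrations
# cancels the observation noise and recovers the underlying trajectory.

random.seed(0)
true_path = [t / 10 for t in range(10)]       # the movement to imitate

def observe(path, noise=0.2):
    """One noisy demonstration of the human movement."""
    return [p + random.uniform(-noise, noise) for p in path]

def learn(n_demos):
    """Average n_demos noisy demonstrations point by point."""
    demos = [observe(true_path) for _ in range(n_demos)]
    return [sum(col) / n_demos for col in zip(*demos)]

def max_error(path):
    return max(abs(a - b) for a, b in zip(path, true_path))

print(max_error(learn(60)))    # small: noise averages out
```

The trajectory, noise level and demonstration counts are all invented for illustration; the averaging step simply shows how repeated observation can stand in for an explicit model of the movement.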

In addition, by applying concepts akin to fuzzy logic, the team came up with a new means of making the robot identify corresponding signals and symbols such as colours. Instead of specifying three numbers to represent the red, green and blue components, as in most digital image processing applications, the team made the system learn colours from pairs of images and corresponding sets of reference colour names, such as red, dark red, blue and dark blue, in a representation known as channel coding. Channel coding offers a biologically inspired way of representing information, similar to how the human brain identifies colours, with sets of neurons firing selectively to differentiate green from black, for example.
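The general idea behind channel coding can be shown with a generic sketch (this is an illustration of the representation, not the project's code): a scalar value, say a hue in [0, 1], is encoded as the graded activations of a few overlapping, localised channels, much like tuned neurons, and decoded back by a local weighted average. The channel centres, widths and cos² kernel below are assumptions for the example.

```python
import math

# Generic channel-coding sketch (illustrative, not the COSPAL code):
# a value in [0, 1] is represented by the graded activations of a few
# overlapping cos^2 channels -- like neurons tuned to nearby colours.

CENTERS = [0.0, 0.25, 0.5, 0.75, 1.0]   # assumed channel centres
WIDTH = 0.25                             # assumed spacing / tuning width

def encode(x):
    """Activation of each channel for value x (cos^2 kernel)."""
    acts = []
    for c in CENTERS:
        d = abs(x - c)
        acts.append(math.cos(math.pi * d / (2 * WIDTH)) ** 2
                    if d < WIDTH else 0.0)
    return acts

def decode(acts):
    """Recover the value as the activation-weighted channel centre."""
    total = sum(acts)
    return sum(a * c for a, c in zip(acts, CENTERS)) / total

print(encode(0.6))          # only channels near 0.6 respond
print(decode(encode(0.6)))  # approximately recovers 0.6
```

Because only channels near the encoded value respond, similar colours produce overlapping activation patterns while dissimilar ones do not, which is what makes the representation useful for learned colour naming.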

“As humans, we can use reason to deduce what an object is by a process of elimination, i.e. we know that if something has such and such a property it must be this item, not that one. Though this type of machine reasoning has been used before, we have developed an advanced version for object recognition that uses symbolic and visual information to great effect,” Felsberg says.

Ahmed ElAmin | alfa
