Spatial artificial intelligence: how drones navigate

At the chair of Prof. Leutenegger, he and his team conduct research in the field of mobile robotics, with a focus on robot navigation through potentially unknown environments. They develop algorithms and software that allow a robot (e.g. a drone) to use its sensors (e.g. video) to reconstruct the 3D structure of its surroundings and to categorize it with the help of modern machine learning (including deep learning). The aim is to enable safe navigation through challenging environments, also in interaction with humans.
Photo: Andreas Heddergott / TUM

People are able to perceive their surroundings in three dimensions and can quickly spot potential danger in everyday situations. Drones have to learn this. Prof. Stefan Leutenegger refers to the intelligence needed for this task as ‘spatial artificial intelligence’, or spatial AI. This new approach will be used by cartographers mapping forests, in ship inspections and when building walls.

  • To accomplish spatial AI, a drone must be able to establish its orientation in space and to generate a map of its surroundings.
  • Through neural networks, the system can learn to recognize 3D objects in space.
  • Prof. Stefan Leutenegger is currently using spatial AI in three research projects.

In humans, it is entirely automatic: they recognize objects and their characteristics, can assess distances and hazards, and interact with other people. Stefan Leutenegger speaks of a coherent 3D representation of the surrounding area, yielding a uniform overall picture. Enabling a drone to distinguish between static and dynamic elements and to recognize other actors is one of the key research areas for the professor of machine learning in robotics at TUM, who also heads the innovation field of artificial intelligence at the Munich Institute of Robotics and Machine Intelligence (MIRMI).

Spatial AI, step 1: Estimating the robot’s position in space and mapping its surroundings
Prof. Leutenegger uses spatial AI to provide a drone with the necessary on-board intelligence to fly through a forest without crashing into fine branches, to perform 3D printing or to inspect the cargo holds of tankers or freighters. Spatial AI is made up of several components that are adapted to specific tasks. It starts with the selection of sensors:
– Computer vision: The drone uses one or two cameras to see its surroundings. For depth perception, two cameras are required – just as humans need two eyes. Leutenegger uses two sensors and compares the images in order to gain a sense of depth. There are also depth cameras that generate 3D images directly.
– Inertial sensors: These sensors measure acceleration and angular velocity to detect the motion of bodies in space.
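The depth-from-two-cameras idea described above follows the standard pinhole stereo relation: depth equals focal length times baseline divided by disparity. A minimal sketch, with made-up numbers that are not from the article:

```python
# Toy stereo depth computation: depth = focal_length * baseline / disparity.
# All values below are illustrative assumptions, not from the research.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point whose image shifts by `disparity_px` between the two cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A feature shifted 20 px between cameras 10 cm apart, focal length 500 px:
print(stereo_depth(500.0, 0.10, 20.0))  # 2.5 (meters)
```

The same relation explains why small disparities (distant objects) give noisy depth estimates: depth grows rapidly as disparity shrinks.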
“Visual and inertial sensors complement one another very well,” says Leutenegger. That is because merging their data yields a highly precise image of a drone’s movements and its static surroundings. This enables the entire system to assess its own position in space. This is necessary for such applications as the autonomous deployment of robots. It also permits detailed, high-resolution mapping of the robot’s static surroundings – an essential requirement for avoiding obstacles. Initially, mathematical and probabilistic models are used without artificial intelligence. Leutenegger calls this the lowest level of “Spatial AI” – an area where he conducted research at Imperial College London before coming to TUM.
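As a rough illustration of how such a probabilistic fusion works, here is a toy one-dimensional Kalman filter: fast inertial measurements predict the state, and an occasional visual position fix corrects it. This is a generic textbook sketch, not the estimator used in the research; all names and values are assumptions.

```python
import numpy as np

# State x = [position, velocity]; P is its uncertainty (covariance).
# The IMU acceleration drives the high-rate prediction; a camera
# position measurement provides the low-rate correction.

def predict(x, P, accel, dt, q=1e-3):
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity motion model
    x = F @ x + np.array([0.5 * dt**2, dt]) * accel
    P = F @ P @ F.T + q * np.eye(2)             # process noise inflates uncertainty
    return x, P

def correct(x, P, z_pos, r=1e-2):
    H = np.array([[1.0, 0.0]])                  # the camera measures position only
    S = H @ P @ H.T + r                         # innovation covariance
    K = P @ H.T / S                             # Kalman gain
    x = x + (K * (z_pos - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P                 # correction shrinks uncertainty
    return x, P

x, P = np.zeros(2), np.eye(2)
for _ in range(100):                            # 1 s of IMU data at 100 Hz, 1 m/s^2
    x, P = predict(x, P, accel=1.0, dt=0.01)
x, P = correct(x, P, z_pos=0.5)                 # one visual fix updates the estimate
```

Real visual-inertial systems estimate full 6-DoF pose plus sensor biases, but the predict/correct structure is the same.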

Spatial AI, step 2: Neural networks for understanding surroundings
Artificial intelligence in the form of neural networks plays an important role in the semantic mapping of the area, which involves a deeper understanding of the robot’s surroundings. By means of deep learning, categories of information that are comprehensible to humans and clearly visible in an image can be captured and digitally mapped. To do this, neural networks apply image recognition to 2D images and represent the results in a 3D map. The resources needed for deep-learning recognition depend on how many details must be captured to perform a specific task: distinguishing a tree from the sky is easier than precisely identifying a tree or determining its state of health. For this kind of specialized image recognition, there is often not enough data for neural networks to learn from. One goal of Leutenegger’s research is therefore to develop machine learning methods that make efficient use of sparse training data and allow robots to learn continually while in operation. In a more advanced form of spatial AI, the aim will be to recognize objects, or parts of objects, even when they are in motion.
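How 2D recognition results can end up in a 3D map can be sketched with a simple back-projection through the pinhole camera model: each pixel's class label travels with its 3D point. This is an illustrative example with made-up intrinsics and labels, not the group's software.

```python
import numpy as np

# Back-project a semantically labelled depth image into a labelled 3D point
# cloud. Camera intrinsics and the label image are toy assumptions.

fx = fy = 300.0           # focal lengths in pixels (assumed)
cx, cy = 32.0, 24.0       # principal point for a 64x48 image (assumed)

depth = np.full((48, 64), 4.0)           # toy scene: every pixel 4 m away
labels = np.zeros((48, 64), dtype=int)   # class 0 = "sky"
labels[24:, :] = 1                       # lower half: class 1 = "tree"

v, u = np.mgrid[0:48, 0:64]              # pixel row/column coordinates
z = depth
x = (u - cx) * z / fx                    # pinhole back-projection to camera frame
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
point_labels = labels.reshape(-1)

# Each 3D point now carries a semantic class that a voxel map could fuse.
print(points.shape, np.bincount(point_labels))  # (3072, 3) [1536 1536]
```

Fusing many such labelled clouds over time, weighted by the network's confidence, is one common way to build the semantic 3D maps the article describes.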

Current AI projects of the MIRMI professor: forest mapping, inspecting ships, construction robotics

Spatial artificial intelligence is already being applied in three research projects:
– Building walls: In construction robotics a robot equipped with grabbers (manipulators) is used. Its task in the SPAICR project, funded for four years by the Georg Nemetschek Institute, is to build and dismantle structures such as walls. The special challenge in the project, in which Prof. Leutenegger is collaborating with Prof. Kathrin Dörfler (TUM professor of digital fabrication), is to enable robots to work without motion tracking, in other words with no external infrastructure. In contrast to past research projects, which used a clearly marked space in a lab with orientation points, the goal is for the robot to operate with precision on any building site.

– Digitizing the forest: In the EU project Digiforest, the University of Bonn, the University of Oxford, ETH Zürich, the Norwegian University of Science and Technology and TUM are working to develop “digital technology for sustainable forestry”. For that purpose, the forest needs to be mapped. Where is which tree located? How healthy is it? Are there diseases? Where does the forest need to be thinned out and where is new planting needed? “The research will provide the forester with additional information for making decisions,” explains Prof. Leutenegger. TUM’s task: Prof. Leutenegger’s AI drones will fly autonomously through the forest and map it. They will have to find their way around the trees despite wind and small branches to produce a complete map of the wooded area.

– Inspecting ships: In the EU project AUTOASSESS, the goal is to send drones into the interior of tankers and freighters to inspect the inside walls. They will be equipped with ultrasound sensors, among other instruments, to detect cracks. For this task the drones will need to be capable of flying autonomously in enclosed spaces with poor radio connectivity. In this application, too, motion tracking is not possible.

Spatial AI creates basis for decisions
“We’re working to provide people in a wide range of fields with sufficient quantities of data to reach informed decisions,” says Prof. Leutenegger. He adds, however: “Our robots are complementary. They enhance the capabilities of humans and will relieve them of hazardous and repetitive tasks.”

Pictures
Prof. Stefan Leutenegger with his high-tech drone in the lab
http://go.tum.de/780102
Final adjustments before the flight of one of Prof. Stefan Leutenegger’s AI-supported drones
http://go.tum.de/668050
A scientist from Prof. Stefan Leutenegger’s lab prepares an AI-supported drone for a flight in the lab.
http://go.tum.de/138684
Prof. Stefan Leutenegger and his team observe the flight of his AI-supported drone in the lab.
http://go.tum.de/904095

Scientific contact:

Prof. Dr. Stefan Leutenegger
Professor of Machine Learning in Robotics
Sector Lead Artificial Intelligence at the Munich Institute of Robotics and Machine Intelligence (MIRMI)
Technical University of Munich
E-Mail: stefan.leutenegger@tum.de

Contact in TUM Corporate Communications Center:

Andreas Schmitz
Press Relations Robotics and Machine Intelligence
Tel. +49 89 289 18198
andreas.schmitz@tum.de
www.tum.de

Original publications:

Z. Landgraf, R. Scona, S. Leutenegger et al.: SIMstack: A Generative Shape and Instance Model for Unordered Object Stacks; https://ieeexplore.ieee.org/document/9710412
S. Zhi, T. Laidlow et al.: In-Place Scene Labelling and Understanding with Implicit Scene Representation; https://ieeexplore.ieee.org/document/9710936
G. Gallego, T. Delbrück, S. Leutenegger et al.: Event-Based Vision: A Survey; https://ieeexplore.ieee.org/document/9138762

Further information:

Prof. Stefan Leutenegger is a principal investigator and leader of the innovation field artificial intelligence at the Munich Institute of Robotics and Machine Intelligence (MIRMI). With MIRMI, TUM has established an integrative research center for science and technology to develop innovative and sustainable solutions for the central challenges of our time. The institute has leading expertise in central fields of robotics, perception and data science. See: https://www.mirmi.tum.de/.
