At NVIDIA’s GPU Technology Conference (GTC) from October 9th – 11th, 2018 in Munich, Fraunhofer FOKUS researchers will demonstrate on the outdoor exhibition grounds how they can control a vehicle tele-operatively with minimal latency. That helps to bridge the gap from driver-controlled to fully autonomous vehicles. In addition, they will present their semi-automated tool for processing image data from Lidar laser scanners and cameras for training the artificial intelligence (AI) of smart cars.
In the near future, there will be situations in which even highly automated and autonomous vehicles need a remote human operator. For example, a truck that can't drive autonomously from the company premises to a loading ramp because it has no access to digital maps of the private property.
Or an autonomous car that must navigate an unforeseen construction zone. At GTC Europe, the FOKUS researchers from the business unit Smart Mobility, together with the Daimler Center for Automotive IT Innovations (DCAITI) at TU Berlin, will demonstrate how tele-operated driving with low latency can be achieved, helping to smooth the path to full autonomy.
The operator, in this case a person running the demonstration, is connected to the highly automated vehicle via encrypted wireless data transmission – currently via WLAN and, in the future, via 5G mobile networks. The vehicle is steered efficiently by means of its own compressed environment model.
This model interprets image data from up to eight cameras in combination with Lidar laser scanners. The NVIDIA DRIVE platform provides the computing power to run the diverse and redundant algorithms and applications that enable safe, highly automated driving. Only significantly reduced metadata is then transmitted to the operator, which saves bandwidth and provides near real-time awareness of the current situation.
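The bandwidth saving can be illustrated with a minimal sketch: instead of streaming raw camera frames, the vehicle sends a compact list of objects from its environment model. All field names below are illustrative assumptions, not the actual FOKUS message format.

```python
import json

def build_environment_message(objects):
    """Serialize detected objects into a compact JSON payload."""
    return json.dumps({
        "ts_ms": 1539079200000,      # capture timestamp (example value)
        "objects": [
            {
                "cls": o["cls"],      # e.g. "pedestrian", "vehicle"
                "pos": o["pos"],      # x, y in metres, vehicle frame
                "vel": o["vel"],      # speed in m/s
            }
            for o in objects
        ],
    }, separators=(",", ":"))  # compact separators, no whitespace

detections = [
    {"cls": "pedestrian", "pos": [4.2, -1.1], "vel": 1.3},
    {"cls": "vehicle", "pos": [18.5, 0.4], "vel": 8.7},
]

msg = build_environment_message(detections)
# A message like this is a few hundred bytes, versus megabytes per
# second for raw video from up to eight cameras.
print(len(msg.encode("utf-8")), "bytes")
```

A payload of this size fits comfortably into a single network packet, which is what makes near real-time transmission over a wireless link feasible.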
The operator can control the vehicle realistically, with a steering wheel, accelerator, and brake pedal. A map with the current route is shown on a second monitor. The decisive factor for safe tele-operation is that the transmission of both the environmental data and the control signals introduces as little delay as possible.
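One common way to keep an eye on that delay can be sketched as follows (field names are assumptions, not the actual FOKUS protocol): every control message carries a send timestamp, and the vehicle echoes the message back so the operator station can measure the round-trip delay of the control link.

```python
import json
import time

def make_control_message(steering_deg: float, throttle: float) -> str:
    """Encode a steering/throttle command with a send timestamp."""
    return json.dumps({
        "t_sent": time.monotonic(),   # monotonic clock, safe for intervals
        "steering_deg": steering_deg,
        "throttle": throttle,
    })

def round_trip_ms(echoed: str) -> float:
    """Round-trip delay of an echoed command, in milliseconds."""
    return (time.monotonic() - json.loads(echoed)["t_sent"]) * 1000.0

cmd = make_control_message(steering_deg=-3.5, throttle=0.2)
# In the demo, the vehicle would echo the message back over the
# encrypted WLAN/5G link; here we "echo" it locally for illustration.
delay = round_trip_ms(cmd)
print(f"round-trip delay: {delay:.3f} ms")
```

Using a monotonic clock rather than wall-clock time avoids spurious readings when either side's system clock is adjusted.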
Tool for semi-automatic annotation of image data
AI is of great importance for the vehicle's perception of its immediate surroundings. When a highly automated vehicle drives on the road, a large number of cameras and other sensors, such as Lidar, record information from the surroundings. This data must be analyzed in real time to reliably detect pedestrians, bicyclists, vehicles, and other objects in the scene.
With new AI methods, particularly deep learning, the computer learns what a human or a tree looks like by training on a huge amount of data. The better the learning data is, the more accurately the car will learn to "see". Until now, labeling the training data – that is, marking what a tree, a person, or a car is – has been very time-consuming.
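The role of the labels can be shown with an entirely synthetic toy (this is not the actual pipeline): "training data" is a list of (feature, label) pairs, and the model here is just a nearest-centroid classifier over one made-up feature. The point is only that the labels are what the learning step consumes.

```python
def train(examples):
    """Compute one mean feature value (centroid) per label."""
    sums, counts = {}, {}
    for feature, label in examples:
        sums[label] = sums.get(label, 0.0) + feature
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, feature):
    """Return the label whose centroid is closest to the feature."""
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - feature))

# Labeled training data: a single made-up "height in metres" feature.
labeled = [(1.7, "pedestrian"), (1.8, "pedestrian"),
           (6.0, "tree"), (7.5, "tree")]
model = train(labeled)
print(predict(model, 1.6))  # → pedestrian
```

A real perception stack learns from millions of labeled images rather than four numbers, but the dependency is the same: wrong or missing labels directly degrade what the model learns.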
At GTC Europe, Smart Mobility researchers will present their labeling tool for image data from cameras and Lidar laser scanners, powered by the NVIDIA DGX-1 AI supercomputer datacenter solution. With this tool, annotations can be generated automatically, then checked and corrected in the shortest possible time. Labeling experts need on average only 10% of the time normally required to generate high-quality learning data.
With the help of FOKUS' innovative AI algorithms, image data from cameras and laser scanners is divided into object classes such as roadway, vehicle, and pedestrian, and individual elements of the objects are separated from each other. In this way, even pedestrians in a crowd can be distinguished from one another. For each object, the algorithms also calculate the correct position and rotation in 3D space.
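The semi-automatic workflow can be sketched as a triage step (the threshold and field names are assumptions, not the actual FOKUS tool): the AI pre-labels every detected object, and annotators only review the uncertain ones, which is where most of the time saving comes from.

```python
REVIEW_THRESHOLD = 0.8  # assumed confidence cut-off

def triage_prelabels(predictions):
    """Split model pre-labels into auto-accepted and needs-review."""
    accepted, review = [], []
    for p in predictions:
        if p["confidence"] >= REVIEW_THRESHOLD:
            accepted.append(p)   # high confidence: keep as-is
        else:
            review.append(p)     # low confidence: send to an annotator
    return accepted, review

preds = [
    {"cls": "roadway", "confidence": 0.97},
    {"cls": "pedestrian", "confidence": 0.91},
    {"cls": "pedestrian", "confidence": 0.55},  # crowded scene, uncertain
]

accepted, review = triage_prelabels(preds)
print(len(accepted), "auto-accepted,", len(review), "for manual review")
```

Corrections made by the annotators flow back into the training pool, so the pre-labeling model improves over time and the review queue shrinks.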
FOKUS’ mobility-data-backend manages the sensor data of hundreds of vehicles and enables labeling experts and developers worldwide to view and correct the data in a standard web browser. This results in an ever-larger pool of high-quality learning data for AI algorithms in the vehicle.
Session at GTC on Thursday, October 11, 11:00-11:50, room 14c: "Deep Learning Software Applications for Autonomous Vehicles" with Dr. Ilja Radusch, head of the business unit Smart Mobility
Fraunhofer Institute for Open Communication Systems FOKUS
Telephone: +49 30 3463-7517
A High-resolution photo is available on request: email@example.com.
Fraunhofer FOKUS researches digital transformation and its impact on society, economics and technology. Since 1988, it has supported commercial enterprises and public administration in shaping and implementing the digital transformation. For this purpose, Fraunhofer FOKUS offers research services ranging from requirements analysis to consulting, feasibility studies, and technology development right up to prototypes and pilots in the business segments Digital Public Services, Future Applications and Media, Quality Engineering, Smart Mobility, Software-based Networks, Networked Security, and Visual Computing and Analytics. With about 430 employees in Berlin and an annual budget of 30 million euros, Fraunhofer FOKUS is the largest ICT institute of the Fraunhofer Society. Around 70% of its budget is generated through projects from industry and the public domain.
Christiane Peters | idw - Informationsdienst Wissenschaft