Teaching machines to see

21.12.2015

New smartphone-based system could accelerate development of driverless cars

Two newly developed systems for driverless cars can identify a user's location and orientation in places where GPS does not function, and can identify the various components of a road scene in real time using a regular camera or smartphone, performing the same job as sensors costing tens of thousands of pounds.


This is an example of SegNet in action: the separate components of the road scene are all labelled in real time. Credit: Alex Kendall

The separate but complementary systems have been designed by researchers from the University of Cambridge and demonstrations are freely available online. Although the systems cannot currently control a driverless car, the ability to make a machine 'see' and accurately identify where it is and what it's looking at is a vital part of developing autonomous vehicles and robotics.

The first system, called SegNet, can take an image of a street scene it hasn't seen before and classify it, sorting objects into 12 different categories -- such as roads, street signs, pedestrians, buildings and cyclists -- in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.
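The "90% of pixels correct" figure is a per-pixel accuracy: every pixel in the image is assigned one of the 12 class labels, and the score is the fraction of pixels whose predicted label matches the hand-labelled ground truth. A minimal sketch of that metric follows; the class names and tiny label maps are invented for illustration, loosely based on the road-scene categories described above, and are not the researchers' data.

```python
import numpy as np

# Illustrative 12 road-scene classes (not necessarily SegNet's exact list).
CLASSES = ["sky", "building", "pole", "road", "pavement", "tree",
           "sign", "fence", "vehicle", "pedestrian", "cyclist", "marking"]

rng = np.random.default_rng(0)
# Stand-ins for a hand-labelled ground-truth map and a predicted label map
# over a toy 4x6-pixel "image"; real images have millions of pixels.
ground_truth = rng.integers(0, len(CLASSES), size=(4, 6))
prediction = ground_truth.copy()
prediction[0, 0] = (prediction[0, 0] + 1) % len(CLASSES)  # one wrong pixel

# Global pixel accuracy: fraction of pixels whose predicted class matches.
pixel_accuracy = (prediction == ground_truth).mean()
print(f"pixel accuracy: {pixel_accuracy:.1%}")  # 23 of 24 pixels -> 95.8%
```

The same one-line comparison scales unchanged to full-resolution label maps, which is why per-pixel accuracy is the standard headline number for scene segmentation.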

Users can visit the SegNet website and upload an image or search for any city or town in the world, and the system will label all the components of the road scene. The system has been successfully tested on both city roads and motorways.

For the driverless cars currently in development, radar and laser-based sensors are expensive -- in fact, they often cost more than the car itself. In contrast with these sensors, which recognise objects through a mixture of radar and LIDAR (a remote sensing technology), SegNet learns by example -- it was 'trained' by an industrious group of Cambridge undergraduate students, who manually labelled every pixel in each of 5000 images, with each image taking about 30 minutes to complete. Once the labelling was finished, the researchers took two days to 'train' the system before it was put into action.
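The training step described here is standard supervised learning: compare the system's per-pixel predictions against the hand-made labels and repeatedly adjust the model's weights to reduce the error. A toy numpy sketch of that idea with a linear softmax classifier trained by gradient descent follows; the feature dimensions, synthetic data and learning rate are invented for illustration and bear no relation to the actual SegNet network or its two-day training run.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_features = 12, 8           # 12 road-scene classes, toy features
X = rng.normal(size=(200, n_features))  # stand-in for per-pixel features
y = rng.integers(0, n_classes, size=200)  # stand-in for hand-made labels
# Make the toy data learnable: shift each class's features apart.
X += np.eye(n_classes)[y] @ (2 * rng.normal(size=(n_classes, n_features)))

W = np.zeros((n_features, n_classes))
for _ in range(300):                    # gradient descent on cross-entropy
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)   # softmax class probabilities
    grad = X.T @ (p - np.eye(n_classes)[y]) / len(X)
    W -= 0.5 * grad                     # nudge weights toward the labels

train_acc = ((X @ W).argmax(axis=1) == y).mean()
print(f"training accuracy: {train_acc:.0%}")
```

The loop captures the essence of "learning by example": the only supervision signal is the labelled data, and accuracy improves purely by repeated comparison against it.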

"It's remarkably good at recognising things in an image, because it's had so much practice," said Alex Kendall, a PhD student in the Department of Engineering. "However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better."

SegNet was primarily trained in highway and urban environments, so it still has some learning to do for rural, snowy or desert environments -- although it has performed well in initial tests for these environments.

The system is not yet at the point where it can be used to control a car or truck, but it could be used as a warning system, similar to the anti-collision technologies currently available on some passenger cars.

"Vision is our most powerful sense and driverless cars will also need to see," said Professor Roberto Cipolla, who led the research. "But teaching a machine to see is far more difficult than it sounds."

As children, we learn to recognise objects through example -- if we're shown a toy car several times, we learn to recognise both that specific car and other similar cars as the same type of object. But with a machine, it's not as simple as showing it a single car and then having it recognise all different types of cars. Machines today learn under supervision, sometimes through thousands of labelled examples.

There are three key technological questions that must be answered to design autonomous vehicles: where am I, what's around me and what do I do next. SegNet addresses the second question, while a separate but complementary system answers the first by using images to determine both precise location and orientation.

The localisation system designed by Kendall and Cipolla runs on a similar architecture to SegNet, and is able to localise a user and determine their orientation from a single colour image in a busy urban scene. The system is far more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable GPS signal is not available.

It has been tested along a kilometre-long stretch of King's Parade in central Cambridge, and it is able to determine both location and orientation within a few metres and a few degrees, which is far more accurate than GPS -- a vital consideration for driverless cars. Users can try out the system for themselves online.

The localisation system uses the geometry of a scene to learn its precise location, and is able to determine, for example, whether it is looking at the east or west side of a building, even if the two sides appear identical.
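The accuracy figures quoted above -- a few metres and a few degrees -- come from comparing an estimated pose (3-D position plus orientation) against ground truth. A sketch of how such pose errors can be computed follows, using quaternions for orientation; the helper function and the sample poses are illustrative assumptions, not the researchers' evaluation code or test data.

```python
import numpy as np

def pose_error(pos_est, quat_est, pos_gt, quat_gt):
    """Position error in metres and orientation error in degrees."""
    pos_err = float(np.linalg.norm(np.asarray(pos_est, float)
                                   - np.asarray(pos_gt, float)))
    q1 = np.asarray(quat_est, float) / np.linalg.norm(quat_est)
    q2 = np.asarray(quat_gt, float) / np.linalg.norm(quat_gt)
    # Angle between two orientations: theta = 2 * arccos(|q1 . q2|).
    ang = 2 * np.arccos(np.clip(abs(q1 @ q2), -1.0, 1.0))
    return pos_err, float(np.degrees(ang))

# Illustrative poses: estimate is 2 m east of ground truth and slightly
# rotated (quaternions in w, x, y, z order, identity = no rotation).
p_err, a_err = pose_error([2.0, 0.0, 0.0], [0.9997, 0.0, 0.0262, 0.0],
                          [0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0])
print(f"{p_err:.1f} m, {a_err:.1f} deg")  # -> 2.0 m, 3.0 deg
```

Reporting position and orientation errors separately in these units is what makes claims like "within a few metres and a few degrees" directly checkable.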

"Work in the field of artificial intelligence and robotics has really taken off in the past few years," said Kendall. "But what's cool about our group is that we've developed technology that uses deep learning to determine where you are and what's around you - this is the first time this has been done using deep learning."

"In the short term, we're more likely to see this sort of system on a domestic robot - such as a robotic vacuum cleaner, for instance," said Cipolla. "It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics."

The researchers are presenting details of the two technologies at the International Conference on Computer Vision in Santiago, Chile.

Media Contact

Sarah Collins
sarah.collins@admin.cam.ac.uk
44-012-237-65542

 @Cambridge_Uni

http://www.cam.ac.uk 

Sarah Collins | EurekAlert!
