Helping robots learn to see in 3-D

17.07.2017

Robots need to make better guesses about what they're seeing, even when parts of an object are hidden from view

Autonomous robots can inspect nuclear power plants, clean up oil spills in the ocean, accompany fighter planes into combat and explore the surface of Mars.


Image: When fed 3-D models of household items in bird's-eye view (left), the new algorithm guesses what the objects are and what their overall 3-D shapes should be; the guess is shown in the center and the actual 3-D model on the right. Credit: Ben Burchfiel

Yet for all their talents, robots still can't make a cup of tea.

That's because tasks such as turning the stove on, fetching the kettle and finding the milk and sugar require perceptual abilities that, for most machines, are still a fantasy.

Among them is the ability to make sense of 3-D objects. While it's relatively straightforward for robots to "see" objects with cameras and other sensors, interpreting what they see, from a single glimpse, is more difficult.

Duke University graduate student Ben Burchfiel says the most sophisticated robots in the world can't yet do what most children do automatically, but he and his colleagues may be closer to a solution.

Burchfiel and his thesis advisor George Konidaris, now an assistant professor of computer science at Brown University, have developed new technology that enables machines to make sense of 3-D objects in a richer and more human-like way.

A robot that clears dishes off a table, for example, must be able to adapt to an enormous variety of bowls, platters and plates in different sizes and shapes, left in disarray on a cluttered surface.

Humans can glance at a new object and intuitively know what it is, whether it is right side up, upside down or sideways, in full view or partially obscured by other objects.

Even when an object is partially hidden, we mentally fill in the parts we can't see.

Burchfiel and Konidaris' robot perception algorithm can simultaneously guess what a new object is, and how it's oriented, without examining it from multiple angles first. It can also "imagine" any parts that are out of view.

A robot with this technology wouldn't need to see every side of a teapot, for example, to know that it probably has a handle, a lid and a spout, and whether it is sitting upright or off-kilter on the stove.

The researchers say their approach, which they presented July 12 at the 2017 Robotics: Science and Systems Conference in Cambridge, Massachusetts, makes fewer mistakes and is three times faster than the best current methods.

This is an important step toward robots that function alongside humans in homes and other real-world settings, which are less orderly and predictable than the highly controlled environment of the lab or the factory floor, Burchfiel said.

With their framework, the robot is given a limited number of training examples, and uses them to generalize to new objects.

"It's impractical to assume a robot has a detailed 3-D model of every possible object it might encounter, in advance," Burchfiel said.

The researchers trained their algorithm on a dataset of roughly 4,000 complete 3-D scans of common household objects: an assortment of bathtubs, beds, chairs, desks, dressers, monitors, nightstands, sofas, tables and toilets.

Each 3-D scan was converted into tens of thousands of little cubes, or voxels, stacked on top of each other like LEGO blocks to make them easier to process.
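To make that step concrete, here is a minimal sketch of voxelization, assuming the scan arrives as an (N, 3) NumPy array of points; the 30-voxel resolution, function name, and normalization are illustrative choices, not the researchers' actual code.

```python
# Illustrative sketch: turn a 3-D scan (point cloud) into a fixed-size binary
# occupancy grid of "little cubes". Resolution and names are assumptions.
import numpy as np

def voxelize(points, resolution=30):
    """Map an (N, 3) point cloud into a resolution^3 grid of 0s and 1s."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    scale = (maxs - mins).max() + 1e-9               # preserve aspect ratio, avoid /0
    idx = ((points - mins) / scale * (resolution - 1)).astype(int)
    grid = np.zeros((resolution, resolution, resolution), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1        # mark occupied voxels
    return grid

# Example: a random cloud of 10,000 points becomes a 30x30x30 grid
# (27,000 voxels, i.e. "tens of thousands of little cubes").
cloud = np.random.rand(10_000, 3)
voxels = voxelize(cloud)
print(voxels.shape, int(voxels.sum()), "voxels occupied")
```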

The algorithm learned categories of objects by combing through examples of each one and figuring out how they vary and how they stay the same, using a version of a technique called probabilistic principal component analysis.
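A rough sketch of that category-learning step, not the authors' implementation: flatten each voxel grid into a long vector and fit a low-dimensional linear model per category. Here scikit-learn's standard PCA stands in for the probabilistic principal component analysis variant the paper describes, and the training grids are random placeholders.

```python
# Sketch: learn one low-dimensional basis of "eigen-shapes" per object category
# by flattening voxel grids and fitting PCA (a stand-in for probabilistic PCA).
import numpy as np
from sklearn.decomposition import PCA

RES = 30  # voxel grid resolution (assumed)

def fit_class_model(voxel_grids, n_components=20):
    """Fit a low-dimensional linear model to one category's voxel grids."""
    X = np.stack([g.reshape(-1) for g in voxel_grids]).astype(float)
    model = PCA(n_components=n_components)
    model.fit(X)  # captures the category's mean shape and main modes of variation
    return model

# Toy stand-in data; real training would use voxelized scans of each category.
rng = np.random.default_rng(0)
models = {
    name: fit_class_model(
        [rng.integers(0, 2, size=(RES, RES, RES)) for _ in range(50)]
    )
    for name in ["bed", "chair", "table"]
}
print(models["chair"].components_.shape)  # (20, 27000): 20 basis shapes
```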

When a robot spots something new -- say, a bunk bed -- it doesn't have to sift through its entire mental catalogue for a match. It learns, from prior examples, what characteristics beds tend to have.

Based on that prior knowledge, it has the power to generalize like a person would -- to understand that two objects may be different, yet share properties that make them both a particular type of furniture.
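Continuing the sketch above, one simplified way to realize that recognize-and-generalize step is to project an observed voxel grid onto each category's learned subspace, pick the category that reconstructs it with the least error, and treat the reconstruction as the filled-in shape. The actual method also reasons about pose and about which voxels were observed versus hidden, which this toy version omits.

```python
# Simplified recognition-plus-completion, reusing RES, rng and the `models`
# dictionary from the previous sketch. Not the paper's inference procedure.
import numpy as np

def classify_and_complete(observed_grid, models):
    """Return the best-fitting category and its reconstruction of the shape."""
    x = observed_grid.reshape(1, -1).astype(float)
    best_label, best_err, best_recon = None, np.inf, None
    for name, model in models.items():
        recon = model.inverse_transform(model.transform(x))  # project onto subspace and back
        err = np.linalg.norm(x - recon)                       # reconstruction error
        if err < best_err:
            best_label, best_err, best_recon = name, err, recon
    completed = best_recon.reshape(observed_grid.shape) > 0.5  # threshold back to occupancy
    return best_label, completed

# Example: a toy "partially observed" grid with only one half visible.
partial = np.zeros((RES, RES, RES), dtype=np.uint8)
partial[:, :, RES // 2:] = rng.integers(0, 2, size=(RES, RES, RES - RES // 2))
label, shape = classify_and_complete(partial, models)
print(label, int(shape.sum()), "voxels in completed shape")
```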

To test the approach, the researchers fed the algorithm 908 new 3-D examples of the same 10 kinds of household items, viewed from the top.

From this single vantage point, the algorithm correctly guessed what most objects were, and what their overall 3-D shapes should be, including the concealed parts, about 75 percent of the time -- compared with just over 50 percent for the state-of-the-art alternative.

It was also capable of recognizing objects that were rotated in various ways, which the best competing approaches can't do.

While the system is reasonably fast -- the whole process takes about a second -- it is still a far cry from human vision, Burchfiel said.

For one, both their algorithm and previous methods were easily fooled by objects that, from certain perspectives, look similar in shape. They might see a table from above, and mistake it for a dresser.

"Overall, we make a mistake a little less than 25 percent of the time, and the best alternative makes a mistake almost half the time, so it is a big improvement," Burchfiel said. "But it still isn't ready to move into your house. You don't want it putting a pillow in the dishwasher."

Now the team is working on scaling up their approach to enable robots to distinguish between thousands of types of objects at a time.

"Researchers have been teaching robots to recognize 3-D objects for a while now," Burchfield said. What's new, he explained, is the ability to both recognize something and fill in the blind spots in its field of vision, to reconstruct the parts it can't see.

"That has the potential to be invaluable in a lot of robotic applications," Burchfiel said.

###

This research was supported in part by the Defense Advanced Research Projects Agency (DARPA), grant D15AP00104.

CITATION: "Bayesian Eigenobjects: A Unified Framework for 3D Robot Perception," Benjamin Burchfiel and George Konidaris. RSS 2017, July 12-16, 2017, Cambridge, Massachusetts.

Media Contact

Robin Ann Smith
ras10@duke.edu
919-681-8057

 @DukeU

http://www.duke.edu 

Robin Ann Smith | EurekAlert!
