More than a good eye: Carnegie Mellon robot uses arms, location and more to discover objects

07.05.2013
HERB, the robot butler, continually improves its understanding of objects

A robot can struggle to discover objects in its surroundings when it relies on computer vision alone. But by taking advantage of all of the information available to it — an object's location, size, shape and even whether it can be lifted — a robot can continually discover and refine its understanding of objects, say researchers at Carnegie Mellon University's Robotics Institute.


Carnegie Mellon University researchers have shown that a two-armed mobile robot, called HERB, can continually discover and refine its understanding of objects by taking advantage of all of the information available, including the object's location, size, shape and even whether it can be lifted.

Credit: Carnegie Mellon University

The Lifelong Robotic Object Discovery (LROD) process developed by the research team enabled a two-armed, mobile robot to use color video, a Kinect depth camera and non-visual information to discover more than 100 objects in a home-like laboratory, including items such as computer monitors, plants and food items. Normally, the CMU researchers build digital models and images of objects and load them into the memory of HERB — the Home-Exploring Robot Butler — so the robot can recognize objects that it needs to manipulate.

Virtually all roboticists do something similar to help their robots recognize objects. With the team's implementation of LROD, called HerbDisc, the robot now can discover these objects on its own. With more time and experience, HerbDisc gradually refines its models of the objects and begins to focus its attention on those that are most relevant to its goal — helping people accomplish tasks of daily living. Findings from the research study will be presented May 8 at the IEEE International Conference on Robotics and Automation in Karlsruhe, Germany.
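The release does not spell out HerbDisc's actual algorithm, but the discover-and-refine loop it describes can be sketched in miniature. The following Python sketch is hypothetical: the voxel-based segmentation, centroid matching and model fusion are crude stand-ins for the real RGB-D processing, and none of the names come from the HerbDisc code.

```python
# Hypothetical sketch of a "discover and refine" loop in the spirit of the
# description above; not the actual HerbDisc pipeline.
import numpy as np

class ObjectModel:
    """A discovered object: a growing set of 3-D points observed over time."""
    def __init__(self, points):
        self.points = points
        self.observations = 1

    @property
    def centroid(self):
        return self.points.mean(axis=0)

    def refine(self, points):
        # Fuse a new observation into the model (here: simply pool the points).
        self.points = np.vstack([self.points, points])
        self.observations += 1

def cluster_frame(points, cell=0.05):
    """Crude stand-in for RGB-D segmentation: bucket 3-D points into a voxel
    grid and treat each occupied voxel as one candidate segment."""
    keys = np.floor(points / cell).astype(int)
    clusters = {}
    for key, p in zip(map(tuple, keys), points):
        clusters.setdefault(key, []).append(p)
    return [np.array(c) for c in clusters.values()]

def discover(frames, models=None, match_dist=0.1):
    """For each frame, refine an existing model if a candidate matches one,
    otherwise add the candidate as a newly discovered object."""
    models = list(models) if models else []
    for points in frames:
        for candidate in cluster_frame(points):
            c = candidate.mean(axis=0)
            match = next((m for m in models
                          if np.linalg.norm(m.centroid - c) < match_dist), None)
            if match:
                match.refine(candidate)                # seen before: refine
            else:
                models.append(ObjectModel(candidate))  # new discovery
    return models

# Toy usage: two synthetic "frames" that view roughly the same small blob.
rng = np.random.default_rng(0)
frame = rng.normal(loc=[1.0, 0.0, 0.5], scale=0.01, size=(50, 3))
models = discover([frame, frame + 0.005])
print(len(models), "object(s) discovered;",
      models[0].observations, "observation(s) folded into the first model")
```

The point of the sketch is the structure, not the details: candidates are extracted from each frame, matched against what the robot already knows, and either folded into an existing model or kept as a new discovery, so the object set improves with every pass through the environment.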

The robot's ability to discover objects on its own sometimes takes even the researchers by surprise, said Siddhartha Srinivasa, associate professor of robotics and head of the Personal Robotics Lab, where HERB is being developed. In one case, some students left the remains of lunch — a pineapple and a bag of bagels — in the lab when they went home for the evening. The next morning, they returned to find that HERB had built digital models of both the pineapple and the bag and had figured out how it could pick up each one.

"We didn't even know that these objects existed, but HERB did," said Srinivasa, who jointly supervised the research with Martial Hebert, professor of robotics. "That was pretty fascinating."

Discovering and understanding objects in places filled with hundreds or thousands of things will be a crucial capability once robots begin working in the home and expanding their role in the workplace. Manually loading digital models of every object of possible relevance simply isn't feasible, Srinivasa said. "You can't expect Grandma to do all this," he added.

Object recognition has long been a challenging area of inquiry for computer vision researchers. Recognizing objects based on vision alone quickly becomes an intractable computational problem in a cluttered environment, Srinivasa said. But humans don't rely on sight alone to understand objects; babies will squeeze a rubber ducky, beat it against the tub, dunk it — even stick it in their mouth. Robots, too, have a lot of "domain knowledge" about their environment that they can use to discover objects.
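A back-of-the-envelope illustration (not taken from the paper) of why vision-only discovery explodes: if a cluttered scene is over-segmented into n small patches, even the number of possible subsets of patches that could make up a single object grows exponentially, which is why extra cues to prune the search matter so much.

```python
# Rough illustration (not from the paper): with n over-segmented patches,
# the number of possible patch subsets that could form one object is 2**n - 1.
for n in (10, 30, 50):
    print(f"{n} patches -> {2**n - 1:,} candidate groupings")
```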

Taking advantage of all of HERB's senses required a research team with complementary expertise — Srinivasa's insights on robotic manipulation and Hebert's in-depth knowledge of computer vision. Alvaro Collet, a robotics Ph.D. student they co-advised, led the development of HerbDisc. Collet is now a scientist at Microsoft.

Depth measurements from HERB's Kinect sensors proved to be particularly important, Hebert said, providing three-dimensional shape data that is highly discriminative for household items.

Other domain knowledge available to HERB includes location — whether something is on a table, on the floor or in a cupboard. The robot can see whether a potential object moves on its own, or is moveable at all. It can note whether something is in a particular place at a particular time. And it can use its arms to see if it can lift the object — the ultimate test of its "objectness."
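As a rough illustration of how such cues might be folded together, here is a hypothetical scoring function. The cue names, weights and the idea of a simple weighted vote are illustrative assumptions, not the scoring actually used in HerbDisc; only the emphasis on liftability as the strongest cue comes from the article.

```python
# Hypothetical combination of domain-knowledge cues into an "objectness" score.
from dataclasses import dataclass

@dataclass
class CandidateCues:
    on_support_surface: bool    # e.g. found on a table or counter, not mid-air
    table_scale_size: bool      # roughly graspable size, not wall- or crumb-sized
    moved_between_visits: bool  # seen in different poses or places over time
    robot_could_lift_it: bool   # the arm test: the strongest evidence of "objectness"

def objectness(cues: CandidateCues) -> float:
    """Weighted vote over the cues; liftability dominates, echoing the article's
    point that being liftable is the ultimate test."""
    weights = {
        "on_support_surface": 0.15,
        "table_scale_size": 0.15,
        "moved_between_visits": 0.25,
        "robot_could_lift_it": 0.45,
    }
    return sum(w for name, w in weights.items() if getattr(cues, name))

# A hand-sized item on a table that the arm has lifted scores far higher
# than a patch of wall that never moves.
lunch_bag = CandidateCues(on_support_surface=True, table_scale_size=True,
                          moved_between_visits=False, robot_could_lift_it=True)
wall_patch = CandidateCues(False, False, False, False)
print(objectness(lunch_bag), objectness(wall_patch))   # e.g. 0.75 vs 0
```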

"The first time HERB looks at the video, everything 'lights up' as a possible object," Srinivasa said. But as the robot uses its domain knowledge, it becomes clearer what is and isn't an object. The team found that adding domain knowledge to the video input almost tripled the number of objects HERB could discover and reduced computer processing time by a factor of 190. A HERB's-eye view of objects is available on YouTube.

HERB's definition of an object — something it can lift — is oriented toward its function as an assistive device for people, doing things such as fetching items or microwaving meals. "It's a very natural, robot-driven process," Srinivasa said. "As capabilities and situations change, different things become important." For instance, HERB can't yet pick up a sheet of paper, so it ignores paper. But once HERB has hands capable of manipulating paper, it will learn to recognize sheets of paper as objects.

Though this capability is not yet implemented, HERB and other robots could use the Internet to create an even richer understanding of objects. Earlier work by Srinivasa showed that robots can use crowdsourcing via Amazon Mechanical Turk to help understand objects. Likewise, a robot might access image sites, such as RoboEarth, ImageNet or 3D Warehouse, to find the name of an object, or to get images of parts of the object it can't see.

Bo Xiong, a student at Connecticut College, and Corina Gurau, a student at Jacobs University in Bremen, Germany, also contributed to this study.

HERB is a project of the Quality of Life Technology Center, a National Science Foundation engineering research center operated by Carnegie Mellon and the University of Pittsburgh. The center is focused on the development of intelligent systems that improve quality of life for everyone while enabling older adults and people with disabilities to live more independently.

The Robotics Institute is part of Carnegie Mellon's School of Computer Science. Follow the school on Twitter @SCSatCMU.

About Carnegie Mellon University: Carnegie Mellon is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 12,000 students in the university's seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation. A global university, Carnegie Mellon has campuses in Pittsburgh, Pa., California's Silicon Valley and Qatar, and programs in Africa, Asia, Australia, Europe and Mexico. The university has exceeded its $1 billion campaign, titled "Inspire Innovation: The Campaign for Carnegie Mellon University," which aims to build its endowment, support faculty, students and innovative research, and enhance the physical campus with equipment and facility improvements. The campaign closes June 30, 2013.

Byron Spice | EurekAlert!
Further information:
http://www.cmu.edu
