Robot can be programmed by casually talking to it

23.06.2014

Robots are getting smarter, but they still need step-by-step instructions for tasks they haven't performed before.

Before you can tell your household robot "Make me a bowl of ramen noodles," you'll have to teach it how to do that. Since we're not all computer programmers, we'd prefer to give those instructions in English, just as we'd lay out a task for a child.

But human language can be ambiguous, and instructors sometimes forget to mention important details. Suppose you told your household robot how to prepare ramen noodles but forgot to mention heating the water, or never told it where the stove is.

In his Robot Learning Lab, Ashutosh Saxena, assistant professor of computer science at Cornell University, is teaching robots to understand instructions in natural language from various speakers, account for missing information, and adapt to the environment at hand.


Saxena and graduate students Dipendra K. Misra and Jaeyong Sung will describe their methods at the Robotics: Science and Systems conference at the University of California, Berkeley, July 12-16.

Video and abstract available at http://tellmedave.cs.cornell.edu

The robot may have a built-in programming language with commands like find(pan); grasp(pan); carry(pan, water tap); fill(pan, water); carry(pan, stove); and so on. Saxena's software translates human sentences, such as "Fill a pan with water, put it on the stove, heat the water. When it's boiling, add the noodles," into robot language. Notice that you didn't say, "Turn on the stove." The robot has to be smart enough to fill in that missing step.
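As a rough illustration of that translation step (a sketch, not the actual Tell Me Dave software), an instruction can be thought of as being mapped onto sequences of those primitive commands, with omitted prerequisites filled in. The phrase table and the rule that inserts "turn on (stove)" below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Command:
    """A primitive robot command such as grasp(pan) or carry(pan, stove)."""
    action: str
    args: tuple

    def __repr__(self):
        return f"{self.action}({', '.join(self.args)})"


# Hypothetical lookup table: instruction fragments mapped to command sequences.
PHRASE_TO_COMMANDS = {
    "fill a pan with water": [
        Command("find", ("pan",)),
        Command("grasp", ("pan",)),
        Command("carry", ("pan", "water tap")),
        Command("fill", ("pan", "water")),
    ],
    "put it on the stove": [
        Command("carry", ("pan", "stove")),
        Command("place", ("pan", "stove")),
    ],
    "heat the water": [Command("heat", ("pan",))],
    "add the noodles": [Command("add", ("noodles", "pan"))],
}


def plan(instruction: str) -> list:
    """Translate an instruction into commands and insert omitted prerequisites."""
    commands = []
    for phrase, cmds in PHRASE_TO_COMMANDS.items():
        if phrase in instruction.lower():
            commands.extend(cmds)

    # Fill in the implicit step: heating something requires the stove to be on.
    completed, stove_on = [], False
    for cmd in commands:
        if cmd.action == "heat" and not stove_on:
            completed.append(Command("turn on", ("stove",)))
            stove_on = True
        completed.append(cmd)
    return completed


if __name__ == "__main__":
    request = ("Fill a pan with water, put it on the stove, heat the water. "
               "When it's boiling, add the noodles.")
    for cmd in plan(request):
        print(cmd)
```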

Saxena's robot, equipped with a 3-D camera, scans its environment and identifies the objects in it, using computer vision software previously developed in Saxena's lab. The robot has been trained to associate objects with their capabilities: A pan can be poured into or poured from; stoves can have other objects set on them, and can heat things.

So the robot can identify the pan, locate the water faucet and stove and incorporate that information into its procedure. If you tell it to "heat water" it can use the stove or the microwave, depending on which is available. And it can carry out the same actions tomorrow if you've moved the pan, or even moved the robot to a different kitchen.
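A minimal sketch of that grounding step, assuming a hypothetical hand-written affordance table in place of the learned associations and vision output described above:

```python
from typing import List, Optional

# Hypothetical affordance table; in the real system these associations are
# learned, and the list of visible objects comes from the 3-D vision software.
AFFORDANCES = {
    "pan":       {"pourable into", "pourable from", "placeable"},
    "stove":     {"supports objects", "heats"},
    "microwave": {"heats", "encloses objects"},
    "water tap": {"dispenses water"},
}


def find_provider(capability: str, objects_seen: List[str]) -> Optional[str]:
    """Return the first visible object whose affordances include the capability."""
    for obj in objects_seen:
        if capability in AFFORDANCES.get(obj, set()):
            return obj
    return None


# The same plan works in a different kitchen: only the detected objects change.
print(find_provider("heats", ["pan", "water tap", "stove"]))      # stove
print(find_provider("heats", ["pan", "water tap", "microwave"]))  # microwave
```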

Other workers have attacked these problems by giving a robot a set of templates for common actions and chewing up sentences one word at a time. Saxena's research group uses techniques computer scientists call "machine learning" to train the robot's computer brain to associate entire commands with flexibly defined actions. The computer is fed animated video simulations of the action, created by humans in a process similar to playing a video game, accompanied by recorded voice commands from several different speakers.

The computer stores the combination of many similar commands as a flexible pattern that can match many variations, so when it hears "Take the pot to the stove," "Carry the pot to the stove," "Put the pot on the stove," "Go to the stove and heat the pot" and so on, it calculates the probability of a match with what it has heard before, and if the probability is high enough, it declares a match. A similarly fuzzy version of the video simulation supplies a plan for the action: Wherever the sink and the stove are, the path can be matched to the recorded action of carrying the pot of water from one to the other.
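The sketch below illustrates that matching idea with a simple word-overlap score standing in for the learned probability model; the stored variants and the 0.5 threshold are invented for illustration:

```python
# Stand-in similarity score: word overlap (Jaccard) between two sentences.
def similarity(a: str, b: str) -> float:
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)


# Hypothetical stored variants for one learned action.
STORED_VARIANTS = {
    "carry pot to stove": [
        "take the pot to the stove",
        "carry the pot to the stove",
        "put the pot on the stove",
        "go to the stove and heat the pot",
    ],
}


def match_action(command: str, threshold: float = 0.5):
    """Return the best-matching stored action if its score clears the threshold."""
    best_action, best_score = None, 0.0
    for action, variants in STORED_VARIANTS.items():
        score = max(similarity(command, v) for v in variants)
        if score > best_score:
            best_action, best_score = action, score
    return best_action if best_score >= threshold else None


print(match_action("please carry the pot over to the stove"))  # carry pot to stove
print(match_action("feed the cat"))                            # None
```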

Of course the robot still doesn't get it right all the time. To test, the researchers gave instructions for preparing ramen noodles and for making affogato – an Italian dessert combining coffee and ice cream: "Take some coffee in a cup. Add ice cream of your choice. Finally, add raspberry syrup to the mixture."

The robot performed correctly up to 64 percent of the time even when the commands were varied or the environment was changed, and it was able to fill in missing steps. That was three to four times better than previous methods, the researchers reported, but "There is still room for improvement."

You can teach a simulated robot to perform a kitchen task at the "Tell Me Dave" website, and your input there will become part of a crowdsourced library of instructions for the Cornell robots. Aditya Jami, visiting researcher at Cornell, is helping Tell Me Dave scale the library to millions of examples. "With crowdsourcing at such a scale, robots will learn at a much faster rate," Saxena said.

###

Cornell University has television, ISDN and dedicated Skype/Google+ Hangout studios available for media interviews.

Syl Kacapyr | EurekAlert!
