Robots are getting smarter, but they still need step-by-step instructions for tasks they haven't performed before.
Before you can tell your household robot "Make me a bowl of ramen noodles," you'll have to teach it how to do that. Since we're not all computer programmers, we'd prefer to give those instructions in English, just as we'd lay out a task for a child.
But human language can be ambiguous, and some instructors forget to mention important details. Suppose you told your household robot how to prepare ramen noodles, but forgot to mention heating the water or tell it where the stove is.
In his Robot Learning Lab, Ashutosh Saxena, assistant professor of computer science at Cornell University, is teaching robots to understand instructions in natural language from various speakers, account for missing information, and adapt to the environment at hand.
Saxena and graduate students Dipendra K. Misra and Jaeyong Sung will describe their methods at the Robotics: Science and Systems conference at the University of California, Berkeley, July 12-16.
Video and abstract available at http://tellmedave.cs.cornell.edu
The robot may have a built-in programming language with commands like find(pan); grasp(pan); carry(pan, water tap); fill(pan, water); carry(pan, stove); and so on. Saxena's software translates human sentences, such as "Fill a pan with water, put it on the stove, heat the water. When it's boiling, add the noodles," into robot language. Notice that you didn't say, "Turn on the stove." The robot has to be smart enough to fill in that missing step.
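The idea of filling in an unstated step can be sketched as simple plan repair: each primitive declares what must be true before it runs, and the planner inserts whatever action makes a missing precondition true. This is a minimal illustrative sketch, not the Cornell system's actual planner; the primitive names come from the article, but the precondition tables and the `repair_plan` function are hypothetical.

```python
# Minimal plan-repair sketch. Primitive names (find, grasp, fill, heat)
# follow the article; the tables below are invented for illustration.

PRECONDITIONS = {"heat": ["stove_on"]}          # what must hold before an action
EFFECTS = {"turn_on": ["stove_on"]}             # what an action makes true
PROVIDERS = {"stove_on": ("turn_on", "stove")}  # which action supplies a fact

def repair_plan(steps, state):
    """Insert steps whose preconditions the instructor left implicit."""
    repaired = []
    for action, obj in steps:
        for need in PRECONDITIONS.get(action, []):
            if need not in state:
                repaired.append(PROVIDERS[need])  # fill in the missing step
                state.add(need)
        repaired.append((action, obj))
        state.update(EFFECTS.get(action, []))
    return repaired

plan = [("find", "pan"), ("grasp", "pan"), ("fill", "pan"),
        ("carry", "pan"), ("heat", "pan")]
print(repair_plan(plan, set()))
```

Run against the ramen plan above, the repaired sequence gains a ("turn_on", "stove") step just before ("heat", "pan"), even though the instructor never mentioned it.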
Saxena's robot, equipped with a 3-D camera, scans its environment and identifies the objects in it, using computer vision software previously developed in Saxena's lab. The robot has been trained to associate objects with their capabilities: A pan can be poured into or poured from; stoves can have other objects set on them, and can heat things.
So the robot can identify the pan, locate the water faucet and stove and incorporate that information into its procedure. If you tell it to "heat water" it can use the stove or the microwave, depending on which is available. And it can carry out the same actions tomorrow if you've moved the pan, or even moved the robot to a different kitchen.
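The object-capability idea described above can be pictured as a lookup table of affordances: the robot scans the scene, and an abstract goal like "heat water" resolves to whichever available appliance affords heating. The table entries and the `pick_heater` helper below are hypothetical stand-ins, assuming a simple set-based representation rather than the lab's learned vision models.

```python
# Hypothetical affordance table in the spirit of the article: each object
# is associated with what can be done with or to it.
AFFORDANCES = {
    "pan":       {"pour_from", "pour_into", "placeable"},
    "stove":     {"supports_objects", "heats"},
    "microwave": {"heats"},
}

def pick_heater(visible_objects):
    """Choose any appliance in the scene that affords heating."""
    for obj in visible_objects:
        if "heats" in AFFORDANCES.get(obj, set()):
            return obj
    return None

# The same "heat water" command resolves differently in different kitchens.
print(pick_heater(["pan", "microwave"]))   # -> microwave
print(pick_heater(["pan", "stove"]))       # -> stove
```

Because the plan is grounded against the scene at execution time, moving the pan, or moving the robot to a different kitchen, changes only the lookup results, not the instruction.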
Other researchers have attacked these problems by giving a robot a set of templates for common actions and parsing sentences one word at a time. Saxena's research group uses techniques computer scientists call "machine learning" to train the robot's computer brain to associate entire commands with flexibly defined actions. The computer is fed animated video simulations of the action, created by humans in a process similar to playing a video game, accompanied by recorded voice commands from several different speakers.
The computer stores the combination of many similar commands as a flexible pattern that can match many variations, so when it hears "Take the pot to the stove," "Carry the pot to the stove," "Put the pot on the stove," "Go to the stove and heat the pot" and so on, it calculates the probability of a match with what it has heard before, and if the probability is high enough, it declares a match. A similarly fuzzy version of the video simulation supplies a plan for the action: Wherever the sink and the stove are, the path can be matched to the recorded action of carrying the pot of water from one to the other.
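The probability-of-a-match idea can be caricatured with a word-overlap score: compare a new command against previously seen phrasings and accept the best match only if it clears a threshold. This is a toy stand-in, assuming a bag-of-words cosine similarity; the actual system learns richer statistical patterns from simulations and speech, and the `KNOWN` phrasings and threshold value here are illustrative.

```python
import math
from collections import Counter

def cosine(a, b):
    """Bag-of-words cosine similarity between two sentences."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

# Phrasings the system has "heard before" (from the article's examples).
KNOWN = ["take the pot to the stove",
         "carry the pot to the stove",
         "put the pot on the stove"]

def match(command, threshold=0.5):
    """Return the best-matching known command, or None below threshold."""
    best = max(KNOWN, key=lambda k: cosine(command, k))
    return best if cosine(command, best) >= threshold else None

print(match("go to the stove with the pot"))  # matches a known phrasing
print(match("add the noodles"))               # no good match
```

A novel phrasing like "go to the stove with the pot" scores high enough against the stored variants to count as a match, while an unrelated command falls below the threshold and is rejected.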
Of course the robot still doesn't get it right all the time. To test, the researchers gave instructions for preparing ramen noodles and for making affogato – an Italian dessert combining coffee and ice cream: "Take some coffee in a cup. Add ice cream of your choice. Finally, add raspberry syrup to the mixture."
The robot performed correctly up to 64 percent of the time even when the commands were varied or the environment was changed, and it was able to fill in missing steps. That was three to four times better than previous methods, the researchers reported, but "There is still room for improvement."
You can teach a simulated robot to perform a kitchen task at the "Tell Me Dave" website, and your input there will become part of a crowdsourced library of instructions for the Cornell robots. Aditya Jami, visiting researcher at Cornell, is helping Tell Me Dave scale the library to millions of examples. "With crowdsourcing at such a scale, robots will learn at a much faster rate," Saxena said.
Syl Kacapyr | EurekAlert!