The researchers' aim is to enhance a human operator's ability to perform precise tasks using a multi-jointed robotic device such as an articulated mechanical arm. The new approach has been shown to be easier and faster than older methods, especially when the robot is controlled by an operator who is watching it on a video monitor.
Known as Uncalibrated Visual Servoing for Intuitive Human Guidance of Robots, the new method uses a special implementation of an existing vision-guided control method called visual servoing (VS). By applying visual-servoing technology in innovative ways, the researchers have constructed a robotic system that responds to human commands more directly and intuitively than older techniques.
"Our approach exploits 3-D video technology to let an operator guide a robotic device in ways that are more natural and time-saving, yet are still very precise," said Ai-Ping Hu, a GTRI senior research engineer who is leading the effort. "This capability could have numerous applications – especially in situations where directly observing the robot's operation is hazardous or not possible – including bomb disposal, handling of hazardous materials and search-and-rescue missions."
A paper on this technology was presented at the 2012 IEEE International Conference on Robotics and Automation held in St. Paul, Minn.
For decades, articulated robots have been used by industry to perform precision tasks such as welding vehicle seams or assembling electronics, Hu explained. The user develops a software program that enables the device to cycle through the required series of motions, using feedback from sensors built into the robot.
But such programming can be complex and time-consuming. The robot must typically be maneuvered joint by joint through the numerous actions required to complete a task. Moreover, such technology works only in a structured and unchanging environment, such as a factory assembly line, where spatial relationships are constant.
The Human Operator
In recent years, new techniques have enabled human operators to freely guide remote robots through unstructured and unfamiliar environments, to perform such challenging tasks as bomb disposal, Hu said. Operators have controlled the device in one of two ways: by "line of sight" – direct user observation – or by means of a conventional two-dimensional camera that is mounted on the robot to send back an image of both the robot and its target.
But humans guiding robots via either method face some of the same complexities that challenge those who program industrial robots, he added. Manipulating a remote robot into place is generally slow and laborious.
That's especially true when the operator must depend on the imprecise images provided by 2-D video feedback. Manipulating separate controls for each of the robot's multiple joint axes, users have only limited visual information to help them and must maneuver to the target by trial and error.
"Essentially, the user is trying to visualize and reconstruct a 3-D scenario from flat 2-D camera images," Hu said. "The process can become particularly confusing when operators are facing in a different direction from the robot and must mentally reorient themselves to try to distinguish right from left. It's somewhat similar to backing up a vehicle with an attached trailer – you have to turn the steering wheel to the left to get the trailer to move right, which is decidedly non-intuitive."
The Visual Servoing Advantage
To simplify user control, the Georgia Tech team turned to visual servoing, a control approach in which visual feedback drives a robot's motion. Visual servoing has been studied for years as a way to use video cameras to help robots re-orient themselves within a structured environment such as an assembly line.
Traditional visual servoing is calibrated, meaning that position information generated by a video camera can be transformed into data meaningful to the robot. Using these data, the robot can adjust itself to stay in a correct spatial relationship with target objects.
"Say a conveyor line is accidently moved a few millimeters," Hu said. "A robot with a calibrated visual servoing capability can automatically detect the movement using the video image and a fixed reference point, and then readjust to compensate."
But visual servoing offers additional possibilities. The research team – which includes Hu, associate professor Harvey Lipkin of the School of Mechanical Engineering, graduate student Matthew Marshall, GTRI research engineer Michael Matthews and GTRI principal research engineer Gary McMurray – has adapted visual-servoing technology in ways that facilitate human control of remote robots.
The new approach combines calibrated and uncalibrated techniques. A calibrated 3-D "time of flight" camera is mounted on the robot – typically at the end of the arm, near a gripping device called an end-effector. This arrangement is sometimes called an eye-in-hand system, because of the camera's location in the robot's "hand."
The camera utilizes an active sensor that detects depth data, allowing it to send back 3-D coordinates that pinpoint the end-effector's spatial location. At the same time, the eye-in-hand camera also supplies a standard, uncalibrated 2-D grayscale video image to the operator's monitor.
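To give a sense of how a depth-sensing camera yields 3-D coordinates, the standard pinhole back-projection is sketched below; the intrinsic parameters are illustrative assumptions for a low-resolution time-of-flight sensor, not specifications of the camera the team used.

```python
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel with a measured depth into a 3-D point
    in the camera frame, using the standard pinhole model."""
    z = depth                  # range along the optical axis, in meters
    x = (u - cx) * z / fx      # horizontal offset scaled by depth
    y = (v - cy) * z / fy      # vertical offset scaled by depth
    return np.array([x, y, z])

# Illustrative intrinsics (fx, fy, cx, cy) for a 176x144 time-of-flight sensor.
point = pixel_to_3d(u=100, v=80, depth=0.75, fx=200.0, fy=200.0, cx=88.0, cy=72.0)
```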
The result is that the operator, without seeing the robot, now has a robot's-eye view of the target. Watching this image on a monitor, an operator can visually guide the robot using a gamepad, in a manner somewhat reminiscent of a first-person 3-D video game.
In addition, visual-servoing technology now automatically actuates all the joints needed to complete whatever action the user indicates on the gamepad – rather than the user having to manipulate those joints one by one. In the background, the Georgia Tech system performs the complex computation needed to coordinate the monitor image, the 3-D camera information, the robot's spatial position and the user's gamepad commands.
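In outline, this kind of coordination resembles resolved-rate motion control: the gamepad deflection is read as a desired velocity in the camera frame and mapped through the arm's Jacobian to velocities for every joint at once. The sketch below is a generic illustration under that assumption; the function names, frames, and identity placeholders are not the GTRI code.

```python
import numpy as np

def gamepad_to_joint_velocities(stick_xyz, R_cam_to_base, jacobian, speed=0.05):
    """Resolved-rate control: turn a gamepad deflection into
    simultaneous velocities for all of the arm's joints.

    stick_xyz      -- gamepad deflection, interpreted in the camera frame
    R_cam_to_base  -- 3x3 rotation from camera frame to robot base frame
    jacobian       -- 6x6 manipulator Jacobian at the current pose
    """
    v_cam = speed * np.asarray(stick_xyz)          # desired velocity, camera frame
    v_base = R_cam_to_base @ v_cam                 # express it in the base frame
    twist = np.concatenate([v_base, np.zeros(3)])  # no commanded rotation
    return np.linalg.pinv(jacobian) @ twist        # joint velocities (rad/s)

# Illustrative call with identity placeholders for the rotation and Jacobian.
qdot = gamepad_to_joint_velocities([-1.0, 0.0, 0.0], np.eye(3), np.eye(6))
```

Because the command is expressed in the operator's camera view before being rotated into the robot's base frame, "left" on the gamepad stays left on the screen no matter how the arm itself is oriented.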
Testing System Usability
"The guidance process is now very intuitive – pressing 'left' on the gamepad will actuate all the requisite robot joints to effect a leftward displacement," Hu said. "What's more, the robot could be upside down and the controls will still respond in the same intuitive way – left is still left and right is still right."
To judge system usability, the Georgia Tech research team recently conducted trials to test whether the visual-servoing approach enabled faster task-completion times. Using a gamepad to control an articulated-arm robot with six degrees of freedom, subjects performed the same task under four conditions: visual-servoing guidance and conventional joint-based guidance, each in both line-of-sight and camera-view modes.
In the line-of-sight test, volunteer participants using visual-servoing guidance averaged task-completion times that were 15 percent faster than when they used joint-based guidance. However, in camera-view mode, participants using visual-servoing guidance averaged 227 percent faster results than with the joint-based technique.
Hu noted that the visual-servoing system used in this test scenario was only one of numerous possible applications of the technology. The research team's plans include testing a mobile platform with a VS-guided robotic arm mounted on it. Also underway is a proof-of-concept effort that incorporates visual-servoing control into a low-cost, consumer-level robot.
"Our ultimate goal is to develop a generic, uncalibrated control framework that is able to use image data to guide many different kinds of robots," he said.Research News & Publications Office
John Toon | Newswise Science News