Pianists, surgeons, typists, gamers and baton-twirlers all learn to use their hands more skillfully as they ply their trade, but what about robots? Researchers at the University of Washington say they’ve developed a five-fingered robot hand that’s more capable than ours, and can learn to handle objects better and better without human intervention.
The ADROIT Manipulation Platform draws upon machine learning and real-world feedback to improve its performance, rather than relying on its programmers to specify its every move.
“Such dynamic dexterous manipulation with free objects has never been demonstrated before, even in simulation, let alone the physical hardware results we have,” Vikash Kumar, a UW doctoral student in computer science and engineering, told GeekWire in an email. Kumar and his colleagues discuss the project in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.
The robotics team started out with a Shadow Dexterous Hand and added a custom-designed actuator system that can move the fingers faster than a human can. The opposable thumb has five degrees of freedom, just like a human thumb, but with a superhuman range of motion.
The cost of the hardware is roughly $300,000, but the real value added is in the software. The researchers developed algorithms that allowed a computer to simulate complex five-fingered behaviors – such as typing on a keyboard, or letting go of a stick and catching it again in midair.
The software took charge of the robotic hand, and analyzed the feedback from sensors and motion-capture cameras as the hand took on real-world tasks. That feedback was incorporated into the updated control algorithms.
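The loop the researchers describe — gather data from real trials, fit a model of how the hand actually behaves, then update the controller — can be illustrated with a toy sketch. This is not the team's code or the trajectory-optimization method from their paper; it is a minimal stand-in on a made-up one-dimensional system, where a local linear model is fit to observed transitions by least squares and then used to improve the control policy:

```python
# Illustrative sketch (not the UW team's code): one round of the
# "learn a local model from real trials, then improve the controller"
# loop, on a toy 1-D system with made-up dynamics.

def fit_local_model(states, controls, next_states):
    # Least-squares fit of x' ~ a*x + b*u from observed transitions
    # (normal equations for the two-parameter linear model).
    sxx = sum(x * x for x in states)
    sxu = sum(x * u for x, u in zip(states, controls))
    suu = sum(u * u for u in controls)
    sxy = sum(x * y for x, y in zip(states, next_states))
    suy = sum(u * y for u, y in zip(controls, next_states))
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b

def improve_controller(a, b, target):
    # Given the learned model x' = a*x + b*u, the control
    # u = (target - a*x) / b drives the next state to the target.
    return lambda x: (target - a * x) / b

# "Homework": transition data collected from earlier trials. The true
# dynamics (x' = 0.9*x + 0.5*u) are unknown to the learner.
states      = [0.0, 1.0, 2.0, -1.0]
controls    = [1.0, -1.0, 0.5, 2.0]
next_states = [0.9 * x + 0.5 * u for x, u in zip(states, controls)]

a, b = fit_local_model(states, controls, next_states)
policy = improve_controller(a, b, target=3.0)

x = 1.0
x = a * x + b * policy(x)  # the next trial uses the improved controller
print(round(x, 6))         # the learned model steers the state to the target
```

The real system fits far richer models from sensor and motion-capture feedback and uses them inside an optimal-control solver, but the shape of the loop is the same: every trial produces data, and every batch of data produces a slightly better controller.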
“It’s like sitting through a lesson, going home and doing your homework to understand things better, and then coming back to school a little more intelligent the next day,” Kumar explained in a news release.
The team has demonstrated that the hand can improve its manipulation of a given object over time. In a video demonstration, the robot becomes increasingly adept at twirling a plastic tube filled with coffee beans between its fingers. (Hey, this is Seattle, right?)
The next challenge is to see whether the system can figure out how best to handle objects or scenarios it hasn’t encountered before. So if you ever see a bean-juggling robotic barista behind the espresso machine, you’ll know how that all got started.
In addition to Kumar, the authors of “Optimal Control With Learned Local Models: Application to Dexterous Manipulation” include Emanuel Todorov and Sergey Levine of the UW Movement Control Laboratory. The research was funded by the National Science Foundation and the National Institutes of Health.