[Image: In this simulation, a virtual dog has to be taught to move the red bag to the blue room. (Credit: Peng et al. / WSU)]

To figure out the best way for a robot to move, designers have turned to snakes, cheetahs, fish and even mermaids for inspiration. But to figure out the best way for a robot to learn, they’re going to the dogs.

A team led by computer scientists at Washington State University’s Intelligent Robot Learning Laboratory set up a robot training program that builds in the kinds of fits and starts that a dog might employ when it’s learning a task from its human master. When the virtual robot is unsure what to do, it slows down and looks for feedback. But once it’s figured out the task, it runs through the job lickety-split.

The “Strategy-Aware Bayesian Learning” model, presented last week in Singapore at the International Conference on Autonomous Agents and Multiagent Systems, was developed in anticipation of an age when regular folks rather than programmers would have to teach robots what to do.

“We want everyone to be able to program, but that’s probably not going to happen,” WSU Professor Matthew Taylor said today in a news release. “So we needed to provide a way for everyone to train robots – without programming.”

Taylor, WSU doctoral student Bei Peng and their colleagues from Brown University and North Carolina State University worked out a software simulation with a virtual robot dog that had to be taught to transport items between rooms of different colors. The training task was left in the hands of non-programmer test subjects, who used a red “punishment” button and a green “reward” button as their teaching tools.

The researchers could vary the speed of the virtual dog’s movements, from a half-second to two seconds for each step in the task. Slower movements signaled that the dog wasn’t sure how to proceed, and gave the trainer more time to weigh in with positive or negative feedback.

“At the beginning, the virtual dog moves slowly,” Peng said. “But as it receives more feedback and becomes more confident in what to do, it speeds up.”
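
For the technically curious, here’s a minimal sketch of how that slow-to-fast mechanic might look in code. It is an illustration only, not the team’s actual Strategy-Aware Bayesian Learning implementation: the linear speed mapping, the confidence-update rule and all of the names are assumptions made for the example. Only the half-second and two-second step durations come from the article.

```python
import random
import time

# Illustrative sketch only: maps the agent's confidence in its next action
# to a step duration (slow when unsure, fast when sure) and nudges that
# confidence with reward/punishment feedback. Not the authors' SABL model;
# the constants, update rule and names below are assumptions.

FAST_STEP = 0.5   # seconds per step when fully confident (from the article)
SLOW_STEP = 2.0   # seconds per step when completely unsure (from the article)

def step_duration(confidence: float) -> float:
    """Linearly interpolate between slow and fast stepping."""
    return SLOW_STEP - (SLOW_STEP - FAST_STEP) * confidence

def update_confidence(confidence: float, feedback: int, lr: float = 0.2) -> float:
    """feedback: +1 for the green 'reward' button, -1 for the red
    'punishment' button, 0 if the trainer pressed nothing that step."""
    return min(1.0, max(0.0, confidence + lr * feedback))

confidence = 0.0  # the virtual dog starts out unsure
for step in range(10):
    duration = step_duration(confidence)
    print(f"step {step}: acting over {duration:.2f}s (confidence={confidence:.2f})")
    time.sleep(duration)  # slower steps leave the trainer a window to respond
    feedback = random.choice([1, 1, 0, -1])  # stand-in for real button presses
    confidence = update_confidence(confidence, feedback)
```

Run as written, the loop starts at the sluggish two-second pace and, as mostly positive feedback accumulates, tightens toward the half-second pace, mimicking the dog gaining confidence in the task.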

Peng and her colleagues found that their adaptive slow-to-fast model produced the best overall results, in terms of training accuracy and the amount of time required for training. Now the team is conducting experiments with physical robots as well as virtual agents.

The potential payoff isn’t limited to building more teachable robots. The researchers say their software could help animal trainers learn to be more effective in their jobs as well. Good researchers!

In addition to Peng and Taylor, the authors of “A Need for Speed: Adapting Agent Action Speed to Improve Task Learning From Non-Expert Humans” include Brown University’s James MacGlashan and Michael Littman, and NCSU’s Robert Loftin and David Roberts.
