Boston Dynamics’ scary-smart robots make use of sophisticated computer vision, but MIT is following a different strategy with its Cheetah 3 robot.
The vision-free version of MIT’s 80-pound, Labrador-sized Cheetah 3 can find its way across a pitch-black room and up an obstacle-littered stairway without the use of cameras or environmental sensors. Instead, it relies on what engineers call “blind locomotion” — that is, the feedback from its robotic legs and its algorithm-based sense of balance as it scrambles through the dark.
“There are many unexpected behaviors the robot should be able to handle without relying too much on vision,” designer Sangbae Kim, an associate professor of mechanical engineering at MIT, said today in a news release.
“Vision can be noisy, slightly inaccurate, and sometimes not available, and if you rely too much on vision, your robot has to be very accurate in position and eventually will be slow,” Kim said. “So we want the robot to rely more on tactile information. That way, it can handle unexpected obstacles while moving fast.”
The strategy is well-suited for getting around disaster zones or other risky environments.
“Cheetah 3 is designed to do versatile tasks such as power plant inspection, which involves various terrain conditions including stairs, curbs and obstacles on the ground,” Kim said. “I think there are countless occasions where we want to send robots to do simple tasks instead of humans. Dangerous, dirty, and difficult work can be done much more safely through remotely controlled robots.”
Kim’s team developed two new types of algorithms for the vision-free Cheetah.
A contact detection algorithm helps the robot determine the best time for a given leg to switch from swinging in the air to stepping on the ground, depending on the resistance it senses when it puts its foot down. The algorithm makes use of readings from gyroscopes, accelerometers and the relative positions of the legs with respect to the ground.
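To make the idea concrete, here is a minimal, hypothetical sketch of how such a touchdown decision might fuse proprioceptive cues. The function names, weights, and thresholds are illustrative assumptions, not MIT's actual implementation, which the article does not detail.

```python
# Hypothetical sketch of swing-to-stance contact detection for one leg.
# All scales, weights, and thresholds below are made-up illustrative values.

def contact_probability(force_estimate, foot_height, accel_z,
                        force_scale=25.0, height_scale=0.03):
    """Fuse three proprioceptive cues into a rough contact likelihood.

    force_estimate : estimated ground-reaction force on the foot (newtons)
    foot_height    : foot height above the expected ground plane (meters)
    accel_z        : vertical body acceleration reading (m/s^2)
    """
    # A rising force estimate suggests the foot is pressing on something.
    p_force = min(max(force_estimate / force_scale, 0.0), 1.0)
    # The foot being at or below the expected ground plane adds confidence.
    p_height = min(max(1.0 - foot_height / height_scale, 0.0), 1.0)
    # A sudden vertical deceleration hints at an impact.
    p_impact = 1.0 if abs(accel_z) > 2.0 else 0.0
    # Weighted blend of the cues (weights are arbitrary here).
    return 0.5 * p_force + 0.3 * p_height + 0.2 * p_impact

def should_switch_to_stance(force_estimate, foot_height, accel_z):
    """Tell the leg to stop swinging and start supporting the body."""
    return contact_probability(force_estimate, foot_height, accel_z) > 0.6
```

The point of the sketch is the fusion step: no single sensor decides the transition; the force, position, and inertial cues vote together, which is what lets the robot tolerate debris that arrives earlier or later than the expected ground plane.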
“If humans close our eyes and make a step, we have a mental model for where the ground might be, and can prepare for it. But we also rely on the feel of touch of the ground,” Kim explained. “We are sort of doing the same thing by combining multiple information to determine the transition time.”
The researchers tested the algorithm by forcing Cheetah 3 to deal with debris such as wooden blocks and rolls of tape as it trotted on a treadmill and climbed a staircase.
A model-predictive control algorithm, meanwhile, anticipates how the robot’s body and legs should be positioned a half-second in the future, depending on what kind of force is applied by a given leg as it makes contact with the ground.
“Say someone kicks the robot sideways,” Kim said. “When the foot is already on the ground, the algorithm decides, ‘How should I specify the forces on the foot? Because I have an undesirable velocity on the left, so I want to apply a force in the opposite direction to kill that velocity. If I apply 100 newtons in this opposite direction, what will happen a half-second later?’”
The predictive algorithm is updated 20 times a second. To test its performance, researchers kicked, shoved and yanked the robot while it traveled on the treadmill or staircase — then adjusted the algorithm accordingly. (Let’s hope Cheetah 3 doesn’t hold a grudge.)
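Kim's kick example boils down to a predict-then-choose loop. A toy sketch of that reasoning is below; the point-mass model, the mass figure, and the candidate force list are illustrative assumptions, not the controller's real dynamics model.

```python
# Illustrative sketch of the reasoning in Kim's example: predict the
# body's sideways velocity a half-second ahead under a candidate foot
# force, then keep the force whose outcome best cancels the push.
# The mass and force candidates are made-up values for demonstration.

MASS_KG = 36.0     # roughly the robot's 80-pound mass
HORIZON_S = 0.5    # the half-second lookahead described in the article

def predict_velocity(v_now, force_n, mass=MASS_KG, horizon=HORIZON_S):
    """Point-mass prediction: v(t + T) = v(t) + (F / m) * T."""
    return v_now + (force_n / mass) * horizon

def choose_corrective_force(v_now,
                            candidates=(-100.0, -50.0, 0.0, 50.0, 100.0)):
    """Pick the candidate force whose predicted half-second-later
    velocity is closest to zero (i.e. best kills the unwanted push)."""
    return min(candidates, key=lambda f: abs(predict_velocity(v_now, f)))
```

In the real controller this evaluation runs 20 times a second over the robot's full body and leg state, so each new prediction can correct the last one as fresh sensor data arrives.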
Eventually, Kim and his colleagues will add computer vision to the mix, but for Cheetah 3, they wanted to work on blind locomotion first.
A robot that can walk, run or climb in the dark, much more quickly and surely than a human? That’s just the thing you’d want to see in the aftermath of an earthquake — and just the thing you wouldn’t want to see in the aftermath of a robot uprising.
The vision-free technology, plus other enhancements to the Cheetah model, will be the subject of a presentation in October at the International Conference on Intelligent Robots and Systems. The research was supported, in part, by Naver, Toyota Research Institute, Foxconn and the Air Force Office of Scientific Research.