Computer vision and navigation have improved by leaps and bounds, thanks to artificial intelligence, but how well do the computer models work in the real world?
That’s the challenge that Seattle’s Allen Institute for Artificial Intelligence is setting for AI researchers over the next few months, with geek fame and glory as the prize.
Ani Kembhavi, a research scientist at AI2, says RoboTHOR focuses on the next step. “If you can train a deep-learning, computer vision model to do something in an embodied environment … how well would this model work when deployed in an actual robot?” he told GeekWire.
That’s a crucial step for putting such models to work, in applications ranging from self-driving cars to robotic caregivers. But testing the models in actual robots, or cars, is expensive. “A lot of the research in this important topic can only be done by an organization with a lot of funding,” Kembhavi said.
He said AI2’s RoboTHOR Challenge aims to “democratize” the development of computer models that translate more easily to the real world, in line with the nonprofit institute’s mission to advance the state of artificial intelligence for the common good.
The challenge sets up a virtual world with 89 different apartments in it. Inside each simulated apartment is a collection of everyday objects — including, say, chairs and a table, a sofa and a lamp, even a computer manual.
AI2 will make the simulation software and training data for 75 of the apartments available to all comers via the RoboTHOR website. (For what it’s worth, “THOR” stands for The House Of inteRactions.)
“We provide all the help that you need to start training the model,” Kembhavi said.
The computer models will be judged on how well they’re able to navigate to an object of a specified category from an arbitrary location. For example, if the model is given the category “apple,” it will have to identify the apple in the room, plot a course to get close to the apple and then signal that it’s been found. Think of it as a virtual scavenger hunt.
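The episode structure of that scavenger hunt can be sketched in a few lines of Python. Note that `GridEnv`, its action names, and the greedy policy below are illustrative assumptions for this sketch, not the actual AI2-THOR API — a real challenge entry would choose actions from camera frames with a learned model, never reading the target's coordinates directly.

```python
# Sketch of a RoboTHOR-style object-goal navigation episode.
# GridEnv is a toy stand-in for the simulator; illustrative only.

from dataclasses import dataclass

@dataclass
class GridEnv:
    """Toy apartment: the agent and objects live on an integer grid."""
    agent: tuple    # (x, y) agent position
    objects: dict   # category -> (x, y) object location

    def step(self, action):
        """Apply a discrete movement action and return the new position."""
        moves = {"MoveNorth": (0, 1), "MoveSouth": (0, -1),
                 "MoveEast": (1, 0), "MoveWest": (-1, 0)}
        if action in moves:
            dx, dy = moves[action]
            x, y = self.agent
            self.agent = (x + dx, y + dy)
        return self.agent

    def success(self, category, radius=1):
        """The episode succeeds if the agent stopped near the target."""
        ox, oy = self.objects[category]
        ax, ay = self.agent
        return abs(ax - ox) + abs(ay - oy) <= radius


def navigate(env, category, max_steps=50):
    """Greedy stand-in policy: step toward the target, then signal 'Stop'.

    A learned model would instead pick each action from visual
    observations, without access to the object's coordinates.
    """
    for _ in range(max_steps):
        ax, ay = env.agent
        ox, oy = env.objects[category]
        if abs(ax - ox) + abs(ay - oy) <= 1:
            return "Stop"            # signal that the object has been found
        if ax < ox:
            env.step("MoveEast")
        elif ax > ox:
            env.step("MoveWest")
        elif ay < oy:
            env.step("MoveNorth")
        else:
            env.step("MoveSouth")
    return "Stop"                    # step budget exhausted
```

For example, an agent at `(0, 0)` asked to find the "apple" placed at `(3, 2)` would walk east then north and stop one cell away, which counts as a successful episode.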
The real trick is to see how well the model does when it’s uploaded into a real robot, conducting a scavenger hunt in a real apartment. For that phase of the challenge, AI2 has set up a room that can be quickly configured to look like any of the apartments in the repertoire.
Teams will be able to test their models on 14 apartment scenarios, using LoCoBot, a robot that was developed specifically for these kinds of experiments.
Roozbeh Mottaghi, a senior research scientist at AI2 who’s also a computer science professor at the University of Washington, said LoCoBot was selected because it has all the sensors and mobility needed for the challenge. It’s also relatively cheap, with a price tag in the range of $5,000.
Organizers of the challenge have designed the environment so that teams can construct their own real-world RoboTHOR rooms if they want, using IKEA furniture and standard objects to spread around the room. The cost of replicating the RoboTHOR room in all its incarnations would be roughly $10,000.
The top-scoring teams will be invited to demonstrate their models in a RoboTHOR room that’ll be set up for an Embodied AI workshop at the Conference on Computer Vision and Pattern Recognition, scheduled June 14-15 in Seattle.
Winners will also be able to exercise their bragging rights in the research papers and presentations that will no doubt come out of the RoboTHOR Challenge.
“Those are the prizes,” Kembhavi joked.
Check out this video to see LoCoBot at work in a different type of experiment: