Artificial intelligence and robots are hot topics right now, but will we ever get to the stage we saw 50 years ago on “The Jetsons,” where your typical household could have a robotic maid named Rosie?
Robotics pioneer David Hanson says yes, and he thinks it’ll take less than 50 more years. That’s the prediction he delivered on Wednesday during a Skype-enabled panel presentation on the future of AI and robotics in Seattle, sponsored by the MIT Enterprise Forum of the Northwest.
A veteran of Disney’s Imagineering operation, Hanson has produced custom-made robot heads capable of eerily humanlike expressions. Now based in Hong Kong, he’s gearing up to unveil a line of production-model robots that take advantage of recent AI advances as well as the toymaking prowess of the Pearl River Delta.
He’s not yet ready to say how much those robots will cost. That will come later this year. But he foresees a day when humanoid robots will cost as much as cars.
“It is possible that we can get the cost down to tens of thousands of dollars for walking, humanoid robots that can grasp and manipulate,” he said from Hong Kong. “They can perform as well as the robots like the KAIST DARPA Robotics Challenge grand prize winner.”
That robot, called DRC-HUBO, cost somewhere around $500,000 to $1 million to build in the lab, according to Jun Ho Oh, the leader of the research team at the Korea Advanced Institute of Science and Technology.
“If you scale mass production and you start addressing real market demand, it is possible that we could get the cost of similar robots down to tens of thousands of dollars,” Hanson said. “The kinds of faces we make could also be done for a similar low cost. In the coming years you could see that.”
What’s really interesting to Hanson is what will happen when friendly-looking animatronic robots are matched up with increasingly capable artificial intelligence.
“We’d be talking about transforming toy robots into character interfaces,” he said. “Imagine if you took, say, a Teddy Ruxpin or something like Furby from 2005. … That was a $29 product. Imagine that’s connected to this kind of super-intelligence in the product, interacting with people.”
He estimated that the price tag for those kinds of next-generation AI toybots could range from tens of dollars to tens of thousands of dollars.
“What I expect is that as more and more killer apps appear, we’ll see more and more generalization of a platform,” Hanson said. “So you’ll see more and more general-purpose personal robots. You had a personal computer revolution; you’ll have a personal robotics revolution.”
In addition to building the robots for that revolution, Hanson Robotics is developing tools aimed at fostering artificial general intelligence, or AGI. One of those tools is a platform called MindOS, which is being developed through the OpenCog software initiative.
“Together we can create this grassroots moonshot for super-intelligence in machines,” Hanson said.
Hanson argues that there’s already demand for humanlike robots, which can stand in for patients in medical training. One report estimates that the medical simulator market will surpass $2 billion by 2019.
And that’s just the start. Other breeds of robots can help autistic children learn social skills, serve as tutors for college students or point customers to the right aisle in a hardware store. (The robot-parts aisle?)
Will artificial intelligence necessarily lead to super-intelligent machines? Hanson thinks that’s the way things should go: he sees the development of robots that care about humans as the best way to make sure the machines don’t kill us. But other AI researchers aren’t so sure the field needs to go that far.
“You can have a machine do intelligent behavior without giving it awareness,” said Mark Hammond, the founder and CEO of Berkeley-based Bonsai AI.
During Wednesday’s panel, Hammond acknowledged that luminaries such as Elon Musk and Stephen Hawking see the rise of artificial intelligence as a “very scary, dangerous thing.” But he said most experts in the field draw a distinction between narrow AI and strong AI.
Narrow AI is advancing quickly. That’s the kind of limited intelligence that’s used by your smartphone assistant to figure out your voice commands and find a nearby restaurant, or used by Google DeepMind’s AlphaGo program to figure out how to win a board game.
Hammond said strong AI, or artificial general intelligence – which involves the ability to take independent action in unpredictable circumstances – isn’t the main focus for AI researchers right now. “So when people come and say, ‘What about all these existential risks, and what happens when the machines become super-intelligent?’ … They say, ‘Well, I’m not doing that,’” he said.
That could change in the future. Mark Greaves, technical director of analytics for the Pacific Northwest National Laboratory, said a key turning point could come around the year 2040, by which time it’s “pretty reasonable” to assume that humans and robots will be having sophisticated interactions.
“Natural conversation will be a game-changer,” Greaves said. But it’s not at all clear whether the rapid rise of AI will accelerate exponentially between now and then – in a scenario that futurist Ray Kurzweil calls the singularity – or level off instead.
Capping off Wednesday’s panel discussion, Google researcher Illia Polosukhin said AI tools could lead to the development of personalized teacher-bots, doctor-bots – and even voter-bots.
“We can actually make a vote on any issue, just by using this virtual representative,” Polosukhin said. “There’s no way we can do electronic votes for every question, if the person actually needs to vote. There are physical issues with that. But if you have an electronic representative that knows everything about you, it can interact on your behalf. It can provide enough information to figure out what the country in general wants.”
As Donald Trump might say, that’d be yuuuuge. And at least as controversial as Trump himself.