At times, modern artificial intelligence still feels like science fiction. A few decades ago, today's AI programs would have seemed almost outlandish: self-driving cars, systems that have mastered the world's most challenging board game, and even programs that could alert doctors to medical errors before they happen.
Despite the incredible progress and potential, public opinion of AI remains rooted in science fiction: evil entities out to destroy mankind. The field gets a bad rap in the press, in Hollywood, and even from tech and science leaders like Stephen Hawking and Elon Musk.
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2) and longtime AI researcher, says this depiction of an “evil AI” is far off the mark from the reality of today’s tech. And worse, it distracts our discussion from very real concerns surrounding AI research, like the loss of jobs to AI and the possibility of AI-fueled autonomous weapons.
Speaking at TEDxSeattle this weekend, Etzioni said that, smart as AI systems are, they are still incredibly limited — deeply intelligent in one area, but without the basic autonomy or breadth of knowledge that would allow them to go rogue.
Take AlphaGo as an example. The program was developed by Google's DeepMind to master the ancient Chinese board game Go, widely regarded as the most difficult board game in the world to master. This spring, it defeated the world Go champion, Lee Sedol, four games to one.
But that shouldn’t worry us, Etzioni said. After all, AlphaGo may be the Go world champion, but that’s about the only thing it’s good at.
“AlphaGo doesn’t understand the game, it can’t explain itself — it doesn’t even know it won,” he said. AlphaGo is “nothing more than a fancy calculator,” and in this respect, it is identical to every other AI system we have built so far.
Etzioni said his 6-year-old son “is more autonomous than any AI system. … He can make his choices, he can cross the street, he can explain himself, he can understand English — when he wants to. That’s the difference.”
So we shouldn’t worry about AI rising up and eradicating the human race. However, there are real concerns about AI development, Etzioni said.
“How about jobs? That’s a concern,” he said. AI development will almost certainly eliminate some jobs while creating new ones in other sectors. But how long will that new job creation take? And what about people without the training to work in high-tech fields?
“I stay up at night worrying about this. The point is, though, that the doomsday headlines, the ‘Terminator’ scenario, those are distractions from the real concerns like AI and jobs, concerns that we ought to be thinking about,” Etzioni said.
Another concern is the prospect of AI with genuine autonomy, particularly autonomous weapons. Etzioni and other AI scientists are united in condemning the creation of autonomous weapons, and have signed a letter to President Obama opposing their development. Even so, the technology needed to build such weapons remains a ways off.
And despite the hype and headlines around doomsday scenarios, AI's potential benefits are already having a real impact on our world, and the technology stands to save thousands of lives in the coming years.
Etzioni said AI2 is attempting to unlock those potential benefits, including AI systems in hospitals that could prevent fatal medical errors. AI-fueled automated cars could save thousands of lives each year.
So what’s the takeaway?
“AI is neither good nor evil. It’s a tool. It’s a technology for us to use,” Etzioni said. Instead of fear mongering, we should be advocating for the responsible development and use of AI, and leveraging its potential to find solutions that will benefit us all.