Imagine a future where life’s most boring or dangerous tasks are handled by machines. Time otherwise spent commuting, scheduling appointments, or sifting through mail could be devoted to human passions instead.
That’s the best-case scenario for noted computer scientist Oren Etzioni, CEO of the Seattle-based Allen Institute for Artificial Intelligence, also known as “AI2,” founded by Microsoft co-founder Paul Allen.
“An AI utopia is a place where people have income guaranteed because their machines are working for them,” he explains on a new episode of GeekWire’s radio show. “Instead, they focus on activities that they want to do, that are personally meaningful like art or where human creativity still shines, in science. They’re engaged in those activities because of the interaction. Another one would be, of course, interaction between people and not because they need to make a buck.”
It’s a nice picture, but Etzioni is the first to recognize that his vision of what should be is not necessarily what will be. Technology often promises to free people up but — perhaps through flaws in ourselves rather than flaws in the machines — we tend to use these advances to extract even more work out of our already busy lives.
“I do think that in American society, in particular, we do make that choice,” he says. “By the way, we might be working harder than ever, even in this utopia, but it would be on things that have real significance to us, as opposed to paying the rent.”
Etzioni’s vision also involves machines taking over jobs that are too dangerous for humans, like cleaning up after a nuclear disaster or working in a coal mine. But despite his optimistic view of AI, Etzioni is also deeply concerned about the short-term impact automation will have on jobs.
“There will be very real disruption,” he says. “Jobs will be taken away and those people need to be taken care of. People have floated the idea of universal basic income, of negative income tax, of training programs. We have an obligation to figure out how to help people cope with the rapidly changing nature of technology.”
Despite these concerns, Etzioni believes that in the long run, AI has the potential to make the world a better place and solve some of society’s most troubling problems. That, he says, is the impetus for AI2, which is preparing to grow its 50-person team to 75 in the next year. The organization’s headquarters, in Seattle’s Fremont neighborhood, are expanding by 5,000 square feet to accommodate new hires.
“We hire people who are sometimes fascinated by AI, fascinated by doing new things as opposed to maintaining software and solving technical problems that aren’t that interesting, frankly, at bigger companies,” says Etzioni of Seattle’s competitive job market. “That draws a bunch of people to us. Then a bunch of people are mission-oriented. Our motto is AI for the common good.”
It’s a goal shared by OpenAI, a non-profit backed by Tesla and SpaceX founder Elon Musk. But Musk and Etzioni have expressed vastly different perspectives on artificial intelligence. Last summer, Musk penned an open letter with Stephen Hawking, Steve Wozniak, and other industry leaders about the threat of a global AI arms race.
“I think we should be very careful about artificial intelligence,” Musk said during an interview at the AeroAstro Centennial Symposium. “If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.”
The differences between the two organizations extend beyond the perspectives of their leaders. AI2 focuses on training computer programs in common sense and natural language. OpenAI, Etzioni says, is more concerned with “deep learning” — building artificial neural networks loosely inspired by the brain to create pattern-recognition algorithms.
Etzioni says he’s dubious about OpenAI’s true ambitions: “They say, ‘for the safety of humanity.’ I have to admit, I’m a little bit skeptical of that comment, because they say, ‘well, if we find something valuable, we’re going to patent it.’ If you’re going to patent it, then you’re putting the information out there. Is that really for the safety, or maybe other considerations that are at play?”

For Etzioni’s part, he thinks much of the hand-wringing about intelligent machines is overblown. Pop culture, he says, fosters a misunderstanding of the status of AI and where it’s headed.
“I’m not trying to summon the demon,” he says, referencing Musk’s comments. “I’m trying to use AI to make the world a better place. To help scientists. To help us communicate more effectively with machines and collaborate with them.”
Listen to the GeekWire radio show above, and check back later this week for GeekWire’s full Q&A with Etzioni.