Artificial intelligence will change the world, for better and for worse — and we need to be prepared.
That’s the takeaway from Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, University of Washington computer science professor, and our guest on this episode of the GeekWire Podcast.
Etzioni is optimistic about the positive impacts of AI, like self-driving vehicles that could prevent thousands of fatal car crashes each year. But he says AI will also fundamentally change the world’s economy, killing millions of jobs and potentially causing an economic revolution larger than we can predict. So what should we do about it?
AI will greatly reduce the demand for jobs — what does that mean for the economy? “Let’s talk about what you’re going to live off of, because that’s a very bottom line question. Some people have proposed universal basic income, UBI, basically making sure that everybody gets a certain amount of money to live off of. I think that’s a wonderful idea. The problem is, we haven’t been able to guarantee universal healthcare in this country. That’s under siege. It’s really hard to see how we’re going to have UBI if we don’t have UHI, universal health insurance. I think that’s really the issue. It’s not about what I want. It’s what will the electorate and the body politic go for. My hope is that if we give people money in exchange for services that are currently often not available or where they’re volunteers, that will be a more acceptable tradeoff.”
Why tech jobs aren’t the answer: “What are we going to do as automation increases, as computers get more sophisticated? One thing that people say is we’ll retrain people, right? We’ll take coal miners and turn them into data miners. Of course, we do need to retrain people technically. We need to increase technical literacy, but that’s not going to work for everybody. We have a very large set of people in this country who don’t have a college degree, who didn’t even finish high school. I don’t think that all the coal miners — or even more realistically, say, the truck drivers whose jobs may be displaced by self-driving cars and trucks — they’re all going to go and become web designers and programmers.”
[Oren Etzioni’s Wired essay: Workers displaced by automation should try a new job: Caregiver.]
Why retraining people as caregivers is one option: “By ‘caregivers,’ we mean the whole range, from babysitters all the way to people working as companions to the elderly, or even nurses and highly trained individuals. The observation that I made is that it’s true that robots could pick up some of that, but there’s an essential element of caregivers that’s uniquely human. That’s the connection part. Even if my companion is a robot, wouldn’t I rather my companion be a person so that connection is genuine? It’s warm. It’s more human… The biggest problem, first of all, is cost. Let’s say our population is growing older. We have special needs children. We have various people who need help. The question is, who’s going to pay for it? We have to have the will as a society and as a government to cover at least part of the cost of these services. If we do that, then I think we have really an opportunity to take folks who, say, worked in trucking, or say, worked at McDonald’s, and help provide them with pretty limited retraining to become people who deliver food, but with a smile — who have a cup of coffee with an elderly person, who take him or her for a walk.”
So why bother? Is it even worth preparing for the AI revolution? “We do need to prepare. Let me give you an analogy that makes the point. Do we really know what’s going to happen when the big one hits Seattle? What is going to be the nature of that earthquake? What’s going to break down, and so on? No, we don’t. Do we need to go through exercises? Do we need to prepare? Do we need to make sure we have water and food? Heck yeah. These are slow processes, if we’re talking about retraining, if we’re talking about creating programs. If we don’t start having the conversation now and start preparing, we’re just relying on, ‘Hey, it’ll work out, it worked out before.’ The one other point is — I’m not an expert historian — but my understanding is that in the past, there was a lot of disruption and a lot of pain in that process. It’s easy for us now, decades or even more than a century later, to say, ‘Look, it all worked out.’ Can we minimize the pain involved, especially for the most vulnerable people? I think it’s going to be a long time before GeekWire is fully automated. I think it’s going to be a lot shorter time before millions of truck drivers are out of a job.”
On companies using personal data: “Really the people who are enabling that are you and me by continuing to provide Google and Facebook with our information. It’s a little bit like the war on drugs, right? We can do whatever we want, build a wall, interdict shipments, and so on. I’m not saying that we shouldn’t do at least the shipment case, but the bottom line is so long as there’s a huge demand for cocaine in this country, then people from outside the country will find a way to supply it. So long as you and I are willing to trade our most private information and queries for a free search engine, and a free social service, then there are going to be people that are going to figure out how to monetize that.”
[Etzioni in The New York Times: How to Regulate Artificial Intelligence]
On how we should restrict and regulate AI: “First of all, I said, ‘Don’t regulate AI. It’s such a vibrant, fast-moving field.’ It’s not even clear exactly where the boundary is between AI and computer science. I said, ‘Regulate the applications of AI.’ When we think about those applications, let’s say in cars or in shopping, there are some sensible rules we should think about. One of them, for example, is around liability. If my AI car, god forbid, runs over your kitten, whose fault is that? I wanted to say that ‘My AI did it,’ is not an excuse. I can’t say, ‘Hey, look. This is sophisticated technology. I don’t understand it. Don’t blame me, blame the AI.’ It has to be the case that we take responsibility. If we use AI technologies, we take responsibility for those actions. That’s rule number one.
The second one is that it’s becoming harder and harder to tell, is this a person or is this a Twitter bot? We saw that, of course, in the election. I think it’s important for us to have a rule that if a system is really an AI bot, it ought to be labeled as such. ‘AI inside.’ It shouldn’t pretend to be a person. It’s bad enough to have a person calling you and harassing you, or emailing you. What if they’re bots? An army of bots constantly haranguing you, that’s terrible.
Then, the third piece is around privacy that we talked about earlier. AI has even bigger capabilities to draw out information from us. Two quick examples. One is AI Barbie. We have toys, including Barbie, that have a chip and they start to interact with our kids. Well, what happens with the information that they elicit from innocent kids? Can they just ship that to the cloud, and use it to target and sell us things even more? That seems highly problematic. Even something seemingly innocent, like the Roomba that’s going around your house cleaning the floors. Surely that’s not a big deal, except it was reported in The New York Times that the company running that robot was considering selling the map information that the Roomba automatically constructs — the map of your house — selling that to third parties. You would never dream of it. Again, as the threats to our privacy increase, we need to have clear rules about what AI is allowed to disclose about us, and what it should keep private.”