The rise of artificial intelligence is posing interesting challenges for society, raising questions about ethics in a modern world where robots have intelligent decision-making power.  (Bigstock Photo)

PITTSBURGH — David Danks thinks a lot about the implications of artificial intelligence. In fact, the Carnegie Mellon University philosophy and psychology professor presented his very first research paper at an artificial intelligence conference in Seattle in 2001.

Now, 17 years later, Danks sits at the center of one of the most fascinating (and, some might say, terrifying) debates: How will artificial intelligence affect the human species?

Or, put another way, should we be scared of the robotic future?

There’s certainly enough sci-fi writing — not to mention press coverage, including a recent piece in The New Yorker with the ominous title “Welcoming Our New Robot Overlords” — to increase anxiety levels about artificial intelligence. And Danks, an expert in studying the complex dynamics between humans and autonomous systems, is not one to diminish those fears.

However, he’s optimistic that we can figure things out, in part by introducing ethics into the AI product development process earlier and requiring more transparency about the values-based decisions that went into creating the technologies.

Ethics discussions shouldn’t get added to the computer scientists’ playbook as a “plug-in at the end, where it’s like: ‘Yeah, but don’t kill people,’” Danks notes.

There’s certainly a heightened fear today around automation, and the anxiety is real. Tasks historically handled by humans are now being handed off to machines. According to recent research, there’s a 50 percent chance that AI will be able to outperform humans in all jobs within the next 45 years, with full automation potentially occurring within 120 years. Some jobs — like retail sales — are projected to be fully automated in less than 20 years.

Given the fears around AI, I sat down with Danks in his first-floor office in Baker Hall on the Carnegie Mellon campus to hear what’s going on. Let’s just say, I needed to be reassured that we humans will be OK.

Carnegie Mellon University philosophy professor David Danks believes humans can learn to live with AI, especially if computer scientists think about ethics questions early. (GeekWire photo / John Cook)

An easy-going Ohioan with a boyish face, Danks doesn’t look the part of the grizzled philosophy professor.

But Danks — whose research papers have titles such as “Trust But Verify: The Difficulty of Trusting Autonomous Weapons Systems” and “Algorithmic Bias in Autonomous Systems” — is in a perfect place to discuss the weighty topics of AI and automation, in part because his counterparts at the nearby School of Computer Science at Carnegie Mellon are some of the best in the world at developing intelligent robots. In that regard, Danks has a front-row seat to the coming revolution, one that Barack Obama even warned about in his last public interview as president.

Automation is not a new phenomenon, but in the past five years AI has really begun to “impact people’s lives in ways they can notice,” says Danks.

Autonomous vehicles appeared on the roads. Chat-bots showed up. People lost jobs. AI was everywhere, it seemed.

“The technology — in the last three to five years —  has gotten good enough that we are seeing a lot more replacement of human cognitive labor, rather than augmentation of human labor,” he said.

What does Danks mean by that?

Take, for example, word processing software. That was a classic example of augmenting human cognition, since the previous practice of writing a paper by hand and then handing it to someone to be typed was painfully laborious.

Word processing software might suggest a better sentence structure or point out misspellings, but it does not actually write the paper for you, said Danks. In other words, writers — journalists, academics, marketers, etc. — still need to use their brains.

Contrast that with new AI-based software tools that now write complete news articles.

“That looks like a replacement of the human, not an augmentation of the human,” said Danks. “And, I think, that is a lot scarier for people.” (Editor’s note: For the record, this news report was generated by a human).

Danks admits that, even with word processing software, some cognitive labor is being offloaded to the machine; it is “just not the part you really care about.”

What’s changed is that the stuff that really matters, the things we thought made humans special, is “now getting replaced” because of advances in technology. That hits people close to home.

But isn’t the AI revolution similar to the transition from an agrarian society to an industrialized society when farm workers were displaced by technological change?

Not necessarily, says Danks. Even though his economist friends often disagree, Danks says the loss of jobs by itself isn’t the heart of the matter. It goes much deeper than that, something President Obama touched on in his final interview when he discussed automation and said jobs are “about dignity and feeling like you’ve got a place in the world.”

Danks agrees. What really worries people about AI is that a part of their identity is being replaced. People fear robots are taking a piece of the human experience.

He continued:

Historically, all of those previous revolutions you talked about were replacements of human physical labor. What we are talking about now is the replacement of human cognitive labor: thinking. If you look back to Aristotle — man is the rational animal — and if you look back through, say, the last 2,000 to 2,500 years of human history, over and over the thing that makes us different, people believed, is the things we build; we think we are the rational ones; we think we can invent new solutions. We have these amazing brains; literally, there is nothing else like them in the animal world. That’s now being threatened.

The thing that made us as a species special, that’s now being replaced. The fact that I could plow a row did not make me special, because an ox could do that. The fact that I could figure out which row to plow, that was special, because the animal couldn’t do that. That’s where this is different, because what’s being replaced, this time, is something that we always thought made us unique on this planet.

For those interested in the ethical dilemmas posed by AI, here’s more from GeekWire’s interview with Danks.

GeekWire: I am really interested in this idea of how you proactively shape technology so that it reflects our values as a society. How do you do that with AI?

Danks: “One important element is actually a kind of education and outreach: having the technology developers become comfortable with the fact that values are being implemented in their technologies, and that that’s okay. Sometimes people, even people in computer science (and I say this as somebody who interacts with a lot of them), have the view that they are building value-neutral technology and that it’s how it gets used — that’s where the values come in. Traditionally, there is this divide. Science is neutral and just trying to capture the world. Technology — the things that we build with science — that’s where ethics starts to come in, professional ethics and that sort of thing. And so I think there has actually been a bit of a challenge in AI, because it does historically come out of computer science, which has been thought of as this relatively value-neutral thing. As we start to build products that are going out into the world, as we build technologies that are interacting with the world, I think it gets hard to maintain that.”

Danks: “But here’s a much more prosaic, but more important, ethical problem that faces somebody trying to build an autonomous vehicle. Should your vehicle … always follow the law or always drive as safely as possible? If everybody around you is going 45, the safest speed to drive is around 40, slightly slower than the prevailing traffic. What happens if that’s the case in a 25-mile-per-hour speed limit zone? I don’t know about Seattle, but certainly here in Pittsburgh the prevailing traffic speed is frequently much higher than the speed limit.
Well, you have a choice as a designer: does the car go 25, which is less safe, or does it go 40 and break the law? That’s an ethical challenge. That’s a question about what you’re going to prioritize in your values. That’s not a problem that requires these extreme, weird cases; that’s something you as a designer have to decide when you’re basically giving a function that is going to be optimized in the planning procedure of the autonomous vehicle. For it to go one block, you have to have solved this…. It’s okay to ask that question, because those questions can’t be avoided.”
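To make the designer’s choice concrete, here is a minimal, hypothetical sketch (not drawn from Danks’ work or from any real vehicle software) of how such a priority might be encoded as weights in a cost function that a planner optimizes. The function, the weights w_legal and w_safety, and the specific cost terms are all invented for illustration.

```python
# Hypothetical illustration: a planner's target speed can encode an ethical
# choice between legality and safety. All weights and cost terms are made up.

def target_speed(speed_limit: float, prevailing_speed: float,
                 w_legal: float, w_safety: float) -> float:
    """Pick the speed that minimizes a weighted cost.

    Cost terms (illustrative only):
      - legal cost grows as the car exceeds the posted limit
      - safety cost grows as the car deviates from ~5 mph below prevailing traffic
    """
    safest_speed = prevailing_speed - 5.0  # Danks' example: slightly slower than traffic
    candidates = [s / 10.0 for s in range(0, int(prevailing_speed * 10) + 1)]

    def cost(s: float) -> float:
        legal_cost = max(0.0, s - speed_limit) ** 2
        safety_cost = (s - safest_speed) ** 2
        return w_legal * legal_cost + w_safety * safety_cost

    return min(candidates, key=cost)

# A law-first weighting keeps the car near the 25 mph limit; a safety-first
# weighting lets it drift toward the 40 mph "prevailing traffic" speed.
print(target_speed(25, 45, w_legal=10.0, w_safety=1.0))  # closer to the limit
print(target_speed(25, 45, w_legal=1.0, w_safety=10.0))  # closer to prevailing traffic
```

Shifting the weights shifts the behavior, which is exactly the value judgment Danks says a designer cannot avoid making.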

GeekWire: It’s hard enough to design these AI-based systems. Now you’re talking about ethical issues on top of that, and you can get tied up on those for a long time. People have different social values and different ways of looking at this.

Danks: “Absolutely. So step one is just getting people to recognize this. Step two is to recognize the people and methods that we have, whether in universities or think tanks or elsewhere, for trying to figure these things out, for trying to tackle these problems. Ethics is not guesswork. I think sometimes people have the view that ethics is what happens at midnight in your dorm room after a few beers. We actually have ways of thinking about ethics. There are a lot of issues on which people who think hard about these things have reached consensus. There are also ways to be more and less transparent. A company could be transparent about the kinds of values that they are putting into their system without thereby revealing technological details that are proprietary, confidential corporate information. Obviously, Uber (Advanced Technologies Group) doesn’t want to let everyone know what the underlying source code is, but they could say things like: ‘Here are the factors that are provided as part of the input to the value function.’”

GeekWire: Did they do that?

Danks: “No, they did not.”

GeekWire: Why not?

Danks: “Because they are not required to, and I think they do not want to do that…. Right now, there is very little regulation. They don’t have to disclose these things. They don’t have to say what they are doing… As you said, you’ve gotta make a choice, and that’s okay. It’s like putting together a menu. We know that how you assemble a menu in a restaurant biases people to purchase one item over another. You have to have a menu. Just recognize that that’s the case. If you care about how you might be biasing people, whether to make them buy the thing that makes you a lot of money or to buy healthier things, then talk to people who study this. Talk to the psychologists who study these things.
It’s okay, but try to do it in a sensible way… It’s probably utopian to think that’s going to happen without some sort of regulatory nudge for companies in a lot of domains, but … various companies are starting to bring people in to think about the ethical issues that their technologies might create and to try to get ahead of them.”

Danks: “In the long run, I am a firm believer that that will lead to corporate success as well. If the technology does what people want it to and they feel comfortable using it and they trust it, that’s how you get increased adoption rates. That’s how you get people willing to tolerate occasional failures. When technology is glitchy, as it often is, it really comes back to how much the user trusts the technology. Trust is more than just reliability.”

GeekWire: What do you say to those who are pessimistic about the prospects of doing AI in an ethical way?

Danks: “I think that history tells us that it’s never one or the other….  I’m ultimately an optimist, but at the end of the day I suppose I am a realist who says we’re gonna muddle our way through it. The key is to be able to engage with the people developing the technology. Help them understand that it is not a zero-sum game. It’s not ethical or profitable, where those are mutually exclusive. It’s not ethical or fast, where those are mutually exclusive. I think that often the rhetoric in these debates seems to fall into this — either there’s no regulation at all and the free market runs everything, or it’s entirely top-down. And this binary thinking is a mistake…. Sometimes there will be these tensions, but I think moving past this sort of oppositional, binary, zero-sum rhetoric is really the first step toward realizing the optimistic goal.”

GeekWire: How do people get more comfortable with these technologies?

Danks: “There’s a lot of movement for transparent AI or explainable AI. I think that that is a bit of a missed target. I think the right target we should want is trustworthy AI. Explainability and transparency might be one route toward trustworthy AI, but it’s not the only one, and it’s not a perfect route… You don’t have to understand something to trust it. I don’t understand how my car works, but I trust it in lots of ways. On the other side, think about explainable AI. It’s a fictional example, but HAL is an explainable AI: it can explain to Dave why it won’t open the pod bay doors, or whatever the exact phrase is. That doesn’t mean anyone should trust HAL, so explainability is a route to trust, but it’s not a guarantee. I think trust is what really matters: do you trust the system, do you trust that it’s going to perform the way you want or expect it to. Trust is difficult because it is a many-dimensional thing. The way that I trust my car is different from the way I trust my wife… It’s not the same kind of trust.

“One of the things that’s nice is that if you look in social psychology and organizational behavior, people have done a lot of research over the last 40 years on the nature of human trust: how it works in organizations, how trust works in relationships between people, how people come to trust technology. So we actually have a pretty rich literature that we can draw on. I think that’s really at the heart of it. If I trust the technology, even if I don’t understand it, that’s okay. It can be a bit glitchy and that’s okay; I will still continue to use it, and I’ll be willing to try to ride that out. If I don’t trust it, even if I understand it, I’m going to say, ‘I don’t trust it; I am not going to want to use it.’… I think that trust is the thing we really care about. I worry that all the focus currently on transparency and explainability is actually potentially pulling us away from the thing that actually matters to us.”

GeekWire: Just bringing the trust conversation to current events, a big theme today is mistrust of big companies like Apple, Google, Facebook, Microsoft, and Amazon, some of the leaders in AI. It seems there’s less trust in these big companies. Where does that leave us?

Danks: “That leads to a difficult situation. I do sometimes wonder whether that would be a business model for a company: in some sense, go back to the original Google idea of ‘don’t be evil’ and basically say, ‘Always be trustworthy…. Whatever we do, people are going to be able to trust us.’ Whether that means disclosing algorithms. Whether that means bringing in outside third parties to audit what we do….

“There’s been a shift in a lot of tech companies from thinking about users as the target audience you are trying to serve to users becoming the data that you are trying to provide to others. I think that is often where the shift is occurring, because … people say: ‘Wait a second, we’re not the ones you are trying to help. You’re trying to get something from us to give to somebody else.’ My own outsider’s view is that a lot of the mistrust of Google dates exactly to people coming to realize that Google was not trying to help them. Google was trying to collect information to sell to other people, whether directly or indirectly. A similar sort of thing is happening with Amazon. People are going, ‘Wait, Amazon might be displaying different prices to different people? Why would that be? Oh, because they’re not actually trying to help me get the things I want cheaply and quickly.’

“So, I agree, there’s very large distrust. If I were counseling those CEOs, I would say: If you really want to last for the long run, you really need to think about how people can trust you. I think that’s actually, historically, something Apple has done well at. Arguably, it has been riding that trust; in my view, their technology has not been as good over the last few years…. Everyone just sort of trusts (Apple). You know, it all just works. Why did my parents switch to Apple computers a few years ago? Because, in part, I told them: it just works. You can trust that it will just work.”

GeekWire: Talk a bit about jobs and AI. It is a big part of this discussion.

Danks: “It is big, and I want to start off with a caveat: I don’t have any training in economics.”

GeekWire: Does a company that’s developing AI have an ethical responsibility to make sure jobs are provided?

Danks: “A lot of the discussion often falls back on sort of measurable economic impact. I think what’s important to recognize is that there is also, in many cases, a significant psychological impact. If there were an AI developed that could take my job as a professor, I would not merely suffer an economic loss, I would suffer a real psychological loss. Part of my identity (is being) a professor… I think that one of the things that often gets lost … something that is missing (is) those psychological costs. And those psychological costs have broader social costs, both in the sense that if you have a bunch of people in your community who have felt a loss of identity, or a loss of part of their identity, that’s going to have a social impact, and in the sense that it’s part of what defines a community sometimes… Think about the psychological and social cost to Pittsburgh when the steel mills shut down. That was part of what it was to be from Pittsburgh. You were part of this steel community, and suddenly that town didn’t have that anymore. One thing for us to recognize is that the costs are not just economic; they are about identity, about community identity, about personal identity, about who we are. Let’s face it, in the United States, in Western capitalist countries, a huge part of identity is bound up in your job.”

Danks: “Certainly, every presentation I’ve ever heard (about AI), all the materials I’ve ever seen, are framed in terms of how it can be used to cut costs. Not how it can be used to increase benefits or increase the services that can be provided. It’s framed as: how do you keep the same level of productivity for cheaper, rather than: how do you keep costs the same and increase productivity. So, already there’s a kind of interesting framing effect occurring. When it’s framed as keep productivity the same, cut costs, that’s going to lead to a loss of jobs. It’s probably a hopeless thought experiment, but I sort of wonder how different things would be if all the AI companies walked into meetings with CEOs and said: ‘We’re going to tell you how to increase your productivity while keeping your costs the same.’”

Danks: “Suddenly, as a CEO, if my costs are staying the same, that means I’m not firing people, because costs are staying the same. Or maybe I am having to let go of a few people, but I am not having large job losses. Already, there has been a framing that has occurred, in terms of everything being done by cutting costs…. It’s not an ethically neutral framing. It’s another place where I think the tech company consultants who sell these things just don’t realize, I think they honestly don’t realize, that they are using an ethically loaded framing when they say same productivity, cut costs. I do think that companies have social obligations beyond just maximizing profit. That’s not necessarily a popular view among companies, and I understand that in the current political, regulatory and social environment, there certainly is no obligation for them to recognize that. Nobody is forcing them to. But it is also the case that nobody is forcing them not to. I think we are in a difficult situation right now…. Companies ought to recognize the impact of their choices. I also recognize that there is no regulation that forces them to. In those sorts of cases, I think the only response — short of being able to change the zeitgeist of Western capitalism — is to have certain kinds of more top-down pressures. So, whether it’s shareholder initiatives that say things like: ‘We are going to commit as a company that if we introduce a technology that will put 100 people out of work, we will only phase in those layoffs over a two-year span. Or we will commit to taking half of the additional profits and, for two years, providing those to the people who were laid off to help ease their transition.’”

Danks: “So, you could have shareholder initiatives that bind companies’ hands in those ways. So, the people who would be getting the profits coming up and saying: ‘Wait a second, no, we understand we could get more profits; we don’t want those.’ There have been these sorts of things happening in the past. So, the response to apartheid in South Africa, the divestment movements. That was a case where people looked and said: ‘We know that we could be making more money, but we think it would be morally wrong to do so. So, we will voluntarily make a little less money to do the morally right thing.’ That’s happened with various socially responsible mutual funds that, for example, won’t invest in oil, gas, or tobacco stocks, those kinds of things…. I think that that’s one route.”
