A visual summary of Microsoft President Brad Smith’s GeekWire Summit session, by Guillaume Wiatr of MetaHelm.

Artificial intelligence might sound like a futuristic concept, and it may be true that we’re years or decades away from a generalized form of AI that can match or exceed the capabilities of the human brain across a wide range of topics.

But the implications of machine learning, facial recognition and other early forms of the technology are already playing out for companies, governmental agencies and people around the world. This is raising questions about everything from privacy to jobs to law enforcement to the future of humanity.

On this episode of the GeekWire Podcast, we hear several different takes from people grappling right now with AI and its implications for business, technology and society, recorded across different sessions at the recent GeekWire Summit in Seattle.

Listen to the episode above, or subscribe in your favorite podcast app, and continue reading for edited excerpts.

First up, Microsoft President Brad Smith, co-author of the book Tools and Weapons, putting AI into perspective.

Smith: I think it’s fair to say that artificial intelligence will reshape the global economy over the next three decades probably more than any other single technological force, probably as much as the combustion engine reshaped the global economy in the first half of the 20th century.

One of our chapters is about AI in the workforce, and we actually start it by talking about the role of horses, the last run of the fire horses in Brooklyn in 1922. And we trace how the transition from the horse to the automobile changed every aspect of the economy. I think the same thing will be true of AI, so we should get that right.

Microsoft President Brad Smith at the GeekWire Summit. (GeekWire Photo / Dan DeLong)

But if that’s not enough to grab your attention, I think there’s something else that we talk a little bit about in the book. In the entire history of humanity, all of us who are alive at this moment in time are the first generation in history to endow machines with the capability to make decisions that have only ever been made by people. So we better get that right.

We are the first generation that will decide how machines will make these decisions, what kind of ethical principles will guide their decision making. No pressure, but we better get it right.

And it’s also interesting to put that in the context that every generation of Americans, and most people around the world, has actually grown up going to multiple movies in the movie theater that had the same basic plot. Humans create machines that can think for themselves. Machines think for themselves. Machines decide to enslave or kill all the humans. That’s called the Terminator movie 1 through N, including the one that’s about to come out, and many other movies as well.

And so one of the things we’ve learned is that this resonates with the public. When they look at the tech sector and they see us creating machines that can make decisions, they come to it with a point of view, and it’s less than sheer enthusiasm.

Todd Bishop: So what does “getting it right” look like?

Brad Smith: I think that it first calls on us to devise a set of ethical principles. We have as a company, and we described the process that we went through to create those principles. And interestingly enough, those principles and others very similar to them have pretty much spread around the world.

It means that tech companies and ultimately every company, every institution that deploys AI, has to figure out how to operationalize the principles, and that’s a big challenge. We talk a little bit about that. It means as societies we need to operationalize the principles. How do we do that? Well, that’s called law. That’s called regulation. So there will be all of those elements as well. So yeah, when you put it all together, it’s really quite a formidable challenge.

If that seems complicated, I think there’s a really interesting lesson for all of us in 2019, and maybe it should speak to us especially in Seattle: the most fundamental ethical principle, as we describe it, is what we call accountability. You need to ensure that machines remain accountable to people.

Well, at the end of the day, what does that mean? It means that you need to be able to turn the machine off if it’s doing something that isn’t working properly or isn’t following the ethical approach that you envisioned.

What is the biggest software-related issue to impact the economy in Puget Sound in 2019? Software in the cockpit of an airplane. Software that the pilots couldn’t turn off. That should speak to us. That is not something that should speak to just one company or just one industry. I think it should speak to everyone who creates technology, who uses technology in every part of society, and that’s a lesson we should remember. We’ve got to be able to create great technology, and we really do want to be able to turn it off when we don’t want it on.

When you’re in the tech sector, one of the points we make is that in the year 2010 we really reached a technology inflection point because, much more than in the past, not only individuals, but also companies, started to store their data in the cloud, in a data center. It made us the stewards of other people’s data, not the owners of that data, the stewards of it. I think our first responsibility is to protect their data, to protect the people, to protect the people’s rights that could be implicated based on what happens to the data.

TB: You’ve cited facial recognition as one of the first areas in artificial intelligence where we can really figure it out, or at least take some first steps. Jeff Bezos, Amazon CEO, said two weeks ago in a scrum with reporters over at the Spheres that they’re working on it. Amazon’s public policy team is working on it. Are you working on it with them?

Brad Smith: We’re not working on it with them, but if they want to talk, we’d be thrilled. I mean, anytime we have a chance to talk with our friends at Amazon, we always welcome the opportunity. And we talk to them or with them about a lot, so I’m guessing that opportunity will arise.

There’s one thing first that I would encourage all of us to think about with facial recognition. So many of you work with technology or work in the tech sector. In the 26 years that I’ve been at Microsoft, I have never seen a public policy issue explode like facial recognition. As we describe in the book, we gave a lot of thought, and Satya and I talked a good deal, about what we were going to say on this issue just last July, July of 2018. We published a blog post. I wrote it, and it said, “This is technology that will need to be governed by law and regulation because it’s got great promise, but it’s potentially subject to great abuse.”

GeekWire editor Todd Bishop and Microsoft President Brad Smith at the 2019 GeekWire Summit. (GeekWire Photo / Dan DeLong)

And when we first wrote that, the reaction by so many in the tech sector, perhaps especially in Silicon Valley, but up here in Seattle, as well, was like, “What did they put in the water on the other side of Lake Washington? What’s wrong with you folks? Why are you saying this needs to be regulated? There’s no problem here.” And then within a year, the city of San Francisco passed an ordinance that bars the city of San Francisco from using facial recognition. You see concerns about it literally spreading around the world. So that’s happened very quickly.

We do need two things. One, we do need laws. I gave a speech at the Brookings Institution last December. We shared a proposal for what we thought would make some parts of a good law, and I think it’s great for Amazon and everybody to be thinking about this. I do think we should all be transparent.

I think in the country today, and in most countries today, people are generally OK if tech companies want to say, “Hey, here’s an idea. We have an idea. This is how this issue should be approached.” People will give us at least a fair hearing. They don’t want to see us taking ideas and just going into the proverbial back room and giving them to legislators without sharing them with the public. So I think we’ll all benefit if we’re all sharing our ideas with that kind of transparency.

But then the second thing is also important, and I think this is the other place where this conversation needs to go. I don’t think that companies should be getting a pass on this issue by simply saying, “We hope there will be a law. We’ll have some ideas for it, and once it’s passed we’ll comply with it.”

I think that we have a responsibility to be more proactive than that. And that’s why, when we said in December, “Here are six principles that we think should go into a law,” we also said, “We’re going to start applying these to ourselves.” Things like: we’ll share, and we will ensure that our technology can be tested, so people can look at it, assess it, and decide for themselves, quantitatively and in comparison with others, whether it is biased or not.

TB: You compared it to a Consumer Reports for AI.

Brad Smith: Exactly. There should be a Consumer Reports for customers that want to deploy facial recognition. We said, for example, that we won’t sell it in scenarios where it would be used in a manner that would lead to biased results. And we’ve explained how we turned down a licensing opportunity with a law enforcement agency in California where we thought it would be used that way.

We’ve said that it shouldn’t be used by authoritarian governments to engage in mass surveillance. And we’ve explained how we have turned down opportunities to license our technology when we thought there was a risk that it would be used in that manner. And we’ve put restrictions in place in countries around the world, so that it won’t be used inadvertently in that manner.

So I applaud Amazon for saying it’s going to raise ideas. That’ll be welcome. It will contribute positively. But Amazon, in my view, should not be permitted to just say, “Great. When the law is passed, we’ll abide by it.” We all in this industry have a responsibility, in my view, to think more broadly about our own sense of what is ethical and proper and not wait for somebody in a government somewhere to tell us what to do.

Next up in our look at the state of AI, we explore the world of privacy, surveillance and law enforcement. GeekWire civic editor Monica Nickelsburg moderated this session at the GeekWire Summit with U.S. Representative Pramila Jayapal, Seattle Police Department Chief Carmen Best, and Luke Larson, president of Axon, the maker of Taser and body camera technology.

GeekWire civic editor Monica Nickelsburg, Seattle Police Department Chief Carmen Best, U.S. Rep. Pramila Jayapal, and Luke Larson, president of Axon, at the GeekWire Summit. (GeekWire Photo / Dan DeLong)

Monica Nickelsburg: We’re living in a moment of very intense anxiety about the way that the government is using technology. Everything from the email provider to ICE down to police watching Ring surveillance footage. And I’m curious, why now? What is driving this increased skepticism and even fear from the public?

U.S. Rep. Pramila Jayapal: Well, for me, I think one of the things is that the pace of technology is faster than the pace of government regulation and of our understanding of the effects. So when a technology is developed, it’s developed for that need that we talked about. It fills that need, and it keeps getting perfected.

But then there are all these other ancillary uses of it that weren’t necessarily the reason it was developed, but they are some of the ways in which the technology is being used. I think facial recognition is one example of that. But we have to catch up on the government side, and we have to have a system that can adapt to whatever technology it is that we end up regulating.

So if you look at election security right now, we have some regulations in place around that, but it’s based on outdated technology. We need to make sure that technology keeps getting updated and then that our regulation keeps pace.

We are in an age of mass surveillance, and I think that’s the concern we see across the country: how our data’s being used, what’s happening to it, who has access to it, and how you control it as the person whose data is out there. We really need to work fast to get there.

Chief Carmen Best: I would agree with that. The issue is what we are doing with that technology when we find it. And the regulations and the rules around it aren’t keeping up with its fast development. So while we think it’s really important, we want to make sure that we protect people’s privacy and those interests that sometimes come into conflict with the technology that we have.

At the Seattle Police Department, we have an attorney, a privacy expert, whose full-time job is to make sure that we’re working with our information technology and other places in the city so that we’re not violating people’s privacy rights. So it’s a huge issue for all of us.

MN: And that really speaks to the need for regulatory action. But as we all know, government by design moves slowly. Is there also an onus on the companies to be more transparent as well? Because many of them are not known for telling their story or explaining these technologies. Many of the deals that they strike with law enforcement agencies happen in secrecy. So how can there be public confidence in this innovation, how it’s being used, and what it means for people, if the ones building it aren’t really telling them what it means?

Chief Carmen Best: I might question that, because in my experience, and I’ve only been in law enforcement 28 years, most of the time when we get stuff, we have to have a search warrant. There are parameters built in as to how we get that information and how we utilize it. They can’t go out and just arbitrarily do these things without having judicial review and that type of thing.

I’d be more concerned about less regulated private industry. I would imagine that Google has a lot more information about you than the Seattle Police Department ever will. There is that arena that is probably much more invasive and has much more information than most of us.

Luke Larson: As we think about developing new products at Axon, we’ve created a new advisory body called our Ethics Board. And we created this to advise us on how we should utilize artificial intelligence, but also broad technology applications. And so I don’t think companies should create these solutions in a vacuum. I think they need to seek guidance from the different stakeholders.

Specifically with law enforcement, it’s really important to understand the communities that they serve and the different demographics and stakeholders in those communities, and to make sure they’re represented when we make these decisions.

So on our ethics advisory board, we’ve got technologists. We also have police leaders. We also have academics who focus on ethics. And when we’re wrestling with the implications of these decisions, we’re asking: even though we could do something, is this the right thing to do, and should we put measures in place to protect things like people’s identity and address downstream effects? And ultimately I don’t think that’s the company’s job. I do think we would look to the government to regulate those decisions.

U.S. Rep. Pramila Jayapal: I do agree that the companies have a role to play and a responsibility to play because they’re the ones that understand the technology the most. And so I really appreciate your advisory group. And I think the other part of it is not only thinking about it as the technology is developed, but as you see the effects of what is happening, how do you immediately respond?

And I will tell you that, on facial recognition as an example, there are some companies, Microsoft is one of them, that have been very good about working with us to really think about what needs to be done. You saw Brad Smith’s excellent article calling for regulation, which is not always the case with a lot of companies. I think that there are other companies that have just pooh-poohed the studies and said, “Oh, there’s no evidence. There’s no proof. We don’t need to do anything about this.”

So I do think that there’s a real responsibility for the companies who develop these technologies because once the technology is out there, you can’t put the genie back in the bottle. You can’t control what China does with that surveillance technology against the Uyghurs or putting that same technology onto cell phones as people go across a land border.

So I really do think that there’s responsibility all around, and government needs to be quicker in how we respond to things. We need to build into our regulation the knowledge that technology is going to change, and we need to work with the companies and all the stakeholders to make sure that we’re getting the regulation right.

Next up, Dave Limp, the senior vice president in charge of Amazon’s devices and services business.

TB: Amazon and other companies were caught up in controversy this year when reports emerged that humans were reviewing audio recordings from a variety of voice assistants, generally without the knowledge of the customers. What have you and your team at Amazon learned from the customer reaction to those revelations? How are those reactions shaping what you’re doing now?

Dave Limp, Amazon’s devices and services chief, chats with GeekWire co-founder Todd Bishop during the GeekWire Summit. (GeekWire Photo / Dan DeLong)

Dave Limp: Well, the first thing that you want to do when you hear the response from customers is fix the issue. And so, it’s not always perfect in these areas, but we were ready with the feature within 24 hours to give customers the ability to opt out of human annotation. So I think we were the first to be able to offer that. And at the same time we did that, we also clarified our terms of service and the messaging we have within our privacy hub, a central location for Alexa where you go for all things privacy, to be very clear that we are doing that.

If I could go back in time, that would be the thing I would do better. I would have been more transparent about why and when we are using human annotation. The fact of the matter is that’s a well-known thing in the AI industry, at least. Whether it was well known among the press or customers, it’s pretty clear that we weren’t good enough there.

And it’s not just speech. Speech is one area that humans are annotating, but with your maps, people are annotating the routes you take. With permission, your medical records are getting annotated to make AI better at reading X-rays. The state of the art of AI is in such a place today that human annotation is still required.

I imagine a future, and I believe it will happen, and if you read the latest research papers on where AI is going, there will be a day when combinations of federated learning and unsupervised learning and transfer learning and algorithms not yet invented will alleviate the need for human annotation and ground-truthing. But that day, we don’t believe, is today.

TB: As you said, you were the first to offer customers the option to opt out of human review of voice recordings. But Apple then went a step further and said, “By default, that will not be in place. Humans will not review your recordings. Customers can opt in.” Why not go that further step?

Dave Limp: Yeah. We also added a feature that allowed customers to turn on auto-deletion too. So we added those two things at about the same time, about a month apart. We sat around the room and talked about the difference between opt-in and opt-out. The first foundational thing, and I know people don’t always believe this, is that we don’t want data for data’s sake at Amazon. I’ve been there nine years. I can assure you this is the case. We want to use data where we can actually improve the experience on behalf of the customer.

And so we got in a room, had exactly that debate. And in many, many examples, and I can give you a couple if you’re interested, the data that we do annotate, and it’s a small fraction of 1% by the way, is incredibly important to making Alexa and the service better.

I’ll give you one example: we just launched two weeks ago in India with a new language, Hindi. And we know that within the first 90 days of a launch, with the annotation’s help and other algorithmic efforts that we do, the accuracy of Alexa in Hindi will be 30 to 35% better. And so you sit around that table and you go, “Would we want to keep that improvement from the customer and not make it 35% better?” The answer is no. And you do need a corpus of data that’s broad enough, from all sorts of different geographies and demographics, to make that advancement. And so that’s where we landed. I don’t know how other companies are planning on doing it. Those are better questions for them.

TB: Should there be though a broad industry principle that you would use either inside the company or across the industry to guide those decisions so you’re not making them on an ad hoc basis?

Dave Limp: I certainly think that there is room for regulation in lots of the areas and places that we are in today. We’ve been very public about that. I think around ethics in AI, there is broad room for regulation, whether that’s through policy or laws themselves. I think that’s a good question for us to debate.

I think secondarily, around privacy, there’s all sorts of good room for regulation as well. I don’t agree with every nuance of the law, but GDPR, which is Europe’s effort to add more stringent privacy policies, on balance is very, very good. And then there are things that aren’t necessarily good.

But on balance, I think it’s very, very good. Where it’s probably not as good is where it’s just ambiguous. The problem is that sometimes there are gray areas, and then you don’t know how to write code; you want the regulation to be clear wherever possible. So I think in both those places, there’s a lot of room for regulation, and we’re open to that for sure.
