Speakers at a Seattle University event organized by the MIT Enterprise Forum Northwest discuss human-machine interaction with a word cloud displayed on the screen behind them. The words were provided by the audience to answer a question: “What scares you the most about technology?” (GeekWire Photo / Alan Boyle)

We have heard the voice of our future AI overlord — and it’s making hair appointments for us.

Last week, Google wowed the world by demonstrating a voice assistant called Duplex that sounds eerily human on the telephone, right down to the um’s and mm-hmm’s it drops into its chat with a scheduler at a hair salon.

Some are now questioning how true-to-life the demo actually was. But even if some liberties were taken, Google Duplex was an eye-opener for experts who gathered at Seattle University on Wednesday night for an AI-centric event presented by MIT Enterprise Forum Northwest.

“Seeing that happen so quickly, I think, was a real shock for some people,” said Kat Holmes, a Microsoft veteran who’s the founder of the design company Kata and the author of “Mismatch: How Inclusion Shapes Design.”

Miles Coleman, an instructor at Seattle University who specializes in digital technology and culture, counts himself in that category. “The Duplex thing blew my mind,” he said.

Google says the secret to Duplex’s conversational skill is a recurrent neural network that’s been trained on anonymized phone data. It has learned not only to deal with the sometimes-ambiguous context of human conversations, but also to drop in an occasional “um” (technically known as a speech disfluency) and match the cadence of its speech to human expectations.
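To make the disfluency idea concrete, here’s a minimal, purely illustrative Python sketch (not Google’s system, which generates entire responses with a trained neural network): it probabilistically drops filler words into an otherwise fluent reply at clause boundaries. The filler list and example reply are invented for illustration.

```python
import random

# Purely illustrative: sprinkle "speech disfluencies" into a machine-generated
# reply so a synthesized voice sounds less robotic. The fillers and example
# reply are hypothetical; Duplex learns this behavior from real phone data.
FILLERS = ["um", "uh", "mm-hmm"]

def add_disfluencies(reply, p=0.3, seed=None):
    """Insert a filler word before some clauses with probability p."""
    rng = random.Random(seed)
    out = []
    for clause in reply.split(", "):
        if rng.random() < p:
            out.append(rng.choice(FILLERS))
        out.append(clause)
    return ", ".join(out)

if __name__ == "__main__":
    print(add_disfluencies("Sure, I can do 10 a.m., does that work for you?", seed=7))
    # One possible output: "Sure, um, I can do 10 a.m., does that work for you?"
```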

This summer, Duplex will be deployed as part of Google’s voice assistant for mobile devices. But is humanity ready for artificially intelligent agents that are hard to distinguish from actual humans?

Coleman said the concerns are similar to those posed by social-media bots. Automated postings turned out to be a big factor in the “fake news” controversies stemming from the 2016 presidential campaign. In response, Twitter purged tens of thousands of Russia-linked bots and revised its terms of service to rule out high-volume automated tweeting.

Some want to go even further: In California, for example, there’s a legislative move to force Twitter, Facebook and other social-media outlets to bar bots from impersonating humans, and to require the holders of bot accounts to disclose that the accounts aren’t run by humans.

It might make sense to require Duplex and similar consumer-grade chatbots to provide a similar disclosure, along the lines of the “This Call May Be Recorded” disclaimer we’ve grown used to hearing when calling customer service.

[Update for May 19: Google plans to incorporate such disclaimers, according to Bloomberg News. Google executives reportedly told employees that the Duplex system would identify itself as a Google assistant. And in some jurisdictions, including Washington state, Duplex would also let the people on the other end of the phone line know that the call is being recorded.]

“We’re faking a human, which we give a particular value,” Coleman said. “And when those entities speak, we give a particular kind of weight that we wouldn’t give to just a machine. It’s like the new version of robo-calls, except it’s working for individuals instead of companies.”

Which poses a puzzler: What happens when a robo-caller calls a robo-answerer? If the result is more comic than catastrophic, we won’t have to worry about the AI apocalypse.

More sound bites from the MIT Enterprise Forum discussion, titled “Human Machine Interfaces and the Future of Interaction”:

  • Holmes wondered about the process for determining human responsibility for mistakes made by an AI system. “Who’s accountable if that system creates harm to another human being?” she asked. One option might be to require digital signatures for the code that goes into AI software, both to create accountability and to guard against hacking (see the sketch after this list). Another option might be for companies to require training in ethical standards for AI programmers.
  • Much has been made of AI’s potential to absorb all-too-human biases, but AI assistants can also nudge humans to behave better. For example, Amazon has tweaked Alexa to take issue with sexist comments, and has added a feature that rewards kids for saying “please.” Trond Nilsen, director of development for Virtual Therapeutics, would like to see the trend go further: “How can we use technology to teach each other to be more compassionate?” he asked.
  • Coleman thinks having smarter AI agents would be a plus, not a minus. “I don’t think robots are scary because they’re really smart. I think they’re scary because they’re really dumb,” he said. “I don’t think that they’re going to end humanity. I think that they’re going to end our individual lives in little, small, idiot accidents.”
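On Holmes’ digital-signature idea, here is a minimal sketch of what signing an AI artifact could look like, assuming Python and the widely used third-party cryptography package; the artifact contents and workflow are hypothetical, not a standard anyone has adopted. A developer signs the artifact with a private key, and anyone can later check who shipped it.

```python
# Minimal sketch of signing an AI software artifact for accountability,
# assuming the third-party "cryptography" package (pip install cryptography).
# The artifact bytes and workflow are hypothetical illustrations.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Developer side: generate a key pair and sign the artifact bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"model weights and inference code go here"  # hypothetical artifact
signature = private_key.sign(artifact)

# Auditor side: verify before deploying. A valid signature ties the artifact
# to the holder of the private key; a tampered artifact or forged signature
# raises InvalidSignature.
try:
    public_key.verify(signature, artifact)
    print("Signature valid: artifact is attributable to the signer.")
except InvalidSignature:
    print("Signature invalid: artifact was tampered with or wrongly attributed.")
```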