Brandon Lee, Canadian Consul General to the Pacific Northwest; Ken Malott, a campus minister at the University of Washington; and Marcie Kellegrew, of Seattle (L-R) decide which topics should be discussed at Saturday’s AI and faith conference at Seattle Pacific University. (Tony Lystra Photo)

Can a computer become God? Or, more to the point, could humans invent AI that can convincingly impersonate God — and if so, would humans bother to worship it?

That was one of the questions explored Saturday by technology experts, the faithful and everyone in between at a conference devoted to artificial intelligence and faith at Seattle Pacific University.

The concept of “AI Almighty” might not be as outlandish as it seems. Last year, a former Uber engineer founded a nonprofit religious organization called The Way of the Future. Its mission: creating an AI deity. New York Magazine writer Andrew Sullivan wrote Friday that America’s religious inclinations are succumbing to other devotions.

Michael Paulus, Seattle Pacific University’s director and associate professor of Information Studies and one of the organizers of Saturday’s Technology and Faith conference, helps the audience deliberate on the day’s topics. (Tony Lystra Photo)

And at least one participant at Saturday’s Technology and Faith conference in Seattle noted that AI might fit modern-day interpretations of the Antichrist, as described in the New Testament: an all-worshiped leader, sometimes described as “the Beast,” who aims to supplant faith with temporal pursuits. Theology aside, it’s not much of a stretch to see today’s culture as wound to near-religious fervor over the next technological wonder, looking to it for guidance every hour of the day.

“That actually sounds like AI could fit hand-in-glove,” when it comes to Revelation, said Christopher Lim, a former Amazon engineer who founded the Christian entrepreneurship firm TheoTech, as well as an AI-driven speech translation product called spf.io. Broadly speaking, Lim said he has no problem with technology, and is neither convinced of nor ruling out connections between AI and Revelation. But he pointed out that Tesla founder Elon Musk once referred to the advent of AI as “summoning the demon.”

Much of Saturday’s conference was more pragmatic, focusing on how religious leaders and software engineers of faith can help develop ethical AI, and on whether it should be disclosed when a human is interacting with AI instead of another human.

“I don’t think we’re ever going to have AI that wakes up,” said Daniel Rasmus, a technology analyst and former Microsoft business insights director who is a board member of the Seattle organization AI and Faith, which helped put on Saturday’s event. “So I think this is more about us practicing what we preach than it is AI coming to terms with consciousness and its epistemology and all that stuff.”

The “un-conference” was loosely structured and intended to spark informal conversation; there were no scheduled presenters, and participants decided together on the topics of the day, then met in small groups to discuss them. Themes approved by the more than 40 participants included: “Virtual Places of Worship,” “Personification of AI,” “Development of AI Without Bias” and “Is AI a Danger to Human Rights and Dignity?”

Those in attendance included software engineers, religious leaders, members of various faiths, and academics.

One Amazon Alexa engineer, who asked not to be identified because he wasn’t authorized to speak on behalf of the company, said he came to the conference to learn how Alexa “can be more beneficial to churches.”

One example, he said, is helping autistic kids who have trouble communicating and participating in Sunday school. “Those things can be turned into Skills (Amazon’s name for Alexa’s know-how) the autistic children can relate to,” he said, adding that such a service is not yet available from Alexa. “I have ideas, some of it I cannot share yet,” the engineer said.

One theme that emerged during Saturday’s conversations is that AI’s development could go in two directions: It could remain controlled by humans and, one hopes, be used for good much of the time. Or it could be turned loose to learn on its own, beyond human control, and possibly become destructive.

Brandon A. Lee, the Canadian consul general to the Pacific Northwest and former Canadian ambassador to Silicon Valley, said he came to Saturday’s event to learn what the local technology and faith communities are thinking about AI, values and ethics. He said he’s particularly interested in the human biases that might be embedded into the code within neural networks — and how those biases shape the ethical decisions AI will inevitably have to make. Lee cited the example of a self-driving car having to decide which way to veer when sliding on black ice — toward the crowded sidewalk or the mother walking with her baby?

The possible involvement of faith traditions in the development of AI is only partly reassuring, he said. Lee, who describes himself as “deeply inspired by Buddhism,” pointed out that every religious tradition has in its history poor decisions and ethical blunders.

“Of course, we want an ethical AI,” he said. But the question is, “How do we train the AI — and by what religion?”

That’s especially important because computing power compared to the human mind is roughly “a million to one,” Lee said. As an AI application learns more and more about each member of a religious group, it could tailor religious messages to individual people. Those who are illiterate could get an AI-driven sermon consisting only of pictures, while longer, more nuanced sermons could be distributed to the highly literate. The point is that, in this scenario, the AI knows who can read, who can’t, and countless other intimate details about its human congregation.

“It will become fully more dynamic than any human can be, and this is really a little scary,” Lee said.

Lim, the former Amazon engineer who has given a lecture titled, “Artificial Intelligence for the Kingdom,” said he plugged the sermons of the 18th-century preacher and theologian Jonathan Edwards into a neural network created by Andrej Karpathy, now director of AI at Tesla. The neural network could churn out its own sermons based on the Edwards material, Lim said. Some of the AI sermons were nonsensical, he said, and the AI would sometimes make up Bible verses that don’t exist. Even so, the AI sermons had a way of sounding like the long-dead Edwards — and a lot of them weren’t much more nonsensical than sermons from real pastors, Lim said.

More interestingly, though, many of the AI-written sermons sounded authentic, he said, as if they’d come from a flesh-and-blood minister.

“For a large class of pastors,” Lim said, “they may be out of a job.”
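The technique Lim describes — training a model on a body of text, then sampling new text one character at a time in the same style — can be sketched in miniature. Karpathy’s actual project is a recurrent neural network (char-rnn), but a simple character-level Markov chain, used here purely as an illustrative stand-in, captures the same idea of generating text character by character from learned context:

```python
import random

def train_char_model(text, order=4):
    """Map each `order`-character context to the characters observed after it."""
    model = {}
    for i in range(len(text) - order):
        context = text[i:i + order]
        model.setdefault(context, []).append(text[i + order])
    return model

def generate(model, seed, length=200, rng=None):
    """Sample new text one character at a time from the trained model."""
    rng = rng or random.Random(0)
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # context never seen during training: stop
            break
        out += rng.choice(choices)
    return out

# Hypothetical stand-in corpus; Lim used the collected sermons of Jonathan Edwards.
corpus = ("Sinners in the hands of an angry God. "
          "The wrath of God burns against them. ") * 3
model = train_char_model(corpus)
print(generate(model, corpus[:4], length=80))
```

With enough source text, the output echoes the author’s vocabulary and cadence while occasionally producing nonsense — which matches Lim’s description of sermons that sounded like Edwards yet sometimes cited Bible verses that don’t exist.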

