Illustration: Robot and human hands
(Bigstock Photo)

The debate over whether a machine can have human-like feelings reignited over the weekend following a Washington Post report about a Google engineer who claimed that one of the company’s chatbot programs was sentient.

Blake Lemoine is a seven-year Google veteran who works on its Responsible AI team. He engaged in chats with the company’s Language Model for Dialogue Applications (LaMDA), a machine-learning system trained on large volumes of dialogue and other text. Lemoine tried to convince Google executives that the AI was sentient.

After the Post story was published, Lemoine posted transcripts of conversations he had with LaMDA. “Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Lemoine wrote in a blog post.

Google has denied the claims and placed Lemoine on paid administrative leave for allegedly violating its confidentiality policy.

The Post story went viral, rekindling a long-running debate about whether artificial intelligence can be sentient.

We caught up with Yejin Choi, a University of Washington computer science professor and senior research manager at Seattle’s Allen Institute for Artificial Intelligence, to get her take on Lemoine’s claims and the reaction to the story. The interview was edited for brevity and clarity.

GeekWire: Yejin, thanks for talking to us. What was your initial reaction to all of this?

Yejin Choi: On one hand, it’s ridiculous. On the other hand, I think this is bound to happen. Some users may have different feelings about what’s inside a computer program. But I disagree that digital beings can actually be sentient.

Yejin Choi. (University of Washington Photo / Bruce Hemingway)

Do you think Google’s chatbot is sentient?

No. We program bots to sound like they are sentient. But a chatbot isn’t demonstrating that capability on its own, the way human babies grow to demonstrate it. These are programmed, engineered digital creations.

Humans have written sci-fi novels and movies about how AI might have feelings, or even fall in love with humans. AI can repeat these kinds of narratives back to us. But that’s very surface level, just speaking the language. It doesn’t mean it’s actually feeling it or anything like that.

How seriously should we take Lemoine’s claims?

People can hold different beliefs, so in that regard it’s not entirely surprising that someone starts believing this. But the broader scientific community will disagree.

Will AI ever be sentient?

I am very skeptical. AI can behave very much like humans behave. That, I believe. But does that mean AI is now a sentient being? Does AI have its own rights, equal to humans? Should we ask AI for consent? Should we treat them with respect? Will humans go to jail for killing AI? I don’t believe that world will ever come.

AI might not ever be sentient, but it’s getting closer. Should we be scared of AI?

The concern is real. Even without being on a human-like level, AI is going to be so powerful that it can be misused and can influence humans at large. So discussing policy around AI use is good. But creating this ungrounded fear that AI is going to wipe out humans is unrealistic. It’s going to be humans misusing AI, as opposed to AI itself wiping humans out. The humans are the problem, at the end of the day.
