“Life 3.0” focuses on what lies ahead in AI. (Suvadip Das Illustration, based on Netfalls Remy Musser)

Do we need to be concerned about the rapid rise of artificial intelligence? Some people say there’s nothing to worry about, while others warn that a Terminator-level nightmare is dead ahead.

MIT physicist Max Tegmark says both sides of that argument are exaggerations.

In his newly published book, “Life 3.0: Being Human in the Age of Artificial Intelligence,” Tegmark lays out a case for what he calls “mindful optimism” about beneficial AI — artificial intelligence that will make life dramatically better for humans rather than going off in unintended directions.

Tegmark, who’s also the co-founder and president of the Future of Life Institute, says AI won’t be beneficial unless it incorporates safety measures yet to be developed. That’s not because the machines are destined to turn against their masters. It’s because those masters are subject to the vagaries of human nature.

“To me, the really interesting question isn’t quibbling about whether to be optimistic or pessimistic,” he told GeekWire, “but rather to ask, ‘What useful things can we do today to create the best possible future?’”

“Life 3.0” touches upon the implications of AI for employment and economics, transportation and entertainment, even warfare and religion. Will the first superhuman intelligence rise up from Amazon’s or Microsoft’s cloud? Tegmark will explore those possibilities and much, much more on Wednesday during a talk and book signing presented by Town Hall Seattle at University Lutheran Church.

In advance of his Seattle stopover, GeekWire talked with Tegmark about Life 3.0 — the AI age that follows the emergence of simple biology (Life 1.0) and human culture (Life 2.0). Here are some edited excerpts from the interview.

SpaceX founder Elon Musk’s perspective on AI

“I find it hilarious that some people portray Elon Musk as some kind of doomsayer, when in fact he’s an optimistic person who wants to create a great future with technology. This process of thinking through what might go wrong so you can avoid it isn’t scaremongering – it’s basic engineering. That’s how we got people to the moon successfully, by carefully thinking through what could have gone wrong and thereby making sure it didn’t happen.”

More funding for AI safety research

“The vast majority of the money is spent on simply making AI more powerful, not on developing the wisdom with which to manage it. I’m optimistic that we can create a great future with AI, as long as we win the race between the growing power of AI and the growing wisdom with which we manage it.

“To win that race, there are two strategies: You can either slow down your competitor … but I think it’s both unrealistic and perhaps undesirable to try to slow down the progress of technology … or you can try to run faster yourself by speeding up the safety research. I think any government that funds computer science research should fund AI safety research as a standard part of that.”

How we’ll know the age of beneficial AI has begun

“I hope consumers will start noticing that their computers get hacked less often and crash less often. … As AI gets more and more control over infrastructure in society, over our stock market trading and everything else, crashes go from being frustrating to being unacceptable.

“We have to up our game, both in cybersecurity and with respect to bugs. Most hacks are enabled by bugs. There is a lot of great research being done on provably bug-free software, but it has still not reached the point where it’s powerful enough and easy enough to use and has widespread adoption.

“Today there’s a little ‘lock’ icon when your [Web browser] connection is secure, and maybe in the future there’ll be a little shield or something when you’re running software on an operating system that’s completely, provably bug-free. It would be wonderful if that became the new standard for safety-critical applications. You know, you sit down in your self-driving car and you say, ‘Wait a minute, this isn’t provably safe? I’m taking a different Uber.’”

The politics of AI

“If we start seeing AI have a major impact on society, at some point it will probably make sense also to have some oversight to keep that safe. But before you can have useful government oversight, you have to have useful government insight. And at this point, I feel that politicians are by and large asleep at the wheel. They don’t have enough understanding of what’s actually happening.

“In the last presidential election we had here in the U.S., neither candidate talked about AI at all, even though it’s the biggest thing that’s happening over the next few decades, completely transforming our economy and our lives. So, step 1 is to get a lot of AI experts into government. If you have a government that starts to enact laws without insights, it’s going to do more harm than good.”

A policy agenda for beneficial AI

“Anyone doing policy needs to have someone on board who understands AI. I wrote the book as a helpful survey of what’s going on. In terms of the to-do list, I would suggest three things:

“Universal basic income is a very interesting idea. There are a lot of ideas being discussed. I think everybody should be part of the discussion. It’s completely naïve to think that everybody’s going to be better off if we don’t discuss it, because then we’re just going to continue along the path we’re on now, with ever-increasing income inequalities and an ever more polarized, unstable world.”

What you can do to get ready for Life 3.0

“I’d encourage people, the next time they’re in a bar or hanging out with friends, to think of this as a very legitimate conversation topic to bring up. What kind of future do people want? It makes for great after-dinner conversation. Everybody cares about it. The more people discuss it, the more good ideas will be created. If we have no idea what we want, we’re less likely to get it.

“I would also encourage everybody to think about who they want to be. Do they want to be a person who owns technology, or is owned by technology? Do they want to be a person who uses technology to help them be what they want to be, or do they want to be the one who compulsively keeps interrupting all their conversations by looking at their phone?”

Read more: AI experts lay out safety principles

Check out Tegmark’s website to learn more about “Life 3.0.” Tickets to Tegmark’s talk at University Lutheran Church, 1604 NE 50th St. in Seattle, are available through the Town Hall Seattle website.
