The Beneficial AI conference developed 23 guiding principles for AI. (Future of Life Institute)

Hundreds of AI researchers, business leaders and just plain geniuses have signed on to a statement of cautionary principles for artificial intelligence, including a call for human authorities to be able to audit how an AI platform works.

The 23 Asilomar AI Principles were drawn up this month at the Beneficial AI conference, held at the same California conference grounds where a famous 1975 meeting set safety limits for recombinant DNA research.

This Asilomar conference focused on concerns about the rapid rise of AI, voiced by luminaries ranging from British physicist Stephen Hawking to Elon Musk, the billionaire CEO of SpaceX and Tesla.

Musk called attention to the findings today in a series of tweets that ended up endorsing the idea of building AI tools into devices that interface with the human brain.

The Asilomar principles, published on the Future of Life Institute’s website, don’t address human augmentation directly. Rather, they hew closely to common-sense cautions about the potential pitfalls of artificial intelligence – for example, the need to avoid an arms race in AI-enabled weapons, and the need to build in safety standards that align with human values.

Some principles do touch on thorny issues that have yet to be resolved, relating to data privacy, transparency and built-in biases. AI programs designed to provide guidance to judges on criminal sentences, and deliver personalized job notices, already have generated controversy over potential racial and gender bias.

One of the Asilomar principles addresses that controversy, declaring that “any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.”

Other principles refer to concerns about a Terminator-style takeover by intelligent machines. The first principle says the goal of AI research should be “to create not undirected intelligence, but beneficial intelligence.” And the last two principles sound as if they could inspire science-fiction plots:

“AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

“Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

More than 800 people have lent their names in support of the principles, including Musk and Hawking as well as Google DeepMind’s Demis Hassabis (who helped develop the game-playing AlphaGo AI program) and futurist Ray Kurzweil (who has said machine intelligence could spark a technological singularity in the 2040s).

The signers also include the University of Washington’s Ryan Calo and Microsoft’s Ho John Lee.

The Future of Life Institute, which received a $10 million contribution from Musk in 2015, is just one of several groups focusing on the ethics of AI.

In the waning days of the Obama administration, White House officials issued a series of reports and policy recommendations on AI and its potential social effects. It’s not clear, however, how many of those recommendations will be picked up by President Donald Trump and his aides.
