Hundreds of AI researchers, business leaders and just plain geniuses have signed on to a statement of cautionary principles for artificial intelligence, including a requirement to build in the ability for human authorities to audit how an AI platform works.
The 23 Asilomar AI Principles were drawn up this month at the Beneficial AI conference, conducted in the same California locale where a famous meeting to define the limits of biotech was held in 1975.
This Asilomar conference focused on concerns about the rapid rise of AI, voiced by luminaries ranging from British physicist Stephen Hawking to Elon Musk, the billionaire CEO of SpaceX and Tesla.
Musk called attention to the findings today in a series of tweets that ended up endorsing the idea of building AI tools into devices that interface with the human brain:
Top AI researchers agree on principles for developing beneficial AI https://t.co/CATbd4oidF
— Elon Musk (@elonmusk) January 30, 2017
@elonmusk @FLIxrisk If AI is made as an human augmentation, there will never be a risk of “them vs us”.
— Mark Rees-Andersen (@ReesAndersen) January 30, 2017
@ReesAndersen @FLIxrisk Yes, I believe that is critical to ensure a good future for humanity
— Elon Musk (@elonmusk) January 30, 2017
The Asilomar principles, published on the Future of Life Institute’s website, don’t address human augmentation directly. Rather, they hew closely to common-sense cautions about the potential pitfalls of artificial intelligence – for example, the need to avoid an arms race in AI-enabled weapons, and the need to build in safety standards that align with human values.
Some principles do touch on thorny issues that have yet to be resolved, relating to data privacy, transparency and built-in biases. AI programs that advise judges on criminal sentences, or deliver personalized job notices, have already generated controversy over potential racial and gender bias.
One of the Asilomar principles addresses that controversy, declaring that “any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.”
Other principles refer to concerns about a Terminator-style takeover by intelligent machines. The first principle says the goal of AI research should be “to create not undirected intelligence, but beneficial intelligence.” And the last two principles sound as if they could inspire science-fiction plots:
“AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
“Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”
More than 800 people have lent their names in support of the principles, including Musk and Hawking as well as Google DeepMind’s Demis Hassabis (who helped develop the game-playing AlphaGo AI program) and futurist Ray Kurzweil (who has said machine intelligence could spark a technological singularity in the 2040s).
The signers also include the University of Washington’s Ryan Calo and Microsoft’s Ho John Lee.
The Future of Life Institute received a $10 million contribution from Musk in 2015, but it’s only one of several groups that focus on the ethics of AI. Other efforts include:
- The Ethics and Governance of Artificial Intelligence Fund, backed by $27 million from LinkedIn co-founder Reid Hoffman, Omidyar Network, the Knight Foundation, the William and Flora Hewlett Foundation and Raptor Group founder James Pallotta.
- Allen Institute for Artificial Intelligence, whose motto is “AI for the common good.” The Seattle-based institute was founded in 2013 with backing from Microsoft co-founder Paul Allen.
- OpenAI, a nonprofit research company with a mission “to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible.” In 2015, Hoffman joined PayPal co-founders Elon Musk and Peter Thiel, plus other backers, in committing $1 billion to OpenAI.
- AI100, also known as the One Hundred Year Study on Artificial Intelligence. AI100 is a Stanford research project organized by Microsoft executive Eric Horvitz to monitor the development of AI and its effects over the course of the next century. AI100 issued its first report last September.
- The Partnership on AI, a nonprofit organization that brings together Amazon, Microsoft, Facebook, Google DeepMind and IBM to advance public understanding of AI technologies and formulate best practices.
In the waning days of the Obama administration, White House officials issued a series of reports and policy recommendations on AI and its potential social effects. It’s not clear, however, how many of those recommendations will be picked up by President Donald Trump and his aides.