Regulations for the proper use of artificial intelligence are almost as inevitable as the rise of AI itself — but how they’ll be crafted and enforced is far from clear.
“This isn’t as simple as just ‘trust,’ ” said Kay Firth-Butterfield, project head for AI and machine learning at the World Economic Forum’s Center for the Fourth Industrial Revolution. “This is more complex, because the technology itself is very fast, changing all the time, and is complex as well.”
Firth-Butterfield and other policy experts weighed in on the challenges of regulating AI, and dropped some hints about the road ahead, today at the Carnegie Mellon University – K&L Gates Conference on Ethics and AI in Pittsburgh.
Lorrie Faith Cranor, a CMU professor who spent a stint as the Federal Trade Commission’s chief technologist, said the most likely scenario for a serious push toward AI regulation would be “that somebody dies — and I think we’re already starting to see that with self-driving cars.”
Concerns about a potential AI apocalypse are also a driver. This week, diplomats from more than 120 nations are meeting in Geneva to discuss the potential risks posed by lethal autonomous weapon systems, known colloquially as “killer robots.”
The meeting’s chairman, Indian ambassador Amandeep Gill, said heading off the use of such unconventional weapons will require unconventional methods.
“No one is going to be allowing international inspectors to peer over their shoulders and look at what they’re coding,” he told the Pittsburgh gathering via an audio link. “No one is, at this point, looking at turning over pages of code, because most of this is happening in the commercial space, so there are proprietary, intellectual property-related issues. There are commercial issues. So we are not even talking about that.”
Instead, Gill said he’s trying to “create a safe space” for a wide variety of stakeholders, including industry executives as well as government officials, to work on maximizing transparency and trust.
One of Gill’s suggested initiatives would monitor what’s happening on the frontiers of AI. Another initiative would be a platform for nations to exchange information about implementing and regulating AI. That could set the stage for more concrete international action, “but we’re not there yet,” he said.
Firth-Butterfield said the best course would be for government and industry to work together on a regulatory regime.
“What we want to think about is a regulator that works with businesses to help create technology which then the regulator can certify,” she said.
She said her own center is working with government officials on an assortment of pilot projects aimed at developing new approaches to governance for technology issues.
Firth-Butterfield pointed out that the United Arab Emirates already has a minister of state for artificial intelligence, and that the British government is funding a Centre for Data Ethics and Innovation. Chinese and French leaders have also talked up government-led AI initiatives.
The International Telecommunication Union’s AI for Good Global Summit and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems afford additional opportunities for stakeholders to work together on policy perspectives.
An acid test for high-tech regulation could begin next month, when the European Union begins enforcing its General Data Protection Regulation, or GDPR. The success or failure of Europe’s new data privacy safeguards could set the tone for AI regulations to come.
Firth-Butterfield warned that we shouldn’t wait too long to think seriously about how to manage the 21st century’s advanced computational tools — and added a 19th-century metaphor for emphasis.
“With this technology moving so fast, we’re very often in what could be called the ‘too late’ zone,” she said. “By the time we’ve legislated, the horse is four or five fields down the lane.”