Artificial intelligence is here, it’s just the beginning, and it’s time to start thinking about how to regulate it.
Those were the takeaways from the Technology Alliance’s AI Policy Matters Summit, a Seattle event that convened experts and government officials for a conversation about artificial intelligence. Many of those experts agreed that the government should start establishing guardrails to defend against malicious or negligent uses of artificial intelligence. But determining what shape those regulations should take is no easy feat.
“It’s not even clear what the difference is between AI and software,” said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, on stage at the event. “Where does something cease to be a software program and become an AI program? Google, is that an AI program? It uses a lot of AI in it. Or is Google software? How about Netflix recommendations? Should we regulate that? These are very tricky topics.”
Regulations written now will also have to be nimble enough to keep up with the evolving technology, according to Heather Redman, co-founder of the venture capital firm Flying Fish Ventures.
“We’ve got a 30-40 year technology arc here and we’re probably in year five, so we can’t do a regulation that is going to fix it today,” she said during the event. “We have to make it better and go to the next level next year and the next level the year after that.”
With those challenges in mind, Etzioni and Redman recommend regulations that are tied to specific use cases of artificial intelligence, rather than broad rules for the technology. Laws should be targeted to areas like AI-enabled weapons and autonomous vehicles, they said.
“My suggestion was to identify particular applications and regulate those using existing regulatory regimes and agencies,” Etzioni said. “That both allows us to move faster and also be more targeted in our application of regulations, using a scalpel rather than a sledgehammer.”
He believes the rules should include a mandatory kill switch on all AI programs and requirements that AI notify users when they are not interacting with a human. Etzioni also stressed the importance of humans taking responsibility for autonomous systems, though it isn’t clear whether the manufacturer or user of the technology will be liable.
“Let’s say my car ran somebody over,” he said. “I shouldn’t be able to say my dog ate my homework. ‘Hey I didn’t do it, it was my AI car. It’s an autonomous vehicle.’ We have to take responsibility for our technology. We have to be liable for it.”
Redman also sees the coming tide of AI regulation as a business opportunity for startups seeking to break into the industry. Her venture capital firm is inundated with startups pitching an “AI and ML first” approach, but Redman said there are two other related fields, or “stacks” as she describes them, that companies should be exploring.
“If you talk to somebody on Wall Street, they don’t care what tech stack they’re running their trading on… they’re looking at new evolutions in law and policy as big opportunities to build new businesses or things that will kill existing businesses,” she said.
“From a startup perspective, if you’re not thinking about the law and policy stack as much as you’re thinking about the tech stack, you’re making a mistake,” Redman added.
But progress toward a regulatory framework has been slow at both the state and federal levels. In the last legislative session, Washington state almost became one of the first to regulate facial recognition, the controversial technology that is pushing the artificial intelligence debate forward. But the bill died in the state House. Lawmakers plan to introduce data privacy and facial recognition bills again next session.
Redman said she’s disappointed Washington state wasn’t a first-mover on AI regulation because the state is home to two of the tech giants consumers trust most with their data: Amazon and Microsoft. Amazon is in the political hot seat along with many of its tech industry peers, but the Seattle tech giant has not been implicated in the types of data privacy scandals plaguing Facebook.
“We are the home of trusted tech,” Redman said, “and we need to lead on the regulatory frameworks for tech.”