Intelligent machines won’t be ruling the world anytime soon – but what happens when they turn you down for a loan, crash your car or discriminate against you because of your race or gender?
On one level, the answer is simple: “It depends,” says Bryant Walker Smith, a law professor at the University of South Carolina who specializes in the issues raised by autonomous vehicles.
But that opens the door to a far more complex legal debate. “It seems to me that ‘My Robot Did It’ is not an excuse,” says Oren Etzioni, CEO of the Seattle-based Allen Institute for Artificial Intelligence, or AI2.
The rapidly rising challenges that face America’s legal system and policymakers were the focus of today’s first-ever White House public workshop on artificial intelligence, presented at the University of Washington School of Law. For a full afternoon, Smith, Etzioni and other experts debated the options in an auditorium that was filled to capacity.
White House deputy chief technology officer Ed Felten said today’s discussion will feed into an interagency policymaking process that includes a public report, due to be published later this year, and a request for information directed to the public.
“Stay tuned for more on the RFI,” Felten said.
One message came through loud and clear: Artificial general intelligence is nowhere near advanced enough to justify the worries voiced by luminaries like Stephen Hawking, Elon Musk and Bill Gates. “You have to wonder where they’re coming from,” Etzioni said.
“I do think there are some very real concerns about AI that we ought to be addressing right now,” he said. “The existential threat is not one of them. In fact, I think it’s a distraction from some very real concerns. One is about jobs. And the one about privacy is a very real one.”
On the jobs front, some estimates suggest that half of the global population, or more, could be unemployed by 2050 due to rapid automation. On the privacy front, ever more data points are being collected on individuals via the Internet and processed by increasingly intelligent agents.
Sometimes that processing produces controversial results: For example, in one experiment, researchers using the AdFisher auditing tool found that Google’s ad system was more likely to show ads for high-paying jobs to men than to women, based on simulated user profiles. And just this week, ProPublica reported that blacks were judged more harshly than whites by a software program that’s widely used for assessing the recidivism risk for criminal defendants.
Is it the fault of the program, or the programmers? As Smith said, it depends.
Kate Crawford, principal researcher at Microsoft Research in New York, said the humans behind the machines have to devote more attention to ethics and accountability, even with today’s narrowly focused AI tools.
“These are systems that work to some degree like black boxes,” Crawford said. “When these systems start to become infrastructure is when I think we have to have the highest level of due process and fairness.”
Crawford said one recommendation would be to conduct external audits of AI programs, to make sure they’re producing desirable policy outcomes. Yale law professor Jack Balkin said another recommendation would be for policymakers to include computer scientists in their decision-making process.
“I second that,” Crawford said, noting that there’s already a group of computer scientists wrestling with such issues, known as Fairness, Accountability and Transparency in Machine Learning.
The legal questions are likely to go mainstream when autonomous and semi-autonomous vehicles hit the road en masse. Volvo has pledged that it will accept full liability for collisions involving its autonomous vehicles, while Tesla Motors is more circumspect.
When it comes to deciding liability questions, the legal system will have to weigh the validity of opposing arguments involving manufacturers and users, just as it does when there’s no automation involved, Smith said. UW computer science professor Pedro Domingos said humans will eventually get used to taking intelligent machines into account.
“Things that people are originally uncomfortable with, we’ll quickly start to take for granted,” he said.
Domingos said that’s likely to be the case for a wide range of applications, even when intelligent machines are toting weapons.
“If I ever find myself in a war zone, and there’s a drone in front of me, deciding whether or not to shoot me, I couldn’t care less whether the decision is being made by a human or a machine,” he said. “I want the correct decision to be made, so that I live. I’d rather have a machine making that decision. I think it’s the same thing with surgery: Would I rather have a robot surgeon with a success level of 99 percent, or a human surgeon with a success level of 98.5 percent? There’s no doubt: I prefer the robot.”
Eventually, society may accept having AI programs keep watch on each other to make sure they don’t go awry, Etzioni said. But if something does go wrong, could one AI sue another AI?
Etzioni smiled at the suggestion. “Sure seems better than people suing each other,” he said.
The next workshop in the White House AI series will concentrate on artificial intelligence for social good, on June 7 in Washington, D.C. The third workshop will focus on AI safety and control, on June 28 in Pittsburgh. The final workshop will explore the social and economic implications of AI, on July 7 in New York City.