It’s high time for government officials to get up to speed on the promise and potential pitfalls of artificial intelligence, two U.S. senators leading the charge said today.
“I think we’re entering an age where artificial intelligence is going to provide great benefits,” Sen. Maria Cantwell, D-Wash., said during an AI conference presented in Washington, D.C., as part of The Washington Post’s Transformers program.
Cantwell compared the current state of AI to the early days of the internet and the drone industry, when policymakers weren’t completely sure how those technologies would be used.
Sen. Todd Young, R-Ind., acknowledged that members of Congress aren’t sufficiently equipped to deal with all of the issues raised by AI. “I like a measure of humility from our legislators,” he said.
To remedy that gap, Cantwell and Young are among the sponsors of a bill known as the FUTURE of AI Act. (The title is an acronym standing for “Fundamentally Understanding The Usability and Realistic Evolution of Artificial Intelligence.”)
The bill calls for creating a federal advisory committee on AI, which would include representatives from academia, private industry, civil liberties groups and other stakeholders.
Cantwell said she’d also like to see the establishment of an AI engineering institute, analogous to the federally funded Software Engineering Institute at Carnegie Mellon University, to help government officials sort through the field’s technical complexities.
“That is probably right now where we’re missing a little bit of, if you will, technology oomph,” she said.
Such institutions would help government and industry leaders address worries about threats to personal privacy, automation’s impact on employment and the potential for black-box algorithms to “bake in” all-too-human racial and gender biases.
Some experts say regulation is desperately needed. Last year, for example, Tesla CEO Elon Musk said governments had to be “proactive in regulation rather than reactive,” because the potential threats posed by AI could turn into actual threats so quickly.
Last week, Musk said he regarded AI as “far more dangerous than nukes” — a warning that was brought up several times in the course of today’s panel discussions.
Jack Clark, strategy and communications director for OpenAI, a nonprofit research company, said “government has a clear role to play,” particularly when it comes to applications such as autonomous vehicles. But Young counseled caution.
“Before we overregulate it, we want to make sure that we get a better understanding what sort of policy structures need to be in place so people can meaningfully participate in an economy driven in large measure by AI, so that it’s not biased,” Young said, “and so that hopefully America can lead with respect to this technology, which has the potential to increase our rate of economic growth, I’ve been briefed, by up to doubling it within just over 15 years.”
Other angles on AI:
- Cantwell put in a pitch for Seattle’s AI ventures, ranging from Amazon and Microsoft to the startups fostered by the Allen Institute for Artificial Intelligence. “If you have any kind of AI education, please head to Seattle,” she said.
- Both Cantwell and Young said they’d like to see Facebook CEO Mark Zuckerberg provide testimony about the use of Facebook data by Cambridge Analytica, a political consulting firm that’s come in for heavy criticism. Other lawmakers have already demanded that Zuckerberg appear before Congress.
- Clark and other experts said this week’s case of a fatality linked to an Uber autonomous vehicle illustrated why AI should be regulated. Young pointed to the AV START Act as an effort to bring clarity to the issue.
- Peggy Johnson, Microsoft’s executive vice president for business development, pointed to several AI applications fielded by her company, including Seeing AI and Microsoft’s collaboration with Adaptive Biotechnologies. Health care, financial services and climate monitoring were among the promising frontiers for AI, she said.
- Some experts, including Microsoft co-founder Bill Gates, have suggested enacting a “robot tax” to compensate for the employment disruption caused by automation. But Young said he was against the idea. “I would not start by taxing capital investments, which is what this is,” he said. Cantwell took a different tack: “I would have taken the tax bill and put a big down payment on retraining,” she said.