The AI100 project is designed to track trends in artificial intelligence over the course of a century. (Image Courtesy of AI100 / Stanford Institute for Human-Centered Artificial Intelligence)

A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications of AI technology — and to the ways in which that technology is being abused.

The report, titled “Gathering Strength, Gathering Storms,” was issued today as part of the One Hundred Year Study on Artificial Intelligence, or AI100, which is envisioned as a century-long effort to track progress in AI and guide its future development.

AI100 was initiated by Eric Horvitz, Microsoft’s chief scientific officer, and hosted by the Stanford University Institute for Human-Centered Artificial Intelligence. The project is funded by a gift from Horvitz, a Stanford alumnus, and his wife, Mary.

The project’s first report, published in 2016, downplayed concerns that AI would lead to a Terminator-style rise of the machines and warned that fear and suspicion about AI would impede efforts to ensure the safety and reliability of AI technologies. At the same time, it acknowledged that the effects of AI and automation could lead to social disruption.

This year’s update, prepared by a standing committee in collaboration with a panel of 17 researchers and experts, says AI’s effects are increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

“In the past five years, AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that’s out in society affecting people’s lives,” Brown University computer scientist Michael Littman, who chaired the report panel, said in a news release.

“That’s really exciting, because this technology is doing some amazing things that we could only dream about five or ten years ago,” Littman added. “But at the same time the field is coming to grips with the societal impact of this technology, and I think the next frontier is thinking about ways we can get the benefits from AI while minimizing the risks.”

Those risks include deep-fake images and videos that are used to spread misinformation or harm people’s reputations; online bots that are used to manipulate public opinion; algorithmic bias that infects AI with all-too-human prejudices; and pattern recognition systems that can invade personal privacy by piecing together data from multiple sources.

The report says computer scientists must work more closely with experts in the social sciences, the legal system and law enforcement to reduce those risks.

One of the benefits of conducting a century-long study is that each report along the way builds on the previous report, said AI100 standing committee chair Peter Stone, who’s a computer scientist at the University of Texas at Austin as well as executive director of Sony AI America.

“The 2021 report is critical to this longitudinal aspect of AI100 in that it links closely with the 2016 report by commenting on what’s changed in the intervening five years,” he said. “It also provides a wonderful template for future study panels to emulate by answering a set of questions that we expect future study panels to re-evaluate at five-year intervals.”

Oren Etzioni, CEO of the Seattle-based Allen Institute for Artificial Intelligence, hailed the AI100 update in an email to GeekWire.

“The report represents a substantial amount of work and insight by top experts both inside and outside of the field,” said Etzioni, who was on the study panel for the 2016 report but played no role in the update. “It eschews sensationalism in favor of measured and scholarly observations. I think the report is correct about the prospect for human-AI collaboration, the need for AI literacy, and the essential role of a strong non-commercial perspective from academia and non-profits.”

Etzioni’s only quibble was over the report’s claim that so far, AI’s economic significance has been “comparatively small — particularly relative to expectations.”

“I do think that the report may understate AI’s economic impact, because AI is often a component technology in products made by Apple, Amazon, Google and other major companies,” he said.
