Cornell University information scientist Solon Barocas, at right, speaks during a panel discussion on the ethics of artificial intelligence at Seattle University, while Carnegie Mellon University’s David Danks and Google researcher Margaret Mitchell look on. (GeekWire Photo / Alan Boyle)

San Francisco’s board of supervisors took a significant step this week when it voted to ban the use of facial recognition software for law enforcement purposes, but such measures by themselves won’t resolve the ethical issues surrounding surveillance enabled by artificial intelligence.

At least those are the first impressions from a trio of experts focusing on the social implications of AI’s rapid rise.

David Danks, a philosophy professor at Carnegie Mellon University, said he hasn’t had a chance to delve into the details of the municipal legislation, which was endorsed on Tuesday and will come up for a second procedural vote next week. But he said “this is a case where the details are going to matter.”

“ ‘Law enforcement purposes’ in the sense of arresting somebody on the basis of a facial recognition match is this sort of extreme, obvious case,” Danks told GeekWire after participating in a panel discussion on AI ethics at Seattle University on Tuesday night. “But what if it’s monitoring members of a group, where it’s not that I know that this is this individual, but I know that this is a member of a community where I’ve uploaded 20 faces?”

Facial recognition has attracted a lot of criticism from researchers and privacy advocates — in part because authorities have been using the technology to track millions of Muslims in northwest China, in part because of reports of race- and gender-related software failures, and in part simply because the face is such a big part of a person’s public identity.

“There’s actually a whole bunch of other things that have similar properties to facial recognition that are equally pernicious, but don’t generate the same visceral reaction,” Cornell University information scientist Solon Barocas said.

Margaret Mitchell, a senior scientist at Google Research and Machine Intelligence, said examples could include analyzing a person’s walking gait, or checking the cadence of a person’s speech.

“Machine learning is going to discover a whole bunch of new ways,” Barocas said.

The leading companies in AI and machine learning say they’re trying to self-regulate the AI products they put out. “Google’s been working on being more transparent about things like what the values are,” Mitchell said. “One of my favorite ones, because it directly applies to everything that I do, is to avoid reinforcing unfair bias.”

Microsoft, meanwhile, has an internal ethics committee that reviews proposed applications for its AI software — and occasionally turns down what it sees as unethical use cases. Facial recognition is a top-level concern.

But is self-regulation enough? Last December, Microsoft President Brad Smith said the federal government should enact laws governing facial recognition technology, with a focus on issues such as consent, third-party testing, fairness and privacy.

Should the government create a Federal AI Commission?

“I’m not quite sure what that would look like,” Danks said. “It seems to me sort of like having a ‘Federal Internal Combustion Engine Commission.’ What matters is how these technologies are put to use, not the technology in isolation.”

In Danks’ view, regulating AI may not require creating a whole new bureaucracy.

“In many domains, actually, there already are the regulatory powers to do this,” he said. “What is lacking is the technical expertise at the regulatory agencies to understand how to exercise those powers to regulate some new technology or some company. There are going to be things that fall through the gaps, so then the question is, as with any new technological advance, how do we fill in those gaps?”

One big example could be personal medical data. AI tools could well outpace federal privacy protections provided under the Health Insurance Portability and Accountability Act, or HIPAA, Danks said.

“Inferred attributes are not covered by HIPAA, for example,” he said. “We’ve got great protection in certain ways on our medical data. But if I use AI to figure it out instead of an FDA-approved test, it’s no longer considered private data. It’s a ‘guess,’ in quotes. That’s one of those gaps that we need to fill.”

Update for 12:30 p.m. PT May 15: This year the Washington Legislature considered a data privacy bill that would have put limits on the use of facial recognition technology. Here’s how the bill was described in a story we published in January:

“The bill sets new regulations for facial recognition technology, requiring companies that make the software to get consent from consumers before using it on them. Consumers would need to be conspicuously notified when entering websites and physical spaces where facial recognition is in use. Companies that allow developers to use their facial recognition technology would need to make their APIs public so that third parties could test for accuracy and bias.

“The bill also extends facial recognition regulations to the public sector. Government agencies would be prohibited from using the technology in ongoing surveillance of individuals in public spaces without a court order or emergency ‘involving imminent danger or risk of death or serious physical injury to a person,’ according to the bill.”

The bill failed to become law, however. The American Civil Liberties Union and other critics said the legislation contained too many loopholes — and objected to the role that Microsoft and other tech companies played in crafting the bill. The differences couldn’t be resolved in time to meet the deadline for this year’s legislative session, but state Sen. Reuven Carlyle, the Seattle Democrat who sponsored the bill, says supporters will try again next year.

Albers Ethics Week, presented by Seattle University’s Albers School of Business and Economics, continues with “Responsible AI and the Internet of Things,” a talk by Avijit Sinha, senior director of IoT and Intelligent Edge at Microsoft. The event is scheduled for 5:30 p.m. Thursday at Pigott Auditorium.

And speaking of facial recognition, Seattle University’s Digital Arts & Technology Club will be demonstrating “Emotion Bots” from noon to 4 p.m. on Saturday at the Quadstock Music Festival, at the university’s Redhawk Center. Students will be able to see which emotion the software thinks they’re expressing, based on their facial expressions. This is a demo version of a club project that is set to pilot at different locations on campus next year.
