L-R: Ross Reynolds, executive producer for Community Engagement at KUOW; Shankar Narayan, ACLU of Washington’s Technology and Liberty project director; and Vinay Narayan, vice president of platform strategy and developer relations at HTC, speak about artificial intelligence risks at Seattle Interactive Conference on Friday. (Greg Scruggs Photo)

It’s much worse to let Darth Vader go free than to keep Luke Skywalker incarcerated. That is the philosophical argument underpinning risk-assessment tools such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used by criminal justice systems nationwide to predict recidivism, and it concerns Shankar Narayan, the ACLU of Washington’s Technology and Liberty project director.

Shankar Narayan used the Star Wars reference on Friday at Seattle Interactive Conference. He said the quote, from a scholar who studies such risk-assessment tools, illustrates the perils of allowing artificial intelligence to err on the side of locking someone up.

He described COMPAS as striking a “risk-averse balance” between “false positives and negatives,” one that can have damaging real-world consequences for marginalized communities.

“Given the over-representation of the black community as having been criminal justice involved,” he said, such tools “will result in more of those people deemed to be ‘high risk.’”
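The arithmetic behind that concern can be sketched in a few lines. The following is a hypothetical illustration, not COMPAS’s actual model, with invented error rates: a “risk-averse” tool tuned to miss few reoffenders tolerates a high false-positive rate, so the group that is over-represented in the records collects more “high risk” labels even when the tool applies identical error rates to everyone.

```python
# Hypothetical sketch of the tradeoff described above (not COMPAS's actual
# model); all rates here are invented for illustration.

def flagged_high_risk(n, recorded_rate, tpr=0.90, fpr=0.30):
    """Count 'high risk' labels among n people, given the group's recorded
    reoffense rate and a risk-averse tool's error rates (high true-positive
    rate, high false-positive rate)."""
    reoffenders = n * recorded_rate
    true_pos = reoffenders * tpr           # reoffenders correctly flagged
    false_pos = (n - reoffenders) * fpr    # non-reoffenders wrongly flagged
    return true_pos + false_pos, false_pos

# Two groups of 1,000; over-policing inflates group A's *recorded* rate.
for group, rate in [("A (over-represented in records)", 0.40), ("B", 0.20)]:
    total, wrong = flagged_high_risk(1000, rate)
    print(f"Group {group}: {total:.0f} labeled high risk, {wrong:.0f} wrongly")
```

In this toy example the over-represented group draws 540 “high risk” labels per 1,000 people versus 420 for the other group, before accounting for any additional bias baked into the records themselves.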

Such concerns about artificial intelligence were on full display at a panel titled “The Promise and Threat of A.I.,” where even technology developers excited about AI’s potential are increasingly adopting safeguards.

For example, HTC offers body-tracking technology for its Vive virtual reality platform, but opted to create a closed system in which user data is not shared publicly. That decision balanced business innovation with consumer privacy, said Vinay Narayan, the company’s vice president of platform strategy and developer relations. HTC, the Taiwanese consumer electronics maker known for its smartphones and VR technology, has its North American headquarters in Seattle.

“From a tech perspective, nobody wants to do that because you can’t rapidly iterate if I’m not taking all your data back into a central server,” he said. “We realized we built a really powerful tool and while we put in a lot of technology safeguards, we don’t know how somebody else will use it.”
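As a rough illustration of that design choice, a closed pipeline processes raw tracking data on-device and hands applications only derived results, with no upload to a central server. The names below are invented for the sketch and do not reflect HTC’s actual Vive API.

```python
# Hypothetical sketch of a "closed" body-tracking pipeline: raw sensor frames
# stay on the device, and apps see only derived poses. These names are
# invented and do not reflect HTC's actual Vive API.
from typing import Callable, Dict, Tuple

Pose = Dict[str, Tuple[float, float, float]]  # joint name -> (x, y, z)

def estimate_pose(raw_frame: bytes) -> Pose:
    """Stand-in for on-device pose estimation run against a local model."""
    return {}

def handle_frame(raw_frame: bytes, on_pose: Callable[[Pose], None]) -> None:
    on_pose(estimate_pose(raw_frame))  # the app receives only the derived pose
    # Deliberately no network call: raw frames are never pooled centrally,
    # which is the tradeoff against rapid iteration described above.
```

The cost of this pattern is exactly the one Vinay Narayan named: without centrally pooled data, the vendor cannot rapidly iterate on its models.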

While HTC is cognizant of how its own products could be misused, AI’s growing pervasiveness in everyday software applications means other companies are not always so vigilant.

“Most companies that are using these tools, at the end of the day they’re not AI companies. They are customer care companies that want to improve their response time,” said Vinay Narayan. “You may not even know you are using AI.”

One common response to poorly performing or biased AI is to improve data sets, which Shankar Narayan largely dismisses.

“Technologists and data scientists are boundlessly optimistic about technology, but coming from the perspective that I come from, actually having seen the workings of power and how skewed the data is in the criminal justice system, there seems to be a near impossibility of getting clean data sets to fix your tool,” he said.

Indeed, Shankar Narayan said, the problem with the impulse to fix AI with more data is that it creates incentives for yet more data collection. “Collect, collect, collect, collect — even if it’s far beyond the stated purpose of that technology,” the ACLU official said, describing the common approach.

Such an impulse gives him pause amidst the push for so-called smart cities, which rely heavily on surveillance technology to optimize urban systems like traffic flows and energy usage. He prefers a more balanced approach like the one former City of Boston data scientist Ben Green outlines in his new book “The Smart Enough City.”

“There is a very long history of virtually every surveillance technology disproportionately impacting marginalized and vulnerable communities,” said Shankar Narayan, who serves on Washington state’s drone and body camera task forces. “That is why I’m often boggled by the attitude of, ‘Let’s just throw it out there and see what happens.’ ”

Such principled stances also butt up against the real-world problems that AI-assisted technology is trying to solve. For example, legislation that would have permitted Seattle to use camera enforcement for vehicles that block crosswalks and illegally travel in transit-only lanes died in committee during this year’s legislative session.

For HTC’s Vinay Narayan, such a political decision preserves an inefficient status quo in which a limited number of police officers struggle to enforce traffic laws.

“We’re using human beings for manual interfaces for very large-scale traffic flow,” he said. “This is where computer vision and AI can really help solve problems.”

But for Shankar Narayan, who has watched police and prosecutors request data from Washington’s red light cameras for other crimes, the efficacy of the technology is trumped by broader concerns. “We’re not against the concept of the cameras, but the safeguards weren’t there,” he said, describing “the problem of mission creep: Once the camera is there, you want to use it for more things.”

Editor’s note: This story was updated to reflect Shankar Narayan’s statement.
