Erez Barak, senior director of product for Microsoft’s AI Division, speaks at the Global Artificial Intelligence Conference in Seattle. (GeekWire Photo / Alan Boyle)

Artificial intelligence can work wonders, but often it works in mysterious ways.

Machine learning is based on the principle that a software program can analyze a huge set of data and fine-tune its algorithms to detect patterns and come up with solutions that humans may miss. That’s how Google DeepMind’s AlphaGo agent learned to play the ancient game of Go well enough to beat expert players (and how its successors went on to master other games).

But if programmers and users can’t figure out how AI algorithms came up with their results, that black-box behavior can be a cause for concern. It may become impossible to judge whether AI agents have picked up unjustified biases or racial profiling from their data sets.

That’s why terms such as transparency, explainability and interpretability are playing an increasing role in the AI ethics debate.

The European Commission includes transparency and traceability among its requirements for AI systems, in line with the “right to explanation” laid out in data-protection laws. The French government already has committed to publishing the code that powers the algorithms it uses. In the United States, the Federal Trade Commission’s Office of Technology Research and Investigation has been charged with providing guidance on algorithmic transparency.

Transparency figures in Microsoft CEO Satya Nadella’s “10 Laws of AI” as well — and Erez Barak, senior director of product for Microsoft’s AI Division, addressed the issue head-on today at the Global Artificial Intelligence Conference in Seattle.

“We believe that transparency is a key,” he said. “How many features did we consider? Did we consider just these five? Or did we consider 5,000 and choose these five?”

Barak noted that a software development kit for explainability and interpretability is built right into Microsoft’s Azure Machine Learning service. “What it does is that it takes the model as an input and starts breaking it down,” he said.

The model explanation can show which features went into a model and how heavily the AI system’s algorithms weighted each one. As a result, customers can better understand why, for instance, they were turned down for a mortgage, passed over for a job opening, or denied parole.
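Barak’s description maps onto widely used feature-attribution techniques. As a rough, self-contained illustration of the idea — not the Azure SDK’s actual API — the sketch below uses scikit-learn’s model-agnostic permutation importance on an invented loan-approval dataset; the feature names and data are hypothetical.

```python
# Illustrative sketch only: a model-agnostic explanation using scikit-learn's
# permutation importance. The Azure ML interpretability SDK exposes richer
# explainers; the dataset and feature names below are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years", "open_accounts"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic "loan approved" label, driven mostly by the first two features
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# "Break the model down": shuffle each feature in turn and measure how much
# the model's accuracy drops -- a large drop means the feature carried weight.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

A ranking like this is the kind of evidence that lets a lender answer “how many features did we consider, and which ones mattered?”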

AI developers can also use the model explanations to make their algorithms more “human.” For instance, it may be preferable to go with an algorithm that doesn’t fit a training set of data quite as well, but is more likely to promote fairness and avoid gender or racial bias.
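To make that trade-off concrete, here is a hypothetical sketch using Fairlearn, Microsoft’s open-source fairness toolkit (not something Barak demonstrated): it scores two synthetic models on accuracy and on the gap in approval rates between demographic groups — the kind of comparison a developer might use to justify picking the fairer, slightly less accurate model.

```python
# Hypothetical comparison of two candidate models: the more accurate one
# shows a larger demographic-parity gap, so a developer might prefer the
# fairer alternative. All data here is synthetic, for illustration only.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)    # a sensitive attribute, e.g. gender
y_true = rng.integers(0, 2, size=n)   # ground-truth outcomes

# Model A: fits the data better, but its approvals skew toward group 1
pred_a = np.where(rng.random(n) < 0.92, y_true, 1 - y_true)
pred_a[(group == 1) & (rng.random(n) < 0.15)] = 1

# Model B: slightly noisier predictions, but treats the groups more evenly
pred_b = np.where(rng.random(n) < 0.85, y_true, 1 - y_true)

for name, pred in [("Model A", pred_a), ("Model B", pred_b)]:
    acc = accuracy_score(y_true, pred)
    gap = demographic_parity_difference(y_true, pred, sensitive_features=group)
    print(f"{name}: accuracy={acc:.2f}, approval-rate gap={gap:.2f}")
```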

As AI applications become more pervasive, calls for transparency — perhaps enforced through government regulation — could well become stronger. And that runs the risk of exposing trade secrets hidden within a company’s intricately formulated algorithms, said Elvira Castillo, a partner at Seattle’s Perkins Coie law firm who specializes in trade regulations.

“Algorithms tend to be things that are closely guarded. … That’s not something that you necessarily want to be transparent with the public or with your competitors about, so there is that fundamental tension,” Castillo said. “That’s more at issue in Europe than in the U.S., [since Europe] has much, much, much stronger and [more] aggressive enforcement.”

Microsoft has already taken a strong stance on responsible AI — to the point that the company has turned down prospective customers who sought to use AI applications such as facial recognition in ethically problematic ways.

After his talk, Barak told GeekWire that Azure Machine Learning’s explainability feature could be used as an open-source tool to look inside the black box and verify that an AI algorithm doesn’t perpetuate all-too-human injustices.

Over time, will the software industry or other stakeholders develop a set of standards or a “seal of approval” for AI algorithms?

“We’ve seen that in things like security. Those are the kinds of thresholds that have been set. I’m pretty sure we’re heading in that direction as well,” Barak said. “The idea is to give everyone the visibility and capability to do that, and those standards will develop, absolutely.”
