Harry Shum is Microsoft’s executive vice president for AI and research. (GeekWire Photo)

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns is the potential for AI agents to pick up all-too-human biases from the data on which they're trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by their human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

In addition to pre-release audits, Microsoft is addressing ethical concerns about AI by improving its facial recognition tools and by adding altered versions of photos to its training databases to represent people with a wider variety of skin colors, other physical traits and lighting conditions.
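
The photo alterations Shum described are a form of data augmentation: generating modified copies of existing training images so a model sees a wider range of conditions. Below is a minimal sketch of the idea in Python using the Pillow imaging library, varying only brightness to simulate different lighting; the file names and brightness factors are illustrative assumptions, not details of Microsoft's actual pipeline.

```python
# A minimal data-augmentation sketch using Pillow. File names and
# brightness factors are hypothetical, for illustration only.
from PIL import Image, ImageEnhance

def augment_lighting(path, factors=(0.5, 0.75, 1.25, 1.5)):
    """Return copies of a photo under different simulated lighting
    conditions by scaling overall brightness."""
    base = Image.open(path).convert("RGB")
    return [ImageEnhance.Brightness(base).enhance(f) for f in factors]

# Expand one photo into several lighting variants before adding
# them to a training set.
for i, variant in enumerate(augment_lighting("face_0001.jpg")):
    variant.save(f"face_0001_light_{i}.jpg")
```

Real pipelines would vary more than brightness, but the principle is the same: one source image becomes several training examples covering conditions the original collection under-represents.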

Shum and other Microsoft executives have discussed the ethics of AI numerous times before today.

Microsoft has been delving into the societal issues raised by AI with other tech industry leaders such as Apple, Amazon, Google and Facebook through a nonprofit group called the Partnership on AI. But during his EmTech Digital talk, Shum acknowledged that governments will have to play a role as well.

The nonprofit AI Now Institute, for example, has called for increased oversight of AI technologies on a sector-by-sector basis, with special emphasis on applications such as facial recognition and affect recognition.

Some researchers have called for creating a “SWAT team” of government AI experts who can assist other watchdog agencies with technical issues — perhaps modeled after the National Transportation Safety Board.

Others argue that entire classes of AI applications should be outlawed. In an open letter circulated by the Future of Life Institute and an op-ed published by The BMJ, a British medical journal, experts called on the medical community and the tech community to support efforts to ban fully autonomous lethal weapons. The issue is the subject of a U.N. meeting in Geneva this week.
