Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.
AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.
Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”
Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.
Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.
In addition to pre-release audits, Microsoft is addressing AI’s ethical concerns by improving its facial recognition tools and by adding altered versions of photos to its training databases so that they show people with a wider variety of skin colors, other physical traits and lighting conditions.
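The kind of data augmentation described above can be illustrated with a minimal sketch. This is not Microsoft's actual pipeline; it is a hypothetical example assuming a grayscale image represented as a grid of 0–255 pixel values, showing how one image can be turned into several variants with different lighting. Real systems use image libraries and far richer transforms (pose, skin tone, occlusion, and more).

```python
def adjust_brightness(image, factor):
    """Scale every pixel by `factor`, clamping to the 0-255 range."""
    return [
        [min(255, max(0, int(round(pixel * factor)))) for pixel in row]
        for row in image
    ]

def augment_lighting(image, factors=(0.6, 1.0, 1.4)):
    """Produce darker, original, and brighter variants of one image,
    so a model sees the same face under several lighting conditions."""
    return [adjust_brightness(image, f) for f in factors]

# A tiny 2x2 "image" standing in for a real photo.
sample = [[100, 200], [50, 255]]
variants = augment_lighting(sample)
```

Each training photo would then contribute three examples instead of one, which is one simple way to reduce a model's sensitivity to lighting.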
Shum and other Microsoft executives have discussed the ethics of AI numerous times before today:
- Back in 2016, Microsoft CEO Satya Nadella laid out a list of proposed principles and goals for AI research and development, including the need to guard against algorithmic bias and ensure that humans are accountable for computer-generated actions.
- In a book titled “The Future Computed,” Shum and Microsoft President Brad Smith called for developing an ethical code for AI, supported by industry guidelines as well as government oversight. They wrote that “a Hippocratic Oath for coders … could make sense.”
- Shum and Smith are among the leaders of an internal Microsoft group called AI and Ethics in Engineering and Research, or Aether. Last year, Microsoft Research’s Eric Horvitz said “significant sales have been cut off” due to the Aether group’s recommendations. In some cases, he said, specific limitations have been written into product usage agreements — for example, a ban on facial-recognition applications.
- Shum told GeekWire almost a year ago that he hoped the Aether group would develop “AI shipping criteria” — exactly the kind of pre-release checklist that he mentioned today.
Microsoft has been delving into the societal issues raised by AI with other tech industry leaders such as Apple, Amazon, Google and Facebook through a nonprofit group called the Partnership on AI. But during his EmTech Digital talk, Shum acknowledged that governments will have to play a role as well.
The nonprofit AI Now Institute, for example, has called for increased oversight of AI technologies on a sector-by-sector basis, with special emphasis on applications such as facial recognition and affect recognition.
Some researchers have called for creating a “SWAT team” of government AI experts who can assist other watchdog agencies with technical issues — perhaps modeled after the National Transportation Safety Board.
Others argue that entire classes of AI applications should be outlawed. In an open letter circulated by the Future of Life Institute and an op-ed published by The BMJ, a British medical journal, experts called on the medical community and the tech community to support efforts to ban fully autonomous lethal weapons. The issue is the subject of a U.N. meeting in Geneva this week.