Microsoft President Brad Smith speaks Thursday morning at Planet Word, a language arts museum in Washington, D.C. (Image via webcast)

How can humanity reap AI’s rewards and minimize its risks?

Microsoft offered its own answer to that question in the form of a “blueprint” for governing artificial intelligence, outlined by Brad Smith, Microsoft’s president and vice chair, at an event in Washington, D.C., this morning.

Here are the five key points, as detailed in a blog post by Smith and an accompanying white paper.

  • Implement and build upon new government-led AI safety frameworks based on the U.S. NIST AI Risk Management Framework.
  • Require effective “safety brakes” for AI systems that control critical infrastructure and ensure human oversight and accountability.
  • Develop a broad legal and regulatory framework based on the technology architecture for AI, including new regulations and licensing for powerful AI foundation models and obligations for AI infrastructure operators.
  • Promote transparency and ensure academic and nonprofit access to AI resources, including an annual AI transparency report and expanded computing resources for research.
  • Pursue new public-private partnerships to use AI as an effective tool to address societal challenges such as protecting democracy and rights, providing inclusive growth, and advancing sustainability.

This is part of a broader effort by major AI players to advocate for regulations and legislation governing the use of artificial intelligence. OpenAI CEO Sam Altman, a key Microsoft partner, won praise for his proactive approach last week at a U.S. Senate Judiciary subcommittee hearing on artificial intelligence.

In his remarks this morning, Smith agreed with Altman’s call for a new federal agency to govern the use of what Smith called “a certain class of powerful [AI] models” that would need to be defined by the agency.

“We do need new law in this space,” Smith said. “We would benefit from a new agency here in the United States.”

He explained, “We should have licensing in place so that before such a model is deployed, the agency is informed of the testing, there are requirements it needs to meet for safety protocols. There needs to be measurement, and ultimately, like so much else in life … AI will require a license.”

However, this approach also creates natural barriers that could insulate established players from future competition. As Smith noted in his post, Microsoft has nearly 350 employees specializing in governance systems for new technologies. Few others could match its resources to navigate an AI regulatory and licensing program.

The Washington Post puts the move in the context of Smith's recent work and broader career in this story:

Smith is drawing on years of preparation for the moment. He has discussed AI ethics with leaders ranging from the Biden administration to the Vatican, where Pope Francis warned Smith to “keep your humanity.” He consulted recently with Senate Majority Leader Charles E. Schumer, who has been developing a framework to regulate artificial intelligence. Smith shared Microsoft’s AI regulatory proposals with the New York Democrat, who has “pushed him to think harder in some areas,” he said in an interview with The Washington Post.

Smith also called for new measures to protect against deepfakes and other forms of deception enabled by AI, as detailed in this Reuters report.

The announcement comes as Microsoft moves aggressively to incorporate AI into its products, most recently with announcements this week including a new Windows Copilot for its flagship PC operating system.

