Microsoft describes the Azure Maia AI Accelerator as the first designed by Microsoft for large language model training and inferencing in the Microsoft Cloud. (Microsoft Image)

Microsoft publicly confirmed its long-rumored plan to introduce its own data center processors, including one optimized for artificial intelligence and refined based on feedback from its primary AI partner, ChatGPT maker OpenAI.

The company says the chips represent “a last puzzle piece” in its Azure infrastructure, giving it a direct role in designing every layer of the cloud platform responsible for an increasingly significant portion of its business.

One of the Microsoft-designed chips, the Azure Maia AI Accelerator, is optimized for AI tasks, including generative artificial intelligence. The other, the Azure Cobalt CPU, is an Arm-based processor intended for general-purpose cloud workloads.

Microsoft says the chips will roll out in its data centers starting early next year. The company plans to use them initially to run its own services, including Microsoft Copilot and Azure OpenAI Service, before expanding to more workloads.

The plan, announced Wednesday at Microsoft’s Ignite conference, comes eight years after Amazon started making its own custom data center silicon through its acquisition of Annapurna Labs. Amazon Web Services has since expanded into specialized AI chips, Trainium and Inferentia, for building and running large models.

A custom-built rack for the Maia 100 AI Accelerator and its cooling “sidekick” system inside a thermal chamber at a Microsoft lab in Redmond. (Microsoft Photo / John Brecher)

Microsoft is also playing catch-up with Google, which has developed multiple generations of its Tensor Processing Unit chips, used by Google Cloud for machine learning workloads, and is reportedly working on its own Arm-based chips.

“At the scale we operate, it’s important for us to optimize and integrate every layer of the infrastructure stack to maximize performance, diversify our supply chain and give customers infrastructure choice,” said Scott Guthrie, executive vice president of Microsoft’s Cloud + AI Group, in a statement released by the company.

The company says it has been developing the chips for years, largely in secret, in a lab on its Redmond campus, although news of its custom silicon plans emerged earlier this year in a report from The Information.

OpenAI says it worked with Microsoft to refine and test the Maia chip. “Azure’s end-to-end AI architecture, now optimized down to the silicon with Maia, paves the way for training more capable models and making those models cheaper for our customers,” said OpenAI CEO Sam Altman in the Microsoft announcement.

In other news from Ignite, Microsoft announced a series of new and updated AI services, including Copilot Studio for building custom copilots and extending Microsoft 365 Copilot; the expansion of Copilot in Microsoft Teams; and Windows AI Studio for developers.
