
Doug Burger discusses Field Programmable Gate Arrays at Microsoft Ignite 2016 (GeekWire Photo / Kevin Lisota)

When Microsoft shuffled the deck of cards that makes up its Azure cloud computing division a few months ago, it appears to have created a new hardware group that drew one of the company’s most prominent computer scientists on board.

Doug Burger, a longtime Microsoft Research engineer, tweeted this week that he’s joining “Azure’s new hardware division” as a technical fellow. Burger is known for his work putting FPGA (field-programmable gate array) chips to use in Microsoft’s data centers, helping the company tackle emerging cloud workloads that need a specialized approach. It’s similar to the work being done by Amazon Web Services’ Annapurna division, which AWS vice president of infrastructure Peter DeSantis discussed last month at our GeekWire Cloud Tech Summit.

Most recently, Burger has been leading Microsoft’s Project Brainwave initiative, which gives Azure developers a way to use FPGAs to accelerate machine-learning tasks in their applications. This would appear to be the first time he’s moved into an applied technology role, after a career in research at both Microsoft and the University of Texas.

So what is Microsoft developing in this new division? Company representatives declined to comment beyond Burger’s tweet, which feels like one of those things that came out before everyone had gotten sign-off on the blog post.

Microsoft has been developing its own hardware for Azure for several years, beyond the custom silicon work that Burger has been leading. It’s a platinum member of the Open Compute Project, and Azure CTO Mark Russinovich shared the latest details about Microsoft’s home-grown infrastructure during a packed session at Microsoft Build last May.

For the first several years of the cloud computing era, cloud customers were happy to run their workloads on familiar general-purpose processors from Intel, just as they would have done if they were managing their own infrastructure. But those processors weren’t designed around the needs of evolving workloads like artificial intelligence research, which benefit from the kind of specialized parallel processing power that companies like Nvidia have been delivering to cloud providers over the last few years.

In some cases, it’s cheaper for cloud vendors to design those chips in-house than to buy them off the shelf, and so they have increasingly looked to their own engineers to design some of these AI chips. In addition to the Annapurna work at AWS and Project Brainwave, Google has also developed its own chips for AI research known as TPUs (tensor processing units).

Perhaps it’s just as simple as Microsoft deciding to formalize a lot of this chip and hardware work under the new organization led by Jason Zander, but it clearly got Burger’s attention. If you have more information on what Azure’s new hardware group is up to, please let us know.


