A Microsoft data center in Amsterdam, where land is being cleared for additional facilities. (Microsoft Photo)

Build 2018 attendees got a peek behind the Azure curtain Wednesday from Azure Chief Technology Officer Mark Russinovich, who also announced that new security technologies from Intel are now available for customers as part of the Azure Confidential Computing program.

Microsoft introduced Azure Confidential Computing last year as a way to assure customers that critical cloud data would be protected at all times by hardware-level technologies that prevent even Microsoft’s own servers from accessing it. Russinovich announced Wednesday that processors running Intel’s SGX technology are now available in the East US region, and a new group of virtual machines running on those processors is also available.

The Intel processors create what’s known as a “trusted execution environment,” a protected area on the chip where data can be processed without being exposed to the rest of the system. It’s a smart concept, and it’s one of the reasons why so many computing professionals were freaked out by the discovery of the side-channel vulnerabilities in Intel chips known as Meltdown and Spectre; those types of attacks could potentially be used to see inside trusted execution environments.

Thanks to the mitigations put in place by Intel and cloud vendors, most cloud customers should be fine, and your data is definitely safer inside a trusted execution environment than outside one. Russinovich provided an example of a potential use case for this technology: health care providers have a lot of patient data that could be used by machine-learning algorithms to unlock new treatments or discover causes of certain diseases, but they are prohibited, by law or internal policy, from sharing that data outside their organizations. Sharing that data through Azure Confidential Computing could satisfy those policies by preventing other organizations from seeing one group’s data.
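The multi-party scenario Russinovich described can be sketched conceptually: several organizations contribute data to a trusted environment that only ever releases an aggregate result, never anyone's raw records. This is a minimal illustration of the pattern, not real SGX or Azure code — the `Enclave` class and its methods are hypothetical stand-ins for what the hardware enclave enforces.

```python
class Enclave:
    """Stand-in for a trusted execution environment: data that enters is
    only ever exposed as an aggregate statistic, never row by row."""

    def __init__(self):
        self._rows = []  # held 'inside' the enclave; no accessor exposes raw rows

    def submit(self, party, values):
        # Each party contributes its own records without seeing anyone else's.
        self._rows.extend((party, v) for v in values)

    def aggregate_mean(self):
        # Only the combined statistic ever leaves the trusted boundary.
        vals = [v for _, v in self._rows]
        return sum(vals) / len(vals)


enclave = Enclave()
enclave.submit("hospital_a", [120, 135, 128])  # e.g. blood-pressure readings
enclave.submit("hospital_b", [140, 122])
print(enclave.aggregate_mean())  # pooled result; neither party saw the other's data
```

In real confidential computing, the isolation is enforced by the processor (and attested cryptographically) rather than by Python object encapsulation, but the data-flow guarantee being sold is the same.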

Russinovich, who you’ll be able to see at our GeekWire Cloud Tech Summit June 27th in Bellevue, covered a lot of other topics during a 75-minute presentation to developers at Build. He walked attendees through some basic characteristics of Microsoft’s data centers, now present in 50 regions around the world.

A look inside a Microsoft data center in Cheyenne, Wyo. (Microsoft Photo)

Microsoft has reached the point where 50 percent of the energy it uses to power these data centers comes from renewable sources like wind and solar, he said. The company hopes to bump that up to 60 percent by the end of the decade, and of course the eventual goal is to source 100 percent of its energy from renewables. Amazon Web Services said last year it was shooting for the 50 percent mark by the end of 2017 but doesn’t appear to have updated its site since then, while Google says it has offset 100 percent of its energy usage through purchases of clean power, which is a different metric.

The company is also working to make its data centers more efficient on the demand side of the energy equation, researching new types of fuel cells that can reduce power consumption and improve overall reliability. “One of the things we realized as we looked at (data center designs) is that utility power is not that reliable,” he said.

Like almost all cloud vendors, Microsoft builds its own servers to run these data centers, and Russinovich shared a little more information on the progress it has made with its server designs.

The company recently developed a new server architecture, dubbed “Beast” internally, built to handle memory-intensive workloads like SAP’s HANA database with a whopping four terabytes of memory available. Most Azure users don’t need that type of performance, but Russinovich observed something interesting about modern data center design: after years of “scale-out” — adding vast quantities of relatively cheap servers to a network to improve performance — Microsoft is finding increasing uses for more traditional “scale-up” systems, where it makes more sense to add more powerful components to individual servers to increase performance.

Russinovich closed his talk by recapping Microsoft’s quantum computing strategy, which, like most quantum computing efforts, is still pretty far out in the future. But the company is “investing a huge amount of money in this, it’s kind of a moonshot project for us,” he said. “This is on the verge of being real and practical.”
