Microsoft’s Azure will be the preferred public-cloud service for running experiments at OpenAI, the $1-billion non-profit initiative from Tesla CEO Elon Musk, Y Combinator President Sam Altman and others to accelerate AI research, Microsoft said in a blog post today. The idea behind OpenAI is to share AI advances equally and publicly, rather than having them accrue to any one company.
“Azure has impressed us by building hardware configurations optimized for deep learning,” Altman said in OpenAI’s blog post. “In the coming months, we will use thousands to tens of thousands of these machines to increase both the number of experiments we run and the size of the models we train.” He said more and faster computers are essential to AI technologies such as reinforcement learning and to a major OpenAI research focus: generative models, a branch of unsupervised machine learning.
Altman specifically cited the value of Azure’s K80 GPUs – graphics processing unit accelerators from NVIDIA’s Tesla line that are claimed to outperform CPUs in high-performance computing applications by a factor of up to 10. And he said OpenAI is pleased that Microsoft envisions bringing NVIDIA’s advanced Pascal GPU architecture to Azure.
The deal gives Microsoft increased access to OpenAI’s robotics work and its experts, potentially helping it become a better provider of AI services to its customers. Microsoft may also collaborate with OpenAI on some of its projects.
“We’re exploring a couple of specific projects” for collaboration, Altman said. “I’m assuming something will happen there.”
Part of the idea behind OpenAI’s inception last year was to counter Musk’s stated fear that AI is the “biggest existential threat” to humanity. It is sponsored by Altman, Musk, PayPal co-founder Peter Thiel, early PayPal board member Reid Hoffman, Y Combinator founding partner Jessica Livingston, and Greg Brockman, OpenAI’s co-founder and CTO.
In a related development, Microsoft and NVIDIA said that starting next month they’ll offer NVIDIA’s GPUs and Microsoft’s Cognitive Toolkit for AI applications in Azure or on premises. “Every company around the world can now tap the power of AI,” Microsoft said in its blog post.
Microsoft CEO Satya Nadella announced in September that the company is forming a new 5,000-person engineering and research team to infuse AI into all of its products and everything it does across hardware and software, and to accelerate the path from research into products.
“Thanks to cloud computing power, more advanced algorithms and the availability of massive amounts of data, the AI field has exploded,” wrote Harry Shum, executive VP for Microsoft’s AI and research group. That progress is “allowing computer scientists to create technology many of us only dreamed about just a few years ago,” he said.
Microsoft also said it’s introducing the Azure Bot Service, a cloud service that lets companies and developers store and run bots in Azure. And, as part of a big news blast before online developer conference Connect starts tomorrow, Microsoft announced the general availability of Azure Functions. That service is roughly equivalent to Amazon Web Services’ Lambda, which runs code automatically in response to events or other specified conditions.
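The event-driven model behind services like Azure Functions and Lambda can be sketched in a few lines of plain Python. Everything here – the `EventRouter` class and its `on`/`dispatch` methods – is hypothetical and for illustration only; it is not the actual Azure Functions or Lambda API.

```python
# Minimal conceptual sketch of "serverless" event-driven execution:
# handlers are registered against event types and run only when a
# matching event fires. Names are hypothetical, not the Azure API.

class EventRouter:
    def __init__(self):
        self._handlers = {}

    def on(self, event_type):
        """Decorator: register a function to run when event_type fires."""
        def register(fn):
            self._handlers.setdefault(event_type, []).append(fn)
            return fn
        return register

    def dispatch(self, event_type, payload):
        """Invoke every handler bound to event_type; return their results."""
        return [fn(payload) for fn in self._handlers.get(event_type, [])]

router = EventRouter()

@router.on("http_request")
def greet(payload):
    return f"Hello, {payload['name']}!"

print(router.dispatch("http_request", {"name": "Azure"}))  # ['Hello, Azure!']
```

In a real serverless platform the routing, scaling, and billing-per-invocation all happen in the provider’s infrastructure; the developer supplies only the handler functions and the trigger configuration.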