
One of the key emerging standards in artificial intelligence research, Facebook’s PyTorch 1.0 framework, has received support from the cloud AI services of Amazon Web Services, Microsoft, and Google, as well as several key chip makers, making it easier for AI researchers to create new models.

AI researchers have gravitated toward PyTorch 1.0 since it was first introduced earlier this year, and the preview version is now available for developer experimentation, Facebook plans to announce later on Tuesday. The 1.0 version of the framework was designed to give machine-learning developers more tools to use PyTorch for both AI research and actual production use. It also supports the ONNX framework jointly developed by Facebook, Microsoft, and Amazon.

AWS plans to support PyTorch 1.0 in SageMaker, its service that helps developers create new AI models and reuse existing ones. Google plans to support PyTorch 1.0 in its Google Cloud Deep Learning virtual machine, and it plans to work with Facebook on optimizing its custom TPU chips for PyTorch 1.0. Microsoft had previously announced plans to support the new framework on Azure.

Several key AI chip makers also pledged their support, including Nvidia, Qualcomm, Intel, Arm, and IBM. That support makes it easier for software developed around PyTorch to work on a team’s preferred hardware platform, and allows third-party developers to start working on their own projects.

If you’ve made it this far, you’ve grasped that this is a very complicated, emerging area of technology development. Frameworks allow researchers to spend more time thinking about their machine-learning models and less time reinventing the wheel needed to actually make them work. This is also how software has developed over time: abstracting complication away from the end developer one layer at a time.
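To give a sense of the abstraction the article describes, here is a minimal sketch of what defining and training a model looks like in PyTorch. The layer sizes and data are arbitrary, chosen for illustration; the point is that the framework handles gradient computation automatically, so none of the calculus has to be written by hand.

```python
import torch
import torch.nn as nn

# A small two-layer network, defined declaratively. The dimensions
# (10 inputs, 32 hidden units, 2 outputs) are arbitrary examples.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

x = torch.randn(4, 10)   # a batch of 4 random example inputs
out = model(x)           # forward pass: shape (4, 2)

loss = out.sum()         # a stand-in for a real loss function
loss.backward()          # autograd fills in gradients for every parameter
```

A few lines like these replace what would otherwise be hand-written forward and backward passes, which is the "wheel" researchers no longer have to reinvent.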

Now, with all major cloud and chip vendors on board, PyTorch 1.0 could become the foundation for new types of models to emerge, given that developers are already familiar with the framework’s approach.

“The more software and hardware that is compatible with PyTorch 1.0, the easier it will be for AI and machine learning (ML) developers to quickly build, train, and deploy state-of-the-art deep learning models,” Facebook said in a blog post.

[Editor’s note: This post was updated to remove an incorrect reference to Google’s involvement with the Caffe2 project, and to clarify Amazon’s involvement with ONNX.]
