Werner Vogels, Amazon CTO, unveils Alexa for Business at AWS re:Invent 2017. (GeekWire Photo / Tom Krazit)

Machine-learning services promise to be one of the most competitive areas for cloud computing vendors over the next few years. After introducing SageMaker, a new service that helps Amazon Web Services customers build and train machine-learning models, at its re:Invent 2017 conference in November, AWS shed a little more light Monday on how the service works.

In a blog post, Amazon CTO Werner Vogels explained how Amazon SageMaker was designed to scale with customer data as it arrives, as opposed to training machine-learning models on a fixed pile of data. Lots of companies interested in potential machine-learning applications have data sets that change rapidly based on the time of day or user activity.

“For these customers and many more, the notion of ‘the data’ does not exist,” Vogels wrote. “It’s not static. Data always keeps being accrued.”

An overview of Amazon SageMaker, a hosted service for developing machine-learning models. (AWS Image)

SageMaker was designed to scale with the amount of data generated by AWS customers’ applications. It uses what Vogels called a “streaming computational model,” which caps the amount of memory available to the training algorithm (avoiding the crashes that can occur when an algorithm tries to hold an ever-growing data set in memory) while scaling computing resources to handle the training process. It’s easier to scale computing resources across a massive computing infrastructure like AWS, and streaming algorithms can also take in data from more sources than conventional batch-training algorithms, Vogels wrote.
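The core idea of the streaming model can be illustrated with a minimal sketch (not SageMaker's actual implementation): the trainer holds only the current mini-batch and the model parameters in memory, so processing the 500th batch costs no more memory than processing the first. The stream generator, learning rate, and the toy one-parameter model below are all illustrative assumptions.

```python
import random

def stream_batches(n_batches, batch_size, seed=0):
    """Simulate an unbounded data stream: yields small batches of (x, y)
    pairs drawn from y = 3x + noise. Memory use is bounded by batch_size
    no matter how many batches arrive."""
    rng = random.Random(seed)
    for _ in range(n_batches):
        yield [(x, 3.0 * x + rng.gauss(0, 0.1))
               for x in (rng.uniform(-1, 1) for _ in range(batch_size))]

def train_streaming(batches, lr=0.1):
    """Fit a one-parameter linear model with SGD, one batch at a time.
    Only the current batch and the weight w are ever held in memory,
    so the procedure never 'scales in memory' with the data set."""
    w = 0.0
    for batch in batches:
        # Gradient of mean squared error (w*x - y)^2 with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
        w -= lr * grad
    return w

w = train_streaming(stream_batches(n_batches=500, batch_size=32))
print(round(w, 1))  # converges toward the true slope, 3.0
```

The same loop would work unchanged on a stream ten or a thousand times longer, which is the sense in which Vogels calls streaming algorithms "infinitely scalable."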

“Streaming algorithms are infinitely scalable in the sense that they can consume any amount of data. … In other words, processing the 10th gigabyte and 1000th gigabyte is conceptually the same,” he wrote.

Vogels also discussed how Amazon SageMaker uses containers to spread machine-learning workloads across its computing network, improving the speed at which these models can be trained. This also allows the models to move back and forth between CPUs and GPUs (graphics processing units), depending on which makes the most sense for that particular model.
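In practice, that CPU/GPU flexibility surfaces in the SageMaker Python SDK as a choice of instance type when launching a containerized training job. The sketch below is illustrative, not from the article: the container image URI, IAM role, and S3 path are hypothetical placeholders, and running it requires AWS credentials.

```python
import sagemaker
from sagemaker.estimator import Estimator

# Hypothetical values -- substitute your own container image, role, and data.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    instance_count=1,
    # Swap between a CPU instance ("ml.m5.xlarge") and a GPU instance
    # ("ml.p3.2xlarge") without changing the training container itself.
    instance_type="ml.p3.2xlarge",
    sagemaker_session=sagemaker.Session(),
)

estimator.fit({"train": "s3://my-bucket/training-data/"})
```

Because the training code ships as a container, the same image runs on either hardware class; only the `instance_type` parameter changes.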

It’s tough to make comparisons across cloud vendors, but Google’s Cloud Machine Learning service appears to be structured in a similar fashion. “(The service) has the advantages of a managed service for building custom TensorFlow-based machine-learning models that interact with any type of data, at any scale,” according to the company. Google, generally considered the leader in cloud-based artificial intelligence services (although AWS and Microsoft would certainly argue the point), also introduced Cloud AutoML earlier this year to auto-generate machine-learning models based on the nature of the data set.
