Google’s DeepMind artificial intelligence unit has been working on systems that play the ancient board game Go. (Wikimedia Commons Photo)

DeepMind, the artificial intelligence research organization owned by Google, announced some stunning results Wednesday from research into the next generation of its AlphaGo system: the machines are getting smarter.

AlphaGo Zero, the new version of the AlphaGo system that defeated the world’s best Go players in competitions over the past few years, taught itself to play the ancient board game as well as its predecessors in a matter of days, with no input other than the basic rules of the game, DeepMind said in a blog post Wednesday.

Previous versions of AlphaGo built to compete against human masters of the game required extensive training on records of human gameplay, but AlphaGo Zero taught itself to play entirely through a technique called reinforcement learning.

Reinforcement learning trains a system to discover for itself which sequences of actions lead to the greatest reward, unlike supervised learning, in which the system is shown the desired outcomes and trained repeatedly to recognize the factors that produce them. DeepMind set up a neural network that played games of Go against itself until it learned to formulate winning strategies for a game in which capturing as many stones as possible can be satisfying in the early stages but can lead to big problems as the game plays out.
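
For a rough sense of how learning from self-play works, here is a minimal sketch in Python. It swaps in tic-tac-toe and a simple lookup table of values in place of Go, AlphaGo Zero’s deep neural network and its Monte Carlo tree search; the game code and hyperparameters are illustrative assumptions, not anything from DeepMind. The agent plays against itself, records the positions and moves it chose, and nudges their estimated values toward the final result of each game.

```python
# Toy sketch of self-play reinforcement learning: tabular value learning
# on tic-tac-toe. Illustrative only; not DeepMind's method or code.
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)      # Q[(state, move)] -> estimated value for the mover
ALPHA, EPSILON = 0.3, 0.1   # learning rate and exploration rate (assumed values)

def choose(board, moves):
    """Epsilon-greedy move selection from the current value estimates."""
    if random.random() < EPSILON:
        return random.choice(moves)
    state = ''.join(board)
    return max(moves, key=lambda m: Q[(state, m)])

def self_play_game():
    """Play one game against itself, then update values toward the outcome."""
    board, history, player = [' '] * 9, [], 'X'
    while True:
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            result = None           # draw
            break
        move = choose(board, moves)
        history.append((''.join(board), move, player))
        board[move] = player
        result = winner(board)
        if result:
            break
        player = 'O' if player == 'X' else 'X'
    # Monte Carlo update: nudge every visited (state, move) toward the result,
    # scored from the perspective of the player who made that move.
    for state, move, mover in history:
        reward = 0.0 if result is None else (1.0 if mover == result else -1.0)
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

if __name__ == "__main__":
    for _ in range(50_000):
        self_play_game()
    print("learned value estimates for", len(Q), "state-action pairs")
```

In the real system, the lookup table is replaced by a deep neural network that evaluates positions and proposes moves, and Monte Carlo tree search is used during self-play to improve on the network’s raw suggestions, according to the paper.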

It needed only three days of self-play to beat the celebrated 2016 version that defeated Go world champion Lee Sedol, winning 100 straight games against it, and 21 days to catch up to the version that beat Ke Jie earlier this year. After 40 days, AlphaGo Zero had surpassed every previous version, DeepMind said.

Decades from now, we may or may not draw a direct link between this accomplishment and the nascent Singularity, but it’s an important breakthrough for Google’s artificial intelligence ambitions. AI is dominated by a handful of huge tech companies because even after you’ve built these complex neural networks, you need enormous amounts of data to train them.

But if those systems can quickly teach themselves about the problems they’re attempting to solve, it could be much easier to roll artificial intelligence technology into cloud services that wouldn’t need nearly as much time soaking up customer data to make an impact. And that could be a huge boon for Google’s cloud efforts, which trail market leaders Amazon Web Services and Microsoft by a fair margin.

The latest AlphaGo advances are the subject of a paper published in the journal Nature, titled “Mastering the Game of Go Without Human Knowledge,” plus a Nature commentary.
