For the past week or so, a mystery player has been logging into online Go game servers and beating the world’s best. Today, the player’s identity was revealed at last.
It was none other than AlphaGo, the artificial-intelligence program that triumphed over Go master Lee Sedol last March in a widely publicized $1 million showdown.
Google DeepMind’s co-founder and CEO, Demis Hassabis, let the world in on the secret today in a tweeted statement:
“We’ve been hard at work improving AlphaGo, and over the past few days we’ve played some unofficial online games at fast time controls with our new prototype version, to check that it’s working as well as we hoped. We thank everyone who played our accounts Magister(P) and Master(P) on the Tygem and FoxGo servers, and everyone who enjoyed watching the games too! We’re excited by the results and also by what we and the Go community can learn from some of the innovative and successful moves played by the new version of AlphaGo.”
Hassabis quoted Go grandmaster Gu Li as saying that, “Together, humans and AI will soon uncover the deeper mysteries of Go.”
According to Nature’s report about the online matches, Gu Li was so impressed with AlphaGo’s play that he offered a reward of $14,400 (100,000 Chinese yuan) to any human who could beat it.
No one did. The AI agent reportedly won all but one of the more than 50 fast-move games. That one game ended inconclusively, perhaps because of a glitch in the network connection, Nature reported.
Hassabis said that the unofficial testing is now complete, and that AlphaGo would play some official, full-length games later this year in collaboration with Go experts and organizations in the “spirit of mutual enlightenment.”
“We hope to make further announcements soon!” he wrote.
Mastering Go completely is considered a worthy challenge for deep-learning systems like AlphaGo because the game is so complex.
It may seem simple, involving the mere placement of black and white stones to lock up territory on a 19-by-19 playing board. But Go provides for about 10¹⁷⁰ legal playing positions, compared with a mere 10⁵⁰ positions for chess.
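To get a feel for where a number like 10¹⁷⁰ comes from, here's a rough back-of-the-envelope sketch (an illustration, not AlphaGo's own math): each of the board's 361 points can be empty, black, or white, giving 3³⁶¹ raw configurations. Only a fraction of those are legal positions, but the order of magnitude is comparable.

```python
# Rough illustration of Go's enormous state space.
# A 19x19 board has 361 points, each of which can be
# empty, hold a black stone, or hold a white stone.
raw_configs = 3 ** 361

# Count the decimal digits to see the order of magnitude.
digits = len(str(raw_configs))
print(digits)  # 173 -> roughly 10^172 raw configurations
```

Filtering out illegal positions (stones with no liberties) trims this to roughly 10¹⁷⁰, which is still astronomically larger than chess's estimated 10⁵⁰.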
The broader point of the AlphaGo exercise is to help DeepMind's researchers fine-tune programs that could eventually be applied to autonomous vehicles, energy management systems, health diagnostics and other AI applications.
By the way, Go isn’t the only game that DeepMind has its AI eyes on: In their year-end review of 2016’s accomplishments, Hassabis and his fellow co-founders noted that they’ve been working with video-game masters at Blizzard to develop AI-friendly training environments for StarCraft II.