Intelligent Machines

Don’t Despair if Google’s AI Beats the World’s Best Go Player

You’re still special: Google’s Go-playing AI might be capable of subtle tactical insights, but it’s a long way from being truly intelligent.

Mar 8, 2016

We may be about to witness a remarkable demonstration of the advancing capabilities of AI programs. AlphaGo, a program developed by AI researchers at Google, is getting set to take on the world’s most successful Go player, Lee Se-dol.

The contest will take place this week in South Korea, where Go is hugely popular. It will be fascinating because Go is such a complex and subtle game that many experts thought it would take years, if not decades, for computers to compete with the best human players. Successful players learn through years of practice to recognize promising moves, and they often struggle to explain why a particular position looks strong.

And yet earlier this year, a team at Google DeepMind, a subsidiary created when Google acquired a British AI company in 2014, published details of a computer program that beat Fan Hui, the European Go champion and a professional player, five games to none in a match played behind closed doors.

Lee Se-dol (right), a legendary South Korean player of Go, poses with Google DeepMind cofounder Demis Hassabis before the Google DeepMind Challenge Match in Seoul.

Developing AlphaGo involved combining deep neural networks with a search technique called Monte Carlo tree search, so that the program could learn to evaluate positions by studying millions of moves from games between expert human players and could then improve further by playing against itself.
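To make the self-play idea concrete, here is a minimal sketch in Python. It is emphatically not AlphaGo’s method: AlphaGo trained deep policy and value networks on millions of expert moves and combined them with Monte Carlo tree search, whereas this toy swaps in tic-tac-toe and a simple lookup table of position values. What it preserves is the core loop the DeepMind team described, with a program playing against a copy of itself and nudging its evaluations toward the outcomes it observes.

import random

# All eight winning lines on a 3x3 board, indexed 0-8.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}  # board string -> estimated probability that "X" eventually wins

def value(board):
    return values.setdefault("".join(board), 0.5)  # unseen positions start at 50/50

def choose_move(board, player, epsilon=0.1):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < epsilon:        # occasionally explore a random move
        return random.choice(moves)
    def score(m):                        # otherwise play greedily w.r.t. learned values
        board[m] = player
        v = value(board)
        board[m] = " "
        return v if player == "X" else -v
    return max(moves, key=score)

def self_play_game(alpha=0.2):
    board, player, visited = [" "] * 9, "X", []
    while winner(board) is None and " " in board:
        board[choose_move(board, player)] = player
        visited.append("".join(board))
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": 0.0, None: 0.5}[winner(board)]
    for s in visited:                    # nudge every visited state toward the result
        values[s] = values.get(s, 0.5) + alpha * (outcome - values.get(s, 0.5))

for _ in range(20_000):                  # the "practice against itself" loop
    self_play_game()

After a few thousand games, the table’s estimates for early positions drift away from the initial 0.5 guess, which is the tabular analogue of AlphaGo’s networks gradually learning which Go positions are promising.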

If AlphaGo defeats Lee Se-dol, the feat will probably be portrayed as a sad moment for humankind and as another sign that computers could soon encroach on more human turf by mastering other skills we have long considered beyond automation.

That may be true to some extent, but don’t panic just yet. As subtle as it is, Go is still a very narrow area of expertise, and its rules are tightly constrained. What’s more, AlphaGo cannot do anything else (even if the techniques used to build it could be applied to other board games). Some argue that a better way to gauge progress toward general AI is to give computers much broader and more complex challenges, such as passing an elementary-school science exam. And thankfully, that’s the sort of thing AI programs are still pretty terrible at.

(Sources: Nature, “Google’s AI Masters the Game of Go a Decade Earlier Than Expected”)