Google’s artificial intelligence can beat expert Go players, a feat researchers didn’t expect for another decade.
In just about any science fiction movie, a scientist saying “They’re learning faster than expected” is cause for alarm. And while we shouldn’t break out EMPs and ion weapons, that’s exactly what the artificial intelligence community is saying about Google’s AI. It seems this computer system doesn’t just play Go, it’s also good enough to surpass expert human players. Considering most programmers believed this wouldn’t be possible for another ten years, that’s kind of a big deal.
The AI in question is AlphaGo, a computer program that just beat European Go champion Fan Hui at his own game. “Go is the most complex and beautiful game ever devised by humans,” Google’s Demis Hassabis said, adding that AlphaGo’s victory “achieved one of the long-standing grand challenges of AI.”
Okay, so AlphaGo won a board game. What’s the big deal? Don’t computers win at chess all the time?
The truth is Go is a far more complex game than chess could ever be. Go games typically last 150 moves, with an average of 250 choices per move. Complicating matters is the fact that victory depends on recognizing subtle patterns in the arranged pieces, which is far harder for computers than it sounds. Even human Go champions can’t always verbalize how to take advantage of certain patterns, so AI experts thought we needed at least another decade before computers could do the same.
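Those numbers add up to an almost unimaginably large game tree. As a rough back-of-the-envelope comparison (the chess figures of roughly 35 choices per move over roughly 80 moves are commonly cited estimates, not from the article itself):

```python
import math

# Rough game-tree size: (average choices per move) ^ (moves per game),
# expressed as a power of ten so the numbers stay readable.
go_tree = 150 * math.log10(250)    # Go: ~250 choices, ~150 moves
chess_tree = 80 * math.log10(35)   # chess: ~35 choices, ~80 moves (common estimate)

print(f"Go game tree:    ~10^{go_tree:.0f}")     # ~10^360
print(f"Chess game tree: ~10^{chess_tree:.0f}")  # ~10^124
```

In other words, Go’s search space isn’t just bigger than chess’s; it’s bigger by a couple of hundred orders of magnitude, which is why brute force was never going to cut it.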
What makes AlphaGo so special compared to other AI? It’s apparently the use of two deep-learning networks – one to predict moves and another to predict outcomes. These are then combined under traditional AI algorithms to produce results which a computer could quickly grasp. “The game of Go has an enormous search space, which is intractable to brute-force search,” Google researcher David Silver explains. “The key to AlphaGo is to reduce that search space to something more manageable. This approach makes AlphaGo much more humanlike than previous approaches.”
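In spirit, the two networks shrink the search in two directions: the move-prediction (policy) network narrows how many branches are worth exploring, while the outcome-prediction (value) network lets the search stop early instead of playing every game to the end. Here is a toy sketch of that idea. This is not Google’s code — the real system uses deep neural networks trained on expert games, and the `policy`, `value`, and `score_guess` functions below are hypothetical stand-ins; for brevity the search also ignores the opponent’s replies:

```python
# Toy sketch: prune search breadth with a "policy" and cap search depth
# with a "value" estimate, the two roles the article describes.

def score_guess(position, move):
    """Hypothetical placeholder heuristic for ranking moves."""
    return move

def policy(position, moves):
    """Stand-in policy network: keep only the few most promising moves."""
    return sorted(moves, key=lambda m: -score_guess(position, m))[:3]

def value(position):
    """Stand-in value network: cheap estimate of how good a position is."""
    return sum(position)

def evaluate(position, moves, depth):
    """Depth-limited search: expand only policy-favored moves,
    and score leaf positions with value() instead of playing to the end."""
    if depth == 0 or not moves:
        return value(position)
    return max(
        evaluate(position + [m], [x for x in moves if x != m], depth - 1)
        for m in policy(position, moves)  # breadth cut by the policy
    )

def best_move(position, moves, depth=2):
    """Pick the policy-favored move whose resulting position looks best."""
    return max(
        policy(position, moves),
        key=lambda m: evaluate(position + [m], [x for x in moves if x != m], depth - 1),
    )
```

Even in this toy version, the payoff is clear: instead of examining every legal move at every step, the search only ever looks a few branches wide and a few moves deep, which is exactly the “reduce that search space to something more manageable” idea Silver describes.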
AlphaGo will face Lee Sedol, one of the best Go players in the entire world, in March to see how these algorithms hold up. Either way, it’s an impressive achievement – one Google wishes to apply to humanitarian goals. “Ultimately we want to apply these techniques to important real-world problems,” Hassabis said. “Because the methods we used were general purpose, our hope is that one day they could be extended to help address some of society’s most pressing problems, from medical diagnostics to climate modeling.”
I have to admit, solving climate change would be a pretty gracious way for the AI that crushed all our Go champions to behave. Here’s hoping learning faster than expected proves to be a good thing.
Source: Nature, via Technology Review