Why is Google playing ancient Chinese board games?
Two decades ago, IBM's Deep Blue beat grandmaster Garry Kasparov at chess. At the time it was hailed as a serious advance in computer AI, given the complexity of chess at the grandmaster level. Somehow, if it had beaten me, I doubt the reaction would have been the same.
As I’m writing this, a similar contest is being waged: a computer artificial intelligence backed by Google is taking on champion Go player Lee Sedol. Google’s AI, known as AlphaGo, won the five-game series by taking the first three games, while Sedol won the fourth — but by that stage the contest was effectively over, so he’s just playing for personal pride.
Go, if you’re not familiar with it, is a Chinese board game played on a 19×19 grid, with stones placed on the intersections of the lines. Players take turns placing white or black stones to “capture” their opponent’s territory. It sounds simple, but the mathematics and strategy behind Go are astonishingly complex, and for decades it was theorised that the game was too complex for a computer to fully grasp. AlphaGo has rather rendered that point moot, however, taking on the world’s best and winning. This isn’t just a matter of a computer crunching numbers more effectively than a human being, but of an entire developing branch of machine learning being applied to a deeply abstract board game.
You may be pondering at this point why any of this matters, and why Google would spend an estimated $400 million buying the company behind the software that beat Sedol just to have it play a simple board game.
There’s a multitude of reasons why this kind of AI work is important, most of which revolve around developing systems that offer more than raw computing power — because in all sorts of ways we have that already. The computers that put a man on the moon nearly fifty years ago look astonishingly primitive next to the processing power of even a cheap smartphone today, after all. What’s sought here is a level of intelligence, and an understanding of how intelligence is built — both to understand how human intelligence works and to develop more effective computing solutions.
AlphaGo works not just by analysing millions of potential combinations to determine its next move, but also by effectively playing each move out against itself, assigning a value to each move according to its efficacy and thereby teaching itself which moves are likely to be best. Because it does this continually, it can apply those learned values to each new move, steadily improving its choices rather than playing out the billions of absolute permutations that follow from a single Go move.
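To make that idea concrete, here’s a loose, hypothetical sketch in Python — not AlphaGo’s actual implementation, which uses deep neural networks and Monte Carlo tree search, but the same underlying principle on a much simpler game (Nim: take one or two stones, whoever takes the last stone wins). The program plays thousands of games against itself, scores each position-and-move pair by how often it led to a win, and then chooses moves by those learned values rather than exhaustively searching every line of play:

```python
import random
from collections import defaultdict

# Learned statistics: for each (pile size, move) pair, how often
# choosing that move from that position led to a win for the mover.
value_sum = defaultdict(float)   # cumulative wins per (pile, move)
visit_count = defaultdict(int)   # times each (pile, move) was tried

def self_play(pile, games=20000, seed=0):
    """Play random games against itself and record move outcomes."""
    rng = random.Random(seed)
    for _ in range(games):
        n = pile
        history = []             # (pile, move, player) for this game
        player = 0
        while n > 0:
            move = rng.choice([m for m in (1, 2) if m <= n])
            history.append((n, move, player))
            n -= move
            player = 1 - player
        winner = 1 - player      # the player who took the last stone
        for state, move, mover in history:
            visit_count[(state, move)] += 1
            value_sum[(state, move)] += 1.0 if mover == winner else 0.0

def best_move(n):
    """Pick the legal move with the highest learned average win rate."""
    legal = [m for m in (1, 2) if m <= n]
    return max(legal, key=lambda m: value_sum[(n, m)] / max(visit_count[(n, m)], 1))

self_play(pile=7)
```

After self-play, `best_move` reliably picks the mathematically optimal move (leaving the opponent a multiple of three stones) without ever being told the rule — it simply learned which moves tended to win. AlphaGo applies the same trick at a vastly larger scale, with neural networks standing in for the lookup table.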
The satirical writer Douglas Adams once wrote that computers were very good at being very stupid students. That may have been true when he wrote it, but the research behind applications like AlphaGo shows that they’re very much capable of learning. Most academics working in the field figured we were at least a decade away from an effective AI Go champion, but it’s here now.
This opens up a number of possibilities for the kind of software running AlphaGo and its derivatives: Google’s own applications, which already use the sort of neural-network learning that AlphaGo relies on, healthcare, any kind of technology work that relies on robotics, and plenty more. We’ve already integrated technology into everything from airplanes to traffic lights, but it’s never been quite this smart before — and we’re really only just getting started.