
Solving Go, a Game More Complex than Chess

By Matthew Braga

With a decades-old algorithm once used in the design of atomic bombs, researchers are tackling the computational complexity of the board game Go's rules by teaching their programs to guess, and guess smart.

It is one of the few games that computer intelligence hasn't quite been able to master – a game with so many possible moves that it has long left programmers stumped. But Go – a deceptively simple board game, thousands of years old – is beginning to look solvable. With a decades-old algorithm once used in the design of atomic bombs, researchers are tackling the computational complexity of Go's rules by teaching their programs to guess. And they're finally finding success!

If you've never played Go, the two-player Chinese strategy game is a battle for territory, with white and black stones placed near one another on intersecting lines. Typical large boards are 19x19 grids, and each player tries to win by capturing the other's pieces and enclosing the most space, or territory, on the board.

Photo credit: Flickr user lng0004 via Creative Commons.

Like chess, checkers, and similar two-player board games, Go is a so-called "deterministic perfect information game": nothing is hidden from either player, and there is no element of chance. But unlike those games, the play space of Go is significantly larger – and thus so is the number of possible moves.

Think of it this way: in chess, the number of legal moves a player can make each turn is, on average, 35. In Go, that number is a whopping 250. This is often referred to as a game's branching factor. "If we look at chess programs, what chess programs do is a large brute-force search. And the branching factor of the game is such that you can search deep enough that you can make fairly strategic play," explains Nathan Sturtevant, an assistant professor in computer science at the University of Denver.

IBM's chess-playing computer Deep Blue was famously able to calculate potential scenarios in this way, multiple moves ahead. But Go's much larger branching factor means it's just not feasible to search as deep – there are so many possibilities that, at present, no program can anticipate every game that could follow.
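To get a rough feel for that gap, here's a back-of-the-envelope calculation using the average branching factors quoted above (and ignoring real-world refinements like pruning and repeated positions) that shows how quickly a full-width search blows up:

```python
# Average number of legal moves per turn, as quoted above
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def positions(branching_factor, depth):
    # Number of move sequences a full-width search must consider
    # when looking `depth` plies (half-moves) ahead
    return branching_factor ** depth

chess_six_ply = positions(CHESS_BRANCHING, 6)  # roughly 1.8 billion
go_six_ply = positions(GO_BRANCHING, 6)        # roughly 244 trillion
```

Just six plies ahead, Go's search space is already about five orders of magnitude larger than chess's – and the gap widens exponentially with every additional move.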

Image credit: XKCD (http://xkcd.com/1263)

For example, the decision to place a piece "one space to the left or the right could mean the difference between winning or losing the game – but it's 100 moves from now," explains Sturtevant. "It's very difficult to write an evaluation function that says 'I like this state more than this state' when you're thinking about something that's going to [happen] so far down the road – that's so deep – that you can never actually see it."

But rather than try to anticipate all possible games, researchers are taking a different tack: anticipating just the games with the greatest potential to win.

This is thanks to an algorithm called Monte Carlo Tree Search (MCTS), which builds on Monte Carlo methods first developed during work on the atomic and hydrogen bombs. The algorithm combines the precision of a tree search – what chess programs do – with random sampling. Rather than analyzing every possible move until a win is achieved, the program is probabilistic about which branches of the tree it decides to go down. It's more like a statistically driven, educated guess.

"I've gone down this tree before, and it gave me a very bad outcome, so I'm not going to try it very much," says Sturtevant, emulating the way a Monte Carlo Tree Search might think. "But maybe every one thousand samples I might come back here just to make sure I didn't make a mistake."

This introduces the possibility that a key response or move might be missed, but programs try to mitigate this using heuristics from past games, known strategies, and so-called domain knowledge of how Go is played. Essentially, it's intensive statistical sampling of fewer possibilities – and in return, a better chance of finding moves that win.
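The loop Sturtevant describes – favor promising branches, sample a random playout, occasionally circle back to double-check neglected ones – can be sketched in miniature. What follows is a simplified, hypothetical illustration, not code from any real Go program: it plays a toy take-away game instead of Go to keep the state small, and uses the standard UCB1 formula that MCTS implementations commonly use to balance revisiting good branches against re-checking bad ones:

```python
import math
import random

# Toy stand-in for Go: a pile of stones, each player removes 1-3 per
# turn, and whoever takes the last stone wins. (Perfect play: always
# leave your opponent a multiple of 4.)
TAKE = (1, 2, 3)

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones        # stones left after `move` was played
        self.parent = parent
        self.move = move
        self.children = []
        self.wins = 0.0             # from the view of the player who just moved
        self.visits = 0
        self.untried = [m for m in TAKE if m <= stones]

    def ucb1(self, c=1.4):
        # Exploitation (win rate) plus an exploration bonus that grows
        # for branches that haven't been sampled in a while
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(stones, iterations):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down the tree, favoring high-UCB1 children
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: try one move not yet sampled from this state
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: play the rest of the game completely at random
        remaining, turns = node.stones, 0
        while remaining > 0:
            remaining -= random.choice([m for m in TAKE if m <= remaining])
            turns += 1
        # Even `turns` means the player who just moved at `node` took
        # the last stone (or already had, if the game was over)
        result = 1.0 if turns % 2 == 0 else 0.0
        # 4. Backpropagation: credit the playout up the tree, flipping
        # the result at each level since the players alternate
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    # Recommend the most-visited move from the root
    return max(root.children, key=lambda c: c.visits).move

random.seed(1)
best_move = mcts(5, 3000)  # with 5 stones left, taking 1 leaves a losing 4
```

Real Go programs layer domain knowledge on top of this skeleton – biasing both the selection step and the playouts toward moves that look plausible to a strong player – but the four-step loop is the same.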

None of this is unique to Go. Accountants, financial planners, researchers, scientists, and people in myriad other industries have been leveraging Monte Carlo algorithms for decades. Monte Carlo is often used in managing risk in decision-making scenarios and estimating financial results. It's said to have helped model nuclear reactions during development of the atomic bomb.
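All of these uses share one idea: approximate a hard-to-compute quantity by averaging many random samples. The textbook toy example – not tied to Go or finance – is estimating π by throwing random points at a square:

```python
import random

def estimate_pi(samples):
    # The fraction of random points in the unit square that land inside
    # the quarter circle of radius 1 approximates pi / 4
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

random.seed(0)
pi_estimate = estimate_pi(100_000)  # close to 3.14159, with some sampling error
```

As with MCTS playouts, more samples buy a tighter estimate – the error shrinks roughly with the square root of the sample count.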

In each case, statistical error isn't just expected, but preferred. Compared with brute-force techniques, it's a tradeoff responsible for making Go-playing programs smarter and faster than they once were. A recent article in Wired, for example, chronicled the UEC Cup – an annual computer Go competition that gives finalists the chance to test their programs against a top Go player. In this year's competition, a program called Crazy Stone beat world-renowned player Ishida Yoshio for the first time.

Photo credit: Takashi Osato/WIRED

There was a catch: it still needed a handicap to do it. Most Go programs "don't play strongly enough on the full board to be able to win without a large handicap," says Sturtevant. "And for many many years, even with huge handicaps they were never competitive, because the strategic play was just so poor."

However, he points to a program called Fuego that has already beaten a top-ranked human player at even strength on a smaller 9x9 board. And with Go programs on the larger board reaching the brink of competitive play too, the conversation is beginning to shift – no longer a question of "if" we'll master the game of Go computationally, but when.

You have the people who handle your paycheck – and the people who built bombs – to thank for that.