

AAAI Conferences

Computer Go presents a challenging problem for machine learning agents. With the number of possible board states estimated to be larger than the number of hydrogen atoms in the universe, learning effective policies or board evaluation functions is extremely difficult. In this paper we describe Cortigo, a system that efficiently and autonomously learns useful generalizations for large state-space classification problems such as Go. Cortigo uses a hierarchical generative model loosely related to the human visual cortex to recognize Go board positions well enough to suggest promising next moves. We begin by briefly describing and providing motivation for research in the computer Go domain. We describe Cortigo's ability to learn predictive models based on large subsets of the Go board and demonstrate how using Cortigo's learned models as additive knowledge in a state-of-the-art computer Go player (Fuego) significantly improves its playing strength.
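The abstract does not spell out how "additive knowledge" enters the search, but a common scheme in Fuego-style engines is to seed new MCTS nodes with virtual visits and wins proportional to a learned prior. The sketch below illustrates that idea only; the class and parameter names (`Node`, `prior_weight`) are illustrative, not Cortigo's or Fuego's actual API.

```python
import math

class Node:
    """MCTS node seeded with prior knowledge as virtual simulations."""

    def __init__(self, prior_prob, prior_weight=10):
        # Initialize statistics as if `prior_weight` simulations had already
        # run, with a win rate equal to the model's prior probability.
        self.visits = prior_weight
        self.wins = prior_weight * prior_prob

    def value(self):
        return self.wins / self.visits

    def uct(self, parent_visits, c=1.4):
        # Standard UCT score; the prior shifts the exploitation term.
        return self.value() + c * math.sqrt(math.log(parent_visits) / self.visits)

# A move the learned model favors starts with a higher estimated value,
# so early simulations are steered toward it:
strong = Node(prior_prob=0.8)
weak = Node(prior_prob=0.2)
assert strong.value() > weak.value()
assert strong.uct(parent_visits=100) > weak.uct(parent_visits=100)
```

As real simulations accumulate, their results outweigh the virtual ones, so a bad prior is gradually overridden by the search itself.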

Mining Expert Play to Guide Monte Carlo Search in the Opening Moves of Go

AAAI Conferences

We propose a method to guide a Monte Carlo search in the initial moves of the game of Go. Our method matches the current state of a Go board against clusters of board configurations that are derived from a large number of games played by experts. The main advantage of this method is that it does not require an exact match of the current board, and hence is effective for a longer sequence of moves compared to traditional opening books. We apply this method to two different open-source Go-playing programs. Our experiments show that this method, by filtering or biasing the choice of the next move toward a small subset of candidate moves, effectively improves play in the initial moves of a game.
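The abstract leaves the matching procedure unspecified; a minimal sketch, assuming boards are encoded as flat vectors (+1 black, -1 white, 0 empty) and cluster centroids were precomputed from expert games (e.g., with k-means), could look like this. All names and the toy 3x3 data are illustrative, not the paper's actual features.

```python
import numpy as np

def nearest_cluster(board_vec, centroids):
    """Return the index of the centroid closest to the current board."""
    dists = np.linalg.norm(centroids - board_vec, axis=1)
    return int(np.argmin(dists))

def candidate_moves(board_vec, centroids, cluster_moves):
    """Bias the search toward moves experts played from the matched cluster."""
    return cluster_moves[nearest_cluster(board_vec, centroids)]

# Toy 3x3 example: two centroids, each associated with its own expert moves.
centroids = np.array([[1.0, 0, 0, 0, 0, 0, 0, 0, 0],
                      [0, 0, 0, 0, -1.0, 0, 0, 0, 0]])
cluster_moves = {0: [(2, 2)], 1: [(0, 0), (2, 0)]}
board = np.array([1.0, 0, 0, 0, 0, 0, 0, 0, 0])  # closest to centroid 0
assert candidate_moves(board, centroids, cluster_moves) == [(2, 2)]
```

Because matching is by distance to a centroid rather than exact lookup, positions a few moves past any book line still map onto a cluster, which is what lets the method outlast a traditional opening book.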

BTT-Go: An Agent for Go that Uses a Transposition Table to Reduce the Simulations and the Supervision in the Monte-Carlo Tree Search

AAAI Conferences

This paper presents BTT-Go: an agent for Go whose architecture is based on the well-known agent Fuego; that is, its search for the best move is based on simulations of games performed by means of Monte-Carlo Tree Search (MCTS). In Fuego, these simulations are guided by supervised heuristics called prior knowledge and the play-out policy. In this context, the goal behind the BTT-Go proposal is to reduce the supervised character of Fuego, granting it more autonomy. To cope with this task, BTT-Go relies on a Transposition Table (TT) whose role is to preserve the history of nodes that have already been explored throughout the game. In this way, the agent proposed here reduces the supervised character of Fuego by replacing, whenever possible, the prior knowledge and the play-out policy with information retrieved from the TT. Several evaluative tournaments involving BTT-Go and Fuego confirm that the former succeeds in attenuating the supervision in Fuego without losing competitiveness, even on 19x19 game boards.
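The core mechanism described above can be sketched as a table keyed by a board hash (Zobrist hashing is typical in Go engines) that is consulted before any supervised heuristic. The structure and names below are illustrative, not BTT-Go's actual code.

```python
class TranspositionTable:
    """Accumulates simulation statistics per (hashed) board position."""

    def __init__(self):
        self.table = {}  # board_hash -> (visits, total_value)

    def store(self, board_hash, value):
        visits, total = self.table.get(board_hash, (0, 0.0))
        self.table[board_hash] = (visits + 1, total + value)

    def lookup(self, board_hash):
        entry = self.table.get(board_hash)
        if entry is None:
            return None
        visits, total = entry
        return total / visits  # mean value over stored simulations

def move_value(board_hash, tt, heuristic):
    """Prefer TT statistics; fall back to the supervised heuristic on a miss."""
    cached = tt.lookup(board_hash)
    return cached if cached is not None else heuristic(board_hash)

tt = TranspositionTable()
tt.store(42, 1.0)
tt.store(42, 0.0)
assert move_value(42, tt, lambda h: 0.9) == 0.5  # TT hit: mean of stored values
assert move_value(7, tt, lambda h: 0.9) == 0.9   # miss: heuristic fallback
```

Because transpositions (different move orders reaching the same position) hash to the same key, statistics gathered anywhere in the game tree can be reused instead of being recomputed or replaced by supervised priors.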

Neural Networks Learning the Concept of Influence in Go

AAAI Conferences

This paper describes an intelligent agent that uses a Multi-Layer Perceptron (MLP) Neural Network (NN) to evaluate a game state in the game of Go based exclusively on an influence analysis. The NN learns the concept of influence, which is specific to the domain of Go. The learned function is used to evaluate board states in order to predict which player will win the match. The results show that, in later stages of the game, the NN can achieve an accuracy of up to 89.3% when predicting the winner. As future work the authors propose several improvements to the NN as well as its integration into intelligent playing agents for the game of Go, such as Fuego and GnuGo.
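The abstract does not give the influence function or the network shape, so the following is only a minimal sketch: each stone radiates influence that decays with Manhattan distance, and the resulting map is fed through a tiny one-hidden-layer MLP whose sigmoid output is read as Black's winning probability. All sizes and names are assumptions.

```python
import numpy as np

def influence_map(board):
    """board: 2-D array, +1 for Black stones, -1 for White, 0 for empty."""
    infl = np.zeros_like(board, dtype=float)
    for (r, c), stone in np.ndenumerate(board):
        if stone != 0:
            for (i, j), _ in np.ndenumerate(board):
                # Influence decays with Manhattan distance from the stone.
                infl[i, j] += stone / (1 + abs(i - r) + abs(j - c))
    return infl

def mlp_predict(x, w1, b1, w2, b2):
    """One tanh hidden layer; sigmoid output interpreted as P(Black wins)."""
    h = np.tanh(x @ w1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))

# Untrained toy example on a 5x5 board with a single Black stone.
rng = np.random.default_rng(0)
board = np.zeros((5, 5))
board[2, 2] = 1
x = influence_map(board).ravel()
w1, b1 = rng.normal(size=(25, 8)) * 0.1, np.zeros(8)
w2, b2 = rng.normal(size=8) * 0.1, 0.0
p = mlp_predict(x, w1, b1, w2, b2)
assert 0.0 < p < 1.0
```

In training, the weights would be fit against game records labeled with the eventual winner, which matches the paper's use of the learned function purely as a winner predictor.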

Teaching Deep Convolutional Neural Networks to Play Go

Artificial Intelligence

Mastering the game of Go has remained a long-standing challenge to the field of AI. Modern computer Go systems rely on processing millions of possible future positions to play well, but intuitively a stronger and more 'human-like' way to play the game would be to rely on pattern recognition abilities rather than brute-force computation. Following this sentiment, we train deep convolutional neural networks to play Go by training them to predict the moves made by expert Go players. To solve this problem we introduce a number of novel techniques, including a method of tying weights in the network to 'hard code' symmetries that are expected to exist in the target function, and demonstrate in an ablation study that they considerably improve performance. Our final networks achieve move prediction accuracies of 41.1% and 44.4% on two different Go datasets, surpassing the previous state of the art on this task by significant margins. Additionally, while previous move prediction programs have not yielded strong Go-playing programs, we show that the networks trained in this work acquired high levels of skill. Our convolutional neural networks can consistently defeat the well-known Go program GNU Go, indicating they are state of the art among programs that do not use Monte Carlo Tree Search. They are also able to win some games against the state-of-the-art Go program Fuego while using a fraction of its play time. This success at playing Go indicates that high-level principles of the game were learned.
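One way to 'hard code' Go's eight-fold board symmetry into a convolutional filter is to project the filter onto the space of filters invariant under the dihedral group D4, by averaging it over all rotations and reflections. This is only an illustration of the idea; the paper's actual weight-tying scheme may differ.

```python
import numpy as np

def symmetrize(filt):
    """Average a square filter over the 8 dihedral symmetries of the board."""
    variants = []
    for k in range(4):
        rot = np.rot90(filt, k)          # the 4 rotations
        variants.append(rot)
        variants.append(np.fliplr(rot))  # each rotation's reflection
    return np.mean(variants, axis=0)

f = np.random.default_rng(1).normal(size=(3, 3))
g = symmetrize(f)
# Group averaging yields a filter invariant under rotation and reflection:
assert np.allclose(g, np.rot90(g))
assert np.allclose(g, np.fliplr(g))
```

Tying weights this way cuts the number of free parameters per filter by up to a factor of eight and guarantees the network responds identically to any of the eight symmetric views of a position.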