Towards Understanding Chinese Checkers with Heuristics, Monte Carlo Tree Search, and Deep Reinforcement Learning

arXiv.org Machine Learning

The game of Chinese Checkers is a challenging traditional board game of perfect information that differs from other traditional games in two main aspects: first, unlike Chess, all checkers remain in the game indefinitely, so the branching factor of the search tree does not decrease as the game progresses; second, unlike Go, there is no upper bound on the depth of the search tree, since repetitions and backward movements are allowed. Therefore, even in a restricted game instance, the state space of the game can still be unbounded, making it challenging for a computer program to excel. In this work, we present an approach that effectively combines heuristics, Monte Carlo tree search, and deep reinforcement learning to build a Chinese Checkers agent without using any human game-play data. Experimental results show that our agent is competent under different scenarios and reaches the level of experienced human players.
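The abstract does not spell out the algorithm, but as a rough illustration of the general family it builds on, the sketch below shows a PUCT-style selection rule of the kind typically used when Monte Carlo tree search is guided by a heuristic or learned prior. All names (Node, select_child, c_puct) are hypothetical and not taken from the paper.

```python
import math

# Illustrative sketch only: generic PUCT-style child selection for MCTS.
# Attribute names are placeholders, not the paper's actual implementation.
class Node:
    def __init__(self, prior=1.0):
        self.visits = 0       # N(s, a): times this child was selected
        self.value_sum = 0.0  # sum of values backed up through this child
        self.prior = prior    # heuristic / policy prior P(s, a)
        self.children = {}    # move -> Node

    def q(self):
        # Mean backed-up value; 0 if the child has never been visited.
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """Pick the child maximizing exploitation (Q) plus an exploration
    bonus weighted by the prior and the parent's visit count."""
    total = sum(child.visits for child in node.children.values())
    def score(child):
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))
```

In an AlphaZero-style loop, the prior would come from a policy network and the backed-up values from a value network or rollout, but the exact combination used for Chinese Checkers here is described in the paper itself.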


Standing on the shoulders of giants

#artificialintelligence

When you think of AI or machine learning, you may conjure images of AlphaZero, or perhaps a science-fiction reference such as HAL 9000 from 2001: A Space Odyssey. However, the true forefather who set the stage for all of this was the great Arthur Samuel. Samuel was a computer scientist, visionary, and pioneer who wrote the first checkers program for the IBM 701 in the early 1950s. His program, "Samuel's Checkers Program", was first shown to the general public on television on February 24, 1956, and the impact was so powerful that IBM stock went up 15 points overnight (a huge jump at the time). The program also helped set the stage for the modern chess programs we have come to know so well, with features like look-ahead, an evaluation function, and a minimax search that he would later refine with alpha-beta pruning.
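For readers unfamiliar with those terms, here is a minimal sketch of depth-limited minimax search with alpha-beta pruning over a heuristic evaluation function, the basic idea behind such programs. The helpers legal_moves, apply, and evaluate are hypothetical placeholders, not Samuel's actual code.

```python
# Illustrative sketch only: depth-limited minimax with alpha-beta pruning.
# `legal_moves`, `apply`, and `evaluate` stand in for a game's move
# generator, state transition, and heuristic position evaluation.
def alphabeta(state, depth, alpha, beta, maximizing):
    if depth == 0 or state.is_terminal():
        return evaluate(state)  # heuristic score of the leaf position
    if maximizing:
        best = float("-inf")
        for move in legal_moves(state):
            best = max(best, alphabeta(apply(state, move), depth - 1,
                                       alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # beta cutoff: opponent avoids this line
                break
        return best
    else:
        best = float("inf")
        for move in legal_moves(state):
            best = min(best, alphabeta(apply(state, move), depth - 1,
                                       alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:   # alpha cutoff
                break
        return best
```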


The Hanabi Challenge: A New Frontier for AI Research

arXiv.org Machine Learning

From the early days of computing, games have been important testbeds for studying how well machines can do sophisticated decision making. In recent years, machine learning has made dramatic advances with artificial agents reaching superhuman performance in challenge domains like Go, Atari, and some variants of poker. As with their predecessors of chess, checkers, and backgammon, these game domains have driven research by providing sophisticated yet well-defined challenges for artificial intelligence practitioners. We continue this tradition by proposing the game of Hanabi as a new challenge domain with novel problems that arise from its combination of purely cooperative gameplay and imperfect information in a two to five player setting. In particular, we argue that Hanabi elevates reasoning about the beliefs and intentions of other agents to the foreground. We believe developing novel techniques capable of imbuing artificial agents with such theory of mind will not only be crucial for their success in Hanabi, but also in broader collaborative efforts, and especially those with human partners. To facilitate future research, we introduce the open-source Hanabi Learning Environment, propose an experimental framework for the research community to evaluate algorithmic advances, and assess the performance of current state-of-the-art techniques.


In the Age of Google DeepMind, Do the Young Go Prodigies of Asia Have a Future? - The New Yorker

#artificialintelligence

Choong-am Dojang is far from a typical Korean school. Its best pupils will never study history or math, nor will they receive traditional high-school diplomas. The academy, which operates above a bowling alley on a narrow street in northwestern Seoul, teaches only one subject: the game of Go, known in Korean as baduk and in Chinese as wei qi. Each day, Choong-am's students arrive at nine in the morning, find places at desks in a fluorescent-lit room, and play, study, memorize, and review games, with breaks for cafeteria meals or an occasional soccer match, until nine at night. Choong-am, which is the product of a merger of four top Go academies, is currently the biggest of a handful of dojangs in South Korea.


AI has beaten us at Go. So what next for humanity?

#artificialintelligence

In the next few days, humanity's ego is likely to take another hit when the world champion of the ancient Chinese game Go is beaten by a computer. Currently Lee Sedol, the Roger Federer of Go, has lost the first two games to Google's AlphaGo program in their best-of-five match. If AlphaGo wins just one more of the remaining three games, humanity will again be vanquished. Back in 1979, the newly crowned world champion of backgammon, Luigi Villa, lost to the BKG 9.8 program seven games to one in a challenge match in Monte Carlo. In 1994, the Chinook program was declared "Man-Machine World Champion" at checkers in a match against the legendary world champion Marion Tinsley, after six drawn games.