Games have benchmarked AI methods since the inception of the field, with classic board games such … The set of AI problems associated with video games has in recent decades expanded from simply playing games to win, to playing games in particular styles, generating game content, modeling players, etc. Different games pose very different challenges for AI systems, and several different AI challenges can typically be posed by the same game. In this article we analyze the popular collectible card game Hearthstone (Blizzard 2014) and describe a varied set of interesting AI challenges posed by this game. Collectible card games are relatively understudied in the AI community, despite their popularity and the interesting challenges they pose. Analyzing a single game in depth in the manner we do here allows us to see the entire field of AI and Games through the lens of a single game, discovering a few new variations on existing research topics.

· Deckbuilding · Gameplaying · Player Modeling

1 Introduction

For decades classic board games such as Chess, Checkers, and Go have dominated the landscape of AI and games research. Often called the "drosophila of AI" in reference to the drosophila fly's significance in biological research, Chess in particular has been the subject of hundreds of academic papers and decades of research. At the core of many of these approaches is designing algorithms to beat top human players. However, despite IBM's Deep Blue defeating Garry Kasparov in the 1997 World Chess Championships and DeepMind's AlphaGo defeating Lee Sedol in the 2016 Google DeepMind Challenge Match, such programs have yet … While there is value in designing algorithms to win (e.g. …
Monte-Carlo Tree Search (MCTS) has proved a remarkably effective decision mechanism in many different game domains, including computer Go and general game playing (GGP). However, in GGP, where many disparate games are played, certain types of games have proved particularly problematic for MCTS. One such problem is game trees with so-called optimistic moves, that is, bad moves that superficially look good but potentially require much simulation effort to prove otherwise. Such scenarios can be difficult to identify in real time and can lead to suboptimal or even harmful decisions. In this paper we investigate a selection strategy for MCTS to alleviate this problem. The strategy, called sufficiency threshold, better concentrates simulation effort on resolving potential optimistic-move scenarios. The improved strategy is evaluated empirically in an n-arm-bandit test domain to highlight its properties, as well as in a state-of-the-art GGP agent to demonstrate its effectiveness in practice. The new strategy shows significant improvements in both domains.
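As a minimal illustration, the sufficiency-threshold idea can be grafted onto UCB1-style selection: once some child's mean value reaches a threshold, the exploration bonus is dropped so that simulations concentrate on confirming or refuting the seemingly winning move. The sketch below is our own, with assumed conventions (children as (mean value, visit count) pairs, values in [0, 1], a fixed threshold of 0.9); it is not the agent's actual code:

```python
import math

def uct_select(children, total_visits, c=1.4, sufficiency=0.9):
    """Pick a child index from `children`, a list of
    (mean value, visit count) pairs with values in [0, 1].

    Sufficiency threshold: once the best mean value reaches the
    threshold, drop the exploration bonus so simulations pile onto
    the seemingly winning move until it is verified or refuted."""
    best_q = max(q for q, _ in children)
    c_eff = 0.0 if best_q >= sufficiency else c

    def score(q, n):
        if n == 0:
            return float('inf')  # always try unvisited children first
        return q + c_eff * math.sqrt(math.log(total_visits) / n)

    return max(range(len(children)), key=lambda i: score(*children[i]))
```

With the bonus suppressed, an optimistic sibling with a slightly lower mean no longer siphons off simulations, which is the concentration effect the abstract describes.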
This book presents a methodology and philosophy of empirical science based on large scale lossless data compression. In this view a theory is scientific if it can be used to build a data compression program, and it is valuable if it can compress a standard benchmark database to a small size, taking into account the length of the compressor itself. This methodology therefore includes an Occam principle as well as a solution to the problem of demarcation. Because of the fundamental difficulty of lossless compression, this type of research must be empirical in nature: compression can only be achieved by discovering and characterizing empirical regularities in the data. Because of this, the philosophy provides a way to reformulate fields such as computer vision and computational linguistics as empirical sciences: the former by attempting to compress databases of natural images, the latter by attempting to compress large text databases. The book argues that the rigor and objectivity of the compression principle should set the stage for systematic progress in these fields. The argument is especially strong in the context of computer vision, which is plagued by chronic problems of evaluation. The book also considers the field of machine learning. Here the traditional approach requires that the models proposed to solve learning problems be extremely simple, in order to avoid overfitting. However, the world may contain intrinsically complex phenomena, which would require complex models to understand. The compression philosophy can justify complex models because of the large quantity of data being modeled (if the target database is 100 Gb, it is easy to justify a 10 Mb model). The complex models and abstractions learned on the basis of the raw data (images, language, etc.) can then be reused to solve any specific learning problem, such as face recognition or machine translation.
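The selection principle amounts to a two-part codelength: the size of the compressor program itself plus the size of the benchmark data it produces. A toy sketch of that score, using bz2 as a stand-in for a theory-derived compressor and taking the compressor's own size as a supplied number (both are our assumptions, not the book's apparatus):

```python
import bz2

def description_length(data: bytes, compressor_size: int) -> int:
    """Two-part codelength of a 'theory': the length of the
    compressor program plus the length of the benchmark data
    compressed by it. bz2 stands in for the real compressor."""
    return compressor_size + len(bz2.compress(data))
```

A theory is preferred when it yields a smaller total; as the abstract notes, a 10 Mb model is easily justified against a 100 Gb database, because the first term is dwarfed by the savings it buys in the second.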
In this paper we introduce a novel method for automatically tuning the search parameters of a chess program using genetic algorithms. Our results show that a large set of parameter values can be learned automatically, such that the resulting performance is comparable with that of manually tuned parameters of top tournament-playing chess programs.
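A toy version of such parameter tuning fits in a few lines: a population of parameter vectors evolved by selection, crossover, and mutation against a fitness function. Here the fitness function is a stand-in for the match score an engine would obtain when playing with those parameters; all names and settings below are illustrative assumptions, not the paper's setup:

```python
import random

def evolve(fitness, dim, pop_size=20, generations=50, mut_sigma=0.1, seed=0):
    """Minimal genetic algorithm: keep the fitter half, refill with
    uniform crossover plus Gaussian mutation. `fitness` stands in
    for the match score a chess engine would obtain when searching
    with the given parameter vector."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(x if rng.random() < 0.5 else y)
                             + rng.gauss(0.0, mut_sigma)
                             for x, y in zip(a, b)])
        pop = elite + children
    return max(pop, key=fitness)
```

Because the elite half survives unchanged, the best vector found so far is never lost, which keeps even this crude scheme monotonically improving.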
We describe a preliminary investigation into learning a Chess player's style from game records. The method is based on attempting to learn features of a player's individual evaluation function using the method of temporal differences, with the aid of a conventional Chess engine architecture. Some encouraging results were obtained in learning the styles of two recent Chess world champions, and we report on our attempt to use the learnt styles to discriminate between the players from game records by trying to detect who was playing white and who was playing black. We also discuss some limitations of our approach and propose possible directions for future research. The method we have presented may also be applicable to other strategic games, and may even be generalisable to other domains where sequences of agents' actions are recorded.
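For a linear evaluation function V(s) = w · φ(s), one temporal-difference step nudges the weights toward explaining the observed move sequence; this is a generic TD(0) update under that linearity assumption, not the paper's exact procedure:

```python
def td_update(weights, features_t, features_t1, reward, alpha=0.01, gamma=1.0):
    """One TD(0) step on a linear evaluation V(s) = w . phi(s):
    move the weights toward the bootstrapped target so the
    evaluation better predicts the next position's value."""
    v_t = sum(w * f for w, f in zip(weights, features_t))
    v_t1 = sum(w * f for w, f in zip(weights, features_t1))
    delta = reward + gamma * v_t1 - v_t  # temporal-difference error
    return [w + alpha * delta * f for w, f in zip(weights, features_t)]
```

Run over a player's recorded games, repeated updates of this kind shape the feature weights toward whatever that player's moves implicitly value, which is the sense in which a "style" is learnt.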
In this article we review standard null-move pruning and introduce our extended version of it, which we call verified null-move pruning. In verified null-move pruning, whenever the shallow null-move search indicates a fail-high, instead of cutting off the search from the current node, the search is continued with reduced depth. Our experiments with verified null-move pruning show that on average, it constructs a smaller search tree with greater tactical strength in comparison to standard null-move pruning. Moreover, unlike standard null-move pruning, which fails badly in zugzwang positions, verified null-move pruning manages to detect most zugzwangs and in such cases conducts a re-search to obtain the correct result. In addition, verified null-move pruning is very easy to implement, and any standard null-move pruning program can use verified null-move pruning by modifying only a few lines of code.
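The difference between the two schemes can be sketched in a negamax framework: on a null-move fail-high, the standard variant cuts off immediately, while the verified variant continues the search at reduced depth. This is a simplified sketch against a hypothetical position interface (evaluate(), is_terminal(), moves(), make(), pass_turn()), omitting the zugzwang re-search details:

```python
R = 3  # null-move depth reduction

def negamax(pos, depth, alpha, beta, can_null=True, verified=True):
    """Negamax with (verified) null-move pruning. `pos` is assumed
    to expose evaluate(), is_terminal(), moves(), make(move), and
    pass_turn() -- a hypothetical interface, not any engine's API."""
    if depth <= 0 or pos.is_terminal():
        return pos.evaluate()
    if can_null and depth > R:
        # Shallow null-move search: give the opponent a free move.
        null_score = -negamax(pos.pass_turn(), depth - R - 1,
                              -beta, -beta + 1,
                              can_null=False, verified=verified)
        if null_score >= beta:
            if not verified:
                return null_score   # standard null-move: cut off here
            depth -= R              # verified: keep searching, shallower
    best = float('-inf')
    for m in pos.moves():
        best = max(best, -negamax(pos.make(m), depth - 1, -beta, -alpha,
                                  can_null=True, verified=verified))
        alpha = max(alpha, best)
        if alpha >= beta:           # ordinary alpha-beta cutoff
            break
    return best
```

The one-line difference at the fail-high point is exactly why the article notes that a standard null-move program needs only a few modified lines to adopt verification.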
Although game-tree search works well in perfect-information games, there are problems in trying to use it for imperfect-information games such as bridge. The lack of knowledge about the opponents' possible moves gives the game tree a very large branching factor, making the tree so immense that game-tree searching is infeasible. In this paper, we describe our approach for overcoming this problem. We develop a model of imperfect-information games, and describe how to represent information about the game using a modified version of a task network that is extended to represent multi-agency and uncertainty. We present a game-playing procedure that uses this approach to generate game trees in which the set of alternative choices is determined not by the set of possible actions, but by the set of available tactical and strategic schemes.
Search is a topic of fundamental importance to artificial intelligence (AI). The range of search strategies investigated stretches from application-independent methods to application-dependent, knowledge-intensive methods. The former hold the promise of general applicability, the latter of high performance. An important experimental domain for search algorithms has been the field of game playing. Arguably, this research has been one of the most successful in AI, leading to impressive results in chess (Deep Blue, formerly Deep Thought, playing at Grandmaster strength (Hsu et al. 1990)), checkers (Chinook, the World Man-Machine Champion (Schaeffer et al. 1996)), Othello (Logistello, significantly stronger than all humans (Buro 1994)), and Backgammon (TD-Gammon, playing at World Championship level strength (Tesauro 1995)).
The best chess machines are competitive with the best humans, but generate millions of positions per move. Their human opponents, however, only examine tens of positions, but search much deeper along some lines of play. Obviously, people are more selective in their choice of positions to examine. The importance of selective search was first recognized by Shannon (1950). Most work on game-tree search has focussed on algorithms that make the same decisions as full-width, fixed-depth minimax. This includes alpha-beta pruning (Knuth & Moore 1975), fixed and dynamic node ordering (Slagle & Dixon 1969), SSS* (Stockman 1979), Scout (Pearl 1984), aspiration windows (Kaindl, Shams, & Horacek 1991), etc.