In chess, machines such as Deep Thought [1] are competitive with the very best humans, but they generate millions of positions per move. Their human opponents examine only tens of positions, yet search much deeper than the machine along some lines of play. Clearly, people are far more selective in their choice of positions to examine. The importance of selective search algorithms was first recognized by Shannon. Most work on two-player search algorithms, however, has focused on algorithms that make the same decisions as full-width, fixed-depth minimax, but do so more efficiently.
It is known that bounds on the minimax values of nodes in a game tree can be used to reduce the computational complexity of minimax search for two-player games. We describe a very simple method for estimating bounds on the minimax values of interior nodes of a game tree, and use these bounds to improve minimax search. The new algorithm, called forward estimation, requires no domain knowledge beyond a static node-evaluation function and incurs only a small constant overhead per node expansion. We also propose a variation of forward estimation that provides a tradeoff between computational complexity and decision quality. Our experimental results show that forward estimation outperforms alpha-beta pruning on random game trees and in the game of Othello.
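The bounds referred to here are of the kind maintained by standard alpha-beta pruning, which forward estimation builds on. As a point of reference, here is a minimal alpha-beta sketch; the nested-list tree representation and the toy example are our own illustration, not the authors' code, and forward estimation itself is not reproduced.

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    # Bounds (alpha, beta) prune subtrees whose minimax value
    # provably cannot affect the decision at the root.
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        v = -math.inf
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:               # beta cutoff
                break
        return v
    v = math.inf
    for child in node:
        v = min(v, alphabeta(child, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:                   # alpha cutoff
            break
    return v

# Toy tree: nested lists are interior nodes, numbers are leaf evaluations.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = alphabeta(tree, -math.inf, math.inf, True)
print(value)  # 3: max over the three min-node values 3, 2, 2
```

Once the first subtree establishes alpha = 3, the remaining min nodes are cut off as soon as a leaf at or below 3 is seen, so most of their leaves are never evaluated.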
Search is a topic of fundamental importance to artificial intelligence (AI). The range of search strategies investigated stretches from application-independent methods to application-dependent, knowledge-intensive methods. The former promise general applicability, the latter high performance. An important experimental domain for search algorithms has been the field of game playing. Arguably, this research has been one of the most successful in AI, leading to impressive results in chess (Deep Blue, formerly Deep Thought, playing at Grandmaster strength (Hsu et al. 1990)), checkers (Chinook, the World Man-Machine Champion (Schaeffer et al. 1996)), Othello (Logistello, significantly stronger than all humans (Buro 1994)), and Backgammon (TD-Gammon, playing at World Championship level strength (Tesauro 1995)).
Alpha-beta pruning algorithms have been popular in game-tree search ever since they were discovered. Numerous enhancements have been proposed in the literature, and it is often unclear which of them is best for a given implementation: a particular enhancement may take far too long to tune its hyperparameters, or may turn out to make little difference because of memory limitations. On the other side are the best-first pruning techniques, chiefly variants of the SSS* algorithm, which proved disruptive at the time of its discovery but gradually fell out of favor as too memory-intensive and too slow. Later research shows that the best-first approaches are not fundamentally different from the depth-first enhancements: a best-first search can be viewed as a depth-first search with a particular set of enhancements, and with the growing power of computers SSS* no longer seems as taxing on memory either. Even so, the SSS* algorithm remains difficult to understand, both in why it does what it does and in that it is often termed too complex to fathom, visualize, and grasp on an intellectual level. This article tries to bridge this gap and provides experimental results comparing the two families with the most promising advances.
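The reformulation alluded to above, of best-first search as depth-first search plus enhancements, was made concrete by Plaat et al. in the MT/MTD(f) framework: SSS*-like behavior emerges from repeated null-window alpha-beta calls over a transposition table. The sketch below is illustrative only; it assumes integer-valued leaf evaluations (so the null-window loop terminates) and omits the transposition table that a real implementation needs to avoid re-searching earlier passes.

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    # Plain alpha-beta over a nested-list tree. An MT-style driver would add
    # a transposition table so successive null-window passes reuse work.
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        v = -math.inf
        for c in node:
            v = max(v, alphabeta(c, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:
                break
        return v
    v = math.inf
    for c in node:
        v = min(v, alphabeta(c, alpha, beta, True))
        beta = min(beta, v)
        if alpha >= beta:
            break
    return v

def mtdf(root, first_guess=0):
    # Drive alpha-beta with null windows; the bounds (lower, upper)
    # converge on the true minimax value.
    g, lower, upper = first_guess, -math.inf, math.inf
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(root, beta - 1, beta, True)  # null-window test
        if g < beta:
            upper = g
        else:
            lower = g
    return g

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(mtdf(tree))  # 3
```

Each pass answers only the question "is the value at least beta?", which is why the depth-first code, plus memory, can reproduce a best-first traversal order.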
Alpha-Beta is the most common game tree search algorithm, due to its high performance and straightforward implementation. In practice one must find the best trade-off between heuristic evaluation time and bringing the subset of nodes explored closer to a minimum proof graph. In this paper we present a series of structural properties of minimum proof graphs that help us to prove that finding such graphs is NP-hard for arbitrary DAG inputs, but can be done in linear time for trees. We then introduce the class of fastest-cut-first search heuristics that aim to approximate minimum proof graphs by sorting moves based on approximations of sub-DAG values and sizes. To explore how various aspects of the game tree (such as branching factor and distribution of move values) affect the performance of Alpha-Beta, we introduce the class of "Prefix Value Game Trees", which allows us to label interior nodes with true minimax values on the fly without search. Using these trees we show that by explicitly attempting to approximate a minimum game tree we are able to achieve performance gains over Alpha-Beta with common extensions.
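The core idea of ordering moves to approach a minimal proof graph can be seen in miniature below. The fastest-cut-first heuristics themselves are defined in the paper; the mean-of-leaves estimator here is our own toy stand-in for a cheap value estimate, used only to show that better ordering yields the same value while expanding fewer nodes.

```python
import math

def estimate(node):
    # Toy stand-in for a cheap estimate: mean of the subtree's leaves.
    # A real engine would use a shallow static evaluation instead.
    if isinstance(node, (int, float)):
        return node
    return sum(estimate(c) for c in node) / len(node)

def alphabeta(node, alpha, beta, maximizing, order, stats):
    # Alpha-beta with optional move ordering: visiting the strongest child
    # first makes cutoffs arrive sooner, shrinking the searched tree
    # toward a minimal proof tree.
    stats["nodes"] += 1
    if isinstance(node, (int, float)):
        return node
    children = sorted(node, key=estimate, reverse=maximizing) if order else node
    if maximizing:
        v = -math.inf
        for c in children:
            v = max(v, alphabeta(c, alpha, beta, False, order, stats))
            alpha = max(alpha, v)
            if alpha >= beta:
                break
        return v
    v = math.inf
    for c in children:
        v = min(v, alphabeta(c, alpha, beta, True, order, stats))
        beta = min(beta, v)
        if alpha >= beta:
            break
    return v

tree = [[2, 4, 6], [3, 12, 8], [14, 5, 2]]
plain, ordered = {"nodes": 0}, {"nodes": 0}
v1 = alphabeta(tree, -math.inf, math.inf, True, False, plain)
v2 = alphabeta(tree, -math.inf, math.inf, True, True, ordered)
print(v1, v2, plain["nodes"], ordered["nodes"])
```

On this example both searches return the same minimax value, but the ordered search expands fewer nodes because searching the best subtree first tightens alpha before the weaker siblings are visited.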