Best-First Minimax Search: First Results

AAAI Conferences

In chess, machines such as Deep Thought [1] are competitive with the very best humans, but generate millions of positions per move. Their human opponents, however, examine only tens of positions, yet search much deeper than the machine along some lines of play. Clearly, people are much more selective in their choice of positions to examine. The importance of selective search algorithms was first recognized by Shannon [2]. Most work on two-player search algorithms, however, has focused on algorithms that make the same decisions as full-width, fixed-depth minimax, but do so more efficiently.
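The best-first minimax idea named in the title can be sketched as follows: keep the search tree in memory, repeatedly expand the frontier leaf at the end of the current principal variation, and back up minimax values to the root. The sketch below is a minimal Python illustration under those assumptions; the `Node` class and the `evaluate` and `moves` helpers are hypothetical stand-ins, not the paper's implementation.

```python
class Node:
    """A tree node holding a game state and its (backed-up) minimax value."""
    def __init__(self, state, value):
        self.state, self.value, self.children = state, value, []

def best_first_minimax(root_state, evaluate, moves, expansions):
    """Sketch of best-first minimax: always expand the leaf reached by
    following best moves from the root (the principal variation)."""
    root = Node(root_state, evaluate(root_state))
    for _ in range(expansions):
        # Walk the principal variation: max player picks the max-valued
        # child, min player the min-valued child, alternating by depth.
        path, node, is_max = [root], root, True
        while node.children:
            pick = max if is_max else min
            node = pick(node.children, key=lambda c: c.value)
            path.append(node)
            is_max = not is_max
        succ = moves(node.state)
        if not succ:
            # The PV leaf is terminal, so its value is exact; this
            # simplified sketch just stops searching here.
            break
        node.children = [Node(s, evaluate(s)) for s in succ]
        # Back up new minimax values along the expanded path.
        for i in range(len(path) - 1, -1, -1):
            n = path[i]
            if n.children:
                pick = max if i % 2 == 0 else min
                n.value = pick(c.value for c in n.children)
    # Return the root move with the best backed-up value (expansions >= 1).
    return max(root.children, key=lambda c: c.value).state
```

Unlike fixed-depth minimax, the tree this grows is highly non-uniform: promising lines are searched much deeper than unpromising ones, matching the selectivity the abstract attributes to human players.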



Adaptive K-Parallel Best-First Search: A Simple but Efficient Algorithm for Multi-Core Domain-Independent Planning

AAAI Conferences

Motivated by the recent hardware evolution towards multi-core machines, we investigate parallel planning techniques in a shared-memory environment. More specifically, we consider parallel versions of a best-first search algorithm that run K threads, each expanding the next best node from the open list. We show that the proposed technique has a number of advantages. First, it is reasonably simple: the algorithm can be obtained from a sequential version mostly by adding parallel annotations. Second, an extensive empirical study shows that this approach is quite effective. It is also dynamic, in the sense that the number of nodes expanded in parallel is adapted during the search. Overall, we show that the approach is promising for parallel domain-independent, suboptimal planning.
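The core mechanism the abstract describes, K threads each popping the next best node from a shared open list, can be illustrated with a minimal greedy best-first sketch. This is an assumed simplification, not the paper's planner: it uses a single lock around the open list, a fixed K rather than the paper's adaptive K, and in CPython the GIL limits true parallelism, so the sketch only shows the structure.

```python
import heapq
import threading

def k_parallel_bfs(start, goal, neighbors, h, k=4):
    """Greedy best-first search where k worker threads share one open
    list; each thread repeatedly pops the best node and expands it."""
    open_list = [(h(start), start)]
    closed = {start}                 # nodes already generated
    parent = {start: None}
    lock = threading.Lock()          # guards open_list/closed/parent
    found = threading.Event()

    def worker():
        while not found.is_set():
            with lock:
                if not open_list:
                    return           # nothing left for this thread
                _, node = heapq.heappop(open_list)
            if node == goal:
                found.set()
                return
            for nb in neighbors(node):
                with lock:
                    if nb not in closed:
                        closed.add(nb)
                        parent[nb] = node
                        heapq.heappush(open_list, (h(nb), nb))

    threads = [threading.Thread(target=worker) for _ in range(k)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    if not found.is_set():
        return None
    # Reconstruct the path from goal back to start.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]
```

The "parallel annotations" point in the abstract is visible here: relative to a sequential version, the additions are essentially the lock, the event, and the thread spawning.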


Recursive Best-First Search with Bounded Overhead

AAAI Conferences

There are two major paradigms for linear-space heuristic search: iterative deepening (IDA*) and recursive best-first search (RBFS). While the node regeneration overhead of IDA* is easily characterized in terms of the heuristic branching factor, the overhead of RBFS depends on how widely the promising nodes are separated in the search tree, and is harder to anticipate. In this paper, we present two simple techniques for improving the performance of RBFS while maintaining its advantages over IDA*. While these techniques work well in practice, they do not provide any theoretical bounds on the amount of regeneration overhead. To address this, we introduce RBFScr, the first method for provably bounding the regeneration overhead of RBFS. We show empirically that this improves its performance in several domains, for both optimal and suboptimal search, and also yields a better linear-space anytime heuristic search. RBFScr is the first linear-space best-first search robust enough to solve a variety of domains with varying operator costs.
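For reference, plain RBFS (the baseline the paper improves on) explores the tree best-first in linear space: it recurses into the best child with an upper bound equal to the f-value of the best alternative, and when that bound is exceeded it backs the child's new f-value up and re-expands elsewhere. The following is a minimal sketch of standard RBFS, not of the paper's RBFScr; the `successors` callback returning (child, cost) pairs is an assumed interface.

```python
import math

def rbfs_search(start, is_goal, successors, h):
    """Plain recursive best-first search: linear space, best-first
    order, at the cost of re-generating nodes when bounds are exceeded."""

    def rbfs(path, node, f_node, g_node, bound):
        if is_goal(node):
            return path + [node], f_node
        succs = []
        for child, cost in successors(node):
            if child in path:        # avoid cycles along the current path
                continue
            g = g_node + cost
            # Inherit the parent's backed-up f so re-expanded subtrees
            # are not re-explored from scratch.
            succs.append([max(g + h(child), f_node), child, g])
        if not succs:
            return None, math.inf
        while True:
            succs.sort(key=lambda s: s[0])
            best = succs[0]
            if best[0] > bound:
                return None, best[0]  # back up the smallest f beyond bound
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, best[0] = rbfs(path + [node], best[1], best[0],
                                   best[2], min(bound, alternative))
            if result is not None:
                return result, best[0]

    result, _ = rbfs([], start, h(start), 0, math.inf)
    return result
```

The regeneration overhead the abstract discusses arises in the `while True` loop: whenever the backed-up `best[0]` overshoots `alternative`, the recursion unwinds and the abandoned subtree must later be regenerated node by node.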


Front-to-Front Bidirectional Best-First Search Reconsidered

AAAI Conferences

We present several new algorithms for bidirectional best-first search that employ a front-to-front strategy of estimating distances from newly generated frontier nodes in one search direction to existing frontier nodes in the other search direction, rather than estimating distances to terminal nodes in both searches. Unlike previous front-to-front strategies that use a shared priority queue to manage both frontiers, we use a separate data structure for each search, and choose that data structure to minimize the amount of computational effort required by the best-first search algorithm it supports. We demonstrate several results. First, we show that Bidirectional Front-to-Front Greedy (BFFG) quickly finds sub-optimal solutions to very large state-space problems while expanding (and storing) only a small fraction of the nodes required by other unidirectional and bidirectional greedy techniques. Second, we show that Bidirectional Front-to-Front A* (BFFA*) similarly outperforms both Unidirectional A* and Bidirectional Front-to-End A* (BFEA*) in terms of node expansions when searching for optimal solutions. Finally, we describe three improvements to BFFA*, each of which reduces the overall runtime by limiting the number of opposing frontier nodes that need to be considered while preserving the optimality criterion.
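The front-to-front idea can be illustrated with a simplified greedy sketch: each direction keeps its own open list, and a newly generated node is prioritized by its estimated distance to the nearest node reached by the opposite search. This is an assumed simplification, not the paper's BFFG: it naively scans the opposite direction's entire explored set (the costly step the paper's data structures and improvements are designed to avoid), and it alternates directions strictly.

```python
import heapq

def _path(node, parent):
    """Walk parent pointers back to the root; return root..node."""
    out = []
    while node is not None:
        out.append(node)
        node = parent[node]
    return out[::-1]

def bffg(start, goal, neighbors, d):
    """Simplified bidirectional front-to-front greedy search.
    d(a, b) is a heuristic estimate of the distance between two nodes."""
    open_ = {0: [(0, start)], 1: [(0, goal)]}          # per-direction heaps
    parent = {0: {start: None}, 1: {goal: None}}       # per-direction trees
    turn = 0
    while open_[0] and open_[1]:
        _, node = heapq.heappop(open_[turn])
        if node in parent[1 - turn]:                   # searches have met
            p_fwd = _path(node, parent[0])             # start .. node
            p_bwd = _path(node, parent[1])             # goal .. node
            return p_fwd + p_bwd[::-1][1:]             # start .. node .. goal
        for nb in neighbors(node):
            if nb not in parent[turn]:
                parent[turn][nb] = node
                # Front-to-front priority: estimated distance to the
                # nearest node reached by the opposite search.
                pri = min(d(nb, m) for m in parent[1 - turn])
                heapq.heappush(open_[turn], (pri, nb))
        turn = 1 - turn                                # alternate directions
    return None
```

The `min` over the opposite set makes each node generation linear in the size of the other search's explored set, which is exactly why limiting the number of opposing frontier nodes considered, as in the paper's three improvements, matters for runtime.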