We are interested in studying the behavior of algorithms and heuristics that can solve large and hard constraint satisfaction problems (CSPs) via systematic search. Our approach is to focus on the average-case behavior of several search algorithms, all variations of backtracking search, by analyzing their performance over a large number of randomly generated problem instances. Experimental evaluation of search methods may allow us to identify properties that cannot yet be established formally. Because the CSP is NP-complete, the worst-case performance of any known algorithm that solves it is exponential. Nevertheless, the average-case performance of different algorithms, determined experimentally, can vary by several orders of magnitude. There are limitations to the approach we pursue here. An alternative is some form of average-case analysis; such an analysis, however, requires a precise characterization of the distribution of the input instances, and such a characterization is often not available.
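The randomly generated instances mentioned above can be produced by a parameterized generator. As a hedged sketch, the following uses one common scheme for random binary CSPs with the usual (n, d, p1, p2) convention: n variables, domain size d, a fraction p1 of the variable pairs constrained, and a fraction p2 of the value pairs forbidden within each constraint. The function and parameter names are illustrative; the generator actually used in the experiments may differ in its details.

```python
import random

def random_binary_csp(n, d, p1, p2, rng):
    """Generate a random binary CSP under the (n, d, p1, p2) scheme.

    Returns (domains, constraints), where constraints maps a variable
    pair (i, j) to the set of forbidden value pairs (a, b).
    """
    domains = {v: list(range(d)) for v in range(n)}
    constraints = {}
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # Constrain a fraction p1 of the variable pairs.
    for pair in rng.sample(pairs, int(p1 * len(pairs))):
        combos = [(a, b) for a in range(d) for b in range(d)]
        # Forbid a fraction p2 of the value pairs in this constraint.
        forbidden = set(rng.sample(combos, int(p2 * len(combos))))
        constraints[pair] = forbidden
    return domains, constraints

# A small instance: 10 variables, domain size 5, half the pairs
# constrained, 30% of value pairs forbidden per constraint.
domains, constraints = random_binary_csp(10, 5, 0.5, 0.3, random.Random(0))
```

Fixing the random seed, as above, makes each generated instance reproducible, which is useful when comparing several algorithms on the same problem set.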
In recent years, many improvements to backtracking algorithms for solving constraint satisfaction problems have been proposed. The techniques for improving backtracking algorithms can be conveniently classified as look-ahead schemes and look-back schemes. Unfortunately, the two kinds of scheme are not entirely orthogonal: it has been observed empirically that enhancing the look-ahead techniques is sometimes counterproductive to the effects of the look-back techniques. In this paper, we focus on the relationship between the two most important look-ahead techniques---using a variable ordering heuristic and maintaining a level of local consistency during the backtracking search---and the look-back technique of conflict-directed backjumping (CBJ). We show that there exists a "perfect" dynamic variable ordering such that CBJ becomes redundant. We also show theoretically that the higher the level of local consistency maintained during the backtracking search, the less of an improvement backjumping provides. Our theoretical results partially explain why a backtracking algorithm that does more in the look-ahead phase benefits less from the backjumping look-back scheme. Finally, we show empirically that adding CBJ to a backtracking algorithm that maintains generalized arc consistency (GAC), an algorithm we refer to as GAC-CBJ, can still provide orders-of-magnitude speedups. Our empirical results contrast with Bessiere and Regin's conclusion (1996) that CBJ is useless to an algorithm that maintains arc consistency.
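A dynamic variable ordering heuristic, as discussed above, re-selects the branching variable at every node of the search. As a minimal sketch, the following shows the common minimum-remaining-values ("dom") heuristic, which branches on the unassigned variable with the fewest values left in its live domain; all names are illustrative, and this is not the "perfect" ordering of the theoretical result.

```python
def select_variable_mrv(unassigned, live_domains):
    """Pick the unassigned variable with the minimum remaining values.

    live_domains maps each variable to the values not yet pruned by
    the look-ahead (e.g., by forward checking or arc consistency).
    """
    return min(unassigned, key=lambda v: len(live_domains[v]))

# Hypothetical search state: "y" has the smallest live domain,
# so it is chosen as the next branching variable.
live_domains = {"x": [1, 2, 3], "y": [2], "z": [1, 3]}
var = select_variable_mrv(["x", "y", "z"], live_domains)
```

Because the live domains shrink as look-ahead prunes values, the heuristic's choice can change from node to node, which is what makes the ordering dynamic rather than fixed in advance.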
Although nogood learning in CSPs and clause learning in SAT are formally equivalent, nogood learning has not been as successful a technique in CSP solvers as clause learning has been in SAT solvers. We show that part of the reason for this discrepancy is that nogoods in CSPs, as standardly defined, are too restrictive. In this paper we show that these restrictions can be lifted so that a CSP solver can learn more general and powerful nogoods. Nogoods generalized in this manner yield a provably more powerful CSP solver. We also show how generalized nogoods facilitate learning useful nogoods from global constraints. Finally, we demonstrate empirically that generalized nogoods can yield significant improvements in performance.
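A standard nogood, in the restrictive sense discussed above, is a partial assignment that cannot be extended to a solution; a solver that has learned it prunes any node whose current assignment contains it. The sketch below illustrates only this standard form (the generalized nogoods of the paper also admit entries such as "x is not a", which this sketch does not model); the names are illustrative.

```python
def violates_nogood(assignment, nogood):
    """True iff every (variable, value) pair of the nogood holds
    in the current (partial) assignment, i.e. the node can be pruned."""
    return all(assignment.get(var) == value for var, value in nogood)

# A learned nogood: the partial assignment {x=1, y=2} cannot be
# extended to a solution.
nogood = frozenset({("x", 1), ("y", 2)})

pruned = violates_nogood({"x": 1, "y": 2, "z": 3}, nogood)   # contains the nogood
kept = violates_nogood({"x": 1, "y": 3}, nogood)             # does not contain it
```

Representing nogoods as sets of variable-value pairs makes the containment test symmetric to clause falsification in SAT, which is the formal equivalence the paragraph above refers to.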
Finite-domain constraint satisfaction (CSP) is a popular problem-solving technique in AI. A CSP instance is a set of variables and a set of constraints on the values assigned to them. Solving an instance requires finding an assignment which satisfies the constraints, or determining that no such assignment exists. In the usual formalizations this task is NP-complete, so a polynomial-time algorithm exists if and only if P = NP. Since many practical problems amount to solving CSPs, we want to find the best algorithms we can, regardless of the truth of this proposition.
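The definition above can be made concrete with a small sketch: a CSP instance as variables, finite domains, and constraints, solved by plain chronological backtracking. This is a minimal illustration, not any particular algorithm from the literature, and all names are ours.

```python
def backtrack(assignment, variables, domains, constraints):
    """Return a satisfying assignment as a dict, or None if none exists.

    constraints is a list of predicates over partial assignments; a
    predicate must accept (return True for) any partial assignment it
    cannot yet evaluate.
    """
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]  # undo and try the next value
    return None  # every value failed: backtrack

# Tiny instance: x and y must differ, and y's domain forces y = 1.
variables = ["x", "y"]
domains = {"x": [1, 2], "y": [1]}

def x_ne_y(a):
    # Not yet evaluable until both variables are assigned.
    return "x" not in a or "y" not in a or a["x"] != a["y"]

solution = backtrack({}, variables, domains, constraints=[x_ne_y])
# The search tries x = 1 first, fails at y, backtracks, and
# succeeds with x = 2, y = 1.
```

The exponential worst case mentioned above is visible in this sketch: in the worst case the recursion enumerates every combination of values, and the improvements studied in the other abstracts (variable ordering, local consistency, backjumping, nogood learning) all aim to cut that tree down.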