Constraint-Based Reasoning
Efficient Knowledge Base Management in DCSP
The Distributed Constraint Satisfaction Problem (DCSP) has long been an important research area in Artificial Intelligence (AI), and many application problems in distributed AI can be formalized as DCSPs. As these applications grow in complexity and size, the storage required during search and the average search time grow as well. Using limited storage efficiently when solving DCSPs is therefore an important problem, and doing so can also help reduce search time. This paper presents an efficient knowledge base management approach based on the general use of the hyper-resolution rule in consistency algorithms. The approach minimizes the growth of the knowledge base by eliminating sufficient constraints and false nogoods; these eliminations do not affect the completeness of the knowledge base, and proofs are provided. A worked example shows that the approach greatly reduces both the number of new nogoods generated and the size of the knowledge base, thereby decreasing the required storage and simplifying the search process.
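As a rough illustration of the hyper-resolution step referred to above (a minimal sketch, not the paper's algorithm or knowledge base management scheme; all names are illustrative):

```python
# Hyper-resolution rule as commonly used to derive new nogoods: if every value
# of a variable is ruled out by some stored nogood, the union of the remaining
# parts of those nogoods is itself a nogood.

def hyper_resolve(var, domain, nogoods):
    """Try to derive a new nogood for `var`.

    A nogood is a frozenset of (variable, value) pairs that cannot all hold.
    Returns the derived nogood, or None if some value of `var` is not covered.
    """
    derived = set()
    for value in domain:
        # Find a stored nogood containing the assignment (var, value).
        covering = next((ng for ng in nogoods if (var, value) in ng), None)
        if covering is None:
            return None                          # this value is not ruled out
        derived |= (covering - {(var, value)})   # keep the rest of that nogood
    return frozenset(derived)

# Example: both values of x are ruled out given y=3 (resp. z=4),
# so {y=3, z=4} is a new nogood.
nogoods = [frozenset({("x", 1), ("y", 3)}), frozenset({("x", 2), ("z", 4)})]
print(hyper_resolve("x", [1, 2], nogoods))   # frozenset({('y', 3), ('z', 4)})
```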
Solving the Resource Constrained Project Scheduling Problem with Generalized Precedences by Lazy Clause Generation
Schutt, Andreas, Feydy, Thibaut, Stuckey, Peter J., Wallace, Mark G.
The technical report presents a generic exact solution approach for minimizing the project duration of the resource-constrained project scheduling problem with generalized precedences (Rcpsp/max). The approach uses lazy clause generation, i.e., a hybrid of finite domain and Boolean satisfiability solving, in order to apply nogood learning and conflict-driven search to the solution generation. Our experiments show the benefit of lazy clause generation for finding an optimal solution and proving its optimality in comparison to other state-of-the-art exact and non-exact methods. The method is highly robust: it matched or bettered the best known results on all but 3 of the 2340 instances we examined, according to the currently available data on the PSPLib. Of the 631 open instances in this set it closed 573 and improved the bounds of 51 of the remaining 58 instances.
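For intuition, the constraints that define Rcpsp/max can be stated as a simple feasibility check on a candidate schedule (a sketch only; the paper's lazy clause generation solver is not reproduced, and all names are illustrative):

```python
def feasible(start, dur, demand, capacity, gen_prec):
    """start[i]: start time of activity i, dur[i]: duration,
    demand[i][r]: usage of renewable resource r, capacity[r]: resource limit,
    gen_prec: list of (i, j, lag) meaning start[j] - start[i] >= lag
    (lags may be negative, which yields maximal time lags)."""
    # Generalized precedence constraints.
    if any(start[j] - start[i] < lag for i, j, lag in gen_prec):
        return False
    # Renewable resource constraints, checked at every integer time point.
    horizon = max(start[i] + dur[i] for i in range(len(dur)))
    for t in range(horizon):
        for r, cap in enumerate(capacity):
            used = sum(demand[i][r] for i in range(len(dur))
                       if start[i] <= t < start[i] + dur[i])
            if used > cap:
                return False
    return True
```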
Improving the Performance of maxRPC
Balafoutis, Thanasis, Paparrizou, Anastasia, Stergiou, Kostas, Walsh, Toby
Max Restricted Path Consistency (maxRPC) is a local consistency for binary constraints that can achieve considerably stronger pruning than arc consistency. However, existing maxRPC algorithms suffer from overheads and redundancies as they can repeatedly perform many constraint checks without triggering any value deletions. In this paper we propose techniques that can boost the performance of maxRPC algorithms. These include the combined use of two data structures to avoid many redundant constraint checks, and heuristics for the efficient ordering and execution of certain operations. Based on these, we propose two closely related algorithms. The first one, a maxRPC algorithm with optimal O(end^3) time complexity, displays good performance when used stand-alone, but is expensive to apply during search. The second one approximates maxRPC and has O(en^2d^4) time complexity, but a restricted version with O(end^4) complexity can be very efficient when used during search. Both algorithms have O(ed) space complexity. Experimental results demonstrate that the resulting methods consistently outperform previous algorithms for maxRPC, often by large margins, and constitute a more than viable alternative to arc consistency on many problems.
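A naive sketch of the maxRPC condition itself (not the paper's optimized algorithms) may help: a value a of x survives iff on every constraint (x, y) it has a support b that is itself path-consistent with a on every common neighbour z. Here `check(u, a, v, b)` is an assumed constraint-check predicate, `neighbours(u)` returns the set of variables sharing a constraint with u, and `dom[v]` is the current domain of v:

```python
def is_max_rpc(x, a, dom, neighbours, check):
    for y in neighbours(x):
        if not any(check(x, a, y, b) and
                   all(any(check(x, a, z, c) and check(y, b, z, c)
                           for c in dom[z])
                       for z in neighbours(x) & neighbours(y))
                   for b in dom[y]):
            return False   # no PC-support on constraint (x, y): prune a from dom[x]
    return True
```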
Distributed solving through model splitting
Kotthoff, Lars, Moore, Neil C. A.
Constraint problems can be trivially solved in parallel by exploring different branches of the search tree concurrently. Previous approaches have focused on implementing this functionality in the solver, more or less transparently to the user. We propose a new approach, which modifies the constraint model of the problem. An existing model is split into new models with added constraints that partition the search space. Optionally, additional constraints are imposed that rule out the search already done. The advantages of our approach are that it can be implemented easily and that computations can be stopped and restarted, moved to different machines, and even run on machines that are unable to communicate with each other at all.
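A minimal sketch of the model-splitting idea (the model representation and constraint syntax below are placeholders, not the authors' tooling): pick a variable and emit one new model per slice of its domain, so that the slices partition the search space.

```python
def split_model(model_text, var, domain, pieces):
    """Yield `pieces` copies of the model, each with an added constraint that
    restricts `var` to one contiguous slice of `domain`."""
    chunk = max(1, len(domain) // pieces)
    for k in range(0, len(domain), chunk):
        slice_vals = domain[k:k + chunk]
        constraint = f"{var} in {{{', '.join(map(str, slice_vals))}}}"
        yield model_text + "\n" + constraint   # each copy can be solved independently

# Example: four models restricting x to {1..3}, {4..6}, {7..9}, {10..12}.
for m in split_model("<original model>", "x", list(range(1, 13)), 4):
    pass  # write m to a file and hand it to a separate solver process
```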
Machine learning for constraint solver design -- A case study for the alldifferent constraint
Gent, Ian, Kotthoff, Lars, Miguel, Ian, Nightingale, Peter
Constraint solvers are complex pieces of software which require many design decisions to be made by the implementer based on limited information. These decisions affect the performance of the finished solver significantly. Once a design decision has been made, it cannot easily be reversed, although a different decision may be more appropriate for a particular problem. We investigate using machine learning to make these decisions automatically depending on the problem to solve. We use the alldifferent constraint as a case study. Our system is capable of making non-trivial, multi-level decisions that improve over always making a default choice and can be implemented as part of a general-purpose constraint solver.
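As a generic illustration of the underlying idea (not the paper's system), one can train a classifier from instance features to the alldifferent propagation algorithm that performed best, then query it before solving a new instance. Feature names and labels here are invented for the example:

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: features of a training instance (e.g. number of variables, number of
# alldifferent constraints, mean domain size); label: best-performing propagator.
X_train = [[100, 5, 10], [2000, 40, 3], [50, 2, 30], [5000, 120, 5]]
y_train = ["bound-consistent", "gac", "bound-consistent", "gac"]

model = DecisionTreeClassifier().fit(X_train, y_train)

def choose_propagator(instance_features):
    return model.predict([instance_features])[0]

print(choose_propagator([1500, 30, 4]))   # pick a propagator for a new instance
```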
Evaluating and Improving Modern Variable and Revision Ordering Strategies in CSPs
Balafoutis, Thanasis, Stergiou, Kostas
A key factor that can dramatically reduce the search space during constraint solving is the criterion under which the variable to be instantiated next is selected. For this purpose numerous heuristics have been proposed. Some of the best such heuristics exploit information about failures gathered throughout search and recorded in the form of constraint weights, while others measure the importance of variable assignments in reducing the search space. In this work we experimentally evaluate the most recent and powerful variable ordering heuristics, and new variants of them, over a wide range of benchmarks. Results demonstrate that heuristics based on failures are in general more efficient. Based on this, we then derive new revision ordering heuristics that exploit recorded failures to efficiently order the propagation list when arc consistency is maintained during search. Interestingly, in addition to reducing the number of constraint checks and list operations, these heuristics are also able to cut down the size of the explored search tree.
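A minimal sketch of the weight-based idea behind failure-driven heuristics such as dom/wdeg (a well-known heuristic; the paper's new variants are not reproduced): each constraint carries a weight that is bumped whenever its propagation causes a domain wipe-out, and the next variable minimizes |dom(x)| divided by the summed weights of its constraints that still involve another unassigned variable. The attribute `c.scope` below is an assumption of this example:

```python
def select_variable(unassigned, dom, constraints, weight):
    """constraints[x]: constraints mentioning x; weight[c]: current weight of c;
    a constraint counts only if it involves another unassigned variable."""
    def wdeg(x):
        return sum(weight[c] for c in constraints[x]
                   if any(v in unassigned and v != x for v in c.scope))
    return min(unassigned, key=lambda x: len(dom[x]) / max(1, wdeg(x)))

def on_wipeout(c, weight):
    weight[c] += 1   # called whenever propagating c empties some domain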
Adaptive Branching for Constraint Satisfaction Problems
Balafoutis, Thanasis, Stergiou, Kostas
The two standard branching schemes for CSPs are d-way and 2-way branching. Although it has been shown that in theory the latter can be exponentially more effective than the former, there is a lack of empirical evidence showing such differences. To investigate this, we initially make an experimental comparison of the two branching schemes over a wide range of benchmarks. Experimental results verify the theoretical gap between d-way and 2-way branching as we move from a simple variable ordering heuristic like dom to more sophisticated ones like dom/ddeg. However, perhaps surprisingly, experiments also show that when state-of-the-art variable ordering heuristics like dom/wdeg are used then d-way can be clearly more efficient than 2-way branching in many cases. Motivated by this observation, we develop two generic heuristics that can be applied at certain points during search to decide whether 2-way branching or a restricted version of 2-way branching, which is close to d-way branching, will be followed. The application of these heuristics results in an adaptive branching scheme. Experiments with instantiations of the two generic heuristics confirm that search with adaptive branching outperforms search with a fixed branching scheme on a wide range of problems.
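For reference, the two fixed schemes being compared can be sketched as child generators (a minimal illustration; the paper's adaptive heuristics are not reproduced):

```python
def d_way_children(x, dom):
    # d-way: one child per value of x, each fixing x to that value.
    return [({x: v}, dict(dom)) for v in dom[x]]

def two_way_children(x, v, dom):
    # 2-way: a left child posting x = v and a right child posting x != v;
    # in the right child the search may branch on a different variable next,
    # which is the source of 2-way's theoretical extra power.
    left = ({x: v}, dict(dom))
    right_dom = dict(dom)
    right_dom[x] = [w for w in dom[x] if w != v]
    return [left, ({}, right_dom)]
```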
Stable marriage problems with quantitative preferences
Pini, Maria Silvia, Rossi, Francesca, Venable, Brent, Walsh, Toby
The stable marriage problem is a well-known problem of matching men to women so that no man and woman who are not married to each other both prefer each other to their current partners. Such a problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools or more generally to any two-sided market. In the classical stable marriage problem, both men and women express a strict preference order over the members of the other sex, in a qualitative way. Here we consider stable marriage problems with quantitative preferences: each man (resp., woman) provides a score for each woman (resp., man). Such problems are more expressive than the classical stable marriage problems. Moreover, in some real-life situations it is more natural to express scores (to model, for example, profits or costs) rather than a qualitative preference ordering. In this context, we define new notions of stability and optimality, and we provide algorithms to find marriages which are stable and/or optimal according to these notions. While expressivity greatly increases by adopting quantitative preferences, we show that in most cases the desired solutions can be found by adapting existing algorithms for the classical stable marriage problem.
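A minimal sketch of how quantitative preferences can feed the classical machinery: turn each side's scores into strict preference orders and run the usual Gale-Shapley deferred-acceptance procedure. This yields a marriage that is stable in the classical sense; the paper's score-based stability and optimality notions are not reproduced here.

```python
def stable_marriage(men_scores, women_scores):
    """men_scores[m][w] / women_scores[w][m] are numeric scores (higher = better)."""
    men_pref = {m: sorted(ws, key=ws.get, reverse=True) for m, ws in men_scores.items()}
    next_proposal = {m: 0 for m in men_scores}
    engaged_to = {}                      # woman -> man
    free_men = list(men_scores)
    while free_men:
        m = free_men.pop()
        w = men_pref[m][next_proposal[m]]
        next_proposal[m] += 1
        current = engaged_to.get(w)
        if current is None:
            engaged_to[w] = m
        elif women_scores[w][m] > women_scores[w][current]:
            engaged_to[w] = m
            free_men.append(current)     # the displaced man proposes again
        else:
            free_men.append(m)           # w rejects m; he tries his next choice
    return {m: w for w, m in engaged_to.items()}

print(stable_marriage({"m1": {"w1": 5, "w2": 1}, "m2": {"w1": 3, "w2": 4}},
                      {"w1": {"m1": 2, "m2": 9}, "w2": {"m1": 7, "m2": 1}}))
```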
CLP-based protein fragment assembly
Palu', Alessandro Dal, Dovier, Agostino, Fogolari, Federico, Pontelli, Enrico
The paper investigates a novel approach, based on Constraint Logic Programming (CLP), to predicting the 3D conformation of a protein via fragment assembly. The fragments are extracted by a preprocessor (also developed for this work) from a database of known protein structures that clusters and classifies the fragments according to similarity and frequency. The problem of assembling fragments into a complete conformation is mapped to a constraint solving problem and solved using CLP. The constraint-based model uses a medium-discretization-degree Ca-side-chain centroid protein model that offers efficiency and a good approximation for space filling. The approach adapts existing energy models to the protein representation used and applies a large neighborhood search strategy. The results show the feasibility and efficiency of the method. The declarative nature of the solution allows future extensions to be included, e.g., fragments of different sizes for better accuracy.
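A very small, generic skeleton of the large neighborhood search strategy mentioned above (the actual CLP model, energy function, and fragment database are not reproduced; `reoptimize` stands in for the underlying constraint solver and is an assumption of this example):

```python
import random

def large_neighborhood_search(placements, energy, reoptimize, window=5, iters=100):
    """Repeatedly free a contiguous window of fragment placements and re-optimize
    it while the rest of the conformation stays fixed, keeping improvements."""
    best, best_e = placements, energy(placements)
    for _ in range(iters):
        i = random.randrange(0, max(1, len(best) - window))
        candidate = reoptimize(best, free_range=range(i, i + window))
        if candidate is not None and energy(candidate) < best_e:
            best, best_e = candidate, energy(candidate)
    return best
```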
Computing Cost-Optimal Definitely Discriminating Tests
Schumann, Anika (Cork Constraint Computation Centre) | Huang, Jinbo (NICTA and Australian National University) | Sachenbacher, Martin (Technische Universität München)
The goal of testing is to discriminate between multiple hypotheses about a system - for example, different fault diagnoses - by applying input patterns and verifying or falsifying the hypotheses from the observed outputs. Definitely discriminating tests (DDTs) are those input patterns that are guaranteed to discriminate between different hypotheses of non-deterministic systems. Finding DDTs is important in practice, but can be very expensive. Even more challenging is the problem of finding a DDT that minimizes the cost of the testing process, i.e., an input pattern that can be most cheaply enforced and that is a DDT. This paper addresses both problems. We show how we can transform a given problem into a Boolean structure in decomposable negation normal form (DNNF), and extract from it a Boolean formula whose models correspond to DDTs. This allows us to harness recent advances in both knowledge compilation and satisfiability for efficient and scalable DDT computation in practice. Furthermore, we show how we can generate a DNNF structure compactly encoding all DDTs of the problem and use it to obtain a cost-optimal DDT in time linear in the size of the structure. Experimental results from a real-world application show that our method can compute DDTs in less than 1 second for instances that were previously intractable, and cost-optimal DDTs in less than 20 seconds where previous approaches could not even compute an arbitrary DDT.
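A brute-force sketch of the DDT definition itself (the paper's DNNF-based compilation is far more scalable and is not shown): an input pattern is a definitely discriminating test iff the sets of outputs the non-deterministic system can produce under the two hypotheses never intersect. The functions `outputs_h1` and `outputs_h2`, which map an input pattern to the set of observable outputs each hypothesis allows, are assumptions of this example:

```python
from itertools import product

def definitely_discriminates(pattern, outputs_h1, outputs_h2):
    # No output is possible under both hypotheses for this input pattern.
    return not (outputs_h1(pattern) & outputs_h2(pattern))

def find_ddts(input_domains, outputs_h1, outputs_h2):
    """input_domains: list of per-input value lists; yields all DDT patterns."""
    for pattern in product(*input_domains):
        if definitely_discriminates(pattern, outputs_h1, outputs_h2):
            yield pattern
```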