AAAI Conferences
Kumar
In this paper, we present an efficient algorithm for verifying path-consistency on a binary constraint network. The complexities of our algorithm beat the previous conjectures on the lower bounds for verifying path-consistency. We therefore invalidate the proofs of several published results that incorrectly rely on these conjectures. Our algorithm is motivated by the idea of reformulating path-consistency verification as fast matrix multiplication. Further, for a computational model that counts arithmetic operations (rather than bit operations), a clever use of the properties of prime numbers allows us to design an even faster variant of the algorithm. With this algorithm, we hope to inspire a new class of techniques for verifying, and even establishing, varying levels of local consistency on given constraint networks.
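To make the verification condition concrete, the following deliberately naive Python sketch (function name, data layout, and the use of numpy are assumptions, not the paper's implementation) checks the standard condition: for every triple of variables (i, k, j), every pair allowed by the relation R[i][j] must be supported through variable k, and that support test is exactly a Boolean matrix product. The paper's speed-up comes from replacing such naive cubic-time products with fast matrix multiplication.

import numpy as np

def is_path_consistent(R, n):
    """Check path-consistency of a binary constraint network.

    R[i][j] is a Boolean matrix: R[i][j][a, b] is True iff value a of
    variable i is compatible with value b of variable j (R[j][i] is the
    transpose of R[i][j], and R[i][i] encodes the domain of variable i
    on its diagonal).
    """
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Support of each pair (a, b) through variable k is given by
                # the Boolean matrix product R[i][k] * R[k][j].
                supported = (R[i][k].astype(int) @ R[k][j].astype(int)) > 0
                # Every pair allowed by R[i][j] must be supported through k.
                if np.any(R[i][j] & ~supported):
                    return False
    return True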
Kumar
Many real-world applications require the successful combination of spatial and temporal reasoning. In this paper, we study the general framework of the Traveling Salesman Problem with Simple Temporal Constraints. Representationally, this framework subsumes the Traveling Salesman Problem, Simple Temporal Problems, as well as many of the frameworks described in the literature. We analyze the theoretical properties of the combined problem, providing strong inapproximability results for the general problem and positive results for some special cases.
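As background for the temporal side of this framework (and only that side; the combined routing problem and the paper's hardness results are not modelled), a Simple Temporal Problem is a conjunction of difference constraints over time-points, and it is consistent exactly when its distance graph has no negative cycle. The following Python sketch, with assumed names and constraint format, illustrates that standard check via Bellman-Ford.

def stp_consistent(num_points, constraints):
    """Check consistency of a Simple Temporal Problem.

    constraints: list of tuples (x, y, lo, hi) meaning lo <= t_y - t_x <= hi.
    Consistent iff the distance graph has no negative cycle, detected here
    with Bellman-Ford relaxations from an implicit zero-distance source.
    """
    INF = float("inf")
    # Distance-graph edges: t_y - t_x <= hi gives edge x -> y with weight hi;
    # t_y - t_x >= lo gives edge y -> x with weight -lo.
    edges = []
    for x, y, lo, hi in constraints:
        if hi != INF:
            edges.append((x, y, hi))
        if lo != -INF:
            edges.append((y, x, -lo))
    dist = [0.0] * num_points
    for _ in range(num_points):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            return True
    # If an edge can still be relaxed after n passes, there is a negative cycle.
    return not any(dist[u] + w < dist[v] for u, v, w in edges)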
Jabbour
In this paper, a new approach to clause learning is proposed. By traversing the implication graph separately from x and ¬x, we derive a new class of bi-asserting clauses that can lead to a more compact implication graph. These new kinds of bi-asserting clauses are much shorter and tend to induce more implications than classical bi-asserting clauses. Experimental results show that exploiting this new class of bi-asserting clauses improves the performance of state-of-the-art SAT solvers, particularly on crafted instances.
El Mouelhi
Many works have studied properties of CSPs based either on the structure of the constraint network or on features of the compatibility relations. Structural studies generally rely on properties of graphs for binary CSPs and on properties of hypergraphs in the general case, that is, CSPs with constraints of arbitrary arity. In the latter case, the dual representation of hypergraphs, that is, a reformulation of the instances, lets us exploit notions and properties of graphs. For studies of compatibility relations, properties of graphs can be exploited through a graph called the microstructure, which reformulates instances of binary CSPs. Unfortunately, this approach is limited to CSPs with binary constraints.
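For illustration, here is a small Python sketch of the standard microstructure construction for a binary CSP (data layout and names are assumptions): vertices are variable-value pairs, and two vertices on distinct variables are joined exactly when the value pair is allowed by the constraint between those variables, or when no such constraint exists. Solutions of the CSP then correspond to cliques containing one vertex per variable.

from itertools import combinations

def microstructure(domains, constraints):
    """Build the microstructure graph of a binary CSP.

    domains: dict mapping each variable to its set of values.
    constraints: dict mapping an ordered pair (x, y) of variables to the
    set of allowed value pairs (a, b).
    Returns the vertex list and edge set of the microstructure.
    """
    vertices = [(x, a) for x, dom in domains.items() for a in dom]
    edges = set()
    for (x, a), (y, b) in combinations(vertices, 2):
        if x == y:
            continue  # two values of the same variable are never joined
        if (x, y) in constraints:
            allowed = (a, b) in constraints[(x, y)]
        elif (y, x) in constraints:
            allowed = (b, a) in constraints[(y, x)]
        else:
            allowed = True  # unconstrained pair of variables: all pairs compatible
        if allowed:
            edges.add(frozenset(((x, a), (y, b))))
    return vertices, edges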
Dvorak
AI Planning is inherently hard and hence it is desirable to derive as much information as we can from the structure of the planning problem and let this information be exploited by a planner. Many recent planners use the finite-domain state-variable representation of the problem instead of the traditional propositional representation. However, most planning problems are still specified in the propositional representation due to the widespread modeling language PDDL, and it is hard to generate a compact and computationally efficient state-variable representation from the propositional model. In this paper we propose a novel method for automatically generating an efficient state-variable representation from the propositional representation. This method groups sets of propositions into state variables based on the mutex relations introduced in the planning graph. As we show experimentally, our method outperforms the current state-of-the-art method, both by generating fewer state variables and by increasing planner performance.
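To make the grouping idea concrete (this is not the paper's algorithm, just a naive greedy sketch in Python with hypothetical names): propositions that are pairwise mutually exclusive, for example according to persistent mutexes read off a planning graph, can share a single finite-domain state variable, since at most one of them holds in any reachable state.

def group_into_state_variables(propositions, mutexes):
    """Greedily group propositions into candidate state variables.

    mutexes: set of frozensets {p, q} of propositions that are mutually
    exclusive in every reachable state. Each returned group is a clique of
    pairwise-mutex propositions, so the group (plus a "none of these"
    value) can be encoded as one state variable.
    """
    def mutex(p, q):
        return frozenset((p, q)) in mutexes

    groups = []
    for p in propositions:
        placed = False
        for group in groups:
            if all(mutex(p, q) for q in group):
                group.append(p)
                placed = True
                break
        if not placed:
            groups.append([p])
    return groups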
Chrpa
In Automated Planning, learning and exploiting additional knowledge within a domain model, in order to speed up plan generation and increase the scope of problems solved, has attracted much research. Reformulation techniques such as those based on macro-operators or entanglements are very promising because they are, to some extent, independent of the domain model and the planning engine. This paper exploits recent work on inner entanglements, relations between pairs of planning operators and predicates that encapsulate exclusivity of predicate achievement or requirement, for generating macro-operators. We discuss conditions necessary for generating such macro-operators, as well as conditions that allow removing primitive operators without compromising the solvability of a given (class of) problem(s). The effectiveness of our approach is shown experimentally on a set of well-known benchmark domains using several high-performing planning engines.
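For concreteness, the following Python sketch shows the standard composition of two grounded STRIPS operators applied in sequence into a single macro-operator; the names are hypothetical, and the paper's entanglement-based conditions for deciding when such a macro should be generated (and when primitive operators may be removed) are not modelled here.

from collections import namedtuple

# A grounded STRIPS operator: name plus sets of atoms.
Operator = namedtuple("Operator", ["name", "pre", "add", "delete"])

def assemble_macro(o1, o2):
    """Compose operators o1; o2 (applied in this order) into one macro.

    Assumes o1 does not delete an atom that o2 still requires; under that
    assumption the macro is applicable exactly where the sequence is.
    """
    pre = o1.pre | (o2.pre - o1.add)          # needed up front by the sequence
    add = (o1.add - o2.delete) | o2.add        # effects of o1 not undone, plus o2's
    delete = (o1.delete - o2.add) | o2.delete  # deletions not restored by o2, plus o2's
    return Operator(f"{o1.name}+{o2.name}", pre, add, delete)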
Chrpa
Analysing the structure of solution plans generated by AI planning engines helps improve the generative planning process, as well as shedding light on the study of its theoretical foundations. We investigate a specific property of solution plans, which we call linearity, referring to a situation where each action achieves an atom (or atoms) for the directly following action, or achieves goal atom(s). Similarly, linearity can be defined for parallel plans, where each action executed at some time step achieves either goal atom(s) or atom(s) for some action executed in the directly following time step. In this paper, we present a general and problem-independent theoretical framework for analysing planning operator schemas, namely the relations of achiever, clobberer, and independence, in order to determine whether solvable planning problems using a given operator schema admit optimal (parallel) plans that are linear. The findings presented in this paper deepen current theoretical knowledge, provide helpful information to engineers of new planning domain models, and suggest new ways of improving the performance of state-of-the-art (optimal) planning engines.
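The sequential version of the property can be read off directly from its definition; the following Python sketch (attribute names are assumed, and neither the parallel-plan case nor the operator-schema analysis is covered) checks whether a given plan is linear with respect to a goal.

def is_linear(plan, goal):
    """Check the linearity property of a sequential plan.

    plan: list of actions, each with .add (atoms achieved) and .pre
    (atoms required); goal: set of goal atoms. The plan is linear if every
    action achieves an atom required by the directly following action or
    achieves a goal atom (the last action must achieve a goal atom).
    """
    for i, action in enumerate(plan):
        achieves_goal = bool(action.add & goal)
        achieves_next = i + 1 < len(plan) and bool(action.add & plan[i + 1].pre)
        if not (achieves_goal or achieves_next):
            return False
    return True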
Alhossaini
We propose an approach to remodelling classical planning domains via the addition of macro-operators and the removal of original operators, either for the domain as a whole or instance by instance. For the latter, we train a predictor to choose the best reformulation of the domain based on instance characteristics. For domain-level remodelling, we try to find a fixed remodelling that works best on average over our training set. Operator removal does not generally preserve solubility, and proving solubility preservation of domain models is PSPACE-complete, so we use training instances to empirically estimate the probability of solubility preservation and maintain a minimum value of that probability on the training instances. We show that the instance-specific approach outperforms the traditional best-on-average macro-only remodelling approach in 9 out of 14 domain/macro-source combinations, and that it can outperform fixed domain-based models generated with existing macro-learning tools.
Beck
Unlike mathematical programming and SAT solving, Constraint Programming (CP) is based on the idea that both modeling and solving of combinatorial optimization problems can be based on conjunctions of loosely coupled, recurring combinatorial sub-problems (aka "global constraints"). This rich representational approach means that, for better or for worse, pretty much anything can be expressed as a global constraint. Much of CP's success, however, has come from exploiting only one aspect of the rich constraint definition: global constraint propagation. In this talk, I will investigate how work in CP, SAT, AI planning, and mathematical programming can be understood as more seriously pursuing the implications of a rich constraint definition, and how the interplay between local and global information can lead to dynamic problem reformulations and a flexible hybrid solver architecture.
Wu
We develop a novel absorption technique for large collections of factual assertions about individual objects. These assertions are commonly accompanied by implicit background knowledge and form a knowledge base. Both the assertions and the background knowledge are expressed in a suitable language of Description Logic, and queries over such knowledge bases can be expressed as assertion retrieval queries. The proposed absorption technique significantly improves the performance of such queries, in particular in cases where a large number of object features are known for the objects represented in such a knowledge base. In addition to the absorption technique, we present the results of a preliminary experimental evaluation that validates the efficacy of the proposed optimization.