The number partitioning problem is to divide a set of integers into a collection of subsets, so that the sums of the numbers in the subsets are as nearly equal as possible. There are at least three natural objective functions for number partitioning: minimizing the largest subset sum, maximizing the smallest subset sum, and minimizing the difference between the largest and smallest subset sums. I show that, contrary to my previous claims, no two of these objective functions are equivalent for partitioning numbers three or more ways. Minimizing the largest subset sum and maximizing the smallest subset sum correspond to different practical applications of number partitioning, and both admit a recursive strategy for finding optimal solutions that is very effective in practice. Finally, a completely new version of this recursive strategy appears to reduce the asymptotic complexity of the algorithm, and yields orders-of-magnitude improvements over the best previous results for multi-way partitioning.
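The non-equivalence of the objectives can be seen on a small instance (this example is my own illustration, not taken from the paper): a brute-force enumeration of all 3-way partitions of {3, 3, 2, 2, 2} shows that a partition achieving the optimal largest subset sum need not achieve the optimal smallest subset sum.

```python
from itertools import product

def partition_sums(nums, k):
    """Enumerate the subset-sum tuples of all k-way partitions of nums
    by assigning each number to one of k bins (duplicates are harmless
    here, since we only compare objective values)."""
    for assign in product(range(k), repeat=len(nums)):
        bins = [0] * k
        for x, b in zip(nums, assign):
            bins[b] += x
        if all(bins):  # every subset must be non-empty
            yield tuple(bins)

nums, k = [3, 3, 2, 2, 2], 3
sums = list(partition_sums(nums, k))

best_max = min(max(s) for s in sums)            # minimize largest subset sum
best_min = max(min(s) for s in sums)            # maximize smallest subset sum
best_diff = min(max(s) - min(s) for s in sums)  # minimize their difference

print(best_max, best_min, best_diff)  # 5 3 2
# The partition (3+2, 3+2, 2) has sums (5, 5, 2): optimal for min-max
# (largest sum 5), but its smallest sum 2 is suboptimal, since
# (3+2, 3, 2+2) = (5, 3, 4) achieves smallest sum 3.
print(any(max(s) == best_max and min(s) < best_min for s in sums))  # True
```

Brute force suffices here because the instance is tiny; the paper's recursive strategies are what make larger instances tractable.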
An important challenge in constraint programming is to rewrite constraint models into executable programs that compute their solutions. This phase of constraint processing may require translations between constraint programming languages, transformations of constraint representations, model optimizations, and tuning of solving strategies. In this paper, we introduce a pivot metamodel describing the common features of constraint models, including different kinds of constraints, statements such as conditionals and loops, and other first-class elements such as object classes and predicates. This metamodel is general enough to cover the constructs of many languages, from object-oriented modeling languages to logic languages, yet it is independent of all of them. The rewriting operations manipulate metamodel instances independently of any particular language. As a consequence, the rewriting operations apply regardless of which languages are selected, and they can exploit the semantic information carried by models. A bridge between the metamodel space and concrete languages is built using parsing techniques. Tools from the software engineering world can be used to implement this framework.
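The architecture can be sketched in miniature (a hypothetical, heavily simplified metamodel of my own devising, not the paper's actual one, which also covers conditionals, loops, classes, and predicates): model elements are plain data objects, rewriting operations work only on those objects, and a separate emitter bridges to a concrete language, here a MiniZinc-like syntax chosen just for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class IntVar:
    """A pivot-metamodel integer variable with a bounded domain."""
    name: str
    lo: int
    hi: int

@dataclass(frozen=True)
class Constraint:
    """A binary constraint over variable names, e.g. x != y."""
    op: str
    left: str
    right: str

@dataclass
class Model:
    variables: List[IntVar]
    constraints: List[Constraint]

def drop_duplicates(model: Model) -> Model:
    """A language-independent rewriting operation: it manipulates the
    metamodel instance only, so it applies whatever target language
    is later selected."""
    seen, kept = set(), []
    for c in model.constraints:
        if c not in seen:
            seen.add(c)
            kept.append(c)
    return Model(model.variables, kept)

def to_minizinc(model: Model) -> str:
    """Bridge from the metamodel space to one concrete language."""
    lines = [f"var {v.lo}..{v.hi}: {v.name};" for v in model.variables]
    lines += [f"constraint {c.left} {c.op} {c.right};" for c in model.constraints]
    return "\n".join(lines)

m = Model([IntVar("x", 1, 3), IntVar("y", 1, 3)],
          [Constraint("!=", "x", "y"), Constraint("!=", "x", "y")])
print(to_minizinc(drop_duplicates(m)))
```

A second emitter (say, to a logic language) would reuse `drop_duplicates` unchanged, which is the point of keeping rewritings in the metamodel space.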
When constructing a suboptimal single-agent search system, a number of design decisions must be made -- the selection of an algorithm, a heuristic function, parameter values, etc. -- each of which can significantly affect search efficiency. Following the work of others (Hutter, Hoos, and Stützle 2007), we refer to the set of choices made for an algorithm as the algorithm's configuration. In practice, configurations are tuned offline to find a single setting to be used in all future searches. Unfortunately, tuning is an expensive process that is specific to each problem domain.
We introduce a new bidirectional search algorithm, Single-Frontier Bidirectional Search (SFBDS). Unlike traditional bidirectional search, which maintains two frontiers, SFBDS uses a single frontier. At each node we can decide whether to search from start to goal or from goal to start, choosing the direction with the greatest potential for minimizing the total work done. We provide a theoretical analysis that explains when SFBDS will be effective, validated by experimental results.
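The single-frontier idea can be sketched as follows (a toy breadth-first version of my own, not the authors' actual algorithm): each frontier node is a *pair* of states, one from each end, the search succeeds when the two ends meet, and at each expansion one end is chosen to advance. The direction rule used here, expanding the end with the smaller branching factor, is just one plausible criterion.

```python
from collections import deque

# A small undirected graph as an adjacency list.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def sfbds(start, goal):
    """Breadth-first single-frontier bidirectional search sketch.
    Returns the length of a shortest start-goal path, or None."""
    frontier = deque([(start, goal, 0)])
    seen = {(start, goal)}
    while frontier:
        s, g, depth = frontier.popleft()
        if s == g:  # the two search ends have met
            return depth
        # Per-node direction decision: advance the end with the
        # smaller branching factor (one possible heuristic).
        if len(graph[s]) <= len(graph[g]):
            children = [(n, g) for n in graph[s]]
        else:
            children = [(s, n) for n in graph[g]]
        for pair in children:
            if pair not in seen:
                seen.add(pair)
                frontier.append((*pair, depth + 1))
    return None

print(sfbds("A", "E"))  # 3, e.g. A-B-D-E
```

Since each expansion commits exactly one edge at one end, the depth at which the ends meet equals the path length, so the breadth-first version stays optimal in edge count.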
First, it is better to test on small benchmark instances than to solve the largest possible ones. This eases replication and allows a more diverse set of instances to be tested. There are few conclusions that can be drawn from running on large benchmarks that cannot also be drawn from running on small ones. Second, experimental evaluation should focus on understanding algorithm behavior and forming predictive models, rather than on achieving state-of-the-art performance on toy problems. Third, it is more important to develop search techniques that are robust across multiple domains than ones that give state-of-the-art performance in only a single domain. Robust techniques are more likely to be useful to others.
Planning research has returned to the issue of optimizing the cost (rather than the size) of plans. A prevalent perception, at least among non-experts in search, is that graph search for optimizing path size generalizes more or less trivially to optimizing path cost. While this kind of generalization is usually straightforward for graph theorems, graph algorithms are a different story. In particular, implementing a search evaluation function by simply substituting cost for size is a Bad Idea. Though experts have said as much, cutting-edge practitioners are still learning of the consequences the hard way; here we mount a forceful indictment of the inherent dangers of cost-based search.
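One of the dangers can be reproduced in a few lines (my own toy illustration, not an example from the paper): when edge costs vary over orders of magnitude, a cost-ordered best-first search (Dijkstra's algorithm here) dribbles through hundreds of cheap nodes that a size-ordered (breadth-first) search never touches, even though the cost-optimal solution is the short direct edge.

```python
import heapq
from collections import deque

def make_graph(chain_len=300, cheap=0.01, direct=2.0):
    """'S' has a direct edge to 'G' (cost 2.0), plus a long chain of
    very cheap edges that also reaches 'G' but costs more in total."""
    g = {"S": [("G", direct), ("c0", cheap)], "G": []}
    for i in range(chain_len - 1):
        g[f"c{i}"] = [(f"c{i+1}", cheap)]
    g[f"c{chain_len-1}"] = [("G", cheap)]
    return g

def dijkstra_expansions(graph, start, goal):
    """Cost-ordered search; returns (nodes expanded, goal cost)."""
    dist, pq, closed, expanded = {start: 0.0}, [(0.0, start)], set(), 0
    while pq:
        d, u = heapq.heappop(pq)
        if u in closed:
            continue
        closed.add(u)
        expanded += 1
        if u == goal:
            return expanded, d
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return expanded, None

def bfs_expansions(graph, start, goal):
    """Size-ordered search; expands S then G: 2 nodes on this graph."""
    q, seen, expanded = deque([start]), {start}, 0
    while q:
        u = q.popleft()
        expanded += 1
        if u == goal:
            return expanded
        for v, _ in graph[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return expanded

g = make_graph()
print(dijkstra_expansions(g, "S", "G"))  # expands every chain node cheaper than 2.0
print(bfs_expansions(g, "S", "G"))       # 2
```

The gap grows as the cost ratio grows, which is the kind of consequence the paper argues practitioners keep rediscovering the hard way.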