LeapsAndBounds: A Method for Approximately Optimal Algorithm Configuration

arXiv.org Artificial Intelligence

We consider the problem of configuring general-purpose solvers to run efficiently on problem instances drawn from an unknown distribution. The goal of the configurator is to find a configuration that runs fast on average on most instances, and to do so with the least total work. It can run a chosen solver on a random instance until the solver finishes or a timeout is reached. We propose LeapsAndBounds, an algorithm that tests configurations on randomly selected problem instances for longer and longer times. We prove that the capped expected runtime of the configuration returned by LeapsAndBounds is close to the optimal expected runtime, while our algorithm's running time is near-optimal. Our results show that LeapsAndBounds is more efficient than the recent algorithm of Kleinberg et al. (2017), which, to our knowledge, is the only other algorithm configuration method with non-trivial theoretical guarantees. Experimental results on configuring a public SAT solver on a new benchmark dataset also attest to the superiority of our method.
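As a rough illustration of the capped-runtime, doubling-style search the abstract describes, here is a minimal Python sketch. The doubling schedule, the function names, and the simulated solver are assumptions made for illustration only; they are not the paper's exact procedure and carry none of its guarantees.

```python
import random

def run_capped(config, instance, timeout):
    """Placeholder for running the chosen solver with `config` on `instance`,
    stopping at `timeout`; returns the capped runtime. Faked here with an
    exponential draw so the sketch runs standalone."""
    return min(random.expovariate(1.0 / config["mean_runtime"]), timeout)

def configure(configs, sample_instance, phases=6, timeout=1.0, samples=4):
    """Doubling loop: each phase tests every configuration on fresh random
    instances with a larger timeout and more samples; the last phase's best
    mean capped runtime decides which configuration is returned."""
    best = None
    for _ in range(phases):
        means = {}
        for c in configs:
            runs = [run_capped(c, sample_instance(), timeout) for _ in range(samples)]
            means[c["name"]] = sum(runs) / len(runs)
        best = min(configs, key=lambda c: means[c["name"]])
        timeout *= 2   # run configurations for longer and longer
        samples *= 2   # on more randomly drawn instances
    return best

if __name__ == "__main__":
    configs = [{"name": f"c{i}", "mean_runtime": m}
               for i, m in enumerate([0.5, 2.0, 8.0])]
    print(configure(configs, sample_instance=lambda: None)["name"])
```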


Decision-making framework for configuration with AWS AppConfig

#artificialintelligence

In this blog post, we show you how to separate configuration from code, explain the differences between dynamic and static configuration, and help you determine which values to use in your dynamic configuration. We also share processes to keep bloat down in your application configuration. Finally, we introduce you to AWS AppConfig, which allows you to create, validate, and deploy your application configuration. After you read this blog post, you should have practical knowledge about how to manage your application's dynamic configuration. It is an established best practice to separate application configuration from application code.
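To make the static/dynamic split concrete, the sketch below polls a dynamic configuration at runtime, assuming the boto3 `appconfigdata` client for AWS AppConfig. The application, environment, and profile identifiers are placeholders, and validation, caching, and error handling are omitted; treat it as a starting point, not the blog post's reference implementation.

```python
import json
import boto3

# Placeholder identifiers; replace with the values AWS AppConfig assigns
# to your application, environment, and configuration profile.
APP_ID, ENV_ID, PROFILE_ID = "my-app", "prod", "feature-flags"

client = boto3.client("appconfigdata")

# A session scopes subsequent polls to one app/environment/profile.
session = client.start_configuration_session(
    ApplicationIdentifier=APP_ID,
    EnvironmentIdentifier=ENV_ID,
    ConfigurationProfileIdentifier=PROFILE_ID,
)
token = session["InitialConfigurationToken"]

def fetch_dynamic_config(client, token):
    """Poll for the latest deployed configuration. An empty body means it
    has not changed since the last poll; always keep the returned token."""
    response = client.get_latest_configuration(ConfigurationToken=token)
    body = response["Configuration"].read()
    config = json.loads(body) if body else None
    return config, response["NextPollConfigurationToken"]

config, token = fetch_dynamic_config(client, token)
if config is not None:
    print("dynamic configuration:", config)
```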


Automatic Configuration of Sequential Planning Portfolios

AAAI Conferences

Sequential planning portfolios exploit the complementary strengths of different planners. Similarly, automated algorithm configuration tools can customize parameterized planning algorithms for a given type of task. Although some work has been done towards combining portfolios and algorithm configuration, the problem of automatically generating a sequential planning portfolio from a parameterized planner for a given type of task is still largely unsolved. Here, we present Cedalion, a conceptually simple approach for this problem that greedily searches for the pair of parameter configuration and runtime which, when appended to the current portfolio, maximizes portfolio improvement per additional runtime spent. We show theoretically that Cedalion yields portfolios provably within a constant factor of optimal for the training set distribution. We evaluate Cedalion empirically by applying it to construct sequential planning portfolios based on component planners from the highly parameterized Fast Downward (FD) framework. Results for a broad range of planning settings demonstrate that -- without any knowledge of planning or FD -- Cedalion constructs sequential FD portfolios that rival, and in some cases substantially outperform, manually-built FD portfolios.
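A schematic Python sketch of the greedy step described above follows: from candidate (configuration, runtime) pairs, append whichever maximizes marginal portfolio improvement per additional second. The scoring function and the toy runtime data are stand-ins of my own, not Cedalion's or Fast Downward's actual components.

```python
def portfolio_score(portfolio, runtimes):
    """Fraction of instances solved by at least one portfolio component within
    its time slice. `runtimes[i][cfg]` is the runtime of configuration `cfg`
    on instance `i` -- placeholder training data for the sketch."""
    solved = sum(1 for inst in runtimes
                 if any(inst.get(cfg, float("inf")) <= t for cfg, t in portfolio))
    return solved / len(runtimes)

def build_portfolio(candidates, runtimes, budget):
    """Greedily append the (configuration, runtime) pair that gives the largest
    score improvement per additional second, until the runtime budget is spent
    or no candidate improves the portfolio."""
    portfolio, used = [], 0.0
    while used < budget:
        base = portfolio_score(portfolio, runtimes)
        best_pair, best_rate = None, 0.0
        for cfg, t in candidates:
            if used + t > budget:
                continue
            rate = (portfolio_score(portfolio + [(cfg, t)], runtimes) - base) / t
            if rate > best_rate:
                best_pair, best_rate = (cfg, t), rate
        if best_pair is None:
            break
        portfolio.append(best_pair)
        used += best_pair[1]
    return portfolio

if __name__ == "__main__":
    # Toy training data: two configurations, three instances.
    runtimes = [{"cfg_a": 3.0, "cfg_b": 40.0},
                {"cfg_a": 100.0, "cfg_b": 5.0},
                {"cfg_a": 100.0, "cfg_b": 100.0}]
    candidates = [(cfg, t) for cfg in ("cfg_a", "cfg_b") for t in (5.0, 10.0)]
    print(build_portfolio(candidates, runtimes, budget=30.0))
```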


On the Effective Configuration of Planning Domain Models

AAAI Conferences

The development of domain-independent planners within the AI Planning community is leading to "off the shelf" technology that can be used in a wide range of applications. Moreover, it allows a modular approach - in which planners and domain knowledge are modules of larger software applications - that facilitates substitutions or improvements of individual modules without changing the rest of the system. This approach also supports the use of reformulation and configuration techniques, which transform how a model is represented in order to improve the efficiency of plan generation. In this paper, we investigate how the performance of planners is affected by domain model configuration.

This modular approach also supports the use of reformulation and configuration techniques which can automatically reformulate, re-represent or tune the domain model and/or problem description in order to increase the efficiency of a planner and increase the scope of problems solved. The idea is to make these techniques to some degree independent of domain and planner (that is, applicable to a range of domains and planning engine technologies), and use them to form a wrapper around a planner, improving its overall performance for the domain to which it is applied. Types of reformulation include macro-learning [Botea et al., 2005; Newton et al., 2007], action schema splitting [Areces et al., 2014] and entanglements [Chrpa and McCluskey, 2012].


Procrastinating with Confidence: Near-Optimal, Anytime, Adaptive Algorithm Configuration

arXiv.org Artificial Intelligence

Algorithm configuration methods optimize the performance of a parameterized heuristic algorithm on a given distribution of problem instances. Recent work introduced an algorithm configuration procedure ('Structured Procrastination') that provably achieves near-optimal performance with high probability and with nearly minimal runtime in the worst case. It also offers an anytime property: it keeps tightening its optimality guarantees the longer it is run. Unfortunately, Structured Procrastination is not adaptive to characteristics of the parameterized algorithm: it treats every input like the worst case. Follow-up work ('Leaps and Bounds') achieves adaptivity but trades away the anytime property. This paper introduces a new algorithm configuration method, 'Structured Procrastination with Confidence', that preserves the near-optimality and anytime properties of Structured Procrastination while adding adaptivity. In particular, the new algorithm will perform dramatically faster in settings where many algorithm configurations perform poorly; we show empirically that such settings arise frequently in practice.