
 Fukunaga, Alex


Evaluation of a Simple, Scalable, Parallel Best-First Search Strategy

arXiv.org Artificial Intelligence

Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate Hash-Distributed A* (HDA*), a simple approach to parallel best-first search that asynchronously distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in an optimal sequential version of the Fast Downward planner, as well as a 24-puzzle solver. The scaling behavior of HDA* is evaluated experimentally on a shared-memory multicore machine with 8 cores, a cluster of commodity machines using up to 64 cores, and large-scale high-performance clusters using up to 2400 processors. We show that this approach scales well, allowing the effective utilization of large amounts of distributed memory to optimally solve problems that require terabytes of RAM. We also compare HDA* to Transposition-table Driven Scheduling (TDS), a hash-based parallelization of IDA*, and show that, in planning, HDA* significantly outperforms TDS. A simple hybrid that combines HDA* and TDS to exploit the strengths of both algorithms is proposed and evaluated.
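As a rough illustration of the hash-based work distribution described in this abstract, the Python sketch below maps each generated state to an owning process by hashing the state and pushes the node onto that process's open list. The Node class, the owner_of and distribute helpers, and the in-memory queues are illustrative assumptions only, not the authors' implementation; the actual algorithm runs one A* search per process and exchanges nodes with asynchronous messages rather than shared in-memory queues.

import heapq
from dataclasses import dataclass, field

NUM_PROCESSES = 4  # assumed number of parallel workers (illustrative)

@dataclass(order=True)
class Node:
    f: int                                    # priority: g + h
    state: tuple = field(compare=False)       # search state (hashable)
    g: int = field(compare=False, default=0)  # cost so far

def owner_of(state, num_processes=NUM_PROCESSES):
    # Map a search state to the process that "owns" it via a hash function.
    return hash(state) % num_processes

# One open list (priority queue) per process. In the real algorithm each
# process keeps only its own open/closed lists in its local RAM.
open_lists = [[] for _ in range(NUM_PROCESSES)]

def distribute(node):
    # Push a newly generated node onto the open list of its owning process,
    # so duplicate detection for a given state always happens on one process.
    heapq.heappush(open_lists[owner_of(node.state)], node)

# Example: successors generated during an expansion are scattered to
# whichever processes the hash function assigns.
distribute(Node(f=7, state=(1, 2, 3), g=3))
distribute(Node(f=9, state=(3, 2, 1), g=4))
print([len(q) for q in open_lists])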


Iterative Resource Allocation for Memory Intensive Parallel Search Algorithms on Clouds, Grids, and Shared Clusters

AAAI Conferences

The increasing availability of “utility computing” resources such as clouds, grids, and massively parallel shared clusters can provide practically unlimited processing and memory capacity on demand, at some cost per unit of resource usage. This requires a new perspective in the design and evaluation of parallel search algorithms. Previous work in parallel search implicitly assumed ownership of a cluster with a static allocation of CPU cores and RAM, and emphasized wallclock runtime. With utility computing resources, trade-offs between performance and monetary costs must be considered. This paper considers dynamically increasing the usage of utility computing resources until a problem is solved. Efficient resource allocation policies are analyzed in comparison with an optimal allocation strategy. We evaluate our iterative allocation strategy by applying it to the HDA* parallel search algorithm. The experimental results validate our theoretical predictions. They show that, in practice, the costs incurred by iterative allocation are reasonably close to an optimal (but a priori unknown) policy, and are significantly better than the worst-case analytical bounds.
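To make the iterative allocation idea concrete, here is a minimal Python sketch of a geometric-growth policy that rents progressively larger resource pools until the search succeeds, while accumulating the total cost paid. The doubling schedule, the cost model, and the run_search stub are assumptions for illustration and are not the specific policy or bounds analyzed in the paper.

def run_search(num_cores, problem):
    # Placeholder for launching a parallel search (e.g., HDA*) on num_cores;
    # returns True iff the problem could be solved with this allocation.
    # In practice the required allocation is unknown until a run succeeds.
    return num_cores >= problem["cores_needed"]

def iterative_allocation(problem, initial_cores=1, growth_factor=2,
                         cost_per_core=1.0):
    # Grow the allocation geometrically until the search succeeds,
    # accumulating the total resource cost paid along the way.
    cores, total_cost = initial_cores, 0.0
    while True:
        total_cost += cores * cost_per_core   # pay for this attempt
        if run_search(cores, problem):
            return cores, total_cost
        cores *= growth_factor                # rent a larger pool and retry

# Example: a problem that (unknown to the policy) needs 100 cores.
cores, cost = iterative_allocation({"cores_needed": 100})
print(f"solved with {cores} cores at total cost {cost}")
# An omniscient policy would pay 100 units here; the doubling policy pays
# 1 + 2 + 4 + ... + 128 = 255, within a constant factor of the optimum.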


Scalable, Parallel Best-First Search for Optimal Sequential Planning

AAAI Conferences

Large-scale, parallel clusters composed of commodity processors are increasingly available, enabling the use of vast processing capabilities and distributed RAM to solve hard search problems. We investigate parallel algorithms for optimal sequential planning, with an emphasis on exploiting distributed-memory computing clusters. In particular, we focus on an approach which distributes and schedules work among processors based on a hash function of the search state. We use this approach to parallelize the A* algorithm in the optimal sequential version of the Fast Downward planner. The scaling behavior of the algorithm is evaluated experimentally on clusters using up to 128 processors, a significant increase compared to previous work in parallelizing planners. We show that this approach scales well, allowing us to effectively utilize the large amount of distributed memory to optimally solve problems that require hundreds of gigabytes of RAM. We also show that this approach scales well on a single, shared-memory multicore machine.


Staff Scheduling for Inbound Call and Customer Contact Centers

AI Magazine

The staff scheduling problem is a critical problem in the call center (or, more generally, customer contact center) industry. This article describes DIRECTOR, a staff scheduling system for contact centers. DIRECTOR is a constraint-based system that uses AI search techniques to generate schedules that satisfy and optimize a wide range of constraints and service-quality metrics. DIRECTOR has successfully been deployed at more than 800 contact centers, with significant measurable benefits, some of which are documented in case studies included in this article.

