Goldenberg, Meir
Partial-Expansion A* with Selective Node Generation
Felner, Ariel (Ben-Gurion University) | Goldenberg, Meir (Ben-Gurion University) | Sharon, Guni (Ben-Gurion University) | Stern, Roni (Ben-Gurion University) | Beja, Tal (Ben-Gurion University) | Sturtevant, Nathan (University of Denver) | Schaeffer, Jonathan (University of Alberta) | Holte, Robert (University of Alberta)
A* is often described as 'optimal' in the sense that it expands the minimum number of unique nodes. However, A* may generate many extra nodes that are never expanded, incurring a performance loss, especially when the branching factor is large. Partial-Expansion A* (PEA*) addresses this problem: when expanding a node n, it generates all the children of n but stores only those children whose f-cost equals that of n; n is then re-inserted into the OPEN list with the f-cost of its next-best child. This paper introduces an enhanced version of PEA* (EPEA*). Given a priori domain knowledge, EPEA* generates only the children with the same f-cost as the parent. EPEA* is generalized to its iterative-deepening variant, EPE-IDA*. For some domains, these algorithms yield substantial performance improvements. State-of-the-art results were obtained for the pancake puzzle and for some multi-agent pathfinding instances. Drawbacks of EPEA* are also discussed.
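The partial-expansion rule described in the abstract can be sketched as follows. This is a minimal illustration of PEA*'s node handling only, not the paper's implementation; the successor function `succ`, heuristic `h`, and unit edge costs are assumptions for the example.

```python
import heapq

def pea_star(start, goal, succ, h):
    """Minimal Partial-Expansion A* sketch over a unit-cost graph.

    `succ(n)` returns the children of n; `h(n)` is a consistent
    heuristic (both are hypothetical interfaces for this sketch).
    Returns the optimal solution cost, or None if no path exists.
    """
    # OPEN is a priority queue of (f, g, state) tuples.
    open_list = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_list:
        f, g, n = heapq.heappop(open_list)
        if n == goal:
            return g
        # Generate all children of n, but store only those whose
        # f-cost equals n's current f-cost.
        children = [(g + 1 + h(c), g + 1, c) for c in succ(n)]
        for fc, gc, c in children:
            if fc == f and gc < best_g.get(c, float('inf')):
                best_g[c] = gc
                heapq.heappush(open_list, (fc, gc, c))
        # Re-insert n with the f-cost of its next-best (unstored) child.
        deferred = [fc for fc, _, _ in children if fc > f]
        if deferred:
            heapq.heappush(open_list, (min(deferred), g, n))
    return None
```

EPEA* improves on this sketch by using domain knowledge to generate only the matching children in the first place, avoiding the wasted generation of the deferred ones.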
A* Variants for Optimal Multi-Agent Pathfinding
Goldenberg, Meir (Ben-Gurion University) | Felner, Ariel (Ben-Gurion University) | Stern, Roni (Ben-Gurion University) | Sharon, Guni (Ben-Gurion University) | Schaeffer, Jonathan (University of Alberta)
Several variants of A* have recently been proposed for finding optimal solutions to the multi-agent pathfinding (MAPF) problem. However, these variants have not been deeply compared, either quantitatively or qualitatively. In this paper we aim to fill this gap. In addition to obtaining a deeper understanding of the existing algorithms, we describe in detail the application of the new enhanced partial-expansion technique to MAPF and show how pattern databases can be applied on top of this technique.
The Compressed Differential Heuristic
Goldenberg, Meir (Ben-Gurion University) | Sturtevant, Nathan (University of Denver) | Felner, Ariel (Ben-Gurion University) | Schaeffer, Jonathan (University of Alberta)
The differential heuristic (DH) is an effective memory-based heuristic for explicit state spaces. In this paper we aim to improve its performance and memory usage. We introduce a compression method for DHs which stores only a portion of the original uncompressed DH, while preserving enough information to enable efficient search. Compressed DHs (CDHs) are flexible and can be tuned to fit any amount of memory, even less than the size of the state space. Furthermore, CDHs can be built without the need to create and store the entire uncompressed DH. Experimental results across different domains show that, for a given amount of memory, a CDH significantly outperforms an uncompressed DH.
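For context, the uncompressed differential heuristic that this paper compresses can be sketched as follows: precompute exact distances from a few pivot states, then bound d(s,t) from below via the triangle inequality, h(s,t) = max over pivots p of |d(s,p) - d(t,p)|. The sketch below illustrates the plain DH only (not the compression scheme); the successor function and unit edge costs are assumptions for the example.

```python
from collections import deque

def bfs_distances(succ, pivot):
    """Exact unit-cost distances from `pivot` to all reachable states
    (one row of an uncompressed DH table)."""
    dist = {pivot: 0}
    q = deque([pivot])
    while q:
        u = q.popleft()
        for v in succ(u):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def differential_heuristic(pivot_dists, s, t):
    """Admissible lower bound on d(s, t): for every pivot p,
    |d(s,p) - d(t,p)| <= d(s,t) by the triangle inequality,
    so the maximum over pivots is still admissible."""
    return max(abs(d[s] - d[t]) for d in pivot_dists)
```

A CDH stores only a portion of these per-pivot distance tables, trading some heuristic accuracy for memory while keeping the lookup admissible.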