Action Selection for MDPs: Anytime AO* vs. UCT
In the presence of non-admissible heuristics, A* and other best-first algorithms can be converted into anytime optimal algorithms over OR graphs by simply continuing the search after the first solution is found. The same trick, however, does not work for best-first algorithms over AND/OR graphs, which must be able to expand leaf nodes of the explicit graph that are not necessarily part of the best partial solution. Anytime optimal variants of AO* must thus address an exploration-exploitation tradeoff: they cannot just "exploit"; they must keep exploring as well. In this work, we develop one such variant of AO* and apply it to finite-horizon MDPs. This Anytime AO* algorithm eventually delivers an optimal policy while using non-admissible random heuristics that can be sampled, as when the heuristic is the cost of a base policy that can be sampled with rollouts. We then test Anytime AO* for action selection over large infinite-horizon MDPs that cannot be solved with existing off-line heuristic search and dynamic programming algorithms, and compare it with UCT.

Introduction

One of the natural approaches for selecting actions in very large state spaces is to perform a limited amount of lookahead. In the context of discounted MDPs, Kearns, Mansour, and Ng have shown that near-optimal actions can be selected by considering a sampled lookahead tree that is sufficiently sparse, whose size depends on the discount factor and the suboptimality bound but not on the number of problem states (Kearns, Mansour, and Ng 1999).
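For concreteness, the following is a minimal sketch of the sparse-sampling lookahead idea credited to Kearns, Mansour, and Ng above, not of the paper's Anytime AO* algorithm. It assumes a generative model of the MDP provided as hypothetical callables `actions(s)`, `sample_next(s, a)`, and `reward(s, a, s_next)`; these names and signatures are illustrative only and do not come from the paper.

```python
def q_value(state, action, depth, width, gamma, actions, sample_next, reward):
    """Sampled Q(state, action) from a sparse lookahead tree of the given depth.

    Only `width` successor states are sampled per (node, action) pair, so the
    tree size depends on depth and width (chosen from gamma and the desired
    accuracy bound), not on the number of MDP states.
    """
    total = 0.0
    for _ in range(width):
        s_next = sample_next(state, action)        # sample s' ~ P(. | state, action)
        future = 0.0
        if depth > 1:
            # best sampled Q-value over the actions available at the successor
            future = max(q_value(s_next, a, depth - 1, width, gamma,
                                 actions, sample_next, reward)
                         for a in actions(s_next))
        total += reward(state, action, s_next) + gamma * future
    return total / width

def select_action(state, depth, width, gamma, actions, sample_next, reward):
    """Pick the action with the largest sampled Q-value at the root."""
    return max(actions(state),
               key=lambda a: q_value(state, a, depth, width, gamma,
                                     actions, sample_next, reward))
```

Because only `width` successors are sampled per action at each node, the lookahead tree has on the order of (|A| * width)^depth nodes, independent of the size of the state space, which is the property the introduction refers to.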
arXiv.org Artificial Intelligence
Sep-26-2019