Action Selection for MDPs: Anytime AO* Versus UCT

Bonet, Blai (Universidad Simón Bolívar) | Geffner, Héctor (ICREA and Universitat Pompeu Fabra)

AAAI Conferences 

One of the natural approaches for selecting actions in very large state spaces is by performing a limited amount of lookahead. In the context of discounted MDPs, Kearns, Mansour, and Ng have shown that near to optimal actions can be selected by considering a sampled lookahead tree that is sufficiently sparse, whose size depends on the discount factor and the suboptimality bound but not on the number of problem states (Kearns, Mansour, and Ng 1999). The UCT algorithm (Kocsis and Szepesvári 2006) is a version of this form of Monte Carlo planning, where the lookahead trees are not grown depth-first but 'best-first', following a selection criterion …

From this perspective, an algorithm like RTDP fails on two grounds: first, RTDP does not appear to make best use of short time windows in large state spaces; second, and more importantly, RTDP can use admissible heuristics but not informed base policies. On the other hand, algorithms like Policy Iteration (Howard 1971) deliver all of these features except one: they are exhaustive, and thus even to get started, they need vectors with the size of the state space. At the same time, while there are non-exhaustive versions of (asynchronous) Value Iteration such as RTDP, there are no similar 'focused' versions of Policy Iteration ensuring anytime optimality.
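As a quick illustration of the sampled lookahead idea attributed above to Kearns, Mansour, and Ng, the following is a minimal Python sketch, not taken from the paper: the names simulator (a generative model returning a sampled next state and reward), actions, C (samples per state-action pair), depth, and gamma are assumptions introduced for the example. The resulting tree has on the order of (|actions| * C)^depth nodes, which does not depend on the number of states of the MDP; in the analysis of Kearns, Mansour, and Ng, C and depth are chosen from the discount factor and the desired suboptimality bound.

    def sparse_sample_q(state, action, depth, simulator, actions, C, gamma):
        """Average of C sampled one-step outcomes plus discounted lookahead value."""
        total = 0.0
        for _ in range(C):
            next_state, reward = simulator(state, action)  # generative model call
            total += reward + gamma * sparse_sample_v(next_state, depth - 1,
                                                      simulator, actions, C, gamma)
        return total / C

    def sparse_sample_v(state, depth, simulator, actions, C, gamma):
        """Depth-limited value estimate: max over actions of the sampled Q-estimates."""
        if depth == 0:
            return 0.0
        return max(sparse_sample_q(state, a, depth, simulator, actions, C, gamma)
                   for a in actions)

    def select_action(state, depth, simulator, actions, C, gamma):
        """Choose the action whose sampled lookahead tree has the best Q-estimate."""
        return max(actions, key=lambda a:
                   sparse_sample_q(state, a, depth, simulator, actions, C, gamma))

    # Toy usage: a two-state chain where action 1 pays off in the long run.
    def toy_simulator(state, action):
        if action == 1:
            return (1, 1.0 if state == 1 else 0.0)
        return (0, 0.5)

    print(select_action(0, depth=3, simulator=toy_simulator,
                        actions=[0, 1], C=5, gamma=0.9))   # -> 1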
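The 'best-first' growth of the lookahead trees in UCT relies on a bandit-style selection criterion applied at each tree node. The sketch below shows a generic UCB1-style scoring of a node's children; the bookkeeping fields visits and total_value and the constant exploration_c are introduced here for illustration and are not the paper's notation.

    import math

    def ucb_select(node_children, exploration_c=1.4):
        """UCB1-style best-first selection: trade off mean value against visit counts."""
        parent_visits = sum(child['visits'] for child in node_children)

        def score(child):
            if child['visits'] == 0:          # unvisited children are expanded first
                return float('inf')
            mean = child['total_value'] / child['visits']
            bonus = exploration_c * math.sqrt(math.log(parent_visits) / child['visits'])
            return mean + bonus

        return max(node_children, key=score)

    # Example: the rarely tried child gets a large exploration bonus.
    children = [{'visits': 10, 'total_value': 6.0},
                {'visits': 2,  'total_value': 1.5},
                {'visits': 0,  'total_value': 0.0}]
    print(ucb_select(children))   # -> the unvisited child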
