Basis refinement strategies for linear value function approximation in MDPs

Comanici, Gheorghe, Precup, Doina, Panangaden, Prakash

Neural Information Processing Systems 

We provide a theoretical framework for analyzing basis function construction for linear value function approximation in Markov Decision Processes (MDPs). We show that important existing methods, such as Krylov bases and Bellman-error-based methods, are special cases of the general framework we develop. We provide a general algorithmic framework for computing basis function refinements which "respect" the dynamics of the environment, and we derive approximation error bounds that apply to any algorithm fitting this general framework. We also show how, using ideas related to bisimulation metrics, one can translate basis refinement into a process of finding "prototypes" that are diverse enough to represent the given MDP.
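As an illustration of one of the existing methods the abstract mentions, a Krylov basis for linear value function approximation is built by repeatedly applying the transition matrix to the reward vector, giving the vectors r, Pr, P²r, and so on. The sketch below is a minimal pure-Python version on a hypothetical 3-state MDP under a fixed policy; the transition matrix `P` and reward vector `r` are invented for illustration and are not from the paper.

```python
# Sketch of Krylov basis construction: the basis vectors are
# r, Pr, P^2 r, ..., P^{k-1} r for transition matrix P and rewards r.
# P and r below are hypothetical example values, not from the paper.

def mat_vec(P, v):
    """Multiply matrix P (list of rows) by vector v."""
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(P))]

def krylov_basis(P, r, k):
    """Return the first k Krylov basis vectors [r, Pr, ..., P^{k-1} r]."""
    basis = [list(r)]
    for _ in range(k - 1):
        basis.append(mat_vec(P, basis[-1]))
    return basis

# Hypothetical 3-state MDP under a fixed policy (row-stochastic P).
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]]
r = [1.0, 0.0, 0.0]

K = krylov_basis(P, r, 3)
# K[0] = r = [1.0, 0.0, 0.0]
# K[1] = Pr = [0.5, 0.0, 0.5]
# K[2] = P^2 r = [0.25, 0.25, 0.5]
```

A value function approximation is then sought as a linear combination of these basis vectors; the paper's framework analyzes such constructions, and refinements of them, in a unified way.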
