RAAM: The Benefits of Robustness in Approximating Aggregated MDPs in Reinforcement Learning

Marek Petrik, Dharmashankar Subramanian

Neural Information Processing Systems

We describe how to use robust Markov decision processes for value function approximation with state aggregation. The robustness serves to reduce the sensitivity to the approximation error of sub-optimal policies in comparison to classical methods such as fitted value iteration. Our experimental results show that using the robust representation can significantly improve the solution quality with minimal additional computational cost.
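
The abstract describes the approach only at a high level. As a rough illustration of the underlying idea, and not the authors' exact algorithm, the sketch below runs value iteration on an aggregated MDP with a robust (worst-case) backup inside each aggregate: nature adversarially picks the worst member state, which is the pessimism that reduces sensitivity to aggregation error. The function name robust_aggregated_vi and the (A, S, S) tensor layout are our own assumptions, not from the paper.

    import numpy as np

    def robust_aggregated_vi(P, R, agg, n_agg, gamma=0.95, iters=1000, tol=1e-8):
        """Robust value iteration on an aggregated MDP (illustrative sketch).

        P   : (A, S, S) array, P[a, s, s'] = transition probability
        R   : (A, S) array, R[a, s] = expected reward of action a in state s
        agg : (S,) array mapping each ground state to its aggregate index
        Returns a pessimistic value for each of the n_agg aggregate states.
        """
        v = np.zeros(n_agg)
        for _ in range(iters):
            v_ground = v[agg]               # lift aggregate values to ground states
            Q = R + gamma * (P @ v_ground)  # one-step lookahead, Q[a, s]
            best = Q.max(axis=0)            # greedy value per ground state
            # Robust backup: worst case (min) over the member states of each
            # aggregate; averaging here instead would give the classical,
            # non-robust aggregation used by methods like fitted value iteration.
            v_new = np.full(n_agg, np.inf)
            np.minimum.at(v_new, agg, best)
            if np.max(np.abs(v_new - v)) < tol:
                break
            v = v_new
        return v

Because the min-based backup is still a gamma-contraction, the iteration converges, and the resulting values lower-bound the optimal ones; swapping the min for a weighted average over member states recovers the non-robust aggregation baseline that the abstract compares against.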