Technology: Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.40)
RAAM: The Benefits of Robustness in Approximating Aggregated MDPs in Reinforcement Learning
Marek Petrik, Dharmashankar Subramanian
We describe how to use robust Markov decision processes for value function approximation with state aggregation. The robustness reduces the sensitivity to the approximation error of sub-optimal policies compared with classical methods such as fitted value iteration. Our experimental results show that the robust representation can significantly improve solution quality at minimal additional computational cost. Published at the Neural Information Processing Systems Conference.
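The core idea in the abstract can be illustrated with a small sketch. In classical aggregation, an aggregate state's value is a weighted average over its member ground states; the robust variant instead takes the worst case (minimum) over members, which guards against aggregation error. The toy MDP, the aggregation map `phi`, and all names below are hypothetical illustrations, not the authors' RAAM implementation:

```python
import numpy as np

# Minimal sketch of robust value iteration over an aggregated MDP
# (hypothetical toy problem, not the paper's actual algorithm).
# Robustness: each aggregate state's value is the worst case (min)
# over its member ground states, rather than a weighted average as
# in classical fitted value iteration with aggregation.

n_states, n_actions, gamma = 6, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # r(s, a)

phi = np.array([0, 0, 0, 1, 1, 1])  # aggregation map: ground state -> aggregate
n_agg = phi.max() + 1

V = np.zeros(n_agg)  # value per aggregate state
for _ in range(500):
    # Bellman backup for each ground state, bootstrapping from aggregate values
    Q = R + gamma * (P @ V[phi])        # shape (n_states, n_actions)
    v_ground = Q.max(axis=1)            # greedy over actions
    # Robust aggregation: pessimistic (min) over each aggregate's members
    V_new = np.array([v_ground[phi == k].min() for k in range(n_agg)])
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

print(V)  # pessimistic value estimate for each aggregate state
```

Replacing the `min` in the aggregation step with a weighted average recovers the classical (non-robust) aggregated backup, which is the comparison the abstract draws.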