RAAM: The Benefits of Robustness in Approximating Aggregated MDPs in Reinforcement Learning
Marek Petrik, Dharmashankar Subramanian
Neural Information Processing Systems
We describe how to use robust Markov decision processes for value function approximation with state aggregation. The robustness serves to reduce the sensitivity of the resulting sub-optimal policies to approximation error, in comparison with classical methods such as fitted value iteration.
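The idea can be illustrated with a toy sketch (an assumption-laden illustration, not the paper's exact RAAM algorithm): states of a small MDP are grouped into aggregates, and each aggregate's value is updated pessimistically, taking the worst case over the Bellman backups of its member states. All names, the random MDP, and the aggregation map below are hypothetical.

```python
import numpy as np

# Hypothetical toy MDP: 4 underlying states, 2 actions (not from the paper).
n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s] = dist over s'
R = rng.uniform(0, 1, size=(n_actions, n_states))                 # R[a, s] = reward

# Aggregation map: states {0, 1} -> aggregate 0, states {2, 3} -> aggregate 1.
agg = np.array([0, 0, 1, 1])
n_agg = 2

def robust_vi(iters=200):
    """Robust value iteration on the aggregated MDP (pessimistic sketch)."""
    v = np.zeros(n_agg)  # one value per aggregate state
    for _ in range(iters):
        # Bellman backup for every underlying state, reading values
        # through the aggregation map.
        q = R + gamma * P @ v[agg]   # shape (n_actions, n_states)
        v_state = q.max(axis=0)      # greedy over actions
        # Robust aggregation step: worst case within each aggregate,
        # which guards against the aggregation (approximation) error.
        v = np.array([v_state[agg == k].min() for k in range(n_agg)])
    return v

v_robust = robust_vi()
```

A non-robust aggregated variant would average (or take a fixed weighting) over member states instead of the `min`; the pessimistic `min` is what makes the aggregate values insensitive to which underlying state the aggregate actually represents.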