Policy Gradient for Rectangular Robust Markov Decision Processes
Neural Information Processing Systems
Policy gradient methods have become a standard for training reinforcement learning agents in a scalable and efficient manner. However, they do not account for transition uncertainty, whereas learning robust policies can be computationally expensive. In this paper, we introduce robust policy gradient (RPG), a policy-based method that efficiently solves rectangular robust Markov decision processes (MDPs). We provide a closed-form expression for the worst-case occupation measure. Incidentally, we find that the worst-case kernel is a rank-one perturbation of the nominal kernel.
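To make the rank-one structure concrete, here is a minimal sketch, assuming an (s,a)-rectangular L2 uncertainty ball around the nominal kernel and ignoring the nonnegativity constraint on probabilities; the names `p_bar`, `beta`, and `v`, and this particular closed form, are illustrative assumptions rather than the paper's verbatim derivation. An adversary that minimizes the expected value ⟨p_sa, v⟩ over each ball shifts every row of the nominal kernel along the same value-dependent direction, so the total perturbation is a rank-one matrix.

```python
# Illustrative sketch, not the paper's verbatim algorithm: worst-case
# transition kernel for an (s,a)-rectangular L2 ball around a nominal
# kernel, ignoring the nonnegativity constraint on probabilities.
import numpy as np

def worst_case_kernel(p_bar: np.ndarray, beta: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Worst-case kernel inside ||p_sa - p_bar_sa||_2 <= beta_sa.

    p_bar : (S*A, S) nominal transition probabilities, rows sum to 1.
    beta  : (S*A,) per-(s,a) uncertainty radii (assumed names).
    v     : (S,) value-function estimate of the current policy.

    The adversary minimizes <p_sa, v> subject to the L2 ball and the
    sum-to-one constraint; the minimizer shifts every row along the
    same direction u(v), so the perturbation is rank one.
    """
    u = v - v.mean()                  # project v onto the sum-zero subspace
    norm = np.linalg.norm(u)
    if norm == 0.0:                   # constant v: the adversary gains nothing
        return p_bar.copy()
    u /= norm
    # Rank-one perturbation: Delta = -(beta as a column) * (u as a row).
    return p_bar - np.outer(beta, u)
```

Because every row moves along the same unit vector u(v), the perturbation equals the outer product −β u(v)ᵀ, a rank-one matrix; this mirrors the structural claim in the abstract.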