Near-Optimal Distributionally Robust Reinforcement Learning with General L_p Norms
Neural Information Processing Systems
To address the sim-to-real gap and sample efficiency challenges in reinforcement learning (RL), this work studies distributionally robust Markov decision processes (RMDPs), which optimize worst-case performance when the deployed environment lies within an uncertainty set around a nominal MDP. Despite recent efforts, the sample complexity of RMDPs has remained largely undetermined. While the statistical implications of distributional robustness in RL have been explored in some specific cases, the generalizability of the existing findings remains unclear, especially in comparison to standard RL. Assuming access to a generative model that samples from the nominal MDP, we examine the sample complexity of RMDPs whose uncertainty sets are defined by a class of generalized L_p norms as the 'distance' function, under the two commonly adopted sa-rectangular and s-rectangular conditions. Our results imply that, with generalized L_p norms, RMDPs can be more sample-efficient to solve than standard MDPs in both the sa- and s-rectangular cases, potentially inspiring further empirical research.
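For intuition about the sa-rectangular setting described above, the following is a minimal sketch of robust value iteration with an L_p-ball uncertainty set around a nominal transition kernel. It is not the paper's algorithm: the names (P0, R, sigma, p, gamma), the generic numerical inner solver, and all tolerances are illustrative assumptions.

```python
# Sketch: robust value iteration under an sa-rectangular L_p uncertainty set.
# Illustrative only; the paper's estimators and guarantees are not implemented here.
import numpy as np
from scipy.optimize import minimize

def worst_case_transition(p0, v, sigma, p):
    """Numerically solve min_{q in simplex, ||q - p0||_p <= sigma} q . v.

    Assumes p > 1 so the norm constraint is smooth enough for SLSQP.
    """
    n = len(p0)
    cons = [
        {"type": "eq", "fun": lambda q: q.sum() - 1.0},                 # simplex: mass sums to 1
        {"type": "ineq", "fun": lambda q: sigma - np.linalg.norm(q - p0, ord=p)},  # L_p ball
    ]
    res = minimize(lambda q: q @ v, p0, bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

def robust_value_iteration(P0, R, sigma, p, gamma=0.95, iters=200):
    """P0: (S, A, S) nominal kernel, R: (S, A) rewards, sigma: radius."""
    S, A, _ = P0.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                q = worst_case_transition(P0[s, a], V, sigma, p)
                Q[s, a] = R[s, a] + gamma * (q @ V)   # robust Bellman backup
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)
```

For p = 1 or p = infinity the inner minimization is a linear program and admits more direct solutions; the generic constrained solver above is only a stand-in to make the worst-case backup concrete.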