A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP
Neural Information Processing Systems
As an important framework for safe Reinforcement Learning, the Constrained Markov Decision Process (CMDP) has been extensively studied in the recent literature. However, despite the rich results under various on-policy learning settings, essential understanding of offline CMDP problems is still lacking, in terms of both algorithm design and the information-theoretic sample complexity lower bound. In this paper, we focus on solving CMDP problems where only offline data are available.