Solving Richly Constrained Reinforcement Learning through State Augmentation and Reward Penalties
Hao Jiang, Tien Mai, Pradeep Varakantham, Minh Huy Hoang
Constrained Reinforcement Learning has been employed to compute safe policies through the use of expected cost constraints. The key challenge is in handling constraints on expected cost accumulated across time steps. Existing methods have developed innovative ways of converting this cost constraint over the entire policy into constraints over local decisions (at each time step). While such approaches have provided good solutions with regard to the objective, they can be either overly aggressive or overly conservative with respect to costs. This is owing to the use of estimates for "future" or "backward" costs in local cost constraints. To that end, we provide an equivalent unconstrained formulation of constrained RL that has an augmented state space and reward penalties. This intuitive formulation is general and has interesting theoretical properties. More importantly, it provides a new paradigm for solving richly constrained (e.g., constraints on expected cost, Value at Risk, Conditional Value at Risk) Reinforcement Learning problems effectively. As we show in our experimental results, we are able to outperform leading approaches for different constraint types on multiple benchmark problems.
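The general paradigm the abstract describes (augment the state with the cost accumulated so far, then penalize the reward when the cost budget is violated) can be illustrated as an environment wrapper. The sketch below is only an approximation of that idea, not the authors' exact formulation or theory: it assumes a Gymnasium environment with a Box observation space, a per-step cost reported in `info["cost"]` (as in the Safety Gym convention), and hypothetical `cost_budget` and `penalty` parameters.

```python
import gymnasium as gym
import numpy as np


class CostAugmentedWrapper(gym.Wrapper):
    """Illustrative sketch: append accumulated cost to the observation and
    apply a reward penalty once a cost budget is exceeded.

    Assumptions (not from the paper): the wrapped env has a Box observation
    space, reports per-step cost in info["cost"], and the budget/penalty
    values are placeholders.
    """

    def __init__(self, env, cost_budget=25.0, penalty=100.0):
        super().__init__(env)
        self.cost_budget = cost_budget
        self.penalty = penalty
        self.accumulated_cost = 0.0
        # Extend the observation space with one dimension for accumulated cost.
        low = np.append(env.observation_space.low, 0.0)
        high = np.append(env.observation_space.high, np.inf)
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.accumulated_cost = 0.0
        return np.append(obs, self.accumulated_cost), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self.accumulated_cost += info.get("cost", 0.0)
        # Reward penalty on budget violation: the constrained problem is
        # handled through the (augmented) unconstrained reward signal.
        if self.accumulated_cost > self.cost_budget:
            reward -= self.penalty
        return np.append(obs, self.accumulated_cost), reward, terminated, truncated, info
```

With such a wrapper, any standard unconstrained RL algorithm can in principle be trained on the augmented observations and penalized rewards, which is the spirit of the paradigm described above.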
arXiv.org Artificial Intelligence
May-31-2023