Efficient Risk-Averse Reinforcement Learning
In this post I present our recent NeurIPS 2022 paper (co-authored with Yinlam Chow, Mohammad Ghavamzadeh, and Shie Mannor) on risk-averse reinforcement learning (RL). I discuss why and how risk aversion is applied to RL, what its limitations are, and how we propose to overcome them, and demonstrate an application to accident prevention in autonomous driving. Our code is also available on GitHub. Risk-averse RL is crucial when applying RL to risk-sensitive real-world problems.
Efficient Risk-Averse Reinforcement Learning
Ido Greenberg, Yinlam Chow, Mohammad Ghavamzadeh, Shie Mannor
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns. A risk measure often focuses on the worst returns in the agent's experience. As a result, standard methods for risk-averse RL often ignore high-return strategies. We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it. We also devise a novel Cross-Entropy module for risk sampling, which (1) preserves risk aversion despite the soft risk, and (2) independently improves sample efficiency. By separating the risk aversion of the sampler from that of the optimizer, we can sample episodes with poor conditions, yet optimize with respect to successful strategies. We combine these two concepts in CeSoR (Cross-entropy Soft-Risk optimization), which can be applied on top of any risk-averse policy gradient (PG) method. We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks, including scenarios where standard risk-averse PG completely fails.
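To make the two mechanisms concrete, here is a minimal sketch of a CVaR policy-gradient loop that combines a soft-risk schedule with a cross-entropy sampler over environment conditions. The toy episode, the `soft_alpha` schedule, and the `CESampler` class are illustrative assumptions of mine, not the paper's implementation (see the GitHub repository for that); in particular, the importance-sampling correction that keeps the tilted sampler's gradient unbiased is omitted for brevity.

```python
# Minimal sketch of soft-risk CVaR policy gradient with a cross-entropy
# condition sampler. All names and the toy environment are hypothetical;
# this is NOT the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def episode_return(action, condition):
    # Toy episode: bolder actions pay off in easy conditions (small c)
    # but are penalized in hard conditions (large c).
    return action * (1.0 - 3.0 * condition) + rng.normal(0, 0.1)

def soft_alpha(t, T, alpha_target):
    # Soft-risk schedule: start risk-neutral (alpha = 1) so high-return
    # strategies are not ignored early on, then anneal toward the target
    # risk level over the first half of training.
    frac = min(1.0, 2.0 * t / T)
    return 1.0 + frac * (alpha_target - 1.0)

class CESampler:
    # Cross-entropy sampler over the environment condition c ~ Beta(a, b),
    # tilted toward conditions that produced the worst returns, so the
    # tail of the return distribution is sampled efficiently.
    def __init__(self):
        self.a, self.b = 1.0, 1.0
    def sample(self, n):
        return rng.beta(self.a, self.b, size=n)
    def update(self, conditions, returns, alpha):
        k = max(2, int(np.ceil(alpha * len(returns))))
        worst = conditions[np.argsort(returns)[:k]]
        m, v = worst.mean(), max(worst.var(), 1e-3)
        # Moment-match a Beta distribution to the worst-return conditions.
        common = m * (1.0 - m) / v - 1.0
        self.a = max(m * common, 0.5)
        self.b = max((1.0 - m) * common, 0.5)

# Gaussian policy: action ~ N(theta, sigma). The REINFORCE-style CVaR
# gradient keeps only episodes below the empirical alpha-quantile.
theta, sigma, lr = 0.0, 0.5, 0.05
alpha_target, T, batch = 0.1, 300, 64
sampler = CESampler()

for t in range(T):
    alpha = soft_alpha(t, T, alpha_target)
    conds = sampler.sample(batch)
    actions = rng.normal(theta, sigma, size=batch)
    rets = np.array([episode_return(a, c) for a, c in zip(actions, conds)])
    q = np.quantile(rets, alpha)              # empirical VaR_alpha
    mask = rets <= q                          # worst alpha-fraction only
    grad_logp = (actions - theta) / sigma**2  # d log pi / d theta
    theta += lr * np.mean(mask * (rets - q) * grad_logp)
    sampler.update(conds, rets, alpha_target) # sampler stays risk-averse

print(f"learned theta = {theta:.3f}")
```

The point of the sketch is the separation described in the abstract: the sampler's update always uses the strict target risk level, so hard conditions are over-sampled throughout training, while the optimizer's risk level is annealed, so the policy can still learn from high-return episodes before becoming fully risk-averse.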