Risk-sensitive Actor-Critic with Static Spectral Risk Measures for Online and Offline Reinforcement Learning
The development of Distributional Reinforcement Learning (DRL) has introduced a natural way to incorporate risk sensitivity into value-based and actor-critic methods by employing risk measures other than expectation in the value function. While this approach is widely adopted in many online and offline RL algorithms due to its simplicity, the naive integration of risk measures often results in suboptimal policies. This limitation can be particularly harmful in scenarios where the need for effective risk-sensitive policies is critical and worst-case outcomes carry severe consequences. To address this challenge, we propose a novel framework for optimizing static Spectral Risk Measures (SRM), a flexible family of risk measures that generalizes objectives such as CVaR and Mean-CVaR, and enables the tailoring of risk preferences. Our method is applicable to both online and offline RL algorithms. We establish theoretical guarantees by proving convergence in the finite state-action setting. Moreover, through extensive empirical evaluations, we demonstrate that our algorithms consistently outperform existing risk-sensitive methods in both online and offline environments across diverse domains.
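A spectral risk measure weights the quantiles of the return distribution by a non-increasing risk spectrum φ that integrates to one, ρ(X) = ∫₀¹ φ(u) F⁻¹(u) du; choosing φ recovers CVaR or Mean-CVaR as special cases. As a minimal illustration (not the paper's algorithm, just the underlying risk-measure family), the empirical version can be sketched as:

```python
import numpy as np

def spectral_risk(samples, spectrum):
    """Empirical spectral risk measure: approximate integral of
    phi(u) * quantile(u) over u in [0, 1].

    samples  -- 1-D array of outcomes (returns; lower = worse)
    spectrum -- risk spectrum phi: non-increasing, integrates to 1 on [0, 1]
    """
    x = np.sort(np.asarray(samples, dtype=float))  # empirical quantiles
    n = x.size
    # midpoint rule on each 1/n-wide cell of the quantile function
    u = (np.arange(n) + 0.5) / n
    w = spectrum(u)
    w = w / w.sum()  # renormalize the discretized spectrum
    return float(np.dot(w, x))

def cvar_spectrum(alpha):
    """Spectrum recovering CVaR_alpha: mean of the worst alpha-fraction."""
    return lambda u: np.where(u <= alpha, 1.0 / alpha, 0.0)

# Uniform spectrum phi = 1 recovers the plain expectation;
# cvar_spectrum(0.5) averages only the worst half of the outcomes.
returns = [1.0, 2.0, 3.0, 4.0]
print(spectral_risk(returns, cvar_spectrum(0.5)))        # 1.5
print(spectral_risk(returns, lambda u: np.ones_like(u))) # 2.5
```

Mixing the two, φ(u) = λ + (1 − λ)·𝟙{u ≤ α}/α, gives the Mean-CVaR objective mentioned in the abstract, which is one way this family lets practitioners tailor risk preferences.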
Jul-8-2025