Realizable Continuous-Space Shields for Safe Reinforcement Learning
Kim, Kyungmin, Corsi, Davide, Rodriguez, Andoni, Lanier, JB, Parellada, Benjami, Baldi, Pierre, Sanchez, Cesar, Fox, Roy
While Deep Reinforcement Learning (DRL) has achieved remarkable success across various domains, it remains vulnerable to occasional catastrophic failures without additional safeguards. An effective solution to prevent these failures is to use a shield that validates and adjusts the agent's actions to ensure compliance with a provided set of safety specifications. For real-world robotic domains, it is essential to define safety specifications over continuous state and action spaces to accurately account for system dynamics and compute new actions that minimally deviate from the agent's original decision. In this paper, we present the first shielding approach specifically designed to ensure the satisfaction of safety requirements in continuous state and action spaces, making it suitable for practical robotic applications. Our method builds upon realizability, an essential property that confirms the shield will always be able to generate a safe action for any state in the environment. We formally prove that realizability can be verified for stateful shields, enabling the incorporation of non-Markovian safety requirements, such as loop avoidance. Finally, we demonstrate the effectiveness of our approach in ensuring safety without compromising the policy's success rate by applying it to a navigation problem and a multi-agent particle environment.
Keywords: Shielding, Reinforcement Learning, Safety, Robotics
arXiv.org Artificial Intelligence
Dec-1-2024
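To make the shielding concept concrete, below is a minimal illustrative sketch in Python. It is not the paper's method: the `is_safe` predicate, the box action bounds, and the random-search projection are hypothetical stand-ins, whereas the paper constructs shields whose realizability is formally verified. The sketch only shows the generic contract: pass the agent's action through unchanged when it is safe, otherwise return a safe action that minimally deviates from it.

```python
# Illustrative sketch only, not the paper's algorithm. Assumes a
# user-supplied predicate is_safe(state, action) over a continuous
# (box-bounded) action space; unsafe actions are corrected by picking
# the nearest safe candidate from a random search (minimal deviation).
import numpy as np

class Shield:
    def __init__(self, is_safe, action_low, action_high, n_candidates=256, seed=0):
        self.is_safe = is_safe
        self.low = np.asarray(action_low, dtype=float)
        self.high = np.asarray(action_high, dtype=float)
        self.n_candidates = n_candidates
        self.rng = np.random.default_rng(seed)

    def correct(self, state, action):
        """Return the agent's action if safe; otherwise the closest safe
        candidate found in the action box."""
        action = np.clip(action, self.low, self.high)
        if self.is_safe(state, action):
            return action
        candidates = self.rng.uniform(
            self.low, self.high, size=(self.n_candidates, self.low.size))
        safe = [c for c in candidates if self.is_safe(state, c)]
        if not safe:
            # A realizable shield guarantees this branch is unreachable:
            # every reachable state admits at least one safe action.
            raise RuntimeError("no safe action found; shield not realizable here")
        return min(safe, key=lambda c: float(np.linalg.norm(c - action)))

# Hypothetical usage: a 1-D action whose allowed magnitude shrinks
# as the agent approaches an obstacle.
shield = Shield(
    is_safe=lambda s, a: abs(a[0]) <= 1.0 - 0.5 * s["obstacle_proximity"],
    action_low=[-1.0],
    action_high=[1.0],
)
print(shield.correct({"obstacle_proximity": 0.8}, np.array([0.9])))  # ~[0.6]
```

Note that a stateful shield, as studied in the paper, would additionally carry internal memory across steps so that non-Markovian requirements such as loop avoidance can be enforced; the sketch above is purely Markovian.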