Leveraging Approximate Model-based Shielding for Probabilistic Safety Guarantees in Continuous Environments
Goodall, Alexander W., Belardinelli, Francesco
arXiv.org Artificial Intelligence
Shielding is a popular technique for achieving safe reinforcement learning (RL). However, classical shielding approaches come with quite restrictive assumptions, making them difficult to deploy in complex environments, particularly those with continuous state or action spaces. In this paper, we extend the more versatile approximate model-based shielding (AMBS) framework to the continuous setting. In particular, we use Safety Gym as our test-bed, allowing for a more direct comparison of AMBS with popular constrained RL algorithms. We also provide strong probabilistic safety guarantees for the continuous setting. In addition, we propose two novel penalty techniques that directly modify the policy gradient, which empirically provide more stable convergence in our experiments.
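The core idea of shielding, as summarized above, is to override a learned policy's action whenever a (possibly approximate) model predicts it is likely to lead to a safety violation. The following is a minimal sketch of that idea only; the names `violation_prob`, `safe_policy`, and the threshold value are illustrative assumptions, not the paper's actual AMBS implementation.

```python
def shielded_action(state, policy, safe_policy, violation_prob, threshold=0.1):
    """Return the task policy's action unless the model-estimated
    probability of a safety violation exceeds the threshold,
    in which case fall back to the backup safe policy."""
    action = policy(state)
    if violation_prob(state, action) > threshold:
        return safe_policy(state)  # override with a known-safe action
    return action

# Toy example: states are positions on a line; actions step left/right.
policy = lambda s: 1             # task policy always steps right
safe_policy = lambda s: -1       # backup policy steps left
# Hypothetical model estimate: stepping past position 5 is unsafe.
violation_prob = lambda s, a: 0.9 if s + a > 5 else 0.0

print(shielded_action(5, policy, safe_policy, violation_prob))  # -> -1 (shield intervenes)
print(shielded_action(0, policy, safe_policy, violation_prob))  # -> 1 (policy's action kept)
```

In practice, AMBS obtains the violation estimate from a learned world model rather than a hand-written predicate as above.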
Feb-1-2024