SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Safe Reinforcement Learning
Borong Zhang, Yuhao Zhang, Jiaming Ji, Yingshan Lei, Josef Dai, Yuanpei Chen, Yaodong Yang
Vision-language-action models (VLAs) have shown great potential as generalist robot policies. However, these models pose urgent safety challenges during deployment, including the risk of physical harm to the environment, the robot itself, and humans. How can safety be explicitly incorporated into VLAs? In this work, we propose SafeVLA, a novel algorithm designed to integrate safety into VLAs, ensuring the protection of the environment, robot hardware, and humans in real-world settings. SafeVLA effectively balances safety and task performance by employing large-scale constrained learning within simulated environments. We demonstrate that SafeVLA outperforms the current state-of-the-art method in both safety and task performance, achieving average improvements of 83.58% and 3.85%, respectively, in simulation. By prioritizing safety, our approach eliminates high-risk behaviors and reduces the upper bound of unsafe behaviors to 1/35 of that in the current state-of-the-art, thereby significantly mitigating long-tail risks. Furthermore, the learned safety constraints generalize to diverse, unseen scenarios, including multiple out-of-distribution perturbations and tasks. Our data, models, and newly proposed benchmark environment are available at https://sites.google.com/view/pku-safevla.
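The abstract says SafeVLA balances safety and task performance via large-scale constrained learning, but does not spell out the training machinery. The sketch below is only an illustration of the standard Lagrangian relaxation of a constrained MDP objective (maximize expected reward subject to expected cost staying under a limit), the common formulation in safe RL; it is not SafeVLA's actual implementation, and the toy two-action environment and all names (`theta`, `lam`, `cost_limit`) are hypothetical.

```python
# Minimal sketch of a Lagrangian-relaxed constrained policy update for a
# CMDP objective:  max_pi E[R]  subject to  E[C] <= cost_limit.
# Hypothetical toy example, not SafeVLA's actual training code.
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: action 0 is safe but low-reward, action 1 is risky but
# high-reward. The constraint caps expected cost at cost_limit.
rewards = np.array([1.0, 2.0])   # E[R | action]
costs = np.array([0.0, 1.0])     # E[C | action]
cost_limit = 0.3                 # d in the constraint E[C] <= d

theta = 0.0      # policy logit: pi(risky) = sigmoid(theta)
lam = 0.0        # Lagrange multiplier (dual variable), kept >= 0
lr_theta, lr_lam = 0.1, 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(5000):
    p1 = sigmoid(theta)            # probability of the risky action
    a = rng.random() < p1          # sample an action
    r, c = rewards[int(a)], costs[int(a)]

    # REINFORCE gradient of log pi(a) w.r.t. theta (Bernoulli policy).
    grad_logp = (1.0 - p1) if a else (-p1)

    # Primal ascent on the Lagrangian L = E[R] - lam * (E[C] - d):
    # the reward signal is penalized by the multiplier times the cost.
    theta += lr_theta * grad_logp * (r - lam * c)

    # Dual ascent: raise lam while the constraint is violated, relax it
    # otherwise, projecting back onto lam >= 0.
    lam = max(0.0, lam + lr_lam * (c - cost_limit))

print(f"pi(risky) = {sigmoid(theta):.2f}, lambda = {lam:.2f}")
# The policy settles near pi(risky) ~= cost_limit: expected cost is driven
# to the limit while reward is maximized subject to the constraint.
```

The design point the sketch illustrates: rather than hard-coding a fixed penalty, the multiplier adapts online, tightening when the constraint is violated and relaxing when it is satisfied, which is how constrained learning can trade off safety against task reward without sacrificing either outright.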
arXiv.org Artificial Intelligence
March 5, 2025
- Genre:
  - Research Report
  - Experimental Study (0.93)
  - New Finding (0.68)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks
      - Deep Learning (1.00)
    - Natural Language
      - Chatbot (0.68)
      - Large Language Model (1.00)
    - Representation & Reasoning > Optimization (0.67)
    - Robots (1.00)