Learning Verifiable Control Policies Using Relaxed Verification
Puja Chaudhury, Alexander Estornell, Michael Everett
arXiv.org Artificial Intelligence
Abstract: To provide safety guarantees for learning-based control systems, recent work has developed formal verification methods that are applied after training ends. However, if the trained policy does not meet the specifications, or if the verification algorithm is conservative, establishing these guarantees may not be possible. Instead, this work proposes performing verification throughout training, ultimately aiming for policies whose properties can be evaluated at runtime with lightweight, relaxed verification algorithms. The approach uses differentiable reachability analysis and incorporates new components into the loss function. Numerical experiments on a quadrotor model and a unicycle model highlight the ability of this approach to produce learned control policies that satisfy desired reach-avoid and invariance specifications.
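The abstract describes combining reachability analysis with a loss-function penalty. As a minimal sketch of that idea (not the paper's actual method), the snippet below propagates an interval box through a linear closed-loop system with a linear feedback policy `u = -K x` and computes a soft penalty for overlap with an avoid region. All matrices, the gain `K`, and the avoid region are illustrative assumptions; the paper instead applies differentiable reachability to learned neural policies.

```python
import numpy as np

# Illustrative double-integrator dynamics (dt = 0.1) with a hypothetical
# stabilizing linear policy u = -K x, giving closed loop x_{k+1} = (A - B K) x.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
K = np.array([[2.0, 3.0]])

A_cl = A - B @ K  # closed-loop transition matrix

def propagate_box(center, radius, steps):
    """Propagate an axis-aligned interval box through the linear closed loop.

    For a linear map M, the image of the box {c +/- r} is the box with
    center M c and radius |M| r (elementwise absolute value).
    """
    boxes = [(center, radius)]
    for _ in range(steps):
        center = A_cl @ center
        radius = np.abs(A_cl) @ radius
        boxes.append((center, radius))
    return boxes

def avoid_penalty(boxes, lo, hi):
    """Soft penalty measuring overlap of each reachable box with an avoid
    region [lo, hi]; zero iff no box intersects it. A term like this,
    built from differentiable operations, can be added to a training loss."""
    total = 0.0
    for c, r in boxes:
        # Per-axis overlap width between the box [c-r, c+r] and [lo, hi].
        overlap = np.minimum(c + r, hi) - np.maximum(c - r, lo)
        overlap = np.clip(overlap, 0.0, None)
        # The box intersects the region only if overlap is positive on all axes.
        total += float(np.prod(overlap))
    return total

boxes = propagate_box(np.array([1.0, 0.0]), np.array([0.05, 0.05]), steps=20)
penalty = avoid_penalty(boxes, lo=np.array([2.0, -1.0]),
                        hi=np.array([3.0, 1.0]))
```

Here the closed loop is stable and the reachable boxes never touch the avoid region, so the penalty is zero; during training, a nonzero penalty would push the policy parameters toward satisfying the reach-avoid specification.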
Apr-24-2025