Provably Safe Neural Network Controllers via Differential Dynamic Logic
Samuel Teuber, Stefan Mitsch
Karlsruhe Institute of Technology
Neural Information Processing Systems
While neural networks (NNs) have great potential as autonomous controllers for cyber-physical systems, verifying the safety of neural-network-based control systems (NNCSs) poses significant challenges for the practical use of NNs, especially when safety must hold for unbounded time horizons. One reason for this is the intractability of jointly analyzing NNs, ODEs, and hybrid systems. To address this, we introduce VerSAILLE (Verifiably Safe AI via Logically Linked Envelopes): the first general approach that allows reusing control theory literature for NNCS verification. By joining forces, we can exploit the efficiency of NN verification tools while retaining the rigor of differential dynamic logic (dL). Based on a provably safe control envelope in dL, we derive a specification for the NN that is proven with NN verification tools. We show that a proof of the NN's adherence to this specification is mirrored by a dL proof of the infinite-time safety of the NNCS. The NN verification properties arising from hybrid systems typically contain nonlinear arithmetic over formulas with arbitrary logical structure, whereas efficient NN verification tools support only linear constraints. To overcome this divide, we present Mosaic: an efficient, sound, and complete verification approach for polynomial real-arithmetic properties of piecewise-linear NNs.
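To make the logical link concrete, the following is a minimal sketch in dL-style notation of how a verified control envelope induces an NN specification. The symbols init, ctrl, plant, safe, admissible, and env are illustrative placeholders rather than the paper's exact formulas, and g denotes the NN under verification.

% A dL control envelope proves infinite-horizon safety for every controller
% that respects it (box modality over a nondeterministic repetition of
% control followed by continuous plant dynamics):
\[ \mathrm{init} \rightarrow [\,\{\mathrm{ctrl};\ \mathrm{plant}\}^{*}\,]\ \mathrm{safe} \]
% Replacing ctrl by the NN g, the derived specification requires that on
% every admissible input x the NN outputs an action the envelope env permits:
\[ \forall x\ \big(\mathrm{admissible}(x) \rightarrow \mathrm{env}(x,\, g(x))\big) \]
% Discharging this property with an NN verifier then transfers the dL safety
% proof to the closed-loop NNCS for an unbounded time horizon.

Note that env typically inherits polynomial (nonlinear) arithmetic and arbitrary logical structure from the hybrid dynamics, while off-the-shelf NN verifiers accept only linear constraints; this is precisely the gap that Mosaic bridges.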