Data-Driven Falsification of Cyber-Physical Systems
Atanu Kundu, Sauvik Gon, Rajarshi Ray
arXiv.org Artificial Intelligence
Cyber-Physical Systems (CPS) are abundant in safety-critical domains such as healthcare, avionics, and autonomous vehicles. Formal verification of their operational safety is therefore of utmost importance. In this paper, we address the falsification problem, where the focus is on searching for an unsafe execution of the system rather than proving the absence of such executions. The contribution of this paper is a framework that (a) connects the falsification of CPS with the falsification of deep neural networks (DNNs) and (b) leverages the inherent interpretability of decision trees for faster falsification of CPS. This is achieved by: (1) building a surrogate model of the CPS under test, either as a DNN or as a decision tree, (2) applying various DNN falsification tools to falsify the CPS, and (3) a novel falsification algorithm guided by explanations of safety violations of the CPS model, extracted from its decision-tree surrogate. The proposed framework has the potential to exploit a repertoire of adversarial attack algorithms designed to falsify robustness properties of DNNs, as well as state-of-the-art falsification algorithms for DNNs. Although the presented methodology applies to any system that can be executed or simulated, we demonstrate its effectiveness particularly on CPS. Decision-tree-guided falsification shows promising results in efficiently finding multiple counterexamples on the ARCH-COMP 2024 falsification benchmarks [22].

Traditional simulation and testing techniques can be effective for debugging the early stages of CPS design. However, as the design matures through multiple phases of testing, finding the remaining bugs by simulation and testing alone becomes computationally expensive and challenging. Formal verification techniques such as model checking come in handy here, either by proving the absence of bugs in such designs or by providing a counterexample behavior that violates the specification. A complementary approach is falsification, where the focus is solely on discovering a system behavior that is a counterexample to a given specification. In this work, we address the falsification of safety specifications, expressed in signal temporal logic [27], for CPS given as an executable.

Our Contribution. The contribution of this paper is a falsification framework that employs two strategies. First, it connects the falsification of reachability specifications of CPS with the falsification of reachability specifications of deep neural networks (DNNs).

A. Kundu and S. Gon are students of the Indian Association for the Cultivation of Science (IACS), India.
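To make the decision-tree-guided strategy concrete, the sketch below shows one plausible reading of the loop described in the abstract. It is not the paper's implementation: the toy simulate_robustness function, the input bounds, and all helper names are illustrative assumptions. The idea it demonstrates is: sample the input space and label each point safe or unsafe via simulation, fit a decision-tree surrogate (here scikit-learn's DecisionTreeClassifier), read off the root-to-leaf box constraints of every leaf predicted unsafe as an "explanation" of the violation, and concentrate further simulation budget inside those boxes to confirm genuine counterexamples on the real system.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for the CPS executable: maps an input-parameter
# vector to an STL robustness value (negative => the property is violated).
def simulate_robustness(x):
    return 0.5 - np.abs(x[0] - 0.8) - 0.3 * x[1]

LOW, HIGH = np.zeros(2), np.ones(2)  # assumed box-shaped input domain

# Step 1: sample the input space and label each point safe/unsafe.
X = rng.uniform(LOW, HIGH, size=(200, 2))
y = np.array([simulate_robustness(x) < 0 for x in X])

# Step 2: fit a decision-tree surrogate of the safety outcome.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Step 3: for every leaf the tree predicts unsafe, recover the box of
# input-space constraints along the root-to-leaf path (the "explanation").
def unsafe_boxes(tree, low, high):
    t = tree.tree_
    boxes = []
    def walk(node, lo, hi):
        if t.children_left[node] == -1:        # leaf node
            if np.argmax(t.value[node]) == 1:  # majority class: unsafe
                boxes.append((lo, hi))
            return
        f, thr = t.feature[node], t.threshold[node]
        left_hi = hi.copy(); left_hi[f] = min(hi[f], thr)
        walk(t.children_left[node], lo.copy(), left_hi)
        right_lo = lo.copy(); right_lo[f] = max(lo[f], thr)
        walk(t.children_right[node], right_lo, hi.copy())
    walk(0, low.copy(), high.copy())
    return boxes

# Step 4: focus the simulation budget inside the suspect boxes and
# confirm genuine counterexamples on the real system.
for lo, hi in unsafe_boxes(tree, LOW, HIGH):
    for x in rng.uniform(lo, hi, size=(20, 2)):
        if simulate_robustness(x) < 0:
            print("counterexample:", x)

In the paper's actual framework, the robustness values would come from executing the CPS and monitoring the signal temporal logic specification; the surrogate only steers where to sample, and every reported counterexample is validated on the real system, so an inaccurate surrogate costs efficiency but not soundness.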
May-8-2025