Runtime Monitoring and Fault Detection for Neural Network-Controlled Systems

Jianglin Lan, Siyuan Zhan, Ron Patton, Xianxian Zhao

arXiv.org Artificial Intelligence 

Neural networks (NNs) are vulnerable to input perturbations such as noise and adversarial attacks. This is even more problematic when NNs generate real-time control actions for automated systems such as aircraft (Julian and Kochenderfer, 2021), because uncertainties (or deviations) in the NN are propagated and accumulated through the closed loop, leading to degraded performance and safety concerns (Bensalem et al., 2023). It is therefore important to assure the real-time safety of NN-controlled systems. Safety assurance for NN-controlled autonomous systems has been addressed from several perspectives in the literature. Much research has been devoted to formal methods for verifying the robustness of NNs against perturbations (Liu et al., 2021). These formal methods are typically based on interval bound propagation and on solving optimisation problems such as mixed-integer linear programming (MILP) (Lomuscio and Maganti, 2017), semidefinite programming (SDP) (Lan et al., 2023), and linear programming (LP) (Bunel et al., 2020).
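As a rough illustration of the interval bound propagation idea mentioned above, the sketch below bounds the outputs of a small ReLU controller for all inputs within an L-infinity ball around a nominal state. The helper names (ibp_affine, ibp_relu, ibp_network) and the toy two-layer network are illustrative assumptions, not artifacts of the paper or of any specific verification tool.

```python
import numpy as np

def ibp_affine(W, b, lower, upper):
    """Propagate an interval [lower, upper] through the affine map y = W x + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread under the affine map
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """Propagate an interval through an element-wise ReLU (monotone, so apply to both ends)."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

def ibp_network(weights, biases, x, eps):
    """Bound the network outputs for all inputs within an L-infinity ball of radius eps around x."""
    lower, upper = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        lower, upper = ibp_affine(W, b, lower, upper)
        if i < len(weights) - 1:  # ReLU on all hidden layers, linear output layer
            lower, upper = ibp_relu(lower, upper)
    return lower, upper

# Toy example: a 2-layer controller mapping a 2-D state to a scalar control action.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 2)), rng.standard_normal((1, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(1)]
lo, hi = ibp_network(weights, biases, x=np.array([0.5, -0.2]), eps=0.05)
print("control action bounds:", lo, hi)
```

The resulting output interval is a sound but generally conservative enclosure; MILP-, SDP-, or LP-based verifiers tighten such bounds at the cost of solving an optimisation problem per query.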
