A DPLL(T) Framework for Verifying Deep Neural Networks
Hai Duong, Linhan Li, ThanhVu Nguyen, Matthew Dwyer
–arXiv.org Artificial Intelligence
However, just like traditional software, DNNs can have "bugs", e.g., producing unexpected results on inputs that differ from those in the training data, and can be attacked, e.g., small perturbations to the inputs by a malicious adversary or even sensor imperfections result in misclassification [Isac et al. 2022; Ren et al. 2020; Yang et al. 2022; Zhang et al. 2019; Zügner et al. 2018]. These issues, which have been observed in many DNNs [Goodfellow et al. 2014; Szegedy et al. 2014] and demonstrated in the real world [Eykholt et al. 2018], naturally raise the question of how DNNs should be tested, validated, and ultimately verified to meet the requirements of relevant robustness or safety standards [Huang et al. 2020; Katz et al. 2017b]. To address this question, researchers have developed a variety of techniques and tools to verify DNNs (e.g., [Huang et al. 2017; Katz et al. 2022, 2019; Liu et al. 2021; Müller et al. 2021; Urban and Miné 2021; Wang et al. 2021]). Constraint-based approaches [Ehlers 2017; Huang et al. 2017; Katz et al. 2017a] aim to both correctly prove and disprove properties, but do not scale to large networks. In contrast, abstraction-based approaches [Müller et al. 2021; Singh et al. 2018a, 2019b; Wang et al. 2018b, 2021] scale much better, and while modern abstraction-based tools can often refine their abstractions to avoid returning spurious counterexamples, they remain incomplete. The problem of verifying non-trivial properties of DNNs with piecewise-linear activation functions, such as ReLU, has been shown to be NP-hard via a reduction from the classical satisfiability (SAT) problem [Cook 1971; Katz et al. 2017a]. Despite this complexity, the ability of satisfiability modulo theories (SMT) solvers to scale to large formulae encoding real-world verification problems [Kroening and Strichman 2016] suggests that a similar approach might be effective for DNN verification.
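The source of this SAT-like complexity can be sketched concretely: each ReLU neuron y = max(0, x) splits into two linear phases, "active" (x ≥ 0, y = x) or "inactive" (x ≤ 0, y = 0), so a network with n ReLUs induces up to 2^n linear subproblems, and a DPLL(T)-style verifier case-splits over these phases. The toy example below (not the paper's implementation; the network and property are invented for illustration) verifies out ≥ 0 for a one-input, two-ReLU network computing |x| by exhaustively checking every activation pattern; with one input variable, each linear subproblem reduces to interval intersection.

```python
# Toy illustration of ReLU case splitting in DNN verification.
# Network (hypothetical, chosen for illustration):
#   h1 = relu(x), h2 = relu(-x), out = h1 + h2   (i.e. out = |x|)
# Property to verify: out >= 0 for all x in [-1, 1].
from itertools import product

def verify_abs_nonneg(lo=-1.0, hi=1.0):
    """Check out >= 0 over [lo, hi] by enumerating all 2^2 ReLU phases."""
    for a1, a2 in product([True, False], repeat=2):
        # Phase constraints restrict x to an interval for this pattern.
        x_lo, x_hi = lo, hi
        if a1:                      # h1 active:   x >= 0, h1 = x
            x_lo = max(x_lo, 0.0)
        else:                       # h1 inactive: x <= 0, h1 = 0
            x_hi = min(x_hi, 0.0)
        if a2:                      # h2 active:  -x >= 0, h2 = -x
            x_hi = min(x_hi, 0.0)
        else:                       # h2 inactive: -x <= 0, h2 = 0
            x_lo = max(x_lo, 0.0)
        if x_lo > x_hi:
            continue                # infeasible pattern: case is pruned
        # Under a fixed pattern, out is linear in x, so its extremes
        # over the interval occur at the endpoints.
        for x in (x_lo, x_hi):
            out = (x if a1 else 0.0) + (-x if a2 else 0.0)
            if out < 0:
                return False, x     # concrete counterexample
    return True, None               # property holds in every case
```

Real solvers avoid this exponential enumeration by learning conflict clauses over activation literals and pruning cases with abstraction, which is precisely the interplay the DPLL(T) framing captures.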
However, the constraint-based DNN verifiers developed to date [Ehlers 2017; Katz et al. 2017a, 2022, 2019] are not among the state-of-the-art as determined by DNN verification competitions [Bak et al. 2021; Müller et al. 2022]. Techniques like Planet [Ehlers 2017] and Reluplex [Katz et al. 2017a, 2022] demonstrated how the semantics of a trained DNN could be encoded as a constraint in Linear Real Arithmetic (LRA) and
Aug-24-2023