Efficient Certification of Spatial Robustness
Ruoss, Anian, Baader, Maximilian, Balunović, Mislav, Vechev, Martin
–arXiv.org Artificial Intelligence
Such spatial attacks can be modeled by smooth vector fields that describe the displacement of every pixel. Common geometric transformations, e.g., rotation and translation, are particular instances of these smooth vector fields, which indicates that they capture a wide range of naturally occurring image transformations. Since the vulnerability of neural networks to spatially transformed adversarial examples can pose a security threat to computer vision systems relying on such models, it is critical to quantify their robustness against spatial transformations. A common approach to estimating neural network robustness is to measure the success rate of strong attacks (Carlini & Wagner, 2017; Madry et al., 2018). However, many networks that appeared robust against these attacks were later broken by even more sophisticated attacks (Athalye & Carlini, 2018; Athalye et al., 2018; Engstrom et al., 2018; Tramèr et al., 2020). The key issue is that such attacks do not provide provable robustness guarantees.
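To make the vector-field view of spatial transformations concrete, here is a minimal sketch (not from the paper; grayscale NumPy images, bilinear sampling, and all function names are illustrative assumptions) that expresses a rotation as a per-pixel displacement field and applies it to an image:

```python
import numpy as np

def rotation_vector_field(h, w, angle_rad):
    """Per-pixel displacement (u, v) realizing a rotation about the image center.

    Rotation is one instance of a smooth vector field: u, v give the offset
    from each pixel to its rotated counterpart.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    x0, y0 = xs - cx, ys - cy
    cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
    # Displacement = rotated coordinate minus original coordinate.
    u = (cos_a * x0 - sin_a * y0) - x0
    v = (sin_a * x0 + cos_a * y0) - y0
    return u, v

def apply_vector_field(img, u, v):
    """Warp a grayscale image by sampling at the displaced locations (bilinear)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    sx, sy = xs + u, ys + v                      # sampling coordinates per pixel
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    dx, dy = np.clip(sx - x0, 0, 1), np.clip(sy - y0, 0, 1)
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x0 + 1]
    bot = (1 - dx) * img[y0 + 1, x0] + dx * img[y0 + 1, x0 + 1]
    return (1 - dy) * top + dy * bot

# Example: a 5-degree rotation of a random 28x28 image as a vector-field transform.
img = np.random.rand(28, 28)
u, v = rotation_vector_field(28, 28, np.deg2rad(5.0))
warped = apply_vector_field(img, u, v)
```

A spatial attack in this setting would search over such displacement fields (subject to a smoothness constraint) rather than over per-pixel intensity perturbations.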
Sep-19-2020
- Country:
  - Europe > Switzerland > Zürich > Zürich (0.14)
- Genre:
  - Research Report (0.50)
- Industry:
  - Information Technology > Security & Privacy (0.86)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks (1.00)
    - Representation & Reasoning (1.00)
    - Vision (0.90)