HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks
Raffaele Mura, Giuseppe Floris, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Giorgio Giacinto, Battista Biggio, Fabio Roli
Gradient-based attacks are a primary tool for evaluating the robustness of machine-learning models. However, many attacks tend to provide overly optimistic evaluations, as they use fixed loss functions, optimizers, step-size schedulers, and default hyperparameters. In this work, we tackle these limitations by proposing a parametric variation of the well-known fast minimum-norm (FMN) attack algorithm, whose loss, optimizer, step-size scheduler, and hyperparameters can be dynamically adjusted. We re-evaluate 12 robust models, showing that our attack finds smaller adversarial perturbations without requiring any additional tuning. This also enables reporting adversarial robustness as a function of the perturbation budget, providing a more complete evaluation than that offered by fixed-budget attacks, while remaining efficient. We release our open-source code at https://github.com/pralab/HO-FMN.
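As a rough illustration of the idea in the abstract, the sketch below runs a small grid search over swappable attack components (loss, optimizer, step-size scheduler) and keeps the configuration that yields the smallest successful perturbation. The attack loop is a simplified gradient-based stand-in, not the actual FMN algorithm, and all names here (attack, LOSSES, margin_loss, ...) are hypothetical illustrations rather than the HO-FMN API; see the linked repository for the real implementation.

import itertools
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 5))   # stand-in for a robust classifier
x = torch.randn(1, 10)                    # clean input point
y = model(x).argmax(dim=1)                # label the model assigns to x

def margin_loss(logits, target):
    # Difference between the target-class logit and the best other logit;
    # driving this below zero means the model no longer predicts `target`.
    true = logits.gather(1, target.unsqueeze(1)).squeeze(1)
    other = logits.scatter(1, target.unsqueeze(1), float("-inf")).max(dim=1).values
    return (true - other).mean()

LOSSES = {"ce": lambda lo, t: -nn.functional.cross_entropy(lo, t),  # maximize CE
          "margin": margin_loss}
OPTIMIZERS = {"sgd": lambda p, lr: torch.optim.SGD(p, lr=lr),
              "adam": lambda p, lr: torch.optim.Adam(p, lr=lr)}
SCHEDULERS = {"cosine": lambda o: torch.optim.lr_scheduler.CosineAnnealingLR(o, T_max=50),
              "constant": lambda o: torch.optim.lr_scheduler.LambdaLR(o, lambda e: 1.0)}

def attack(loss_fn, make_opt, make_sched, steps=50, lr=0.1):
    # Simplified stand-in for the attack loop: descend on the chosen loss and
    # record the smallest L2 perturbation that flips the model's prediction.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = make_opt([delta], lr)
    sched = make_sched(opt)
    best = float("inf")
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x + delta), y).backward()
        opt.step()
        sched.step()
        with torch.no_grad():
            if model(x + delta).argmax(dim=1) != y:
                best = min(best, delta.norm(p=2).item())
    return best

# Grid search over all component combinations; report the best configuration.
results = {(l, o, s): attack(LOSSES[l], OPTIMIZERS[o], SCHEDULERS[s])
           for l, o, s in itertools.product(LOSSES, OPTIMIZERS, SCHEDULERS)}
print(min(results.items(), key=lambda kv: kv[1]))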
arXiv.org Artificial Intelligence
Jul-11-2024
- Genre:
- Research Report > New Finding (0.67)
- Industry:
- Information Technology > Security & Privacy (0.67)