ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches

Pintor, Maura, Angioni, Daniele, Sotgiu, Angelo, Demetrio, Luca, Demontis, Ambra, Biggio, Battista, Roli, Fabio

arXiv.org Artificial Intelligence 

Understanding the security of machine-learning models is of paramount importance nowadays, as these algorithms are used in a large variety of settings, including security-related and mission-critical applications, to extract actionable knowledge from vast amounts of data. Nevertheless, such data-driven algorithms are not robust against adversarial perturbations of the input data [1, 2, 3, 4]. In particular, attackers can hinder the performance of classification algorithms by means of adversarial perturbations.

Adversarial patches are created by solving an optimization problem via gradient descent. However, this process is costly, as it requires both querying the target model many times and computing the back-propagation algorithm until convergence is reached. Hence, it is not possible to obtain a fast robustness evaluation against adversarial patches without incurring the computational costs required by their optimization process. To further exacerbate the problem, adversarial patches should also be effective under different transformations, including translation, rotation, and scale changes.