Sotgiu, Angelo
ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches
Pintor, Maura, Angioni, Daniele, Sotgiu, Angelo, Demetrio, Luca, Demontis, Ambra, Biggio, Battista, Roli, Fabio
Understanding the security of machine-learning models is of paramount importance nowadays, as these algorithms are used in a large variety of settings, including security-related and mission-critical applications, to extract actionable knowledge from vast amounts of data. Nevertheless, such data-driven algorithms are not robust against adversarial perturbations of the input data [1, 2, 3, 4]. In particular, attackers can hinder the performance of classification algorithms by means of adversarial patches. Adversarial patches are created by solving an optimization problem via gradient descent. However, this process is costly as it requires both querying the target model many times and computing the back-propagation algorithm until convergence is reached. Hence, it is not possible to obtain a fast robustness evaluation against adversarial patches without avoiding all the computational costs required by their optimization process. To further exacerbate the problem, adversarial patches should also be effective under different transformations, including translation, rotation and scale changes.
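The optimization loop described above can be sketched in a few lines. The following is a minimal, illustrative example, not the authors' code or the ImageNet-Patch release: it assumes a PyTorch `model`, a batch of `images` in [0, 1], and a chosen `target_class`, and it samples random affine transformations at each step so the patch remains effective under translation, rotation and scaling; the transformation ranges and hyperparameters are assumptions.

```python
# Minimal sketch (not the authors' code): optimizing an adversarial patch with
# gradient descent while sampling random translations, rotations and scalings,
# so the patch stays effective under those transformations.
import torch
import torchvision.transforms as T

def optimize_patch(model, images, target_class, patch_size=50, steps=200, lr=0.05):
    """Return a square patch that pushes `model` towards `target_class`.

    `model`, `images` (a batch of tensors in [0, 1]) and `target_class` are
    assumed inputs; transformation ranges below are illustrative choices.
    """
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    augment = T.RandomAffine(degrees=15, translate=(0.3, 0.3), scale=(0.8, 1.2))

    for _ in range(steps):
        # Apply the patch at the image centre, then randomly transform the result.
        patched = images.clone()
        c = (patched.shape[-1] - patch_size) // 2
        patched[:, :, c:c + patch_size, c:c + patch_size] = patch.clamp(0, 1)
        logits = model(augment(patched))

        # Each step queries the model and back-propagates through it,
        # which is what makes patch optimization computationally costly.
        loss = torch.nn.functional.cross_entropy(
            logits, torch.full((patched.shape[0],), target_class))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return patch.detach().clamp(0, 1)
```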
Robust image classification with multi-modal large language models
Villani, Francesco, Maljkovic, Igor, Lazzaro, Dario, Sotgiu, Angelo, Cinà, Antonio Emanuele, Roli, Fabio
Deep Neural Networks are vulnerable to adversarial examples, i.e., carefully crafted input samples that can cause models to make incorrect predictions with high confidence. To mitigate these vulnerabilities, adversarial training and detection-based defenses have been proposed to strengthen models in advance. However, most of these approaches focus on a single data modality, overlooking the relationships between visual patterns and textual descriptions of the input. In this paper, we propose a novel defense, Multi-Shield, designed to combine and complement these defenses with multi-modal information to further enhance their robustness. Multi-Shield leverages multi-modal large language models to detect adversarial examples and abstain from uncertain classifications when there is no alignment between textual and visual representations of the input. Extensive evaluations on CIFAR-10 and ImageNet datasets, using robust and non-robust image classification models, demonstrate that Multi-Shield can be easily integrated to detect and reject adversarial examples, outperforming the original defenses.
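The abstention idea can be illustrated with a short sketch. This is not the paper's implementation: it uses OpenAI's CLIP as an assumed stand-in for the multi-modal component, accepts a classifier's prediction only if it also appears among CLIP's top-ranked class names for the same image, and abstains otherwise; the function names, prompt template and top-k threshold are assumptions, and the input image is assumed to be preprocessed compatibly for both models.

```python
# Illustrative sketch of a rejection rule in the spirit of Multi-Shield (not the
# paper's implementation): accept a prediction only if a vision-language model
# agrees that the predicted class name matches the image, otherwise abstain.
# CLIP is used here as an assumed stand-in for the multi-modal component.
import torch
import clip  # https://github.com/openai/CLIP, assumed available

def classify_with_abstention(classifier, image, class_names, device="cpu", top_k=3):
    """Return the predicted class name, or None to abstain.

    `image` is assumed to be already preprocessed for both models.
    """
    clip_model, _ = clip.load("ViT-B/32", device=device)

    # 1) Standard image classifier prediction.
    logits = classifier(image.unsqueeze(0).to(device))
    pred = logits.argmax(dim=1).item()

    # 2) Zero-shot CLIP ranking of all class names for the same image.
    prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        image_feat = clip_model.encode_image(image.unsqueeze(0).to(device))
        text_feat = clip_model.encode_text(prompts)
        image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        similarity = (image_feat @ text_feat.T).squeeze(0)

    # 3) Abstain when the classifier's prediction is not among CLIP's
    #    top-k classes, i.e. textual and visual evidence disagree.
    clip_top_k = similarity.topk(top_k).indices.tolist()
    return class_names[pred] if pred in clip_top_k else None
```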
Can Domain Knowledge Alleviate Adversarial Attacks in Multi-Label Classifiers?
Melacci, Stefano, Ciravegna, Gabriele, Sotgiu, Angelo, Demontis, Ambra, Biggio, Battista, Gori, Marco, Roli, Fabio
Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied in the context of single-label classification problems. In this paper, we shift the attention to multi-label classification, where the availability of domain knowledge on the relationships among the considered classes may offer a natural way to spot incoherent predictions, i.e., predictions associated with adversarial examples lying outside the training data distribution. We explore this intuition in a framework in which first-order logic knowledge is converted into constraints and injected into a semi-supervised learning problem. Within this setting, the constrained classifier learns to fulfill the domain knowledge over the marginal distribution, and can naturally reject samples with incoherent predictions. Even though our method does not exploit any knowledge of attacks during training, our experimental analysis surprisingly unveils that domain-knowledge constraints can help detect adversarial examples effectively, especially if such constraints are not known to the attacker. While we also show that an adaptive attack exploiting knowledge of the constraints may still deceive our classifier, it remains an open issue to understand how hard it would be for an attacker to infer such constraints in practical cases. For this reason, we believe that our approach may provide a significant step towards designing robust multi-label classifiers.
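A small sketch can make the constraint-based rejection concrete. This is an illustrative example under assumed rules and thresholds, not the paper's exact formulation: an implication between class labels is relaxed into a numerical penalty on the predicted probabilities, and a sample whose predictions violate the rules too strongly is rejected as incoherent.

```python
# Minimal sketch (illustrative, not the paper's exact formulation): first-order
# logic rules over the classes are relaxed into soft constraints on the
# predicted probabilities, and samples whose predictions violate the
# constraints too strongly are rejected as incoherent.
import torch

# Example rules, written as implications between class names:
# "dog -> animal" and "cat -> animal". The class set and rules are assumptions.
CLASSES = ["dog", "cat", "animal", "vehicle"]
RULES = [("dog", "animal"), ("cat", "animal")]

def constraint_violation(probs):
    """Degree to which the multi-label probabilities violate the rules.

    An implication a -> b is relaxed with the penalty max(0, p(a) - p(b)),
    which is zero whenever p(b) >= p(a).
    """
    idx = {c: i for i, c in enumerate(CLASSES)}
    penalties = [torch.clamp(probs[idx[a]] - probs[idx[b]], min=0) for a, b in RULES]
    return torch.stack(penalties).sum()

def predict_or_reject(model, x, threshold=0.3):
    """Return per-class probabilities, or None if the prediction is incoherent."""
    probs = torch.sigmoid(model(x.unsqueeze(0))).squeeze(0)  # multi-label scores
    return None if constraint_violation(probs) > threshold else probs

# The same penalty can also be added to the training loss, so that the
# classifier learns to satisfy the domain knowledge on unlabeled data as well.
```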