TabAttackBench: A Benchmark for Adversarial Attacks on Tabular Data

Zhipeng He, Chun Ouyang, Lijie Wen, Cong Liu, Catarina Moreira

arXiv.org Artificial Intelligence 

However, with these advancements comes increasing concern about the robustness and security of models, particularly in the context of adversarial attacks. Adversarial attacks involve the intentional manipulation of input data to deceive machine learning models, causing incorrect or misleading outputs (Szegedy et al., 2014). This area of research has drawn significant attention as researchers strive to understand and mitigate the vulnerabilities in various types of data and models. In Computer Vision (CV), adversarial perturbations to images involve pixel intensity modifications (Weng et al., 2024), spatial transformations (Aydin & Temizel, 2023), texture perturbations (Geirhos et al., 2018), and localised patches (Wang et al., 2025) that cause dramatic misclassifications while remaining visually imperceptible. Similarly, in Natural Language Processing (Zhang et al., 2020), attacks typically involve word substitutions (Yang et al., 2023), character-level modifications (Rocamora et al., 2024), or syntactic transformations (Asl et al., 2024) that preserve semantic meaning while fooling text classifiers (Gao et al., 2024). Adversarial vulnerabilities have also been demonstrated in audio processing (Noureddine et al., 2023) through amplitude modifications (Ko et al., 2023), frequency perturbations (Abdullah et al., 2019), and psychoacoustic masking (Qin et al., 2019) that cause speech recognition systems to misinterpret commands. By addressing the vulnerabilities in these types of data, researchers aim to develop more robust and secure machine learning systems across various domains.
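The common mechanism behind these attacks is a small, deliberate nudge of the input along the gradient of the model's loss. A minimal sketch in the style of the fast gradient sign method (Goodfellow et al.) on a toy logistic-regression classifier illustrates the idea; the weights, inputs, and step size below are hypothetical values chosen for illustration, not the paper's benchmark setup:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.

    Toy logistic model: p = sigmoid(w @ x + b). For the cross-entropy
    loss, the gradient w.r.t. x is (p - y) * w, so the perturbation
    direction is sign((p - y) * w).
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad = (p - y) * w                       # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)           # one fixed-size step per feature

# Hypothetical toy example: a point correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                 # w @ x + b = 1.5 > 0 -> class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
# p < 1, so sign(grad) = -sign(w) = [-1, 1] and x_adv = [0.0, 1.5];
# w @ x_adv + b = -1.5 < 0, so the prediction flips to class 0.
```

In images, such per-feature nudges are visually imperceptible; part of the motivation for a tabular benchmark is that the same nudge on tabular features can break feature types or plausible value ranges, which is harder to measure.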
