Robust Learning against Relational Adversaries
Neural Information Processing Systems
Test-time adversarial attacks pose serious challenges to the robustness of machine-learning models, and in many settings the adversarial perturbation need not be bounded by a small $\ell_p$-norm. Motivated by attacks in program-analysis and security tasks, we investigate \textit{relational adversaries}, a broad class of attackers who create adversarial examples within the reflexive-transitive closure of a logical relation. We analyze the conditions for robustness against relational adversaries and characterize how different structural patterns in the relation lead to different robustness-accuracy trade-offs.
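To make the threat model concrete, the following is a minimal sketch (not the paper's construction) of a relational adversary: a relation is given by a hypothetical set of semantics-preserving rewrite rules on program text, the adversary may submit any input reachable in the reflexive-transitive closure of that relation, and it succeeds if any reachable variant flips the classifier's label. The rules, classifier, and bounds here are illustrative assumptions.

```python
from collections import deque

# Hypothetical rewrite rules defining a relation R on program strings:
# each pair (lhs, rhs) relates s to s with one occurrence of lhs replaced
# by rhs. Real attacks in program analysis use richer transformations.
RULES = [
    ("x = x + 1", "x += 1"),  # equivalent-statement rewrite
    ("tmp", "aux"),           # identifier renaming
]

def neighbors(s):
    """All strings related to s by one application of some rule."""
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def closure(s, max_size=100):
    """Reflexive-transitive closure R*(s), computed by BFS.

    Bounded by max_size since the closure can be large or infinite."""
    seen = {s}
    queue = deque([s])
    while queue and len(seen) < max_size:
        for t in neighbors(queue.popleft()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def relational_attack(classifier, s):
    """Return any variant in R*(s) that the classifier labels
    differently from s, or None if the classifier is robust on R*(s)."""
    original_label = classifier(s)
    return next((t for t in closure(s) if classifier(t) != original_label),
                None)
```

For example, a brittle classifier that keys on the literal substring `"x = x + 1"` is defeated by the first rewrite rule, whereas a classifier that is constant on each closure class is, by definition, robust against this adversary.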