TED-LaST: Towards Robust Backdoor Defense Against Adaptive Attacks

Mo, Xiaoxing, Cheng, Yuxuan, Sun, Nan, Zhang, Leo Yu, Luo, Wei, Gao, Shang

arXiv.org Artificial Intelligence 

Deep Neural Networks (DNNs) are vulnerable to backdoor attacks, where attackers implant hidden triggers during training to maliciously control model behavior. Topological Evolution Dynamics (TED) has recently emerged as a powerful tool for detecting backdoor attacks in DNNs. However, TED can be vulnerable to backdoor attacks that adaptively distort topological representation distributions across network layers. To address this limitation, we propose TED-LaST (Topological Evolution Dynamics against Laundry, Slow release, and Target mapping attack strategies), a novel defense strategy that enhances TED's robustness against adaptive attacks. TED-LaST introduces two key innovations: label-supervised dynamics tracking and adaptive layer emphasis. These enhancements enable the identification of stealthy threats that evade traditional TED-based defenses, even in cases of inseparability in topological space and subtle topological perturbations. We review and classify data poisoning tricks in state-of-the-art adaptive attacks and propose an enhanced adaptive attack with target mapping, which can dynamically shift malicious tasks and fully leverage the stealthiness that adaptive attacks possess. Our comprehensive experiments on multiple datasets (CIFAR-10, GTSRB, and ImageNet100) and model architectures (ResNet20, ResNet101) show that TED-LaST effectively counteracts sophisticated backdoors such as Adap-Blend, Adap-Patch, and the proposed enhanced adaptive attack. TED-LaST sets a new benchmark for robust backdoor detection, substantially enhancing DNN security against evolving threats.

Deep Neural Network (DNN) models have revolutionized fields such as computer vision [1], speech recognition [2], and autonomous driving [3] with their impressive capabilities. Despite these advances, their dependence on expansive datasets and complex training procedures introduces significant vulnerabilities, notably through backdoor attacks.
Backdoor attacks implant hidden behaviors in DNN models, which can be activated by specific triggers.
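To make this concrete, the classic data-poisoning form of a backdoor attack can be sketched as follows. This is a minimal illustration only, not the adaptive attacks studied in this paper; the corner-patch trigger and the helper names `stamp_trigger` and `poison_dataset` are assumptions made for the example:

```python
import numpy as np

def stamp_trigger(image, patch_size=3, value=1.0):
    """Return a copy of `image` (H x W x C float array) with a small
    bright patch -- the trigger -- stamped in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = value
    return poisoned

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Stamp the trigger onto a small fraction of the training set and
    relabel those samples to the attacker's chosen target class.
    A model trained on this data behaves normally on clean inputs but
    predicts `target_label` whenever the trigger is present."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx
```

At test time, the attacker stamps the same patch on any input to force the target prediction, while clean accuracy remains essentially unchanged, which is what makes such backdoors hard to detect.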