
Defending Against Adversarial Attacks via Neural Dynamic System

Neural Information Processing Systems

Some recent works have accordingly proposed to enhance the robustness of DNNs from a dynamic system perspective. Following this line of inquiry, and inspired by the asymptotic stability of general nonautonomous dynamical systems, we propose to make each clean instance an asymptotically stable equilibrium point of a slowly time-varying system in order to defend against adversarial attacks. We present a theoretical guarantee that if a clean instance is an asymptotically stable equilibrium point and the adversarial instance is in the neighborhood of this point, the asymptotic stability will reduce the adversarial noise and bring the adversarial instance close to the clean instance. Motivated by our theoretical results, we go on to propose a nonautonomous neural ordinary differential equation (ASODE) and place constraints on its corresponding linear time-variant system to make all clean instances act as its asymptotically stable equilibrium points. Our analysis suggests that these constraints can be converted to regularizers in implementation.
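The core mechanism described above, that an adversarial instance near an asymptotically stable equilibrium is pulled back toward the clean instance, can be illustrated with a toy simulation. This is our own minimal sketch, not the paper's ASODE: `x_star` (the "clean instance") and the contraction rate `k` are hypothetical, and the dynamics are a simple contracting system rather than a learned time-varying one.

```python
import numpy as np

# Toy illustration (not the paper's ASODE): the clean instance x_star is made
# an asymptotically stable equilibrium of dx/dt = f(x) = -k * (x - x_star).
# An adversarial input in the neighborhood of x_star then flows back to it.
x_star = np.array([1.0, -2.0])      # hypothetical "clean instance"
k = 3.0                             # assumed contraction rate

def f(x):
    return -k * (x - x_star)        # x_star is the unique equilibrium

x = x_star + np.array([0.3, -0.25]) # adversarially perturbed input
dt, steps = 0.01, 500
for _ in range(steps):              # forward-Euler integration of the ODE
    x = x + dt * f(x)

# the residual perturbation has shrunk by orders of magnitude
print(np.linalg.norm(x - x_star))
```

Integrating the flow for long enough drives the perturbation toward zero, which is the sense in which asymptotic stability "reduces the adversarial noise" in the abstract above.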


Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks

Neural Information Processing Systems

Deep neural networks (DNNs) are well-known to be vulnerable to adversarial attacks, where malicious human-imperceptible perturbations are included in the input to the deep network to fool it into making a wrong classification. Recent studies have demonstrated that neural Ordinary Differential Equations (ODEs) are intrinsically more robust against adversarial attacks compared to vanilla DNNs. In this work, we propose a neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks (SODEF). By ensuring that the equilibrium points of the ODE solution used as part of SODEF are Lyapunov-stable, the ODE solution for an input with a small perturbation converges to the same solution as the unperturbed input. We provide theoretical results that give insights into the stability of SODEF as well as the choice of regularizers to ensure its stability. Our analysis suggests that our proposed regularizers force the extracted feature points to be within a neighborhood of the Lyapunov-stable equilibrium points of the SODEF ODE. SODEF is compatible with many defense methods and can be applied to any neural network's final regressor layer to enhance its stability against adversarial attacks.
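The stability criterion underlying this construction can be sketched numerically. The following is our own illustration of Lyapunov's indirect method, not the authors' code: the dynamics `f`, the equilibrium at the origin, and the perturbed feature point are all hypothetical. An equilibrium is (locally, asymptotically) Lyapunov-stable when the Jacobian of the dynamics there has eigenvalues with negative real parts, which is the kind of property the SODEF regularizers are designed to encourage near extracted feature points.

```python
import numpy as np

# Hypothetical feature dynamics dx/dt = f(x) with an equilibrium at x_star = 0;
# the Jacobian at the equilibrium is the matrix A (the cubic term vanishes there).
def f(x):
    A = np.array([[-1.0, 1.0],
                  [0.0, -0.5]])
    return A @ x - x**3

# Central-difference numerical Jacobian at the equilibrium.
x_star = np.zeros(2)
eps = 1e-6
J = np.column_stack([(f(x_star + eps * e) - f(x_star - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
print(np.linalg.eigvals(J).real)    # all negative real parts => stable

# A perturbed feature point flows back toward the equilibrium.
x = np.array([0.2, -0.3])
dt = 0.01
for _ in range(2000):
    x = x + dt * f(x)
print(np.linalg.norm(x))            # close to 0 after integration
```

Checking the Jacobian's eigenvalues and simulating the flow shows both halves of the claim: the equilibrium passes the stability test, and a nearby perturbed point converges back to it, mirroring how SODEF's ODE maps a perturbed input to the same solution as the clean one.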



Supplement

Neural Information Processing Systems

In this section, we give an overview of related work on stable neural ODE networks. We also give an overview of common adversarial attacks and recent works that defend against adversarial examples.

Stable Neural Networks. Gradient vanishing and gradient exploding are two well-known phenomena in deep learning [1]. The gradient of the objective function, which strongly depends on the training method as well as the neural network architecture, indicates how sensitive the output is with respect to (w.r.t.) input perturbation. An exploding gradient implies instability of the output w.r.t. the input and thus results in a non-robust learning architecture.
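The vanishing/exploding behavior mentioned above can be made concrete with a small experiment. This is our own sketch under a simplifying assumption (a deep linear network, so backpropagation is just a product of weight-matrix transposes); the depth, width, and scale values are illustrative only.

```python
import numpy as np

# Backpropagating through L linear layers multiplies L Jacobians, so the
# gradient norm scales roughly like s**L, where s controls the typical
# singular values of each layer's weight matrix.
rng = np.random.default_rng(0)

def grad_norm(scale, depth=50, dim=16):
    g = np.ones(dim)                 # gradient arriving at the last layer
    for _ in range(depth):
        W = scale * rng.standard_normal((dim, dim)) / np.sqrt(dim)
        g = W.T @ g                  # one backprop step through a layer
    return np.linalg.norm(g)

v_small = grad_norm(0.5)             # shrinks toward 0: vanishing gradient
v_large = grad_norm(2.0)             # blows up: exploding gradient
print(v_small, v_large)
```

With per-layer scale below 1 the gradient norm decays exponentially in depth, and with scale above 1 it grows exponentially, which is exactly the sensitivity instability the paragraph above attributes to exploding gradients.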


