Enforcing Linearity in DNN succours Robustness and Adversarial Image Generation
Sarkar, Anindya, Gupta, Nikhil Kumar, Iyengar, Raghu
Recent studies on the adversarial vulnerability of neural networks have shown that models trained to minimize an upper bound on the worst-case loss over all possible adversarial perturbations are more robust against adversarial attacks. Besides exploiting the adversarial training framework, we show that enforcing a Deep Neural Network (DNN) to be linear in the transformed input and feature space improves robustness significantly. We also demonstrate that augmenting the objective function with a Local Lipschitz regularizer boosts the robustness of the model further. Our method outperforms the most sophisticated adversarial training methods and achieves state-of-the-art adversarial accuracy on the MNIST, CIFAR10, and SVHN datasets. In this paper, we also propose a novel adversarial image generation method by leveraging Inverse Representation Learning and the linearity of an adversarially trained deep neural network classifier.
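The "upper bound on the worst-case loss" objective the abstract refers to is commonly written as a min-max robust-optimization problem; the notation below is the standard generic form, not taken from the paper itself:

```latex
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \Big[ \max_{\|\delta\|_{p} \le \epsilon} \; L\big(f_{\theta}(x + \delta),\, y\big) \Big]
```

Here \(f_{\theta}\) is the classifier, \(L\) the loss, and the inner maximization ranges over perturbations \(\delta\) within an \(\ell_p\)-ball of radius \(\epsilon\); adversarial training approximates the inner maximum with attack-generated examples.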
Improving Robustness of time series classifier with Neural ODE guided gradient based data augmentation
Sarkar, Anindya, Raj, Anirudh Sunder, Iyengar, Raghu Sesha
Anindya Sarkar (Mobiliya, Bangalore, India) — anindya.sarkar@mobiliya.com
Exploring adversarial attack vectors and studying their effects on machine learning algorithms has been of interest to researchers. In this context, deep neural networks working with time series data have received less attention than their image counterparts. A recent finding revealed that current state-of-the-art deep learning time series classifiers are vulnerable to adversarial attacks. In this paper, we introduce two local gradient-based and one spectral-density-based time series data augmentation techniques. We show that a model trained with data obtained using our techniques achieves state-of-the-art classification accuracy on various time series benchmarks. In addition, it improves the robustness of the model against some of the most common attack techniques, such as the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM).
Index Terms: time series classification, adversarial training, gradient-based adversarial attacks
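For reference, FGSM and BIM (the two attacks named in the abstract) follow well-known update rules. The sketch below shows those rules in a model-agnostic form, assuming a caller-supplied gradient function; it is an illustration of the standard attacks, not the paper's implementation:

```python
import numpy as np

def fgsm(x, grad, eps):
    # Fast Gradient Sign Method: a single step of size eps
    # along the sign of the loss gradient w.r.t. the input.
    return x + eps * np.sign(grad)

def bim(x, grad_fn, eps, alpha, steps):
    # Basic Iterative Method: repeated small FGSM steps of size alpha,
    # with the result clipped back into the eps-ball around the original x.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Here `grad_fn` stands in for whatever computes the loss gradient of the time series classifier with respect to its input; for BIM the clipping keeps every iterate within the allowed perturbation budget `eps`.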