Benchmarking adversarial attacks and defenses for time-series data

arXiv.org Artificial Intelligence

The adversarial vulnerability of deep networks has spurred the interest of researchers worldwide. Unsurprisingly, adversarial examples translate from images to time-series data, since they stem from an inherent weakness of the model itself rather than of the modality. Several attempts have been made to defend against these adversarial attacks, particularly for the visual modality. In this paper, we perform a detailed benchmarking of well-proven adversarial defense methodologies on time-series data. We restrict ourselves to the $L_{\infty}$ threat model. We also explore the trade-off between smoothness and clean accuracy that regularization-based defenses offer, to better understand their behavior. Our analysis shows that the explored adversarial defenses offer robustness against both strong white-box and black-box attacks. This paves the way for future research on adversarial attacks and defenses, particularly for time-series data.
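
As a concrete reference point for the $L_{\infty}$ threat model used here, the sketch below shows a standard projected gradient descent (PGD) attack on a time-series classifier. It assumes a PyTorch model mapping (batch, channels, length) tensors to class logits; the function name and hyperparameters are illustrative rather than taken from the paper.

```python
# Minimal sketch of an L-infinity PGD attack on a time-series classifier.
# Assumes a PyTorch `model` mapping (batch, channels, length) tensors to
# class logits; eps/alpha/steps are illustrative hyperparameters only.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Iterative gradient ascent on the loss, projected back into the L-inf ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()              # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)         # L-inf projection
        x_adv = x_adv.detach()
    return x_adv
```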


Improving Robustness of time series classifier with Neural ODE guided gradient based data augmentation

arXiv.org Machine Learning

Exploring adversarial attack vectors and studying their effects on machine learning algorithms has been of interest to researchers. Deep neural networks working with time-series data have received less attention in this context than their image counterparts. A recent finding revealed that current state-of-the-art deep learning time-series classifiers are vulnerable to adversarial attacks. In this paper, we introduce two local gradient-based and one spectral-density-based time-series data augmentation techniques. We show that a model trained with data obtained using our techniques achieves state-of-the-art classification accuracy on various time-series benchmarks. In addition, it improves the robustness of the model against some of the most common adversarial attacks, such as the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM). Index Terms: time series classification, adversarial training, gradient-based adversarial attacks.
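
The abstract does not spell out the augmentation procedures, but a spectral-density-style augmentation in the same spirit could perturb the magnitude spectrum of a series while preserving its phase. The function below is a generic NumPy stand-in, not the authors' exact method; the name and the jitter scale are assumptions.

```python
# Illustrative sketch of a spectral-density-style time-series augmentation:
# jitter the magnitude spectrum, keep the phase, then invert the FFT.
# This is a generic stand-in, not the paper's exact procedure.
import numpy as np

def spectral_augment(x, scale=0.05, rng=None):
    """Return a perturbed copy of a 1-D series x via frequency-domain jitter."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.rfft(x)
    magnitude, phase = np.abs(spec), np.angle(spec)
    magnitude *= 1.0 + scale * rng.standard_normal(magnitude.shape)
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=len(x))

# Example: augment a 5 Hz sine wave sampled at 128 points.
t = np.linspace(0, 1, 128)
x_aug = spectral_augment(np.sin(2 * np.pi * 5 * t))
```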


Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey

arXiv.org Artificial Intelligence

As we seek to deploy machine learning models beyond virtual and controlled domains, it is critical to analyze not only their accuracy, or the fact that they work most of the time, but also whether such models are truly robust and reliable. This paper studies strategies for implementing adversarially robust training algorithms aimed at guaranteeing safety in machine learning systems. We provide a taxonomy to classify adversarial attacks and defenses, formulate the Robust Optimization problem in a min-max setting, and divide it into three subcategories, namely: Adversarial (re)Training, Regularization Approaches, and Certified Defenses. We survey the most recent and important results in adversarial example generation and in defense mechanisms that use adversarial (re)training as their main defense against perturbations. We also survey methods that add regularization terms that change the behavior of the gradient, making it harder for attackers to achieve their objective. Alternatively, we survey methods that formally derive certificates of robustness by exactly solving the optimization problem or by approximating it using upper or lower bounds. In addition, we discuss the challenges faced by most of the recent algorithms and present future research perspectives.
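
For reference, the min-max Robust Optimization problem mentioned above is conventionally written as below; the notation ($\theta$ for model parameters, $\delta$ for the perturbation, $S$ for the admissible perturbation set) is assumed for illustration rather than taken from the survey.

```latex
% Saddle-point (min-max) formulation of robust training.
% theta: model parameters; (x, y) ~ D: data distribution; L: loss;
% S = { delta : ||delta||_inf <= epsilon }: admissible perturbation set.
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \left[ \max_{\delta \in S} L\bigl(f_{\theta}(x + \delta),\, y\bigr) \right]
```

In this framing, adversarial (re)training approximates the inner maximization with an attack, regularization approaches smooth the loss so the inner maximum stays small, and certified defenses bound it exactly or via relaxations.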


Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks

arXiv.org Machine Learning

Deep neural networks have been shown to be vulnerable to adversarial examples, perturbed inputs that are designed specifically to produce intentional errors in the learning algorithms. However, existing attacks are either computationally expensive or require extensive knowledge of the target model and its dataset to succeed. Hence, these methods are not practical in a deployed adversarial setting. In this paper, we introduce an exploratory approach for generating adversarial examples using procedural noise. We show that it is possible to construct practical black-box attacks with low computational cost against robust neural network architectures such as Inception v3 and Inception ResNet v2 on the ImageNet dataset. We show that these attacks successfully cause misclassification with a low number of queries, significantly outperforming state-of-the-art black-box attacks. Our attack demonstrates the fragility of these neural networks to Perlin noise, a type of procedural noise used for generating realistic textures. Perlin noise attacks achieve at least 90% top-1 error across all classifiers. More worryingly, we show that most Perlin noise perturbations are "universal" in that they generalize, as adversarial examples, across large portions of the dataset, with up to 73% of images misclassified using a single perturbation. These findings suggest a systemic fragility of DNNs that needs to be explored further. We also show the limitations of adversarial training, a technique used to enhance robustness against adversarial examples. Thus, an attacker only needs to change the perspective from which adversarial examples are generated to craft successful attacks, while for the defender it is difficult to foresee a priori all possible types of adversarial perturbations.
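
To make the attack concrete: a procedural-noise perturbation can be generated offline, scaled into the $L_{\infty}$ budget, and evaluated with only black-box queries. The sketch below uses multi-octave value noise as a simple stand-in for true Perlin (gradient) noise; all names and parameters are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a procedural-noise perturbation in the spirit of the paper:
# multi-octave smooth noise, scaled to an L-inf budget. The generator is
# simple value noise standing in for true (gradient-based) Perlin noise.
import numpy as np

def smooth_noise(h, w, grid=8, rng=None):
    """Upsample a coarse random grid with bilinear interpolation."""
    rng = np.random.default_rng() if rng is None else rng
    coarse = rng.uniform(-1, 1, size=(grid + 1, grid + 1))
    ys, xs = np.linspace(0, grid, h), np.linspace(0, grid, w)
    y0 = np.minimum(np.floor(ys).astype(int), grid - 1)
    x0 = np.minimum(np.floor(xs).astype(int), grid - 1)
    ty, tx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = (1 - tx) * coarse[y0[:, None], x0[None, :]] + tx * coarse[y0[:, None], x0[None, :] + 1]
    bot = (1 - tx) * coarse[y0[:, None] + 1, x0[None, :]] + tx * coarse[y0[:, None] + 1, x0[None, :] + 1]
    return (1 - ty) * top + ty * bot

def procedural_perturbation(h, w, eps=8 / 255, octaves=3, rng=None):
    """Sum octaves of smooth noise and scale into the L-inf ball of radius eps."""
    pattern = sum(0.5 ** o * smooth_noise(h, w, grid=2 ** (o + 2), rng=rng)
                  for o in range(octaves))
    return eps * pattern / np.max(np.abs(pattern))
```

A black-box attacker would add such a perturbation to an image, query the target classifier for its label, and resample the noise parameters until misclassification occurs.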


Improved Detection of Adversarial Attacks via Penetration Distortion Maximization

arXiv.org Machine Learning

This paper is concerned with the defense of deep models against adversarial attacks. We develop an adversarial detection method, which is inspired by the certificate defense approach and captures the idea of separating class clusters in the embedding space to increase the margin. The resulting defense is intuitive, effective, scalable, and can be integrated into any given neural classification model. Our method demonstrates state-of-the-art (detection) performance under all threat models.

Defending machine learning models from adversarial attacks has become an increasingly pressing issue as deep neural networks become associated with more critical aspects of society. Adversarial attacks can effectively fool deep models and force them to misclassify, using a slight but maliciously designed distortion that is typically invisible to the human eye (Carlini & Wagner, 2017c; Athalye et al., 2018). Despite numerous developments, defense mechanisms are still wanting. Many interesting ideas have been proposed to construct defense mechanisms against adversarial examples. Among these are adversarial training (Metzen et al., 2017; Zuo et al., 2020; Yan et al., 2018), ensemble methods (Strauss et al., 2017), and randomization (Dhillon et al., 2018; Xu et al., 2017), to name a few.
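
The excerpt does not give the scoring rule, but the general idea of separating class clusters in the embedding space can be illustrated with a simple centroid-distance detector. The class structure and the 95th-percentile threshold below are assumptions for illustration, not the authors' exact method.

```python
# Illustrative sketch of embedding-space adversarial detection: fit class
# centroids on clean training embeddings, then flag inputs whose embedding
# lies unusually far from the centroid of the predicted class.
# Generic margin-style detector, not the paper's exact scoring rule.
import numpy as np

class CentroidDetector:
    def fit(self, embeddings, labels):
        """embeddings: (n, d) clean features; labels: (n,) integer classes."""
        self.centroids = {c: embeddings[labels == c].mean(axis=0)
                          for c in np.unique(labels)}
        # Per-class threshold: 95th percentile of clean distances (assumed).
        self.thresholds = {
            c: np.percentile(
                np.linalg.norm(embeddings[labels == c] - mu, axis=1), 95)
            for c, mu in self.centroids.items()}
        return self

    def is_adversarial(self, embedding, predicted_class):
        """Flag the input if its embedding falls outside the clean-distance range."""
        dist = np.linalg.norm(embedding - self.centroids[predicted_class])
        return dist > self.thresholds[predicted_class]
```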