SDN-Based False Data Detection With Its Mitigation and Machine Learning Robustness for In-Vehicle Networks
Dang, Long, Hapuarachchi, Thushari, Xiong, Kaiqi, Li, Yi
As the development of autonomous and connected vehicles advances, the complexity of modern vehicles increases, with numerous Electronic Control Units (ECUs) integrated into the system. In an in-vehicle network, these ECUs communicate with one another using a standard protocol called Controller Area Network (CAN). Securing communication among ECUs plays a vital role in maintaining the safety and security of the vehicle. This paper proposes a robust SDN-based False Data Detection and Mitigation System (FDDMS) for in-vehicle networks. Leveraging the unique capabilities of Software-Defined Networking (SDN), FDDMS is designed to monitor and detect false data injection attacks in real-time. Specifically, we focus on brake-related ECUs within an SDN-enabled in-vehicle network. First, we decode raw CAN data to create an attack model that illustrates how false data can be injected into the system. Then, FDDMS, incorporating a Long Short-Term Memory (LSTM)-based detection model, is used to identify false data injection attacks. We further propose an effective variant of the DeepFool attack to evaluate the model's robustness. To counter the impact of four adversarial attacks, namely the fast gradient sign method, the basic iterative method, DeepFool, and the DeepFool variant, we further enhance a re-training technique with a threshold-based selection strategy. Finally, a mitigation scheme is implemented to redirect attack traffic by dynamically updating flow rules through SDN. Our experimental results show that the proposed FDDMS is robust against adversarial attacks and effectively detects and mitigates false data injection attacks in real-time.
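The abstract does not spell out the threshold-based selection rule used during re-training; a minimal sketch, assuming an L2-norm criterion that keeps only subtly perturbed adversarial samples for the re-training set (the function name and the rule itself are illustrative assumptions, not the paper's method), might look like this:

```python
import numpy as np

def select_for_retraining(x_clean, x_adv, threshold):
    """Hypothetical threshold-based selection: keep only adversarial
    samples whose L2 perturbation from the clean input stays below
    `threshold`, so re-training focuses on the subtler attacks."""
    deltas = np.linalg.norm((x_adv - x_clean).reshape(len(x_clean), -1), axis=1)
    mask = deltas < threshold
    return x_adv[mask], mask

# toy example: four samples, per-sample perturbation magnitudes differ
x_clean = np.zeros((4, 3))
x_adv = x_clean + np.array([[0.1], [0.5], [0.05], [2.0]])
selected, mask = select_for_retraining(x_clean, x_adv, threshold=0.5)
```

Here `mask` marks the two samples whose perturbations fall under the threshold; any real criterion (e.g. a confidence-based one) could be swapped in for the norm test.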
- North America > United States > Florida > Hillsborough County > Tampa (0.14)
- Europe > Ukraine (0.04)
- North America > United States > Florida > Hillsborough County > University (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
Investigating Imperceptibility of Adversarial Attacks on Tabular Data: An Empirical Analysis
He, Zhipeng, Ouyang, Chun, Alzubaidi, Laith, Barros, Alistair, Moreira, Catarina
Adversarial attacks are a potential threat to machine learning models, as they can cause the model to make incorrect predictions by introducing imperceptible perturbations to the input data. While extensively studied in unstructured data like images, their application to structured data like tabular data presents unique challenges due to the heterogeneity and intricate feature interdependencies of tabular data. Imperceptibility in tabular data involves preserving data integrity while potentially causing misclassification, underscoring the need for tailored imperceptibility criteria for tabular data. However, there is currently a lack of standardised metrics for assessing adversarial attacks specifically targeted at tabular data. To address this gap, we derive a set of properties for evaluating the imperceptibility of adversarial attacks on tabular data. These properties are defined to capture seven perspectives of perturbed data: proximity to original inputs, sparsity of alterations, deviation from datapoints in the original dataset, sensitivity to altering sensitive features, immutability of perturbation, feasibility of perturbed values, and intricate interdependencies among tabular features. Furthermore, we conduct both a quantitative empirical evaluation and a case-based qualitative analysis of examples for the seven properties. The evaluation reveals a trade-off between attack success and imperceptibility, particularly concerning proximity, sensitivity, and deviation. Although none of the evaluated attacks achieves optimal effectiveness and imperceptibility simultaneously, unbounded attacks prove more promising for crafting imperceptible adversarial examples on tabular data. The study also highlights the limitation of the evaluated algorithms in controlling sparsity effectively. We suggest incorporating a sparsity metric in future attack design to regulate the number of perturbed features.
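Two of the seven properties, proximity and sparsity, have natural norm-based formulations; a minimal sketch, assuming an Lp distance for proximity and an L0-style changed-feature count for sparsity (the exact metrics the paper uses may differ), could be:

```python
import numpy as np

def proximity(x, x_adv, p=2):
    # Proximity: Lp distance between the original and perturbed input.
    return float(np.linalg.norm((x_adv - x).ravel(), ord=p))

def sparsity(x, x_adv, tol=1e-8):
    # Sparsity: fraction of features actually altered (L0-style count),
    # with a tolerance to ignore floating-point noise.
    changed = np.abs(x_adv - x) > tol
    return float(changed.sum()) / x.size

# toy tabular row with a single perturbed feature
x = np.array([1.0, 2.0, 3.0, 4.0])
x_adv = np.array([1.0, 2.5, 3.0, 4.0])
prox = proximity(x, x_adv)   # 0.5
spars = sparsity(x, x_adv)   # 0.25 (1 of 4 features changed)
```

A low sparsity value here is exactly what the suggested sparsity regulariser in future attack designs would aim for.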
- Oceania > Australia > Queensland > Brisbane (0.04)
- Europe > Portugal > Lisbon > Lisbon (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
- Health & Medicine > Therapeutic Area > Endocrinology > Diabetes (0.48)
Adversarial Robustness of Distilled and Pruned Deep Learning-based Wireless Classifiers
Baishya, Nayan Moni, Manoj, B. R.
Data-driven deep learning (DL) techniques developed for automatic modulation classification (AMC) of wireless signals are vulnerable to adversarial attacks. This poses a severe security threat to DL-based wireless systems, especially for edge applications of AMC. In this work, we address the joint problem of developing optimized DL models that are also robust against adversarial attacks. This enables efficient and reliable deployment of DL-based AMC on edge devices. We first propose two optimized models using knowledge distillation and network pruning, followed by a computationally efficient adversarial training process to improve the robustness. Experimental results on five white-box attacks show that the proposed optimized and adversarially trained models can achieve better robustness than the standard (unoptimized) model. The two optimized models also achieve higher accuracy on clean (unattacked) samples, which is essential for the reliability of DL-based solutions at edge applications.
- North America > United States > Utah > Salt Lake County > Salt Lake City (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- (7 more...)
Fooling Neural Networks for Motion Forecasting via Adversarial Attacks
Human motion prediction is still an open problem, and one that is extremely important for autonomous driving and safety applications. Although there are great advances in this area, the widely studied topic of adversarial attacks has not been applied to multi-regression models such as GCNs and MLP-based architectures in human motion prediction. This work aims to reduce this gap through extensive quantitative and qualitative experiments on state-of-the-art architectures, similar to the initial stages of adversarial attacks in image classification. The results suggest that models are susceptible to attacks even at low levels of perturbation. We also show experiments with 3D transformations that affect model performance; in particular, we show that most models are sensitive to simple rotations and translations, which do not alter joint distances. We conclude that, like earlier CNN models, motion forecasting models are susceptible to small perturbations and simple 3D transformations.
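The key observation, that rotations and translations leave all joint-to-joint distances unchanged while still degrading the models, can be verified directly. A small sketch with an illustrative z-axis rotation applied to a toy skeleton (the joint coordinates are made up for the example):

```python
import numpy as np

def rotate_z(joints, theta):
    """Rotate a (J, 3) array of joint positions about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return joints @ R.T

def pairwise_distances(joints):
    # (J, J) matrix of Euclidean distances between every pair of joints.
    diff = joints[:, None, :] - joints[None, :, :]
    return np.linalg.norm(diff, axis=-1)

joints = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.5]])
d_before = pairwise_distances(joints)
d_after = pairwise_distances(rotate_z(joints, np.pi / 4))
# the distance matrices agree: the rotation is a rigid transformation
```

Since the pose geometry is identical before and after, any prediction change under such a transform reflects the model's lack of rotation invariance rather than a change in the motion itself.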
- Europe > Germany (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Information Technology > Security & Privacy (0.94)
- Government > Military (0.84)
Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
Labib, S. M. Fazle Rabby, Mondal, Joyanta Jyoti, Manab, Meem Arafat
Deep neural networks (DNNs) have significantly advanced various domains, but their vulnerability to adversarial attacks poses serious concerns. Understanding these vulnerabilities and developing effective defense mechanisms is crucial. DeepFool, an algorithm proposed by Moosavi-Dezfooli et al. (2016), finds minimal perturbations to misclassify input images. However, DeepFool lacks a targeted approach, making it less effective in specific attack scenarios. Moreover, previous related works primarily focus on attack success, without considering how much an image is distorted, the integrity of the image quality, or the confidence of the misclassification. In this paper, we therefore propose Enhanced Targeted DeepFool, an augmented version of DeepFool that allows targeting specific classes for misclassification and also introduces a minimum confidence score hyperparameter to enhance flexibility. Our experiments demonstrate the effectiveness and efficiency of the proposed method across different deep neural network architectures while preserving image integrity as much as possible and keeping the perturbation rate as low as possible. Using our approach, the behavior of models can be manipulated arbitrarily through the perturbed images, as we can specify both the target class and the associated confidence score, unlike other DeepFool-derivative works, such as Targeted DeepFool by Gajjar et al. (2022). Results show that one of the deep convolutional neural network architectures, AlexNet, and one of the state-of-the-art models, the Vision Transformer, exhibit high robustness to being fooled. This approach can have broader implications, as tuning the confidence level can expose the robustness of image recognition models. Our code will be made public upon acceptance of the paper.
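The abstract does not give the algorithm itself; a minimal sketch of a targeted DeepFool-style loop with a minimum-confidence stopping criterion, written for a toy affine classifier f(x) = Wx + b so the boundary step has a closed form (the function name, the rival-class heuristic, and the confidence nudge are all illustrative assumptions, not the paper's method):

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def targeted_deepfool_linear(x, W, b, target, min_conf=0.9,
                             overshoot=0.02, max_iter=100):
    """Push x toward `target` with minimal DeepFool-style steps and
    keep iterating until the target's softmax confidence clears the
    `min_conf` hyperparameter."""
    x_adv = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        logits = W @ x_adv + b
        probs = softmax(logits)
        pred = int(np.argmax(logits))
        if pred == target and probs[target] >= min_conf:
            break  # fooled with the required confidence
        # strongest competing (non-target) class
        masked = logits.copy()
        masked[target] = -np.inf
        rival = int(np.argmax(masked))
        w_diff = W[target] - W[rival]
        denom = np.linalg.norm(w_diff) ** 2 + 1e-12
        if pred != target:
            # minimal step onto the target/rival decision boundary
            r = (logits[rival] - logits[target] + 1e-4) / denom * w_diff
        else:
            # already target-classified: nudge further along the same
            # direction to raise the confidence above min_conf
            r = w_diff / denom
        x_adv = x_adv + (1 + overshoot) * r
    return x_adv

# toy run: start in class 0, force class 2 with >= 90% confidence
W, b = np.eye(3), np.zeros(3)
x_adv = targeted_deepfool_linear(np.array([2.0, 0.0, 0.0]), W, b, target=2)
```

For a real DNN the closed-form step would be replaced by the usual linearization with per-class gradients, but the structure, a minimal boundary-crossing step plus a confidence-driven stopping rule, stays the same.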
- North America > United States > Alabama > Jefferson County > Birmingham (0.04)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- Asia > Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military (0.91)