Yaich, Reda
Diffusion-based Adversarial Purification for Intrusion Detection
Merzouk, Mohamed Amine, Beurier, Erwan, Yaich, Reda, Boulahia-Cuppens, Nora, Cuppens, Frédéric
The escalating sophistication of cyberattacks has encouraged the integration of machine learning techniques in intrusion detection systems, but the rise of adversarial examples presents a significant challenge. These crafted perturbations mislead ML models, enabling attackers to evade detection or trigger false alerts. In response, adversarial purification has emerged as a compelling solution, particularly with diffusion models showing promising results. However, their purification potential remains unexplored in the context of intrusion detection. This paper demonstrates the effectiveness of diffusion models in purifying adversarial examples in network intrusion detection. Through a comprehensive analysis of the diffusion parameters, we identify optimal configurations maximizing adversarial robustness with minimal impact on regular performance. Importantly, this study reveals insights into the relationship between diffusion noise and diffusion steps, representing a novel contribution to the field. Our experiments are carried out on two datasets and against five adversarial attacks. The implementation code is publicly available.
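To illustrate the purification idea described in the abstract, here is a minimal DDPM-style sketch: the (possibly adversarial) input is partially noised for a chosen number of forward diffusion steps and then denoised back before being passed to the detector. The names (eps_model, t_star, betas) and the specific reverse-update formula are illustrative assumptions, not the paper's actual implementation or API.

```python
import torch

def purify(x, eps_model, t_star, betas):
    """Sketch of diffusion-based purification (assumed DDPM-style setup).

    x         : batch of (possibly adversarial) feature vectors
    eps_model : trained noise-prediction network eps_theta(x_t, t) (assumed)
    t_star    : number of forward diffusion steps (the "diffusion steps" knob)
    betas     : noise schedule of shape (T,)       (the "diffusion noise" knob)
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Forward process: partially noise the input up to step t_star,
    # which is meant to drown out the adversarial perturbation.
    noise = torch.randn_like(x)
    x_t = (torch.sqrt(alpha_bars[t_star - 1]) * x
           + torch.sqrt(1.0 - alpha_bars[t_star - 1]) * noise)

    # Reverse process: denoise step by step back to t = 0.
    for t in reversed(range(t_star)):
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = eps_model(x_t, t_batch)
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x_t = mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
        else:
            x_t = mean
    return x_t  # purified sample, then fed to the intrusion detection model
```

The trade-off the abstract analyzes lives in t_star and betas: too little noise leaves the adversarial perturbation intact, too much destroys the benign signal and hurts regular detection performance.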
Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey
Schott, Lucas, Delas, Josephine, Hajri, Hatem, Gherbi, Elies, Yaich, Reda, Boulahia-Cuppens, Nora, Cuppens, Frederic, Lamprier, Sylvain
The advent of Deep Reinforcement Learning (DRL) has marked a significant shift in various fields, including games [1-3], autonomous robotics [4], autonomous driving [5], and energy management [6]. By integrating Reinforcement Learning (RL) with Deep Neural Networks (DNN), DRL can leverage high-dimensional continuous observations and rewards to train neural policies without the need for supervised example trajectories. While DRL achieves remarkable performance in well-known controlled environments, it also encounters challenges in ensuring robust performance amid diverse condition changes and real-world perturbations. In particular, it struggles to bridge the reality gap [7, 8]: DRL agents are often trained in simulations that remain imitations of the real world, resulting in a gap between an agent's performance in simulation and its performance once transferred to the real-world application. Even without considering the reality gap, an agent may be trained under one set of conditions and deployed later, after those conditions have changed.
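As a concrete example of the attacks and training procedures such a survey covers, below is a minimal sketch of an FGSM-style perturbation applied to a DRL agent's observation; during adversarial training, the perturbed observations are fed back to the agent. The function and parameter names (policy, obs, epsilon) are illustrative assumptions, not a specific method from the survey.

```python
import torch
import torch.nn.functional as F

def fgsm_observation_attack(policy, obs, epsilon=0.05):
    """Untargeted FGSM-style attack on a DRL agent's observation (sketch).

    policy  : network mapping observations to action logits (assumed discrete actions)
    obs     : observation tensor, shape (batch, obs_dim)
    epsilon : perturbation budget in the observation space
    """
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)

    # Increase the loss of the agent's currently preferred action,
    # pushing the policy away from the decision it would normally take.
    preferred_action = logits.argmax(dim=-1)
    loss = F.cross_entropy(logits, preferred_action)
    loss.backward()

    adv_obs = obs + epsilon * obs.grad.sign()
    return adv_obs.detach()
```

In an adversarial training loop, the agent would act on adv_obs instead of obs for some fraction of the training steps, which is one simple way to harden the policy against observation perturbations.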