Robust Deep Reinforcement Learning Through Adversarial Attacks and Training: A Survey
Schott, Lucas, Delas, Josephine, Hajri, Hatem, Gherbi, Elies, Yaich, Reda, Boulahia-Cuppens, Nora, Cuppens, Frederic, Lamprier, Sylvain
–arXiv.org Artificial Intelligence
The advent of Deep Reinforcement Learning (DRL) has marked a significant shift in various fields, including games [1-3], autonomous robotics [4], autonomous driving [5], and energy management [6]. By integrating Reinforcement Learning (RL) with Deep Neural Networks (DNN), DRL can leverage high-dimensional continuous observations and rewards to train neural policies without the need for supervised example trajectories. While DRL achieves remarkable performance in well-known controlled environments, it also encounters challenges in ensuring robust performance amid diverse condition changes and real-world perturbations. It particularly struggles to bridge the reality gap [7, 8]: DRL agents are often trained in simulation, which remains only an imitation of the real world, resulting in a gap between an agent's performance in simulation and its performance once transferred to the real-world application. Even aside from the reality gap, agents may be trained under one set of conditions and deployed later, after those conditions have changed.
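The fragility the survey addresses can be illustrated with a minimal sketch of an FGSM-style observation attack on a toy policy. This is not the survey's own code: the linear policy, the `fgsm_attack` helper, and the `eps` budget are illustrative assumptions; real DRL attacks compute gradients through a deep network.

```python
import numpy as np

# Toy setup (assumption): a linear policy over 4 observation
# dimensions selecting among 3 actions via argmax of logits W @ obs.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # hypothetical policy weights

def policy_action(obs):
    """Greedy action of the linear policy."""
    return int(np.argmax(W @ obs))

def fgsm_attack(obs, eps=0.5):
    """FGSM-style perturbation: step against the gradient of the
    currently preferred action's logit, within an L-inf budget eps.
    For a linear policy that gradient is just the weight row W[a]."""
    a = policy_action(obs)
    return obs - eps * np.sign(W[a])

obs = rng.normal(size=4)
adv_obs = fgsm_attack(obs)
print(policy_action(obs), policy_action(adv_obs))
```

The attack provably lowers the preferred action's logit by `eps` times the L1 norm of its weight row, so a small, bounded observation change can flip the agent's decision, which is exactly the kind of perturbation adversarial training aims to make policies robust to.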
Mar-1-2024