RL-based Control of UAS Subject to Significant Disturbance
Chakraborty, Kousheek, Hof, Thijs, Alharbat, Ayham, Mersha, Abeje
arXiv.org Artificial Intelligence
Accepted at the 2025 International Conference on Unmanned Aircraft Systems

Kousheek Chakraborty 1, Thijs Hof 1, Ayham Alharbat 1,2, Abeje Mersha 1

Abstract -- This paper proposes a Reinforcement Learning (RL)-based control framework for position and attitude control of an Unmanned Aerial System (UAS) subjected to significant disturbance that can be associated with an uncertain trigger signal. The proposed method learns the relationship between the trigger signal and the disturbance force, enabling the system to anticipate and counteract impending disturbances before they occur. We train and evaluate three policies: a baseline policy trained without exposure to the disturbance, a reactive policy trained with the disturbance but without the trigger signal, and a predictive policy that incorporates the trigger signal as an observation and is exposed to the disturbance during training. Our simulation results show that the predictive policy outperforms the other policies by minimizing position deviations through a proactive correction maneuver. This work highlights the potential of integrating predictive cues into RL frameworks to improve UAS performance.

INTRODUCTION

Unmanned Aerial Systems (UAS) are increasingly deployed in high-risk environments to perform critical tasks such as infrastructure inspection, search and rescue, and aerial firefighting [1].
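The key distinction among the three policies is what each one observes: only the predictive policy receives the trigger signal alongside the vehicle state. A minimal sketch of that observation-space difference, assuming a generic state vector and a scalar trigger cue (this is an illustrative reconstruction, not the authors' code; the function and parameter names are hypothetical):

```python
import numpy as np

def make_observation(state: np.ndarray, trigger: float, policy: str) -> np.ndarray:
    """Assemble the observation vector for one of the three policy variants.

    state   -- UAS state features (e.g. position/attitude errors, velocities)
    trigger -- scalar cue correlated with the impending disturbance
    policy  -- 'baseline', 'reactive', or 'predictive'
    """
    if policy in ("baseline", "reactive"):
        # Baseline and reactive policies see only the vehicle state; they
        # differ in whether the disturbance is applied during training.
        return state
    if policy == "predictive":
        # The predictive policy also observes the trigger signal, letting it
        # learn the trigger-to-disturbance relationship and correct proactively.
        return np.concatenate([state, [trigger]])
    raise ValueError(f"unknown policy: {policy}")
```

With a 12-dimensional state, the baseline and reactive observations stay 12-dimensional while the predictive observation grows to 13; the extra entry is the cue the predictive policy exploits to pre-empt the disturbance.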
Apr-14-2025
- Country:
- Europe > Netherlands (0.04)
- Genre:
- Research Report > New Finding (0.48)
- Industry:
- Aerospace & Defense (0.75)
- Law Enforcement & Public Safety > Fire & Emergency Services (0.34)
- Transportation (0.95)
- Technology: