Enhancing Aerial Combat Tactics through Hierarchical Multi-Agent Reinforcement Learning
Selmonaj, Ardian, Szehr, Oleg, Del Rio, Giacomo, Antonucci, Alessandro, Schneider, Adrian, Rüegsegger, Michael
arXiv.org Artificial Intelligence
This work is motivated by the strong performance of RL agents in finding effective Courses of Action (CoA) across a wide range of environments, including combinatorial settings such as Chess and Go [1], real-time continuous control tasks found in arcade video games [2], and scenarios that combine control with strategic decision-making, as seen in modern wargames [3]. Applying RL to air combat poses a number of specific challenges. These include structural properties of the simulation scenario, such as the complexity of the individual units and their flight dynamics, the exponential size of the combined state and action spaces, the depth of the planning horizon, and the presence of stochasticity and imperfect information. Overall, the game tree (i.e., the set of possible CoAs) in strategic games and defense scenarios is vast and beyond the reach of straightforward search. Furthermore, real-world operations involve not only the simultaneous maneuvering of individual units but also awareness of strategic positions and global mission planning. Training policies that integrate real-time control at the troop level with high-level mission planning at the commander level is challenging, as these tasks inherently demand distinct system requirements, algorithmic approaches, and training configurations.
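The commander/troop split described above can be illustrated with a minimal two-level policy sketch. This is not the authors' method; all function and field names (`commander_policy`, `unit_policy`, `bearing_to_enemy`, etc.) are hypothetical placeholders, assuming a commander that assigns coarse tactics and per-unit policies that turn a tactic plus a local observation into a control action:

```python
import random

# Hypothetical two-level (hierarchical) policy sketch. A commander policy
# picks a high-level tactic per unit from the global picture; per-unit
# low-level policies map (tactic, local observation) to a control action.
# In a trained system both levels would be learned networks; here they
# are hand-coded placeholders purely to show the interface.

TACTICS = ["engage", "evade", "regroup"]

def commander_policy(global_state, n_units):
    """Assign one high-level tactic to each unit (placeholder logic)."""
    rng = random.Random(global_state)  # deterministic for the demo
    return [rng.choice(TACTICS) for _ in range(n_units)]

def unit_policy(tactic, local_obs):
    """Translate a tactic and a local observation into a control action."""
    if tactic == "engage":
        return {"throttle": 1.0, "heading": local_obs["bearing_to_enemy"]}
    if tactic == "evade":
        return {"throttle": 1.0,
                "heading": (local_obs["bearing_to_enemy"] + 180) % 360}
    return {"throttle": 0.5, "heading": local_obs["bearing_to_base"]}

def hierarchical_step(global_state, local_obs_list):
    """One decision step: commander assigns tactics, units act on them."""
    tactics = commander_policy(global_state, len(local_obs_list))
    return [unit_policy(t, o) for t, o in zip(tactics, local_obs_list)]

obs = [{"bearing_to_enemy": 90, "bearing_to_base": 270},
       {"bearing_to_enemy": 45, "bearing_to_base": 200}]
actions = hierarchical_step(global_state=0, local_obs_list=obs)
print(len(actions))  # one low-level control action per unit
```

The key design point the paper highlights is that the two levels operate on different timescales and observation scopes, which is why training them jointly requires distinct algorithmic treatment.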
May-15-2025