The future of air combat: How long will the US military still need pilots?
Fox News contributor Brett Velicovich demands U.S. defenses 'adapt' to modern warfare after Ukraine's drone strikes on 'The Story.' As sixth-generation fighter programs ramp up, military insiders are divided over whether future warplanes need pilots at all. The Pentagon is pouring billions into next-generation aircraft, pushing the boundaries of stealth and speed. But as America eyes a future of air dominance, one question looms large: Should Americans still be risking their lives in the cockpit? Autonomous drones backed by AI are progressing faster than many expected, and that has some defense leaders rethinking the role of the pilot.
- North America > United States (1.00)
- Europe > Ukraine (0.25)
- Asia > Middle East > Iran (0.06)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Air Force (1.00)
Training Environment for High Performance Reinforcement Learning
This paper presents Tunnel, a simple, open-source reinforcement learning training environment for high-performance aircraft. It integrates the F-16 3D nonlinear flight dynamics into the OpenAI Gymnasium Python package. The template includes primitives for boundaries, targets, adversaries, and sensing capabilities that may vary depending on operational need. This offers mission planners a means to rapidly respond to evolving environments, sensor capabilities, and adversaries for autonomous air combat aircraft, and offers researchers access to operationally relevant aircraft physics. The Tunnel code base is accessible to anyone familiar with Gymnasium and to those with basic Python skills. This paper includes a demonstration of a week-long trade study that investigated a variety of training methods, observation spaces, and threat presentations. This enables increased collaboration between researchers and mission planners, which can translate to a national military advantage. As warfare becomes increasingly reliant upon automation, software agility will correlate with decision advantage, and airmen must have tools to adapt to adversaries in this context. It may take researchers months to develop the skills to customize observations, actions, tasks, and training methodologies in air combat simulators; in Tunnel, this can be done in a matter of days.
- Transportation > Air (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Air Force (1.00)
- Aerospace & Defense > Aircraft (1.00)
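The kind of environment the abstract describes can be sketched in miniature. The following is a toy stand-in, not Tunnel's actual API: it follows Gymnasium's `reset()`/`step()` conventions but replaces the F-16 nonlinear dynamics with point-mass kinematics, and every class name, field, and number below is invented for illustration.

```python
import math
import random

class PursuitEnv:
    """Toy 2D pursuit task following Gymnasium's reset()/step() protocol.
    The real Tunnel environment wraps full F-16 3D nonlinear flight
    dynamics; this stand-in uses point-mass kinematics purely to
    illustrate the boundary/target primitives described above."""

    def __init__(self, arena=10_000.0, capture_radius=500.0, seed=None):
        self.arena = arena                    # half-width of the square arena (m)
        self.capture_radius = capture_radius  # distance that counts as reaching the target (m)
        self.rng = random.Random(seed)

    def reset(self):
        # Random own position/heading and a random fixed target.
        self.x = self.rng.uniform(-0.5, 0.5) * self.arena
        self.y = self.rng.uniform(-0.5, 0.5) * self.arena
        self.heading = self.rng.uniform(-math.pi, math.pi)
        self.tx = self.rng.uniform(-0.5, 0.5) * self.arena
        self.ty = self.rng.uniform(-0.5, 0.5) * self.arena
        return self._obs(), {}

    def step(self, action):
        dpsi, throttle = action          # heading change (rad), speed fraction in [0, 1]
        self.heading += dpsi
        speed = 50.0 + 250.0 * throttle  # m/s per step, toy numbers
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)
        dist = math.hypot(self.x - self.tx, self.y - self.ty)
        out_of_bounds = abs(self.x) > self.arena or abs(self.y) > self.arena
        terminated = dist < self.capture_radius or out_of_bounds
        reward = 1.0 if dist < self.capture_radius else -dist / self.arena
        # Gymnasium convention: (obs, reward, terminated, truncated, info).
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        # Own and target position, normalized by the arena half-width.
        return (self.x / self.arena, self.y / self.arena,
                self.tx / self.arena, self.ty / self.arena)
```

Swapping the point-mass update for a call into a flight dynamics model, and the four-tuple observation for sensor-derived quantities, is what separates a sketch like this from an operationally relevant environment.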
A Hierarchical Reinforcement Learning Framework for Multi-UAV Combat Using Leader-Follower Strategy
Pang, Jinhui, He, Jinglin, Mohamed, Noureldin Mohamed Abdelaal Ahmed, Lin, Changqing, Zhang, Zhihui, Hao, Xiaoshuai
Multi-UAV air combat is a complex task involving multiple autonomous UAVs, an evolving field in both aerospace and artificial intelligence. This paper aims to enhance adversarial performance through collaborative strategies. Previous approaches predominantly discretize the action space into predefined actions, limiting UAV maneuverability and complex strategy implementation. Others simplify the problem to 1v1 combat, neglecting the cooperative dynamics among multiple UAVs. To address the high-dimensional challenges inherent in six-degree-of-freedom space and improve cooperation, we propose a hierarchical framework utilizing the Leader-Follower Multi-Agent Proximal Policy Optimization (LFMAPPO) strategy. Specifically, the framework is structured into three levels. The top level conducts a macro-level assessment of the environment and guides the execution policy. The middle level determines the angle of the desired action. The bottom level generates precise action commands for the high-dimensional action space. Moreover, we optimize the state-value functions by assigning distinct roles under the leader-follower strategy to train the top-level policy: followers estimate the leader's utility, which promotes effective cooperation among agents. Additionally, a target selector, aligned with the UAVs' posture, assesses the threat level of targets. Finally, simulation experiments validate the effectiveness of our proposed method.
- North America > United States (0.14)
- Asia > China > Beijing > Beijing (0.05)
- Asia > Middle East > Saudi Arabia > Northern Borders Province > Arar (0.04)
- Government > Military (1.00)
- Aerospace & Defense > Aircraft (1.00)
- Transportation > Air (0.94)
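The target-selector idea mentioned in the abstract, ranking targets by threat, can be illustrated with a minimal sketch. The paper does not specify its scoring function; the weights, dict fields (`x`, `y`, `heading` in radians), and range constant below are all assumptions made for illustration.

```python
import math

def _wrap(angle):
    """Wrap an angle to [-pi, pi)."""
    return (angle + math.pi) % (2 * math.pi) - math.pi

def threat_score(own, target, max_range=50_000.0):
    """Toy threat score combining closeness and aspect angle. The 0.6/0.4
    weights and the max_range constant are illustrative, not the paper's."""
    dx, dy = target["x"] - own["x"], target["y"] - own["y"]
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)
    # Aspect: 0 when the target points straight at us (head-on threat).
    aspect = abs(_wrap(target["heading"] - (bearing + math.pi)))
    closeness = max(0.0, 1.0 - dist / max_range)
    pointing = 1.0 - aspect / math.pi
    return 0.6 * closeness + 0.4 * pointing

def select_target(own, targets):
    """Pick the highest-threat target from a list of candidates."""
    return max(targets, key=lambda t: threat_score(own, t))
```

A close, head-on target scores higher than a distant one flying away, which is the qualitative behavior a posture-aware selector needs before any learned refinement.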
US Air Force Secretary Kendall flies in cockpit of plane controlled by AI
U.S. Air Force Secretary Frank Kendall took a history-making flight in an AI-controlled F-16 on May 3, 2024. Kendall rode in the cockpit of a fighter jet on Friday as it flew over the California desert under the control of artificial intelligence. Last month, Kendall announced his plans to fly in an AI-controlled F-16 to the U.S. Senate Appropriations Committee's defense panel while speaking about the future of air warfare being dependent on autonomously operated drones. On Friday, the senior Air Force leader followed through with those plans, making what could be one of the biggest advances in military aviation since stealth planes were introduced in the early 1990s. Kendall flew to Edwards Air Force Base, the same desert facility where Chuck Yeager broke the sound barrier, to watch and experience AI flight in real time. The X-62A VISTA aircraft, an experimental AI-enabled Air Force F-16 fighter jet, took off on Thursday, May 2, 2024, at Edwards Air Force Base, Calif.
- North America > United States > California (0.25)
- Europe (0.05)
- Asia > China (0.05)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Air Force (1.00)
A survey of air combat behavior modeling using machine learning
Gorton, Patrick Ribu, Strand, Andreas, Brathen, Karsten
With the recent advances in machine learning, creating agents that behave realistically in simulated air combat has become a growing field of interest. This survey explores the application of machine learning techniques for modeling air combat behavior, motivated by the potential to enhance simulation-based pilot training. Current simulated entities tend to lack realistic behavior, and traditional behavior modeling is labor-intensive and prone to loss of essential domain knowledge between development steps. Advancements in reinforcement learning and imitation learning algorithms have demonstrated that agents may learn complex behavior from data, which could be faster and more scalable than manual methods. Yet, making adaptive agents capable of performing tactical maneuvers and operating weapons and sensors still poses a significant challenge. The survey examines applications, behavior model types, prevalent machine learning methods, and the technical and human challenges in developing adaptive and realistically behaving agents. Another challenge is the transfer of agents from learning environments to military simulation systems and the consequent demand for standardization. Four primary recommendations are presented regarding increased emphasis on beyond-visual-range scenarios, multi-agent machine learning and cooperation, utilization of hierarchical behavior models, and initiatives for standardization and research collaboration. These recommendations aim to address current issues and guide the development of more comprehensive, adaptable, and realistic machine learning-based behavior models for air combat applications.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Florida > Orange County > Orlando (0.05)
- Europe > Sweden > Stockholm > Stockholm (0.04)
- (37 more...)
- Government > Military > Air Force (0.68)
- Leisure & Entertainment > Games > Computer Games (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
BVR Gym: A Reinforcement Learning Environment for Beyond-Visual-Range Air Combat
Scukins, Edvards, Klein, Markus, Kroon, Lars, Ögren, Petter
Creating new air combat tactics and discovering novel maneuvers can require numerous hours of expert pilots' time. Additionally, for each different combat scenario, the same strategies may not work since small changes in equipment performance may drastically change the air combat outcome. For this reason, we created a reinforcement learning environment to help investigate potential air combat tactics in the field of beyond-visual-range (BVR) air combat: the BVR Gym. This type of air combat is important since long-range missiles are often the first weapon to be used in aerial combat. Some existing environments provide high-fidelity simulations but are either not open source or are not adapted to the BVR air combat domain. Other environments are open source but use less accurate simulation models. Our work provides a high-fidelity environment based on the open-source flight dynamics simulator JSBSim and is adapted to the BVR air combat domain. This article describes the building blocks of the environment and some use cases.
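Since long-range missiles are often the first weapon used, one quantity a BVR environment must expose is whether a shot is even kinematically plausible. A deliberately crude go/no-go sketch follows; every number, field name, and the shape of the envelope are invented for illustration, as real launch zones come from full missile fly-out simulations like those an environment such as BVR Gym wraps.

```python
import math

def launch_range_ok(own, target, rmax=80_000.0, boost_bonus=0.25):
    """Toy kinematic launch envelope: the effective maximum range grows
    with closing speed (a 'hot' head-on target) and shrinks when the
    target is opening. Dict fields are x, y in meters and vx, vy in m/s;
    all constants are illustrative only."""
    dx, dy = target["x"] - own["x"], target["y"] - own["y"]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist                # unit line of sight
    closing = ((own["vx"] - target["vx"]) * ux +
               (own["vy"] - target["vy"]) * uy)  # > 0 when closing
    effective = rmax * (1.0 + boost_bonus * math.tanh(closing / 300.0))
    return dist <= effective
```

Even this caricature captures why small equipment changes can flip an engagement's outcome: a modest shift in `rmax` or closing speed moves the boundary between a valid shot and none at all.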
Deep Learning Based Situation Awareness for Multiple Missiles Evasion
Scukins, Edvards, Klein, Markus, Kroon, Lars, Ögren, Petter
As the effective range of air-to-air missiles increases, it becomes harder for human operators to maintain the situational awareness needed to keep a UAV safe. In this work, we propose a decision support tool to help UAV operators in Beyond Visual Range (BVR) air combat scenarios assess the risks of different options and make decisions based on those. Earlier work focused on the threat posed by a single missile, and in this work, we extend the ideas to several missile threats. The proposed method uses Deep Neural Networks (DNN) to learn from high-fidelity simulations to provide the operator with an outcome estimate for a set of different strategies. Our results demonstrate that the proposed system can manage multiple incoming missiles, evaluate a family of options, and recommend the least risky course of action.
- Government > Military > Air Force (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)
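The decision-support pattern described in the abstract, scoring each candidate strategy against every incoming missile and recommending the least risky, can be sketched independently of the neural network. Here `toy_risk` is a stand-in for the paper's trained DNN, and all field names and constants are invented.

```python
def toy_risk(option, missile):
    """Stand-in for a learned risk estimate: risk rises as the missile
    gets closer and falls as the chosen evasive turn points further away
    from the missile's bearing. Purely illustrative."""
    angle_off = abs(option["turn_deg"] - missile["bearing_deg"]) / 180.0
    proximity = max(0.0, 1.0 - missile["range_km"] / 100.0)
    return proximity * (1.0 - min(angle_off, 1.0))

def recommend(options, missiles, risk_model=toy_risk):
    """Sum each option's risk over all incoming missiles and return the
    least risky option together with the per-option risk list."""
    risks = [sum(risk_model(opt, m) for m in missiles) for opt in options]
    best_idx = min(range(len(options)), key=risks.__getitem__)
    return options[best_idx], risks
```

The key design point survives the simplification: handling multiple missiles only requires the risk model to score one (option, missile) pair, with aggregation done outside the network.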
Maneuver Decision-Making Through Automatic Curriculum Reinforcement Learning Without Handcrafted Reward functions
Maneuver decision-making is at the core of unmanned combat aerial vehicles for autonomous air combat. To solve this problem, we propose an automatic curriculum reinforcement learning method, which enables agents to learn effective decisions in air combat from scratch. Ranges of initial states are used to distinguish curricula of different difficulty levels, so that maneuver decision-making is divided into a series of sub-tasks from easy to difficult, and test results are used to change sub-tasks. As sub-tasks change, agents gradually learn to complete them from easy to difficult, enabling them to make effective maneuvering decisions to cope with various states without the need to spend effort designing reward functions. The ablation studies show that the automatic curriculum learning proposed in this article is an essential component of training through reinforcement learning; namely, agents cannot make effective decisions without curriculum learning. Simulation experiments show that, after training, agents are able to make effective decisions given different states, including tracking, attacking, and escaping, which are both rational and interpretable.
- North America > United States > Massachusetts (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Government > Military (1.00)
- Leisure & Entertainment (0.93)
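The scheduling logic the abstract describes, advancing to a harder sub-task only after test results clear a bar, can be sketched as a short loop. `train_step` and `evaluate` are assumed user-supplied callables, `levels` is the ordered list of initial-state ranges, and the threshold and budget values are illustrative, not the paper's.

```python
def auto_curriculum(train_step, evaluate, levels, threshold=0.8, budget=1000):
    """Automatic curriculum loop in the spirit of the abstract: train on
    the current initial-state range, then advance to the next (harder)
    range once the test success rate reaches `threshold`. `budget` caps
    the total number of train/evaluate rounds."""
    level, history = 0, []
    for _ in range(budget):
        if level >= len(levels):
            break                      # all sub-tasks mastered
        train_step(levels[level])
        success = evaluate(levels[level])
        history.append((level, success))
        if success >= threshold:
            level += 1                 # test results trigger the sub-task change
    return level, history
```

Note that no reward shaping appears anywhere in the loop; the difficulty schedule alone carries the guidance that a handcrafted reward function would otherwise provide, which is the abstract's central claim.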
Autonomous Agent for Beyond Visual Range Air Combat: A Deep Reinforcement Learning Approach
Dantas, Joao P. A., Maximo, Marcos R. O. A., Yoneyama, Takashi
This work contributes to developing an agent based on deep reinforcement learning capable of acting in a beyond visual range (BVR) air combat simulation environment. The paper presents an overview of building an agent representing a high-performance fighter aircraft that can learn and improve its role in BVR combat over time based on rewards calculated using operational metrics. Also, through self-play experiments, we expect to generate new air combat tactics never seen before. Finally, we hope to examine a real pilot's ability, using virtual simulation, to interact in the same environment with the trained agent and compare their performance. This research will contribute to the air combat training context by developing agents that can interact with real pilots to improve their performance in air defense missions.
- South America > Brazil (0.06)
- North America > United States > Florida > Orange County > Orlando (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- (3 more...)
Maneuver Decision-Making For Autonomous Air Combat Through Curriculum Learning And Reinforcement Learning With Sparse Rewards
Wei, Yu-Jie, Zhang, Hong-Peng, Huang, Chang-Qiang
Reinforcement learning is an effective way to solve decision-making problems, and investigating autonomous air combat maneuver decision-making methods based on reinforcement learning is a meaningful and valuable direction. However, when using reinforcement learning to solve decision-making problems with sparse rewards, such as air combat maneuver decision-making, training takes too much time and the performance of the trained agent may not be satisfactory. To solve these problems, a method based on curriculum learning is proposed. First, three curricula for air combat maneuver decision-making are designed: an angle curriculum, a distance curriculum, and a hybrid curriculum. These curricula are used to train air combat agents separately and are compared with the original method without any curriculum. The training results show that the angle curriculum can increase the speed and stability of training and improve the performance of the agent; the distance curriculum can increase the speed and stability of agent training; the hybrid curriculum has a negative impact on training because it traps the agent in a local optimum. The simulation results show that, after training, the agent can handle situations where targets come from different directions, and the maneuver decision results are consistent with the characteristics of the missile.
- North America > United States > Massachusetts (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- Instructional Material > Course Syllabus & Notes (0.67)
- Research Report > New Finding (0.56)
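The three curricula differ only in which initial-condition parameter widens as training progresses. A sketch of plausible samplers follows; the stage count, bounds, and linear widening schedule are invented for illustration, not taken from the paper.

```python
import math
import random

def angle_curriculum(stage, rng, stages=5):
    """Initial target bearing: early stages put the target nearly on the
    nose, later stages allow any aspect. Stage runs 0..stages-1."""
    spread = math.pi * min(1.0, (stage + 1) / stages)
    return rng.uniform(-spread, spread)

def distance_curriculum(stage, rng, d_min=5_000.0, d_max=60_000.0, stages=5):
    """Initial range: the upper bound of the range band grows with stage."""
    hi = d_min + (d_max - d_min) * min(1.0, (stage + 1) / stages)
    return rng.uniform(d_min, hi)

def hybrid_curriculum(stage, rng):
    """Widen both parameters at once; per the abstract, this variant
    risked trapping the agent in a local optimum."""
    return angle_curriculum(stage, rng), distance_curriculum(stage, rng)
```

Keeping each curriculum a pure sampler over initial conditions makes the ablation in the abstract cheap to run: the training loop stays identical and only the sampler is swapped.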