Automated Aircraft Recovery via Reinforcement Learning: Initial Experiments
Monaco, Jeffrey F., Ward, David G., Barto, Andrew G.
Neural Information Processing Systems
An emerging use of reinforcement learning (RL) is to approximate optimal policies for large-scale control problems through extensive simulated control experience. Described here are initial experiments directed toward the development of an automated recovery system (ARS) for high-agility aircraft. An ARS is an outer-loop flight control system designed to bring the aircraft from a range of initial states to straight, level, and non-inverted flight in minimum time while satisfying constraints such as maintaining altitude and accelerations within acceptable limits. Here we describe the problem and present initial results involving only single-axis (pitch) recoveries. Through extensive simulated control experience using a medium-fidelity simulation of an F-16, the RL system approximated an optimal policy for longitudinal-stick inputs to produce near-minimum-time transitions to straight and level flight in unconstrained cases, as well as while meeting a pilot-station acceleration constraint.
Dec-31-1998