Multimodal Deep Learning for ATCO Command Lifecycle Modeling and Workload Prediction

Tan, Kaizhen

arXiv.org Artificial Intelligence 

-- Air traffic controllers (ATCOs) issue high-intensity voice commands in dense airspace, where accurate workload modeling is critical for safety and efficiency. This paper proposes a multimodal deep learning framework that integrates structured data, trajectory sequences, and image features to estimate two key parameters in the ATCO command lifecycle: the time offset between a command and the resulting aircraft maneuver, and the command duration. A high-quality dataset was constructed, with maneuver points detected using sliding-window and histogram-based methods. A CNN-Transformer ensemble model was developed for accurate, generalizable, and interpretable predictions. By linking trajectories to voice commands, this work offers the first model of its kind to support intelligent command generation and provides practical value for workload assessment, staffing, and scheduling.

A. Background

As global air traffic demand increases, airspace operations have become more complex and congested, presenting major challenges for air traffic control (ATC) systems. Although surveillance and communication technologies have improved, ATC performance still depends largely on human operators, particularly air traffic controllers (ATCOs), who monitor flights, assess conditions, and issue maneuver instructions to ensure safe and efficient operations. This human bottleneck has become a key constraint on ATC efficiency and safety, underscoring the importance of quantifying task intensity and evaluating workload to support fatigue management, staff scheduling, and the development of intelligent ATC solutions. Early studies on ATCO workload modeling focused primarily on statistical methods and subjective assessments such as the NASA Task Load Index (NASA-TLX) [1].
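The paper does not detail its sliding-window maneuver-point detector here, but the general idea can be sketched as follows: slide a fixed-size window along the trajectory's heading sequence and flag indices where the heading change across the window exceeds a threshold. The function name, window size, and threshold below are illustrative assumptions, not values from the paper (which additionally uses a histogram-based method).

```python
import numpy as np

def detect_maneuver_points(headings, window=5, threshold_deg=10.0):
    """Flag trajectory indices where the heading change across a sliding
    window exceeds a threshold. Parameters are illustrative, not the
    paper's actual configuration."""
    headings = np.asarray(headings, dtype=float)
    points = []
    for i in range(len(headings) - window):
        # Wrap-aware heading difference across the window, mapped to (-180, 180]
        delta = (headings[i + window] - headings[i] + 180.0) % 360.0 - 180.0
        if abs(delta) > threshold_deg:
            points.append(i)
    return points

# Example: a straight segment followed by a sustained turn
traj = [90.0] * 10 + [90.0 + 4.0 * k for k in range(1, 11)]
print(detect_maneuver_points(traj))  # indices near the turn onset
```

In practice the flagged indices cluster around each turn, so a post-processing step (e.g. keeping the first index of each contiguous run) would reduce them to discrete maneuver points.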