A $1.7-billion expansion project at Los Angeles International Airport was officially unveiled Monday by local officials who expressed optimism that the facility will soon help serve a resurgence of travel demand from the yearlong pandemic slump. The new facility, named West Gates and billed as an expansion of the Tom Bradley International Terminal, holds 15 gates. The project broke ground in 2017, when international travel was surging, particularly with big-spending visitors from China. At the time, the airport was the second-busiest in the nation and was considered the West Coast gateway to the United States. The airport served more than 84 million domestic and international travelers that year, according to LAX records.
To model and forecast flight delays accurately, it is crucial to harness vehicle trajectory and contextual sensor data from airport tarmac areas. These heterogeneous sensor data, if modelled correctly, can be used to generate a situational awareness map. Existing techniques, which apply traditional supervised learning methods to historical data, contextual features, and route information across different airports, are inaccurate and predict only arrival delay, not the departure delay that is essential to airlines. In this paper, we propose a vision-based solution that achieves high forecasting accuracy and is applicable at the airport level. Our solution leverages a snapshot of the airport situational awareness map, which contains the trajectories of aircraft together with contextual features such as weather and airline schedules. We propose an end-to-end deep learning architecture, TrajCNN, which captures both the spatial and temporal information in the situational awareness map. Additionally, we show that the airport's situational awareness map has a vital impact on estimating flight departure delay. Our proposed framework achieved a good result (around 18 minutes of error) for predicting flight departure delay at Los Angeles International Airport.
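The abstract above does not publish TrajCNN itself, but its core spatial operation can be sketched as a 2D convolution over a grid-encoded snapshot of the situational awareness map. The grid values and kernel below are invented for illustration.

```python
# Illustrative sketch only: this is not the authors' TrajCNN, just the basic
# convolution operation a CNN applies to a grid-encoded airport snapshot.

def conv2d(grid, kernel):
    """Valid (no-padding) 2D convolution of a grid with a small kernel."""
    gh, gw = len(grid), len(grid[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(gh - kh + 1):
        row = []
        for j in range(gw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += grid[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Hypothetical 4x4 occupancy grid: 1.0 where an aircraft trajectory passes.
snapshot = [
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
]
edge_kernel = [[1.0, -1.0], [1.0, -1.0]]  # crude vertical-edge detector
features = conv2d(snapshot, edge_kernel)  # 3x3 feature map
```

In a full model, many such learned kernels would be stacked and the resulting feature maps fed through further layers before a delay estimate is produced.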
Delseny, Hervé, Gabreau, Christophe, Gauffriau, Adrien, Beaudouin, Bernard, Ponsolle, Ludovic, Alecu, Lucian, Bonnin, Hugues, Beltran, Brice, Duchel, Didier, Ginestet, Jean-Brice, Hervieu, Alexandre, Martinez, Ghilaine, Pasquet, Sylvain, Delmas, Kevin, Pagetti, Claire, Gabriel, Jean-Marc, Chapdelaine, Camille, Picard, Sylvaine, Damour, Mathieu, Cappi, Cyril, Gardès, Laurent, De Grancey, Florence, Jenn, Eric, Lefevre, Baptiste, Flandin, Gregory, Gerchinovitz, Sébastien, Mamalet, Franck, Albore, Alexandre
Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles and recognizing speech. It is also an opportunity to implement and embed new capabilities that are out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have only been applied in systems where their benefits are considered worth the increased risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.
Disruption management during the airline scheduling process can be compartmentalized into proactive and reactive processes, depending on the time of schedule execution. The state of the art for decision-making in airline disruption management involves a heuristic, human-centric approach that does not categorically study uncertainty in the proactive and reactive processes for managing airline schedule disruptions. Hence, this paper introduces an uncertainty transfer function model (UTFM) framework that characterizes uncertainty for proactive airline disruption management before schedule execution, reactive airline disruption management during schedule execution, and proactive airline disruption management after schedule execution. The framework enables the construction of quantitative tools that allow an intelligent agent to rationalize the complex interactions and procedures required for robust airline disruption management. Specifically, we use historical scheduling and operations data from a major U.S. airline to facilitate the development and assessment of the UTFM, which is defined by hidden Markov models (a special class of probabilistic graphical models) that can efficiently perform pattern learning and inference on portions of large data sets.
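Since the UTFM is built from hidden Markov models, the basic inference it relies on can be sketched with the standard forward algorithm. All probabilities and state labels below are illustrative, not estimates from airline data.

```python
# Minimal sketch of the HMM forward algorithm, the standard way to compute
# the likelihood of an observation sequence under a hidden Markov model.

def forward_likelihood(pi, A, B, obs):
    """P(obs) under an HMM with initial dist pi, transitions A, emissions B."""
    n = len(pi)
    # Initialization: probability of starting in each state and emitting obs[0].
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # Recursion: propagate through transitions, then weight by emission.
    for o in obs[1:]:
        alpha = [
            sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
            for s in range(n)
        ]
    return sum(alpha)

# Two hypothetical hidden states: 0 = "nominal operations", 1 = "disrupted".
# Observations: 0 = "on-time departure", 1 = "delayed departure".
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]   # state transition probabilities
B = [[0.8, 0.2], [0.3, 0.7]]   # emission probabilities per state
likelihood = forward_likelihood(pi, A, B, [0, 1])
```

In practice such models are fit to operations data (e.g. by Baum-Welch) rather than specified by hand; the forward pass is the inference primitive underneath.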
Fox News Flash top headlines are here. Check out what's clicking on Foxnews.com. Delta Air Lines is bringing facial recognition technology to domestic flights. Last week, the airline announced that it is launching its digital ID technology for domestic flights out of Detroit Metropolitan Wayne County Airport. Delta previously debuted the technology in 2018 for international flights.
Financially strapped airlines are pushing an idea intended to breathe new life into the travel industry: coronavirus tests that passengers can take before boarding a flight. Several airlines, including United, American, Hawaiian, JetBlue and Alaska, have announced plans to begin offering testing -- either kits mailed to a passenger's home or rapid tests taken at or near airports -- that would allow travelers to enter specific states and countries without having to quarantine. The tests will cost fliers $90 to $250, depending on the airline and the type of test. At Los Angeles International Airport, a design company has announced plans to convert cargo containers into a coronavirus testing facility with an on-site lab that can produce results in about two hours. On Thursday, Tampa International Airport began offering testing to all arriving and departing passengers on a walk-in basis. It's an idea that has gone global, with a trade group for the world's airlines calling on governments to create a testing standard for airline passengers as a way to fight the COVID-19 pandemic instead of using travel restrictions and mandatory quarantines.
For an airline, the crew operating cost is second only to the fuel cost, making crew pairing optimization (CPO) critical for business viability. Its aim is to generate a set of flight sequences (crew pairings) that cover all flights in an airline's schedule, at minimum cost, while satisfying several legality constraints. Being an NP-hard combinatorial optimization problem, CPO is tackled by relaxing the underlying Integer Programming Problem into a Linear Programming Problem and solving the latter through the column generation (CG) technique. However, with the recent expansion of airlines' operations, the curse of dimensionality renders exact CG implementations obsolete, paving the way for heuristic-based CG implementations. Yet the now-prevalent large-scale, complex flight networks involving multiple crew bases and hub-and-spoke sub-networks remain largely unaddressed. To bridge this research gap, this paper proposes a novel CG heuristic, which has enabled the in-house development of an Airline Crew Pairing Optimizer (AirCROP). The efficacy of the heuristic/AirCROP has been: (a) tested on real-world airline data with an unprecedented combination of scale and complexity, marked by over 4200 flights, 15 crew bases, and over a billion pairings, and (b) validated by the research consortium's industrial sponsor. This paper focuses on the proposed CG heuristic, which constitutes the core search mechanism of the optimizer by balancing random exploration (of the pairings' space), exploitation of domain knowledge (on the optimal solution's features), and utilization of past computational effort through archiving. Though this paper has an airline context, the underlying propositions may find applications across different domains, as the proposed CG heuristic can serve as a template for how to utilize domain knowledge to better tackle large-scale combinatorial optimization problems.
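The pricing step at the heart of column generation can be sketched independently of any LP solver: given dual prices from a restricted master LP, a new crew pairing is worth adding only if its reduced cost is negative. The dual values and pairing costs below are invented; this illustrates the CG mechanism, not the paper's AirCROP heuristic.

```python
# Conceptual sketch of CG pricing: a column (crew pairing) can improve the
# restricted master LP only if its reduced cost is negative.

def reduced_cost(pairing_cost, flights, duals):
    """Reduced cost = pairing cost minus the duals of the flights it covers."""
    return pairing_cost - sum(duals[f] for f in flights)

def price_candidates(candidates, duals):
    """Keep only candidate pairings with negative reduced cost."""
    return [
        (flights, cost)
        for flights, cost in candidates
        if reduced_cost(cost, flights, duals) < 0
    ]

# Hypothetical duals per flight leg, from the current master LP solution.
duals = {"F1": 300.0, "F2": 250.0, "F3": 400.0}
candidates = [
    (("F1", "F2"), 500.0),   # reduced cost 500 - 550 = -50 -> enters basis
    (("F2", "F3"), 700.0),   # reduced cost 700 - 650 = +50 -> rejected
]
improving = price_candidates(candidates, duals)
```

A full CG loop alternates between re-solving the master LP over the columns found so far and running such a pricing step (usually a constrained shortest-path search rather than enumeration) until no improving column exists.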
Shared mobility-on-demand services are expanding rapidly in cities around the world. As a prominent example, app-based ridesourcing is becoming an integral part of many urban transportation ecosystems. Despite this centrality, the limited public availability of detailed temporal and spatial data on ridesourcing trips has constrained research on how new services interact with traditional mobility options and how they impact travel in cities. Improved data-sharing agreements are opening unprecedented opportunities for research in this area. This study examines emerging patterns of mobility using recently released City of Chicago public ridesourcing data. The detailed spatio-temporal ridesourcing data are matched with weather, transit, and taxi data to gain a deeper understanding of ridesourcing's role in Chicago's mobility system. The goal is to investigate systematic variations in the patronage of ride-hailing. K-prototypes is utilized to detect user segments, owing to its ability to accept mixed variable data types. An extension of the K-means algorithm, its output is a classification of the data into several clusters called prototypes. Six ridesourcing prototypes are identified and discussed based on significant differences in relation to adverse weather conditions, competition with alternative modes, location and timing of use, and tendency for ridesplitting. The paper discusses implications of the identified clusters related to affordability, equity, and competition with transit.
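The property that distinguishes K-prototypes from K-means is its mixed-type dissimilarity: squared Euclidean distance on numeric features plus a weighted mismatch count on categorical ones. The feature names and values below are invented to illustrate the assignment step.

```python
# Sketch of the K-prototypes mixed dissimilarity and its assignment step.
# Not the study's pipeline; feature values are made up for illustration.

def kproto_distance(x_num, x_cat, p_num, p_cat, gamma=1.0):
    """Dissimilarity between a trip x and a prototype p with mixed features."""
    numeric = sum((a - b) ** 2 for a, b in zip(x_num, p_num))
    categorical = sum(1 for a, b in zip(x_cat, p_cat) if a != b)
    return numeric + gamma * categorical  # gamma weighs categorical mismatch

def assign(x_num, x_cat, prototypes, gamma=1.0):
    """Index of the nearest prototype, as in the K-prototypes assignment step."""
    dists = [
        kproto_distance(x_num, x_cat, p_num, p_cat, gamma)
        for p_num, p_cat in prototypes
    ]
    return dists.index(min(dists))

# Hypothetical trip: (fare_usd, distance_km) numeric; (period, sharing) categorical.
trip = ((12.0, 5.0), ("evening", "solo"))
prototypes = [
    ((10.0, 4.0), ("evening", "solo")),     # short evening solo trips
    ((30.0, 15.0), ("morning", "shared")),  # long morning shared commutes
]
cluster = assign(trip[0], trip[1], prototypes)
```

A full run would iterate between this assignment and a prototype update (means for numeric features, modes for categorical ones) until the partition stabilizes.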
Hogan, Aidan, Blomqvist, Eva, Cochez, Michael, d'Amato, Claudia, de Melo, Gerard, Gutierrez, Claudio, Gayo, José Emilio Labra, Kirrane, Sabrina, Neumaier, Sebastian, Polleres, Axel, Navigli, Roberto, Ngomo, Axel-Cyrille Ngonga, Rashid, Sabbir M., Rula, Anisa, Schmelzeisen, Lukas, Sequeda, Juan, Staab, Steffen, Zimmermann, Antoine
In this paper we provide a comprehensive introduction to knowledge graphs, which have recently garnered significant attention from both industry and academia in scenarios that require exploiting diverse, dynamic, large-scale collections of data. After a general introduction, we motivate and contrast various graph-based data models and query languages that are used for knowledge graphs. We discuss the roles of schema, identity, and context in knowledge graphs. We explain how knowledge can be represented and extracted using a combination of deductive and inductive techniques. We summarise methods for the creation, enrichment, quality assessment, refinement, and publication of knowledge graphs. We provide an overview of prominent open knowledge graphs and enterprise knowledge graphs, their applications, and how they use the aforementioned techniques. We conclude with high-level future research directions for knowledge graphs.
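The graph-based data model at the core of most knowledge graphs can be sketched as a set of (subject, predicate, object) triples queried by pattern matching. The data and vocabulary below are invented, and each variable is assumed to appear at most once per pattern.

```python
# Minimal sketch of a triple store with single-pattern queries. Variables are
# strings starting with "?". Illustrative only; real systems use RDF/SPARQL
# or property-graph query languages.

triples = {
    ("LAX", "locatedIn", "Los Angeles"),
    ("LAX", "type", "Airport"),
    ("Los Angeles", "locatedIn", "California"),
}

def match(pattern, store):
    """All variable bindings for one (s, p, o) pattern against the store."""
    results = []
    for triple in store:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value   # bind the variable to this position
            elif term != value:
                break                   # constant mismatch: reject this triple
        else:
            results.append(binding)
    return results

# "What is located in Los Angeles?"
answers = match(("?x", "locatedIn", "Los Angeles"), triples)
```

Joining several such patterns on shared variables, plus schema and inference layers, is what turns this bare data model into the knowledge graphs the survey describes.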
Well-designed technologies that offer high levels of human control and high levels of computer automation can increase human performance, leading to wider adoption. The Human-Centered Artificial Intelligence (HCAI) framework clarifies how to (1) design for high levels of human control and high levels of computer automation so as to increase human performance, (2) understand the situations in which full human control or full computer control are necessary, and (3) avoid the dangers of excessive human control or excessive computer control. The methods of HCAI are more likely to produce designs that are Reliable, Safe & Trustworthy (RST). Achieving these goals will dramatically increase human performance, while supporting human self-efficacy, mastery, creativity, and responsibility.