A deep reinforcement learning approach for airport departure metering under spatial-temporal airside interactions


Bibliographic Details
Main Authors: Ali, Hasnain, Pham, Duc-Thinh, Schultz, Michael, Alam, Sameer
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2022
Online Access:https://hdl.handle.net/10356/161934
Institution: Nanyang Technological University
Summary: Airport taxi delays adversely affect airports and airlines around the world, leading to airside congestion, increased Air Traffic Controller/Pilot workload, and adverse environmental impact due to excessive fuel burn. Airport Departure Metering (DM) is an effective approach to contain taxi delays by controlling departure pushback timings. The key idea behind DM is to transfer aircraft waiting time from taxiways to gates. State-of-the-art DM methods use model-based control policies that rely on airside departure modeling to obtain simplified analytical equations. Consequently, these models fail to capture non-stationarity in airside operations, leading to poor performance of the control policies under uncertainty. This work proposes a model-free, learning-based DM approach using Deep Reinforcement Learning (DRL) to reduce taxi delays while meeting flight schedule constraints. The paper casts the DM problem in a Markov Decision Process framework and develops a representative airport-airside simulator to simulate airside operations and evaluate the learnt DM policy. For effective state representation, this work introduces taxiway hotspot features to account for the spatial-temporal evolution of airside congestion levels, which significantly improves the convergence rate of the DM policy during training. The performance of the learnt policy is evaluated under different traffic densities, with a reduction of approximately 44% in taxi-out delays in medium-density traffic scenarios, corresponding to a 2-minute saving in taxi-out time per aircraft. Furthermore, benchmarking DRL against an evolutionary method and another state-of-the-art simulation-based heuristic demonstrates the superior performance of our method, especially in high-traffic-density scenarios. With increased traffic density, the taxi-time savings achieved by the learnt DM policy increase without a significant decrease in runway throughput. Results on a typical day of simulated operations at Singapore Changi Airport demonstrate that DRL can learn an effective DM policy to contain congestion on the taxiways, reduce total fuel consumption by approximately 22%, and better manage airside traffic.
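
The record does not include code. As an illustration only, the Python sketch below shows how a departure-metering problem of the kind described in the summary might be cast as a Markov Decision Process: the state combines a per-aircraft feature with taxiway hotspot congestion levels (standing in for the paper's spatial-temporal hotspot features), and the action is a hold-or-release pushback decision. All names (DepartureMeteringEnv, congestion, the reward shape) are hypothetical assumptions for exposition, not the authors' actual formulation or simulator.

import random
from dataclasses import dataclass, field

# Hypothetical sketch of the departure-metering MDP described in the
# abstract. The reward and features are illustrative assumptions, not
# the authors' formulation.
@dataclass
class DepartureMeteringEnv:
    """Toy airside environment: one decision per aircraft ready for pushback."""
    n_hotspots: int = 4                        # taxiway hotspots tracked in the state
    congestion: list = field(default_factory=list)

    def reset(self):
        # State = a per-aircraft feature + hotspot congestion levels.
        self.congestion = [random.random() for _ in range(self.n_hotspots)]
        return self._state()

    def _state(self):
        scheduled_delay = random.random()      # normalised schedule pressure
        return [scheduled_delay] + self.congestion

    def step(self, action):
        # action 0 = hold at gate (meter), action 1 = release for pushback.
        if action == 1:
            taxi_delay = sum(self.congestion) / self.n_hotspots
            reward = -taxi_delay               # penalise taxiing into congestion
        else:
            reward = -0.1                      # small penalty for gate holding
        # Congestion evolves over time (spatial-temporal dynamics, simplified).
        self.congestion = [min(1.0, max(0.0, c + random.uniform(-0.2, 0.2)))
                           for c in self.congestion]
        done = random.random() < 0.05          # episode ends stochastically
        return self._state(), reward, done

# A DRL agent (e.g. DQN or PPO) would be trained against env.step();
# here a random policy stands in for the learnt metering policy.
env = DepartureMeteringEnv()
state, done, total = env.reset(), False, 0.0
while not done:
    action = random.choice([0, 1])             # placeholder for policy(state)
    state, reward, done = env.step(action)
    total += reward
print(f"episode return: {total:.2f}")

In this sketch the hotspot congestion levels play the role the summary assigns to the taxiway hotspot features: they let the policy anticipate where released aircraft will queue, which is what makes gate holding (metering) pay off.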