A deep reinforcement learning approach for runway configuration management: a case study for Philadelphia International Airport
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/180894
Institution: Nanyang Technological University
Summary: Airports with multiple runways can operate in diverse runway configurations, each with its own setup. Presently, Air Traffic Controllers (ATCOs) rely heavily on operational experience and predefined procedures ("playbooks") to plan the utilization of runway configurations. These "playbooks", however, lack the capacity to comprehensively address the intricacies of a dynamic runway system under increasing weather uncertainty. This study introduces methodologies for addressing the Runway Configuration Management (RCM) problem, with the objective of selecting the optimal runway configuration to maximize overall runway system capacity. A new approach is presented, employing Deep Reinforcement Learning (Deep RL) techniques that leverage real-world data from operations at Philadelphia International Airport (PHL). This approach generates a day-long schedule of optimized runway configurations over a rolling window horizon extending to the end of the day, updated every 30 min. Additionally, a computational model is introduced to gauge the capacity impact of transitions between runway configurations, which feeds back into the generation of optimized runway configurations. The Deep RL model demonstrates an approximately 30% reduction in the number of delayed flights when applied to scenarios not encountered during the model's training phase. Moreover, the Deep RL model reduces the number of delayed arrivals by 27% and delayed departures by 33% compared to a baseline configuration.
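The rolling-window scheme described in the summary can be sketched as a decision loop: every 30 minutes, choose the configuration that maximizes expected capacity net of the cost of transitioning away from the current configuration. The sketch below uses a greedy stand-in for the paper's Deep RL policy; the configuration names, capacity numbers, and constant transition penalty are illustrative placeholders, not values from the study.

```python
# Minimal sketch of the rolling-window runway configuration loop described
# in the abstract. A greedy rule stands in for the trained Deep RL policy;
# configurations, capacities, and the transition penalty are hypothetical.

CONFIGS = ["27R_27L", "9L_9R", "17_35"]  # placeholder configuration labels

def capacity(config, demand):
    """Toy capacity estimate (flights per 30-min slot) under a demand level."""
    base = {"27R_27L": 40, "9L_9R": 36, "17_35": 28}[config]
    return min(base, demand)

def transition_penalty(prev, new):
    """Capacity lost while switching configurations (illustrative constant)."""
    return 0 if prev == new else 6

def plan_day(demand_profile, initial_config):
    """One configuration decision per 30-min slot; 48 slots cover a day."""
    schedule, current = [], initial_config
    for demand in demand_profile:
        best = max(
            CONFIGS,
            key=lambda c: capacity(c, demand) - transition_penalty(current, c),
        )
        schedule.append(best)
        current = best
    return schedule
```

In this toy setup, the penalty makes the planner stay on its current configuration when demand is low and switch only when the capacity gain outweighs the transition cost, which mirrors the role of the transition-capacity model described in the summary.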