Reinforcement learning for collaborative multi-airport slot re-allocation under reduced capacity scenarios

Bibliographic Details
Main Authors: Nguyen-Duy, Anh; Pham, Duc-Thinh
Other Authors: School of Mechanical and Aerospace Engineering
Format: Conference or Workshop Item
Language: English
Published: 2025
Subjects:
Online Access:https://iwac2024.org/docs/IWAC2024_ProgramBooklet.pdf
https://hdl.handle.net/10356/182324
Institution: Nanyang Technological University
Description
Summary: Airport Collaborative Decision Making (A-CDM) is currently implemented to foster collaboration for efficient airport slot allocation. In the ASEAN region, where a central decision-making authority is not available, each airport retains autonomy over its own resources, which leads to different decision-making policies. An effective collaborative airport slot allocation approach therefore needs to demonstrate that it can collaborate with different slot allocation policies. Reinforcement Learning, a learning-based approach, can exploit interactions between airports to capture the underlying policies of other airports. In this paper, we consider a multi-airport system with different slot allocation policies, consisting of a Reinforcement Learning airport agent interacting with fixed-policy airport agents. We validate whether the Reinforcement Learning agent can use these interactions to learn to reallocate slots efficiently under reduced capacity scenarios. Validation is performed on the Hong Kong-Singapore-Bangkok hub with 2018 OAG data. The performance of the Reinforcement Learning agent is compared with a Nearest Heuristic, which assigns delays based on the nearest available slots. Results show that the Reinforcement Learning agent performs significantly better than the Nearest Heuristic under a heavily reduced capacity scenario, with total delays of 84 and 107, respectively. For a moderately reduced capacity scenario, the Reinforcement Learning agent closely matches the performance of the Nearest Heuristic, with total delays of 45 and 41, respectively.
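
The Nearest Heuristic baseline mentioned in the abstract is simple enough to sketch. The Python snippet below is an illustrative reconstruction rather than the authors' implementation: it assumes slots are consecutive integer indices, that a displaced flight may only be pushed to a later slot, and that the reported metric is the total displacement in slot units; the function and variable names are hypothetical.

def nearest_heuristic(requested_slots, capacity):
    """Assign each flight to the nearest later slot with remaining capacity.

    requested_slots -- originally scheduled slot index for each flight
    capacity        -- remaining capacity of each slot under the reduced-capacity scenario
    Returns the list of (requested, assigned) pairs and the total delay in slot units.
    """
    remaining = list(capacity)
    assignment = []
    total_delay = 0
    for slot in sorted(requested_slots):               # handle flights in schedule order
        assigned = None
        for candidate in range(slot, len(remaining)):  # scan forward for the nearest free slot
            if remaining[candidate] > 0:
                assigned = candidate
                break
        if assigned is None:
            raise ValueError("not enough capacity to accommodate all flights")
        remaining[assigned] -= 1
        assignment.append((slot, assigned))
        total_delay += assigned - slot
    return assignment, total_delay

# Toy example: six flights requesting slots in a schedule whose per-slot
# capacity has been reduced to one movement each.
flights = [0, 0, 1, 2, 2, 3]
reduced_capacity = [1, 1, 1, 1, 1, 1]
print(nearest_heuristic(flights, reduced_capacity))    # -> assignment list and a total delay of 7

In contrast to this fixed rule, the paper's Reinforcement Learning agent learns its reallocation policy through repeated interaction with the fixed-policy airport agents; the snippet is only meant to make the baseline against which it is compared concrete.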