Multi-agent deep reinforcement learning for mix-mode runway sequencing

In mixed-mode operation, arrivals and departures are allowed to land and depart on the same runway. An appropriate sequencing strategy for arrivals and departures can boost runway throughput significantly; at the same time, safety remains the most crucial requirement of the operation. Therefore, to assist air traffic controllers in making departure and arrival decisions that use runway capacity efficiently while keeping operations safe, this paper proposes a multi-agent deep reinforcement learning approach using Multi-Agent Deep Deterministic Policy Gradient (MADDPG) to train two agents simultaneously: a departure agent and an arrival agent. The departure agent makes slotting decisions for departures, while the arrival agent determines the time-delay or spacing decision on the arrival stream. A data-driven simulation environment built from Singapore Changi Airport data supports the learning process. In addition, a random sampling technique is introduced to reduce redundant samples and increase off-policy sample efficiency. The impact of different reward functions on runway throughput is also investigated, and two specific models, 'arrival priority' and 'departure priority', are selected for further analysis. As a result, compared with an ad-hoc model in identical environments, the proposed approach increases runway throughput significantly, with up to 12.8% additional departure releases (5.3% overall) while safety separations are maintained.

Bibliographic Details
Main Authors: Shi, Limin, Pham, Duc-Thinh, Alam, Sameer
Other Authors: School of Mechanical and Aerospace Engineering
Format: Conference or Workshop Item
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Simulation and modeling; Reinforcement Learning; Runway Sequencing; Airport Optimization; Multi-Agent Approach
Online Access:https://hdl.handle.net/10356/162775
Institution: Nanyang Technological University
Conference: 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)
Other Authors: Air Traffic Management Research Institute
Citation: Shi, L., Pham, D. & Alam, S. (2022). Multi-agent deep reinforcement learning for mix-mode runway sequencing. 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), 586-593. https://dx.doi.org/10.1109/ITSC55140.2022.9922221
DOI: 10.1109/ITSC55140.2022.9922221
Version: Submitted/Accepted version
Funding: This research is supported by the National Research Foundation, Singapore, and the Civil Aviation Authority of Singapore, under the Aviation Transformation Programme. Sponsors: Civil Aviation Authority of Singapore (CAAS); National Research Foundation (NRF).
Rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/ITSC55140.2022.9922221.
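To make the abstract's setup concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of its two ingredients: a departure agent and an arrival agent acting on a shared runway state, and a shared replay buffer sampled at random to reduce redundant, correlated samples. All class names, weights, and action ranges here are hypothetical; the paper's actual MADDPG networks and random sampling scheme are not reproduced.

```python
import random
from collections import deque

class ReplayBuffer:
    """Shared experience buffer for both agents (hypothetical sketch)."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random sampling breaks up correlated consecutive
        # transitions and lets off-policy learning reuse old experience.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

class LinearActor:
    """Toy deterministic policy: a clipped linear map of the state,
    standing in for an MADDPG actor network."""
    def __init__(self, weight, low, high):
        self.weight, self.low, self.high = weight, low, high

    def act(self, state):
        a = sum(w * s for w, s in zip(self.weight, state))
        return max(self.low, min(self.high, a))

# Departure agent picks a departure slot; arrival agent picks a delay
# (seconds) to space the arrival stream. Weights/ranges are made up.
departure_agent = LinearActor(weight=[0.5, -0.2], low=0.0, high=5.0)
arrival_agent = LinearActor(weight=[0.1, 0.3], low=0.0, high=120.0)

buffer = ReplayBuffer()
state = [2.0, 1.0]  # e.g. departure and arrival queue lengths
joint_action = (departure_agent.act(state), arrival_agent.act(state))
buffer.push((state, joint_action, 0.0, state))  # (s, a, reward, s')
batch = buffer.sample(32)
```

In a full MADDPG setup each actor would be a neural network trained against a centralized critic that sees both agents' actions; this sketch only shows the decentralized-execution shape of the two policies and the shared buffer.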