Computational modelling and analysis of impeded and unimpeded taxi-out time at congested airports
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/167995
Institution: Nanyang Technological University
Summary: In this study, a Deep Reinforcement Learning (DRL) approach is proposed to optimize the pre-departure sequencing of aircraft at airports, with the objective of minimizing taxi delays and queuing time. The research focuses on two main components: synthetic schedule generation and agent pre-training with supervised learning.
Synthetic schedules are generated by combining machine learning and simulation techniques, allowing for more realistic and diverse training scenarios that enhance the agent's generalization capabilities. These schedules effectively incorporate key features such as inter-departure times between aircraft to mimic real-world traffic scenarios.
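The schedule-generation idea above can be sketched minimally. The study's actual generator combines machine learning and simulation and is not reproduced here; this assumes, purely for illustration, exponentially distributed inter-departure gaps (a common simplification for departure demand) accumulated into pushback times:

```python
import random
from itertools import accumulate

def generate_synthetic_schedule(n_flights, mean_gap_s=120.0, seed=0):
    """Sketch: sample inter-departure gaps and accumulate them into
    pushback times (seconds from the start of the traffic window)."""
    rng = random.Random(seed)
    # Hypothetical modelling choice: exponential inter-departure gaps.
    # The paper's generator learns these features from real traffic data.
    gaps = [rng.expovariate(1.0 / mean_gap_s) for _ in range(n_flights)]
    return list(accumulate(gaps))

schedule = generate_synthetic_schedule(5)
# One strictly increasing pushback time per flight.
```

Varying the seed and mean gap would yield the diverse training scenarios the summary describes, at the cost of realism compared with a learned generator.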
In the agent pre-training component, three deep learning models - Transformer, Convolutional ResNet, and Linear ResNet - are pretrained using supervised learning to increase training efficiency. The performance of these models is assessed using root mean squared error (RMSE) and mean absolute error (MAE). The Transformer demonstrates superior performance, achieving an RMSE of 0.56923 and an MAE of 0.34372, outperforming both ResNet models, and is therefore selected for the reinforcement learning task in the next phase of the research.
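For reference, the two evaluation metrics named above are standard and can be stated directly. The data below is a toy illustration, not the study's dataset:

```python
import math

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large errors more heavily.
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the prediction error.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy targets and predictions (illustrative only):
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.5, 1.5, 3.0, 5.0]
print(rmse(y_true, y_pred))  # ≈ 0.6124
print(mae(y_true, y_pred))   # 0.5
```

Because RMSE squares each error before averaging, it weights outliers more than MAE; reporting both, as the study does, separates average accuracy from sensitivity to large misses.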
The use of synthetic schedules and pre-training with supervised learning makes the DRL approach more practical for real-world implementation.