Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties

With the continuous growth in air transportation demand, air traffic controllers will have to handle increased traffic and, consequently, more potential conflicts. This gives rise to the need for conflict resolution advisory tools that can perform well in high-density traffic scenarios given a no...


Bibliographic Details
Main Authors: Pham, Duc-Thinh, Tran, Phu N., Alam, Sameer, Duong, Vu, Delahaye, Daniel
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2022
Subjects:
Online Access: https://hdl.handle.net/10356/153396
Institution: Nanyang Technological University
id sg-ntu-dr.10356-153396
record_format dspace
spelling sg-ntu-dr.10356-1533962022-01-08T20:10:21Z Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties Pham, Duc-Thinh Tran, Phu N. Alam, Sameer Duong, Vu Delahaye, Daniel School of Mechanical and Aerospace Engineering Air Traffic Management Research Institute Engineering::Aeronautical engineering::Aviation Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence Reinforcement Learning Air Traffic Control With the continuous growth in air transportation demand, air traffic controllers will have to handle increased traffic and, consequently, more potential conflicts. This gives rise to the need for conflict resolution advisory tools that can perform well in high-density traffic scenarios given a noisy environment. Unlike model-based approaches, learning-based approaches can take advantage of historical traffic data and flexibly encapsulate environmental uncertainty. In this study, we propose a reinforcement learning approach that is capable of resolving conflicts, in the presence of traffic and inherent uncertainties in conflict resolution maneuvers, without the need for prior knowledge about a set of rules mapping from conflict scenarios to expected actions. The conflict resolution task is formulated as a decision-making problem in a large and complex action space. The research also includes the development of a learning environment, a scenario state representation, a reward function, and a reinforcement learning algorithm inspired by Q-learning and the Deep Deterministic Policy Gradient algorithm. The proposed algorithm, with its two-stage decision-making process, is used to train an agent that can serve as an advisory tool for air traffic controllers in resolving air traffic conflicts, learning from historical data and evolving over time.
Our findings show that the proposed model gives the agent the capability to suggest high-quality conflict resolutions under different environmental conditions, outperforming two baseline algorithms. The trained model performs well under a low uncertainty level (success rate >= 95%) and a medium uncertainty level (success rate >= 87%) with high traffic density. The impact of different factors, such as the environment's uncertainty and traffic density, on learning performance is analyzed and discussed in detail. The environment's uncertainty is the factor that most strongly affects performance. Moreover, the combination of high-density traffic and high uncertainty remains a challenge for any learning model. Civil Aviation Authority of Singapore (CAAS) National Research Foundation (NRF) Accepted version This research/project is supported by the National Research Foundation, Singapore, and the Civil Aviation Authority of Singapore, under the Aviation Transformation Programme. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore, or the Civil Aviation Authority of Singapore. 2022-01-03T05:01:32Z 2022-01-03T05:01:32Z 2022 Journal Article Pham, D., Tran, P. N., Alam, S., Duong, V. & Delahaye, D. (2022). Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties. Transportation Research Part C: Emerging Technologies, 135, 103463-. https://dx.doi.org/10.1016/j.trc.2021.103463 0968-090X https://hdl.handle.net/10356/153396 10.1016/j.trc.2021.103463 135 103463 en Transportation Research Part C: Emerging Technologies © 2021 Elsevier Ltd. All rights reserved. This paper was published in Transportation Research Part C: Emerging Technologies and is made available with permission of Elsevier Ltd. application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Aeronautical engineering::Aviation
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Reinforcement Learning
Air Traffic Control
spellingShingle Engineering::Aeronautical engineering::Aviation
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Reinforcement Learning
Air Traffic Control
Pham, Duc-Thinh
Tran, Phu N.
Alam, Sameer
Duong, Vu
Delahaye, Daniel
Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties
description With the continuous growth in air transportation demand, air traffic controllers will have to handle increased traffic and, consequently, more potential conflicts. This gives rise to the need for conflict resolution advisory tools that can perform well in high-density traffic scenarios given a noisy environment. Unlike model-based approaches, learning-based approaches can take advantage of historical traffic data and flexibly encapsulate environmental uncertainty. In this study, we propose a reinforcement learning approach that is capable of resolving conflicts, in the presence of traffic and inherent uncertainties in conflict resolution maneuvers, without the need for prior knowledge about a set of rules mapping from conflict scenarios to expected actions. The conflict resolution task is formulated as a decision-making problem in a large and complex action space. The research also includes the development of a learning environment, a scenario state representation, a reward function, and a reinforcement learning algorithm inspired by Q-learning and the Deep Deterministic Policy Gradient algorithm. The proposed algorithm, with its two-stage decision-making process, is used to train an agent that can serve as an advisory tool for air traffic controllers in resolving air traffic conflicts, learning from historical data and evolving over time. Our findings show that the proposed model gives the agent the capability to suggest high-quality conflict resolutions under different environmental conditions, outperforming two baseline algorithms. The trained model performs well under a low uncertainty level (success rate >= 95%) and a medium uncertainty level (success rate >= 87%) with high traffic density. The impact of different factors, such as the environment's uncertainty and traffic density, on learning performance is analyzed and discussed in detail. The environment's uncertainty is the factor that most strongly affects performance.
Moreover, the combination of high-density traffic and high uncertainty remains a challenge for any learning model.
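The description above mentions a two-stage decision-making process drawing on Q-learning and Deep Deterministic Policy Gradient ideas. The toy sketch below is an illustration only, not the paper's implementation: a discrete first stage chooses a maneuver direction with a bandit-style Q update, and a continuous second stage samples a path-stretch magnitude. The reward function, action names, and all numeric values here are hypothetical.

```python
import random

# Illustrative two-stage decision sketch (hypothetical model, not the
# paper's algorithm). Stage 1: discrete direction choice, epsilon-greedy
# over a Q-table. Stage 2: continuous maneuver magnitude.

random.seed(0)

ACTIONS = ["left", "right"]          # stage-1 discrete choices
q = {a: 0.0 for a in ACTIONS}        # single-state Q-table for brevity
alpha, epsilon = 0.1, 0.2            # learning rate, exploration rate

def reward(direction, magnitude):
    # Hypothetical reward: a "right" stretch of ~15 degrees resolves the
    # toy conflict best; "left" incurs an extra penalty.
    target = 15.0 if direction == "right" else 40.0
    return -abs(magnitude - target) - (0.0 if direction == "right" else 10.0)

for episode in range(2000):
    # Stage 1: epsilon-greedy discrete selection.
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    # Stage 2: continuous magnitude (degrees of path stretch), here sampled
    # uniformly; a DDPG-style actor would output this deterministically.
    magnitude = random.uniform(0.0, 45.0)
    r = reward(a, magnitude)
    q[a] += alpha * (r - q[a])       # bandit-style Q update

print("learned stage-1 choice:", max(q, key=q.get))
```

In the toy setup, the Q-values converge toward the expected rewards of each direction, so the agent settles on the better stage-1 choice; a full implementation would condition both stages on a traffic-scenario state representation rather than a single state.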
author2 School of Mechanical and Aerospace Engineering
author_facet School of Mechanical and Aerospace Engineering
Pham, Duc-Thinh
Tran, Phu N.
Alam, Sameer
Duong, Vu
Delahaye, Daniel
format Article
author Pham, Duc-Thinh
Tran, Phu N.
Alam, Sameer
Duong, Vu
Delahaye, Daniel
author_sort Pham, Duc-Thinh
title Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties
title_short Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties
title_full Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties
title_fullStr Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties
title_full_unstemmed Deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties
title_sort deep reinforcement learning based path stretch vector resolution in dense traffic with uncertainties
publishDate 2022
url https://hdl.handle.net/10356/153396
_version_ 1722355338699603968