Transferable deep reinforcement learning framework for autonomous vehicles with joint radar-data communications
Autonomous Vehicles (AVs) are required to operate safely and efficiently in dynamic environments. To this end, AVs equipped with Joint Radar-Communications (JRC) functions can enhance driving safety by utilizing both radar detection and data communication functions. However, optimizing the perf...
Main Authors: | Nguyen, Quang Hieu; Dinh, Thai Hoang; Niyato, Dusit; Wang, Ping; Kim, Dong In; Yuen, Chau |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Article |
Language: | English |
Published: | 2023 |
Subjects: | Engineering::Computer science and engineering; Autonomous Vehicles; Deep Reinforcement Learning |
Online Access: | https://hdl.handle.net/10356/164292 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-164292 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-164292 2023-01-13T04:49:24Z
Title: Transferable deep reinforcement learning framework for autonomous vehicles with joint radar-data communications
Authors: Nguyen, Quang Hieu; Dinh, Thai Hoang; Niyato, Dusit; Wang, Ping; Kim, Dong In; Yuen, Chau
Affiliation: School of Computer Science and Engineering
Subjects: Engineering::Computer science and engineering; Autonomous Vehicles; Deep Reinforcement Learning
Abstract: Autonomous Vehicles (AVs) are required to operate safely and efficiently in dynamic environments. To this end, AVs equipped with Joint Radar-Communications (JRC) functions can enhance driving safety by utilizing both radar detection and data communication functions. However, optimizing the performance of an AV system with these two different functions under the uncertainty and dynamics of the surrounding environment is very challenging. In this work, we first propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions when selecting JRC operation functions under the dynamics and uncertainty of the surrounding environment. We then develop an effective learning algorithm that leverages recent advances in deep reinforcement learning to find the optimal policy for the AV without requiring any prior information about the surrounding environment. Furthermore, to make our proposed framework more scalable, we develop a Transfer Learning (TL) mechanism that enables the AV to leverage valuable experiences to accelerate the training process when it moves to a new environment. Extensive simulations show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to conventional deep reinforcement learning approaches. With the deep reinforcement learning and transfer learning approaches, our proposed solution can find applications in a wide range of autonomous driving scenarios, from driver assistance to fully automated transportation.
Funders: Agency for Science, Technology and Research (A*STAR); Ministry of Education (MOE); National Research Foundation (NRF)
Funding: This research was supported in part by the Australian Research Council under the DECRA project DE210100651. This research is supported in part by the programme DesCartes - the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme and under its Emerging Areas Research Projects (EARP) Funding Initiative, Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI), the National Research Foundation, Singapore under the AI Singapore Programme (AISG) (AISG2-RP-2020-019), and Singapore Ministry of Education (MOE) Tier 1 (RG16/20). This research was supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIT) under Grant 2021R1A2C2007638 and the MSIT under the ICT Creative Consilience program (IITP-2020-0-01821) supervised by the IITP. This research is supported by A*STAR under its RIE2020 Advanced Manufacturing and Engineering (AME) Industry Alignment Fund – Pre Positioning (IAF-PP) (Grant No. A19D6a0053).
Record dates: 2023-01-13T04:49:24Z; 2023-01-13T04:49:24Z; 2022
Type: Journal Article
Citation: Nguyen, Q. H., Dinh, T. H., Niyato, D., Wang, P., Kim, D. I. & Yuen, C. (2022). Transferable deep reinforcement learning framework for autonomous vehicles with joint radar-data communications. IEEE Transactions on Communications, 70(8), 5164-5180.
DOI: 10.1109/TCOMM.2022.3182034 (https://dx.doi.org/10.1109/TCOMM.2022.3182034)
ISSN: 0090-6778
Handle: https://hdl.handle.net/10356/164292
Scopus ID: 2-s2.0-85132776643
Issue: 8; Volume: 70; Pages: 5164-5180
Language: en
Grant numbers: AISG2-RP-2020-019; RG16/20; A19D6a0053
Journal: IEEE Transactions on Communications
Rights: © 2022 IEEE. All rights reserved. |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Engineering::Computer science and engineering; Autonomous Vehicles; Deep Reinforcement Learning |
description |
Autonomous Vehicles (AVs) are required to operate safely and efficiently in dynamic environments. To this end, AVs equipped with Joint Radar-Communications (JRC) functions can enhance driving safety by utilizing both radar detection and data communication functions. However, optimizing the performance of an AV system with these two different functions under the uncertainty and dynamics of the surrounding environment is very challenging. In this work, we first propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions when selecting JRC operation functions under the dynamics and uncertainty of the surrounding environment. We then develop an effective learning algorithm that leverages recent advances in deep reinforcement learning to find the optimal policy for the AV without requiring any prior information about the surrounding environment. Furthermore, to make our proposed framework more scalable, we develop a Transfer Learning (TL) mechanism that enables the AV to leverage valuable experiences to accelerate the training process when it moves to a new environment. Extensive simulations show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to conventional deep reinforcement learning approaches. With the deep reinforcement learning and transfer learning approaches, our proposed solution can find applications in a wide range of autonomous driving scenarios, from driver assistance to fully automated transportation. |
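The abstract describes three ingredients: an MDP whose actions are the JRC operation functions, a deep reinforcement learning agent that learns the selection policy, and a transfer-learning step that reuses experience in a new environment. The following minimal PyTorch sketch only illustrates that general pipeline; it is not the authors' implementation, and the `QNetwork`, `JRCAgent`, and `transfer_weights` names, the state features, and the reward values are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's implementation): a deep Q-learning agent that
# chooses between the AV's two JRC operation functions each step, plus a simple
# transfer-learning step that reuses trained weights in a new environment.
import random
import torch
import torch.nn as nn
import torch.optim as optim

RADAR, COMM = 0, 1  # the two JRC operation functions the agent can select


class QNetwork(nn.Module):
    """Small MLP mapping an environment-state vector to Q-values for the two actions."""

    def __init__(self, state_dim: int = 4, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class JRCAgent:
    """Deep Q-learning agent for selecting JRC functions (illustrative only)."""

    def __init__(self, state_dim: int = 4, gamma: float = 0.99, lr: float = 1e-3):
        self.q_net = QNetwork(state_dim)
        self.optimizer = optim.Adam(self.q_net.parameters(), lr=lr)
        self.gamma = gamma

    def act(self, state, epsilon: float = 0.1) -> int:
        # Epsilon-greedy choice between radar detection and data communication.
        if random.random() < epsilon:
            return random.choice([RADAR, COMM])
        with torch.no_grad():
            q_values = self.q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q_values.argmax().item())

    def update(self, state, action, reward, next_state, done: bool) -> float:
        # One-step temporal-difference update of the Q-network.
        state = torch.as_tensor(state, dtype=torch.float32)
        next_state = torch.as_tensor(next_state, dtype=torch.float32)
        q_sa = self.q_net(state)[action]
        with torch.no_grad():
            target = reward + (0.0 if done else self.gamma * self.q_net(next_state).max().item())
        loss = nn.functional.mse_loss(q_sa, torch.tensor(target, dtype=torch.float32))
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return float(loss.item())


def transfer_weights(source: JRCAgent, target: JRCAgent) -> None:
    """Initialize a new agent from a trained one, so learning in a new
    environment starts from prior experience rather than from scratch."""
    target.q_net.load_state_dict(source.q_net.state_dict())


if __name__ == "__main__":
    # Hypothetical usage: train in environment A, then warm-start an agent for environment B.
    agent_a = JRCAgent()
    state = [0.2, 0.5, 0.1, 0.9]               # placeholder state features
    action = agent_a.act(state)
    next_state = [0.3, 0.4, 0.2, 0.8]
    reward = 1.0 if action == RADAR else 0.5   # placeholder reward signal
    agent_a.update(state, action, reward, next_state, done=False)

    agent_b = JRCAgent()                       # agent for the new environment
    transfer_weights(agent_a, agent_b)         # transfer learning: reuse trained weights
```

A full DQN would normally add a replay buffer and a target network; they are omitted here to keep the sketch short.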
author2 |
School of Computer Science and Engineering |
format |
Article |
author |
Nguyen, Quang Hieu; Dinh, Thai Hoang; Niyato, Dusit; Wang, Ping; Kim, Dong In; Yuen, Chau |
author_sort |
Nguyen, Quang Hieu |
title |
Transferable deep reinforcement learning framework for autonomous vehicles with joint radar-data communications |
publishDate |
2023 |
url |
https://hdl.handle.net/10356/164292 |
_version_ |
1756370579911344128 |