Uncertainty-aware model-based reinforcement learning: methodology and application in autonomous driving

To further improve the learning efficiency and performance of reinforcement learning (RL), this paper proposes a novel uncertainty-aware model-based RL method and validates it in autonomous driving scenarios. First, an action-conditioned ensemble model capable of assessing its own uncertainty is established as the environment model. Then, an uncertainty-aware model-based RL method is developed based on an adaptive truncation approach, which provides virtual interactions between the agent and the environment model and improves the RL agent's learning efficiency and performance. The proposed method is implemented in end-to-end autonomous vehicle control tasks and is validated against state-of-the-art methods under various driving scenarios. Validation results suggest that the proposed method outperforms the model-free RL approach in learning efficiency, and the model-based approach in both efficiency and performance, demonstrating its feasibility and effectiveness.
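The paper's code is not part of this record; the sketch below only illustrates, under stated assumptions, the core idea summarized in the abstract: an ensemble of action-conditioned dynamics models whose member disagreement serves as an uncertainty estimate, with virtual rollouts truncated adaptively once that disagreement grows too large. The names (EnsembleDynamics, truncated_rollout), network sizes, disagreement metric, and threshold are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only: ensemble dynamics model with disagreement-based uncertainty
# and adaptively truncated virtual rollouts. Not the paper's actual implementation.
import torch
import torch.nn as nn


class EnsembleDynamics(nn.Module):
    """Ensemble of MLPs predicting the next state from (state, action)."""

    def __init__(self, state_dim, action_dim, n_members=5, hidden=64):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, state_dim),
            )
            for _ in range(n_members)
        )

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        preds = torch.stack([m(x) for m in self.members])  # (n_members, batch, state_dim)
        mean = preds.mean(dim=0)
        # Disagreement between ensemble members as a proxy for model uncertainty.
        uncertainty = preds.std(dim=0).mean(dim=-1)         # (batch,)
        return mean, uncertainty


@torch.no_grad()  # virtual data collection needs no gradients
def truncated_rollout(model, policy, init_state, max_horizon=20, threshold=0.5):
    """Roll out virtual transitions; stop early once model uncertainty is too high."""
    state, transitions = init_state, []
    for _ in range(max_horizon):
        action = policy(state)
        next_state, uncertainty = model(state, action)
        if uncertainty.item() > threshold:   # adaptive truncation point (batch size 1 assumed)
            break                            # stop trusting the learned model
        transitions.append((state, action, next_state))
        state = next_state
    return transitions


if __name__ == "__main__":
    torch.manual_seed(0)
    model = EnsembleDynamics(state_dim=4, action_dim=2)
    policy = lambda s: torch.tanh(torch.randn(1, 2))        # placeholder policy
    rollout = truncated_rollout(model, policy, torch.zeros(1, 4))
    print(f"virtual rollout length: {len(rollout)}")
```

Cutting the rollout where the ensemble members start to disagree keeps the agent from training on transitions the learned model can no longer predict reliably, which is consistent with how the abstract motivates the adaptive truncation approach.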

Bibliographic Details
Main Authors: Wu, Jingda; Huang, Zhiyu; Lv, Chen
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2024
Subjects: Engineering; Model-based reinforcement learning; Uncertainty awareness
Online Access:https://hdl.handle.net/10356/178357
Institution: Nanyang Technological University
id sg-ntu-dr.10356-178357
record_format dspace
citation Wu, J., Huang, Z. & Lv, C. (2022). Uncertainty-aware model-based reinforcement learning: methodology and application in autonomous driving. IEEE Transactions on Intelligent Vehicles, 8(1), 194-203. https://dx.doi.org/10.1109/TIV.2022.3185159
journal IEEE Transactions on Intelligent Vehicles
volume 8
issue 1
pages 194-203
issn 2379-8858
doi 10.1109/TIV.2022.3185159
handle https://hdl.handle.net/10356/178357
type Journal Article
date_issued 2022
date_available 2024-06-13T06:07:18Z
version Submitted/Accepted version
funders Agency for Science, Technology and Research (A*STAR); Nanyang Technological University
grants A2084c0156; NTU-SUG
funding_note This work was supported in part by the Agency for Science, Technology and Research (A*STAR) under the Advanced Manufacturing and Engineering (AME) Young Individual Research Grant A2084c0156, and in part by the Start-Up Grant, Nanyang Technological University, Singapore.
rights © 2022 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/TIV.2022.3185159.
file_format application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering
Model-based reinforcement learning
Uncertainty awareness
description To further improve the learning efficiency and performance of reinforcement learning (RL), this paper proposes a novel uncertainty-aware model-based RL method and validates it in autonomous driving scenarios. First, an action-conditioned ensemble model capable of assessing its own uncertainty is established as the environment model. Then, an uncertainty-aware model-based RL method is developed based on an adaptive truncation approach, which provides virtual interactions between the agent and the environment model and improves the RL agent's learning efficiency and performance. The proposed method is implemented in end-to-end autonomous vehicle control tasks and is validated against state-of-the-art methods under various driving scenarios. Validation results suggest that the proposed method outperforms the model-free RL approach in learning efficiency, and the model-based approach in both efficiency and performance, demonstrating its feasibility and effectiveness.
author2 School of Mechanical and Aerospace Engineering
format Article
author Wu, Jingda
Huang, Zhiyu
Lv, Chen
title Uncertainty-aware model-based reinforcement learning: methodology and application in autonomous driving
publishDate 2024
url https://hdl.handle.net/10356/178357