Towards safe autonomous driving: decision making with observation-robust reinforcement learning

Most real-world situations involve unavoidable measurement noise or perception errors, which can lead to unsafe decision making or even casualties in autonomous driving. To address these issues and further improve safety, automated driving systems must be capable of handling perception uncertainties. This paper presents an observation-robust reinforcement learning approach against observational uncertainties to realize safe decision making for autonomous vehicles. Specifically, an adversarial agent is trained online to generate optimal adversarial attacks on observations, attempting to amplify the average variation distance of the perturbed policies. In addition, an observation-robust actor-critic approach is developed to enable the agent to learn optimal policies while ensuring that policy changes induced by optimal adversarial attacks remain within a certain bound. Lastly, the safe decision-making scheme is evaluated on a lane-change task in complex highway traffic scenarios. The results show that the developed approach preserves autonomous driving performance as well as policy robustness against adversarial attacks on observations.
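
The abstract describes two coupled ideas: an adversarial agent that perturbs observations to maximize the divergence between the clean and perturbed policies, and an actor update that penalizes policy changes under that attack. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: it assumes a PyTorch diagonal-Gaussian policy, uses a gradient-ascent (PGD-style) attack on the observation, and substitutes KL divergence as a tractable surrogate for the average total variation distance mentioned in the abstract. All names (GaussianPolicy, adversarial_perturbation, robust_actor_loss) and hyperparameters (eps, beta, bound) are illustrative.

```python
# Hypothetical sketch only -- not the paper's code. Assumes PyTorch and a
# Gaussian policy; KL divergence stands in for total variation distance.
import torch
import torch.nn as nn


class GaussianPolicy(nn.Module):
    """Toy diagonal-Gaussian policy mapping observations to action distributions."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, act_dim)
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())


def adversarial_perturbation(policy, obs, eps=0.05, steps=5, lr=0.01):
    """PGD-style attack: find a bounded observation perturbation that maximizes
    the divergence between the clean policy and the perturbed policy."""
    with torch.no_grad():
        clean = policy.dist(obs)                     # fixed reference distribution
    delta = torch.zeros_like(obs, requires_grad=True)
    for _ in range(steps):
        perturbed = policy.dist(obs + delta)
        div = torch.distributions.kl_divergence(clean, perturbed).sum()
        (grad,) = torch.autograd.grad(div, delta)
        with torch.no_grad():
            delta += lr * grad.sign()                # ascend the divergence
            delta.clamp_(-eps, eps)                  # keep the attack bounded
    return delta.detach()


def robust_actor_loss(policy, obs, actions, advantages, beta=1.0, bound=0.1):
    """Policy-gradient loss plus a penalty that keeps the policy change under
    the (approximately) optimal attack within a prescribed bound."""
    delta = adversarial_perturbation(policy, obs)
    logp = policy.dist(obs).log_prob(actions).sum(-1)
    pg_loss = -(logp * advantages).mean()
    div = torch.distributions.kl_divergence(
        policy.dist(obs), policy.dist(obs + delta)
    ).sum(-1).mean()
    return pg_loss + beta * torch.clamp(div - bound, min=0.0)
```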

Bibliographic Details
Main Authors: He, Xiangkun; Lv, Chen
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2024
Subjects: Engineering::Mechanical engineering; Autonomous Vehicle; Safe Decision Making
Online Access:https://hdl.handle.net/10356/173164
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-173164
Record format: DSpace
Record last updated: 2024-01-20
Deposited: 2024-01-16
Type: Journal Article (published version)
Citation: He, X. & Lv, C. (2023). Towards safe autonomous driving: decision making with observation-robust reinforcement learning. Automotive Innovation, 6(4), 509-520. https://dx.doi.org/10.1007/s42154-023-00256-x
Journal: Automotive Innovation, volume 6, issue 4, pages 509-520 (2023)
ISSN: 2096-4250
DOI: 10.1007/s42154-023-00256-x
Scopus ID: 2-s2.0-85176125985
Handle: https://hdl.handle.net/10356/173164
File format: application/pdf
Funding: This work was supported by the Foundation of State Key Laboratory of Automotive Simulation and Control.
Rights: © The Author(s) 2023. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link to the Creative Commons licence is provided, and any changes are indicated. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)