Safety-aware human-in-the-loop reinforcement learning with shared control for autonomous driving
The learning from intervention (LfI) approach has been proven effective in improving the performance of RL algorithms; nevertheless, existing methodologies in this domain tend to operate under the assumption that human guidance is invariably devoid of risk, possibly leading to oscillations or even divergence in RL training as a result of improper demonstrations. In this paper, we propose a safety-aware human-in-the-loop reinforcement learning (SafeHIL-RL) approach to bridge this gap. We first present a safety assessment module based on the artificial potential field (APF) model that incorporates dynamic information of the environment under the Frenet coordinate system, which we call the Frenet-based dynamic potential field (FDPF), for evaluating real-time safety throughout the intervention process. Subsequently, we propose a curriculum guidance mechanism inspired by the pedagogical principle of whole-to-part patterns in human education. Curriculum guidance facilitates the RL agent's early acquisition of comprehensive global information through continual guidance, while also allowing fine-tuning of local behavior through intermittent human guidance via a human-AI shared control strategy. Consequently, our approach enables a safe, robust, and efficient reinforcement learning process, independent of the quality of the guidance human participants provide. The proposed method is validated in two highway autonomous driving scenarios under highly dynamic traffic flows (https://github.com/OscarHuangWind/Safe-Human-in-the-Loop-RL). The experimental results confirm the superiority and generalization capability of our approach compared to state-of-the-art (SOTA) baselines, as well as the effectiveness of the curriculum guidance.
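The abstract outlines two mechanisms: a risk field evaluated in Frenet (road-aligned) coordinates that accounts for the dynamics of surrounding traffic, and a shared-control gate that passes human guidance through only when it is assessed as safe. The sketch below is a generic illustration of that idea, not code from the paper; the function names, the Gaussian field shape, and all parameter values are assumptions for illustration only.

```python
import math

def frenet_potential(ds, dl, v_rel, sigma_s=10.0, sigma_l=1.5, beta=0.5):
    """Illustrative risk potential around an obstacle in Frenet coordinates.

    ds, dl : longitudinal / lateral offsets (m) from ego to obstacle
    v_rel  : closing speed (m/s); positive when the gap is shrinking

    The longitudinal spread grows with closing speed, so a faster-approaching
    vehicle casts a larger risk field -- the "dynamic" part of a dynamic APF.
    Returns a value in (0, 1], with 1 at the obstacle position.
    """
    sigma = sigma_s * (1.0 + beta * max(v_rel, 0.0))
    return math.exp(-(ds ** 2) / (2 * sigma ** 2) - (dl ** 2) / (2 * sigma_l ** 2))

def gate_intervention(action_human, action_agent, risk, threshold=0.6):
    """Shared-control gate: accept human guidance only when the assessed
    risk of the current situation stays below a threshold; otherwise keep
    the agent's own action rather than follow a possibly unsafe demo."""
    return action_agent if risk > threshold else action_human
```

For example, a vehicle 5 m ahead with a 5 m/s closing speed yields a higher potential than the same vehicle at zero closing speed, so human inputs near it are more likely to be rejected by the gate.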
Main Authors: Huang, Wenhui; Liu, Haochen; Huang, Zhiyu; Lv, Chen
Other Authors: School of Mechanical and Aerospace Engineering
Format: Article
Language: English
Published: 2025
Subjects: Engineering; Autonomous driving; Curriculum guidance
Online Access: https://hdl.handle.net/10356/182435
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-182435
record_format: dspace
Citation: Huang, W., Liu, H., Huang, Z. & Lv, C. (2024). Safety-aware human-in-the-loop reinforcement learning with shared control for autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 25(11), 16181-16192. https://dx.doi.org/10.1109/TITS.2024.3420959
ISSN: 1524-9050
DOI: 10.1109/TITS.2024.3420959
Scopus ID: 2-s2.0-85204790118
Funding: This work was supported in part by the Agency for Science, Technology and Research (A*STAR), Singapore, under MTC Individual Research Grant M22K2c0079; in part by the ANR-NRF Joint Grant NRF2021-NRF-ANR003 HM Science; and in part by the Ministry of Education (MOE), Singapore, under Tier 2 Grant MOE-T2EP50222-0002.
Rights: © 2024 IEEE. All rights reserved.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering; Autonomous driving; Curriculum guidance