Deep reinforcement learning-based control model for automatic robot navigation

This report explores the application of deep reinforcement learning (DRL) for robot navigation without pre-constructed maps. Several mainstream DRL models, including DDPG, PPO, and TD3, were tested in a simple static obstacle environment, and TD3 was found to have the best performance. The report then investigates the incentive effect of different reward values on TD3 training and shows that a slightly increased positive reward value can substantially improve convergence and motivate the robot to reach the best convergence in less time and with fewer steps. Additionally, a novel training approach using a pre-trained model from a static environment was proposed, resulting in faster convergence and larger cumulative reward values in a dynamic obstacle environment. However, the method does not perform well in more complex environments, highlighting the need for further optimization of the model structure and feature extraction capabilities. Overall, this report provides important insights into the use of DRL for map-free robot navigation and highlights potential directions for future research.
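
The report itself is not accompanied by code; as a rough illustration of the two-stage training idea summarized above (pre-train TD3 in a static-obstacle environment, then continue training in a dynamic-obstacle one), the following minimal sketch assumes Stable-Baselines3's TD3 implementation and hypothetical Gymnasium environment IDs ("NavStatic-v0", "NavDynamic-v0"), which stand in for the simulation environments used in the project.

    # Minimal sketch (not from the report): pre-train TD3 on a static-obstacle
    # navigation task, then reuse the learned weights on a dynamic-obstacle task.
    # "NavStatic-v0" and "NavDynamic-v0" are hypothetical environment IDs.
    import gymnasium as gym
    from stable_baselines3 import TD3

    static_env = gym.make("NavStatic-v0")          # simple static-obstacle world
    model = TD3("MlpPolicy", static_env, verbose=1)
    model.learn(total_timesteps=200_000)           # stage 1: learn basic goal-reaching
    model.save("td3_static")

    dynamic_env = gym.make("NavDynamic-v0")        # moving-obstacle world
    model = TD3.load("td3_static", env=dynamic_env)
    model.learn(total_timesteps=200_000)           # stage 2: fine-tune from the pre-trained weights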

Bibliographic Details
Main Author: Deng, Haoyuan
Other Authors: Jiang Xudong
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
School: School of Electrical and Electronic Engineering
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Contact: EXDJiang@ntu.edu.sg
Citation: Deng, H. (2023). Deep reinforcement learning-based control model for automatic robot navigation. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/168320
Online Access: https://hdl.handle.net/10356/168320
Institution: Nanyang Technological University
Library: NTU Library
Collection: DR-NTU
Record ID: sg-ntu-dr.10356-168320