Reinforcement learning based mobile robot self-navigation with static obstacle avoidance
In this project, we explore the application of reinforcement learning to mobile robot self-navigation, focusing specifically on the challenge of static obstacle avoidance. Using the Gazebo simulation environment integrated with the Robot Operating System (ROS), we implemented the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, an actor-critic reinforcement learning algorithm known for its stability and efficiency in continuous action spaces. Our objective was to demonstrate that TD3 can effectively guide a mobile robot through a simulated environment populated with static obstacles, thereby advancing autonomous navigation strategies. Through a systematic integration of Gazebo, ROS, and TD3, we developed a mobile robot model capable of learning to navigate while avoiding collisions. Our evaluation, centered on navigation efficiency and obstacle avoidance effectiveness, shows significant improvements in autonomous navigation capability. The results indicate that TD3, with its twin-critic architecture, provides a robust framework for mobile robot navigation in complex environments.
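The record itself does not include any code, but since the abstract leans on TD3's twin-critic design, the sketch below illustrates the clipped double-Q target that gives TD3 its stability in continuous action spaces. It is a minimal PyTorch-style illustration under assumed names and hyperparameters (`Critic`, `td3_target`, the 256-unit layers, `gamma=0.99`, etc.), not the project's actual implementation.

```python
import torch
import torch.nn as nn


class Critic(nn.Module):
    """One Q-network; TD3 trains two of these to curb value overestimation."""

    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Q(s, a): concatenate state and action, return one scalar value per sample.
        return self.net(torch.cat([state, action], dim=-1))


def td3_target(critic1_t, critic2_t, actor_t, next_state, reward, done,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """Clipped double-Q target: smooth the target action with clipped noise,
    then bootstrap from the smaller of the two target critics."""
    with torch.no_grad():
        pi = actor_t(next_state)                                    # target policy action
        noise = (torch.randn_like(pi) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (pi + noise).clamp(-max_action, max_action)   # target policy smoothing
        target_q = torch.min(critic1_t(next_state, next_action),
                             critic2_t(next_state, next_action))    # twin-critic minimum
        return reward + gamma * (1.0 - done) * target_q
```

Both critics are regressed toward this single target, and the actor and target networks are updated less frequently than the critics (the "delayed" part of the name), which together with the twin critics and target-action smoothing distinguishes TD3 from DDPG.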
Main Author: | Yang, Shaobo |
---|---|
Other Authors: | Jiang Xudong |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2024 |
Subjects: | Computer and Information Science; Reinforcement learning; TD3; Gazebo; ROS |
Online Access: | https://hdl.handle.net/10356/176676 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-176676 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-176676; 2024-05-24T15:49:45Z; Reinforcement learning based mobile robot self-navigation with static obstacle avoidance; Yang, Shaobo; Jiang Xudong; School of Electrical and Electronic Engineering; EXDJiang@ntu.edu.sg; Computer and Information Science; Reinforcement learning; TD3; Gazebo; ROS; Bachelor's degree; 2024-05-20T02:31:56Z; 2024; Final Year Project (FYP); Yang, S. (2024). Reinforcement learning based mobile robot self-navigation with static obstacle avoidance. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/176676; en; application/pdf; Nanyang Technological University |
institution | Nanyang Technological University |
building | NTU Library |
continent | Asia |
country | Singapore |
content_provider | NTU Library |
collection | DR-NTU |
language | English |
topic | Computer and Information Science; Reinforcement learning; TD3; Gazebo; ROS |
description | In this project, we explore the application of reinforcement learning to mobile robot self-navigation, focusing specifically on the challenge of static obstacle avoidance. Using the Gazebo simulation environment integrated with the Robot Operating System (ROS), we implemented the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, an actor-critic reinforcement learning algorithm known for its stability and efficiency in continuous action spaces. Our objective was to demonstrate that TD3 can effectively guide a mobile robot through a simulated environment populated with static obstacles, thereby advancing autonomous navigation strategies. Through a systematic integration of Gazebo, ROS, and TD3, we developed a mobile robot model capable of learning to navigate while avoiding collisions. Our evaluation, centered on navigation efficiency and obstacle avoidance effectiveness, shows significant improvements in autonomous navigation capability. The results indicate that TD3, with its twin-critic architecture, provides a robust framework for mobile robot navigation in complex environments. |
author2 | Jiang Xudong |
format | Final Year Project |
author | Yang, Shaobo |
title | Reinforcement learning based mobile robot self-navigation with static obstacle avoidance |
publisher | Nanyang Technological University |
publishDate | 2024 |
url | https://hdl.handle.net/10356/176676 |
_version_ | 1806059786620370944 |