Reinforcement learning based mobile robot self-navigation with static obstacle avoidance

Bibliographic Details
Main Author: Yang, Shaobo
Other Authors: Jiang, Xudong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
TD3
ROS
Online Access:https://hdl.handle.net/10356/176676
Institution: Nanyang Technological University
Description
Summary: In this project, we explore the application of reinforcement learning to mobile robot self-navigation, focusing on the challenge of static obstacle avoidance. Using the Gazebo simulation environment integrated with the Robot Operating System (ROS), we implement the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, a reinforcement learning method known for its stability and efficiency in continuous action spaces. Our objective is to demonstrate that TD3 can effectively guide a mobile robot through a simulated environment populated with static obstacles, thereby advancing autonomous navigation strategies. Through a systematic integration of Gazebo, ROS, and TD3, we developed a mobile robot model capable of learning to navigate while avoiding collisions. Our evaluation metrics, centered on navigation efficiency and obstacle avoidance effectiveness, reveal significant improvements in autonomous navigation capability. The results indicate that the TD3 algorithm, with its twin-critic architecture, provides a robust framework for mobile robot navigation in complex environments.
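The twin-critic architecture mentioned in the summary refers to TD3's clipped double-Q learning: two independent critics estimate the value of the next state, and the smaller estimate is used to form the bootstrap target, which curbs the overestimation bias of single-critic methods. A minimal sketch of that target computation follows; the function name and all numeric values are illustrative, not taken from this project's implementation.

```python
import numpy as np

def td3_target(reward, done, q1_next, q2_next, gamma=0.99):
    """Clipped double-Q target used in TD3.

    Takes the element-wise minimum of the two critics' next-state
    value estimates, so the target is never driven by whichever
    critic currently overestimates.
    """
    q_min = np.minimum(q1_next, q2_next)
    # Zero out the bootstrap term at terminal transitions (done = 1).
    return reward + gamma * (1.0 - done) * q_min

# Hypothetical single transition: reward 1.0, episode continues,
# and the two critics disagree about the next-state value.
target = td3_target(reward=1.0, done=0.0, q1_next=5.0, q2_next=4.0)
# 1.0 + 0.99 * 4.0 = 4.96 — the pessimistic (smaller) estimate wins.
```

In the full algorithm this target also incorporates target-policy smoothing (clipped noise added to the target action) and delayed actor updates; the snippet isolates only the twin-critic minimum, which is the feature the summary highlights.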