Reinforcement learning based algorithm design for mobile robot static obstacle avoidance

Bibliographic Details
Main Author: Li, Zongrui
Other Authors: Hu, Guoqiang
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University 2021
Online Access: https://hdl.handle.net/10356/151905
Institution: Nanyang Technological University
Description
Summary: Robot static obstacle avoidance has long been a central topic in robot control. The traditional approach uses a global path planner, such as A*, together with a high-precision map to automatically generate a path that avoids obstacles. However, given the difficulty of producing a high-precision map of the real world, map-free methods, such as Reinforcement Learning (RL) methods, have attracted increasing attention from researchers. This dissertation compares several RL algorithms, including DQN, DDQN, and DDPG, with the traditional method and discusses their performance on different tasks. A new RL training platform, ROSRL, is also proposed in this dissertation; it improves training efficiency and allows researchers to easily deploy RL algorithms and test their performance. The results of this dissertation are meaningful for exploring state-of-the-art RL algorithms on static obstacle avoidance problems.
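
The dissertation's own implementation is not reproduced in this record. Purely as an illustrative sketch of the kind of DQN-style agent the abstract describes, the Python snippet below defines a small Q-network over a down-sampled laser scan and an epsilon-greedy action rule. The state size, the three-action set, and the network dimensions are assumptions made here for illustration; they are not details taken from the thesis or from ROSRL.

# Illustrative sketch only (not the dissertation's code): a minimal DQN-style
# Q-network and epsilon-greedy policy for a laser-scan obstacle-avoidance task.
# State size, action set, and layer sizes are assumptions for illustration.
import random
import torch
import torch.nn as nn

N_LASER_READINGS = 24                              # assumed down-sampled 2D laser scan
ACTIONS = ["forward", "turn_left", "turn_right"]   # assumed discrete action set

class QNetwork(nn.Module):
    """Maps a laser-scan state to one Q-value per discrete action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy action selection as used during DQN training."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))       # explore
    with torch.no_grad():
        return int(q_net(state).argmax().item())    # exploit

if __name__ == "__main__":
    q_net = QNetwork(N_LASER_READINGS, len(ACTIONS))
    fake_scan = torch.rand(N_LASER_READINGS)        # stand-in for a real scan message
    print(ACTIONS[select_action(q_net, fake_scan, epsilon=0.1)])

In a ROS-based setup such as the one the abstract suggests, the stand-in scan tensor would instead be built from a subscribed laser-scan topic, and DDQN or DDPG variants would replace the action-selection and update rules accordingly.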