Deep reinforcement learning-based control model for automatic robot navigation

Bibliographic Details
Main Author: Deng, Haoyuan
Other Authors: Jiang Xudong
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/168320
Description
Summary: This report explores the application of deep reinforcement learning (DRL) to robot navigation without pre-constructed maps. Several mainstream DRL models, including DDPG, PPO, and TD3, were tested in a simple static-obstacle environment, and TD3 was found to perform best. The report then investigates the incentive effect of different reward values on TD3 training and shows that a slightly increased positive reward value substantially improves convergence, allowing the robot to reach its best policy in less time and with fewer steps. Additionally, a novel training approach that reuses a model pre-trained in the static environment was proposed, yielding faster convergence and larger cumulative rewards in a dynamic-obstacle environment. However, the method does not perform well in more complex environments, highlighting the need for further optimization of the model structure and its feature-extraction capability. Overall, this report provides important insights into the use of DRL for map-free robot navigation and highlights potential directions for future research.
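
The transfer-based training described in the summary can be illustrated with a short sketch: pre-train a TD3 agent in a static-obstacle simulation, then reload the learned weights and continue training in a dynamic-obstacle simulation. The report does not specify its libraries or environment names, so the sketch below assumes a stable-baselines3 TD3 agent and hypothetical Gymnasium environment IDs (StaticNav-v0, DynamicNav-v0); it is an illustrative outline, not the authors' implementation.

    import gymnasium as gym
    from stable_baselines3 import TD3

    # Phase 1: pre-train TD3 in the static-obstacle environment.
    # "StaticNav-v0" is a hypothetical environment ID used for illustration.
    static_env = gym.make("StaticNav-v0")
    model = TD3("MlpPolicy", static_env, verbose=1)
    model.learn(total_timesteps=200_000)
    model.save("td3_static_pretrained")

    # Phase 2: reload the pre-trained weights and continue training in the
    # dynamic-obstacle environment, as in the report's transfer approach.
    # "DynamicNav-v0" is likewise a hypothetical environment ID.
    dynamic_env = gym.make("DynamicNav-v0")
    model = TD3.load("td3_static_pretrained", env=dynamic_env)
    model.learn(total_timesteps=200_000)

Starting the second phase from the pre-trained weights, rather than from a random initialization, is what the report credits for the faster convergence and larger cumulative rewards observed in the dynamic-obstacle setting.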