Collision avoidance for automated guided vehicles using deep reinforcement learning
| Main Author: | |
|---|---|
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2020 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/139736 |
| Institution: | Nanyang Technological University |
| Summary: | It is crucial yet challenging to develop an efficient collision avoidance policy for robots. While centralized collision avoidance methods for multi-robot systems exist and are often more accurate and less error-prone, decentralized methods, in which each robot generates paths without observing the other robots' states, have the potential to reduce the prohibitive computational cost. As a first step towards a decentralized multi-robot collision avoidance system, this project aims to implement deep reinforcement learning in a collision avoidance simulation of a single robot. The robot scans its surroundings and must find its way through a pre-designed map with multiple obstacles and branches. Several algorithms are tested and discussed in this project, including Q-Learning, SARSA, Deep Q-Network (DQN), Policy Gradient (PG), Actor-Critic, Deep Deterministic Policy Gradient (DDPG), and Distributed Proximal Policy Optimization (DPPO). Thorough comparisons between DQN, DDPG, and DPPO are presented. |
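The summary names several value-based and policy-gradient methods without showing how they are set up. As a rough illustration of the simplest deep method compared in the report (DQN), the sketch below shows an epsilon-greedy policy and a single replay-based Q-learning update over a scan-like observation. Nothing here is taken from the project itself: the 24-beam observation size, the 5-way discrete action set, the network architecture, and all hyperparameters are assumptions for illustration only.

```python
# Illustrative DQN sketch, NOT the project's implementation.
# Observation size, action set, network sizes, and hyperparameters are assumed.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM = 24      # assumed: 24 range readings from the robot's scan
N_ACTIONS = 5     # assumed: e.g. hard-left, left, straight, right, hard-right
GAMMA = 0.99

class QNet(nn.Module):
    """Small MLP mapping a scan observation to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(OBS_DIM, 64)
        self.fc2 = nn.Linear(64, 64)
        self.out = nn.Linear(64, N_ACTIONS)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.out(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # stores (obs, action, reward, next_obs, done)

def act(obs, epsilon=0.1):
    """Epsilon-greedy action selection over the Q-network's outputs."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(torch.as_tensor(obs, dtype=torch.float32)).argmax().item()

def train_step(batch_size=64):
    """One DQN update: regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    obs, action, reward, next_obs, done = map(
        lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch))
    q = q_net(obs).gather(1, action.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = reward + GAMMA * target_net(next_obs).max(1).values * (1 - done)
    loss = F.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a collision avoidance setting of the kind described, the reward would typically penalize collisions and reward progress toward a goal, and the target network would be periodically synchronized with the online network; those details are environment-specific and are not specified in this record.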