Autonomous multi-agent collaborative environment exploration


Bibliographic Details
Main Author: Luo, Tianze
Other Authors: Tan Ah Hwee
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University 2020
Subjects:
Online Access:https://hdl.handle.net/10356/136784
Institution: Nanyang Technological University
Description
Summary: Exploring an unknown environment with multiple autonomous agents is one of the fundamental research problems in mobile agents and is essential for numerous environment-related applications, such as autonomous cleaning, mowing, and deployment. The major challenge of multi-robot environment exploration is how to achieve effective collaboration so that the overall exploration strategy is efficient. One typical approach is frontier-based exploration, in which agents move to appropriate frontier points, the boundaries between explored and unexplored space, to expand the known map. Exploration through partitioning the map is another efficient strategy. However, although many exploration methods have been proposed, autonomously exploring an unknown environment with multiple agents remains a difficult task. The main issue is inefficient collaboration, e.g. agents repetitively exploring regions that have already been explored by other agents. If many agents explore one area while few explore the others, the collaboration is inefficient. In contrast, allocating agents to separate areas of the map, with each agent exploring its own area, can achieve highly efficient collaboration in the environment exploration task. Since efficient map exploration methods can greatly benefit numerous related applications and a large gap remains in multi-agent collaborative exploration, in this report we focus on developing more efficient multi-agent collaborative environment exploration methods. We propose an efficient and robust map segmentation method and, building on it, an exploration method based on the segmentation algorithm. In addition, we apply reinforcement learning to this task and propose a novel graph-based multi-agent deep reinforcement learning method to derive a more efficient and scalable environment exploration strategy.
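To illustrate the frontier-based approach mentioned above, the following is a minimal Python sketch, not the thesis's actual implementation. It assumes a simple occupancy-grid representation in which the cell values 0 (free), 1 (occupied), and -1 (unknown) are illustrative conventions chosen here; frontiers are free cells bordering unknown space, and each agent greedily targets its nearest frontier.

```python
# Hypothetical occupancy-grid convention: 0 = free, 1 = occupied, -1 = unknown.

def find_frontiers(grid):
    """Return the free cells that are 4-adjacent to at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue  # only free cells can be frontiers
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

def nearest_frontier(agent, frontiers):
    """Greedy target selection: the frontier with minimum Manhattan distance."""
    return min(frontiers, key=lambda f: abs(f[0] - agent[0]) + abs(f[1] - agent[1]))

# Tiny example map: left half explored free space, right half unknown.
grid = [
    [0, 0, -1, -1],
    [0, 0, -1, -1],
    [0, 1, -1, -1],
]
fs = find_frontiers(grid)
print(fs)                        # the free cells bordering unknown space
print(nearest_frontier((0, 0), fs))
```

In a multi-agent setting, the inefficiency the summary describes arises when every agent independently runs this greedy rule and several converge on the same frontier; segmentation-based methods avoid this by assigning each agent its own region.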
To evaluate the performance of our methods, we compare them with state-of-the-art map segmentation and environment exploration methods. The experimental results show that our map segmentation method achieves more accurate map partitioning than the state-of-the-art segmentation methods. Building on the comprehensive understanding of the map gained through segmentation, our segmentation-based exploration method also achieves faster and more efficient exploration than the state-of-the-art exploration methods. The graph-based multi-agent reinforcement learning method provides a distinctive perspective on the multi-agent environment exploration problem, and the results show that with this method the agents are able to learn a better exploration strategy than non-learning methods.