End-to-end deep reinforcement learning for multi-agent collaborative exploration
Exploring an unknown environment with multiple autonomous robots is a major challenge in robotics. As multiple robots are assigned to explore different locations, they may interfere with each other, making the overall task less efficient. In this paper, we present a new model called CNN-based Multi-agent Proximal Policy Optimization (CMAPPO) for multi-agent exploration, wherein the agents learn an effective strategy to allocate and explore the environment using a new deep reinforcement learning architecture. The model combines a convolutional neural network to process multi-channel visual inputs, curriculum-based learning, and the PPO algorithm for motivation-based reinforcement learning. Evaluations show that the proposed method learns a more efficient exploration strategy for multiple agents than the conventional frontier-based method.
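As a rough illustration of the PPO component named in the abstract, the sketch below computes the standard clipped surrogate objective (Schulman et al., 2017). This is not the authors' CMAPPO implementation; the function name and all numbers are hypothetical, and the multi-channel CNN encoder and curriculum stages of the paper are omitted.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO.

    ratio     -- pi_new(a|s) / pi_old(a|s), per sampled action
    advantage -- estimated advantage A(s, a), per sampled action
    eps       -- clip range; limits how far the new policy may move
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Element-wise minimum: a pessimistic bound that removes the
    # incentive to push the probability ratio outside [1-eps, 1+eps].
    return np.minimum(unclipped, clipped)

# Hypothetical batch of two transitions (e.g. one per exploring agent):
ratio = np.array([1.5, 0.5])   # policy moved up on the first action, down on the second
adv = np.array([1.0, -1.0])    # first action was good, second was bad
obj = ppo_clip_objective(ratio, adv)
# First sample is capped at (1 + eps) * A = 1.2; second is bounded below
# by (1 - eps) * A = -0.8, so obj == [1.2, -0.8].
```

In practice the mean of this objective is maximized (equivalently, its negation minimized) together with a value-function loss and an entropy bonus; the clipping is what lets PPO take multiple gradient steps per batch without the policy collapsing.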
Saved in:
Main Authors: | Chen, Zichen, Subagdja, Bhuditama, Tan, Ah-Hwee |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2021 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Multi-agent Exploration; Deep Learning |
Online Access: | https://hdl.handle.net/10356/148510 |
Institution: | Nanyang Technological University |
id | sg-ntu-dr.10356-148510 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-148510 (deposited 2021-05-25T09:09:51Z). End-to-end deep reinforcement learning for multi-agent collaborative exploration. Chen, Zichen; Subagdja, Bhuditama; Tan, Ah-Hwee. School of Electrical and Electronic Engineering. 2019 IEEE International Conference on Agents (ICA). ST Engineering-NTU Corporate Lab. Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Multi-agent Exploration; Deep Learning. Abstract: Exploring an unknown environment with multiple autonomous robots is a major challenge in robotics. As multiple robots are assigned to explore different locations, they may interfere with each other, making the overall task less efficient. In this paper, we present a new model called CNN-based Multi-agent Proximal Policy Optimization (CMAPPO) for multi-agent exploration, wherein the agents learn an effective strategy to allocate and explore the environment using a new deep reinforcement learning architecture. The model combines a convolutional neural network to process multi-channel visual inputs, curriculum-based learning, and the PPO algorithm for motivation-based reinforcement learning. Evaluations show that the proposed method learns a more efficient exploration strategy for multiple agents than the conventional frontier-based method. Funding: National Research Foundation (NRF). Accepted version. Type: Conference Paper (2019). Citation: Chen, Z., Subagdja, B. & Tan, A. (2019). End-to-end deep reinforcement learning for multi-agent collaborative exploration. 2019 IEEE International Conference on Agents (ICA), 99-102. https://dx.doi.org/10.1109/AGENTS.2019.8929192. ISBN: 9781728140261. Handle: https://hdl.handle.net/10356/148510. DOI: 10.1109/AGENTS.2019.8929192. Scopus: 2-s2.0-85077815398. Pages: 99-102. Language: en. Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/AGENTS.2019.8929192. Format: application/pdf |
institution | Nanyang Technological University |
building | NTU Library |
continent | Asia |
country | Singapore |
content_provider | NTU Library |
collection | DR-NTU |
language | English |
topic | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Multi-agent Exploration; Deep Learning |
spellingShingle | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Multi-agent Exploration; Deep Learning; Chen, Zichen; Subagdja, Bhuditama; Tan, Ah-Hwee; End-to-end deep reinforcement learning for multi-agent collaborative exploration |
description | Exploring an unknown environment with multiple autonomous robots is a major challenge in robotics. As multiple robots are assigned to explore different locations, they may interfere with each other, making the overall task less efficient. In this paper, we present a new model called CNN-based Multi-agent Proximal Policy Optimization (CMAPPO) for multi-agent exploration, wherein the agents learn an effective strategy to allocate and explore the environment using a new deep reinforcement learning architecture. The model combines a convolutional neural network to process multi-channel visual inputs, curriculum-based learning, and the PPO algorithm for motivation-based reinforcement learning. Evaluations show that the proposed method learns a more efficient exploration strategy for multiple agents than the conventional frontier-based method. |
author2 | School of Electrical and Electronic Engineering |
author_facet | School of Electrical and Electronic Engineering; Chen, Zichen; Subagdja, Bhuditama; Tan, Ah-Hwee |
format | Conference or Workshop Item |
author | Chen, Zichen; Subagdja, Bhuditama; Tan, Ah-Hwee |
author_sort | Chen, Zichen |
title | End-to-end deep reinforcement learning for multi-agent collaborative exploration |
title_short | End-to-end deep reinforcement learning for multi-agent collaborative exploration |
title_full | End-to-end deep reinforcement learning for multi-agent collaborative exploration |
title_fullStr | End-to-end deep reinforcement learning for multi-agent collaborative exploration |
title_full_unstemmed | End-to-end deep reinforcement learning for multi-agent collaborative exploration |
title_sort | end-to-end deep reinforcement learning for multi-agent collaborative exploration |
publishDate | 2021 |
url | https://hdl.handle.net/10356/148510 |
_version_ | 1701270615776821248 |