End-to-end deep reinforcement learning for multi-agent collaborative exploration
Exploring an unknown environment with multiple autonomous robots is a major challenge in robotics. As multiple robots are assigned to explore different locations, they may interfere with each other, making the overall task less efficient. In this paper, we present a new model called CNN-based Multi-agent Proximal Policy Optimization (CMAPPO) for multi-agent exploration, wherein the agents learn an effective strategy to allocate and explore the environment using a new deep reinforcement learning architecture. The model combines a convolutional neural network to process multi-channel visual inputs, curriculum-based learning, and the PPO algorithm for motivation-based reinforcement learning. Evaluations show that the proposed method learns a more efficient exploration strategy for multiple agents than the conventional frontier-based method.
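The abstract describes CMAPPO as combining a convolutional encoder over multi-channel visual inputs with the PPO algorithm. The record contains no code, so the sketch below is only a minimal illustration of those two ingredients in PyTorch, not the authors' implementation; the network shape, the observation channels (e.g. obstacle / explored / agent-position maps), the action count, and all hyperparameters are assumptions made for illustration.

```python
# Illustrative sketch only (assumed shapes and hyperparameters), not the authors' CMAPPO code.
import torch
import torch.nn as nn

class ConvPolicy(nn.Module):
    """CNN actor-critic over multi-channel grid observations, one copy of which
    could be shared by every exploring agent."""
    def __init__(self, in_channels=4, n_actions=4, grid=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 32 * (grid // 4) * (grid // 4)
        self.pi = nn.Linear(feat, n_actions)   # action logits
        self.v = nn.Linear(feat, 1)            # state-value estimate

    def forward(self, obs):
        h = self.encoder(obs)
        return torch.distributions.Categorical(logits=self.pi(h)), self.v(h).squeeze(-1)

def ppo_loss(policy, obs, actions, old_log_probs, advantages, returns,
             clip_eps=0.2, value_coef=0.5, entropy_coef=0.01):
    """Standard PPO clipped surrogate loss; advantages and returns are assumed
    to be computed elsewhere (e.g. with GAE) from the agents' rollouts."""
    dist, value = policy(obs)
    ratio = torch.exp(dist.log_prob(actions) - old_log_probs)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    value_loss = (returns - value).pow(2).mean()
    entropy = dist.entropy().mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy

if __name__ == "__main__":
    # Smoke test with random data: 8 observations, 4 channels, 32x32 grid.
    policy = ConvPolicy()
    obs = torch.rand(8, 4, 32, 32)
    actions = torch.randint(0, 4, (8,))
    with torch.no_grad():
        old_dist, _ = policy(obs)
        old_log_probs = old_dist.log_prob(actions)
    loss = ppo_loss(policy, obs, actions, old_log_probs,
                    advantages=torch.randn(8), returns=torch.randn(8))
    loss.backward()
    print(float(loss))
```

In the paper's setting, the advantages and returns would come from the agents' exploration rollouts rather than the random tensors used here, which serve only to show the call pattern.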
Main Authors: | CHEN, Zichen; SUBAGDJA, Budhitama; TAN, Ah-hwee |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2019 |
Subjects: | Deep learning; Multi-agent exploration; Reinforcement Learning; Artificial Intelligence and Robotics; Databases and Information Systems |
Online Access: | https://ink.library.smu.edu.sg/sis_research/6170 https://ink.library.smu.edu.sg/context/sis_research/article/7173/viewcontent/Observation_based_Deep_Reinforcement_Learning_for_Multi_agent_Collaborative_Exploration.pdf |
Institution: | Singapore Management University |
Language: | English |
id |
sg-smu-ink.sis_research-7173 |
---|---|
record_format |
dspace |
spelling |
sg-smu-ink.sis_research-7173 2021-09-29T10:27:04Z 2019-10-01T07:00:00Z text application/pdf info:doi/10.1109/AGENTS.2019.8929192 http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng |
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
Deep learning; Multi-agent exploration; Reinforcement Learning; Artificial Intelligence and Robotics; Databases and Information Systems |
description |
Exploring an unknown environment with multiple autonomous robots is a major challenge in robotics. As multiple robots are assigned to explore different locations, they may interfere with each other, making the overall task less efficient. In this paper, we present a new model called CNN-based Multi-agent Proximal Policy Optimization (CMAPPO) for multi-agent exploration, wherein the agents learn an effective strategy to allocate and explore the environment using a new deep reinforcement learning architecture. The model combines a convolutional neural network to process multi-channel visual inputs, curriculum-based learning, and the PPO algorithm for motivation-based reinforcement learning. Evaluations show that the proposed method learns a more efficient exploration strategy for multiple agents than the conventional frontier-based method. |
format |
text |
author |
CHEN, Zichen; SUBAGDJA, Budhitama; TAN, Ah-hwee |
title |
End-to-end deep reinforcement learning for multi-agent collaborative exploration |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2019 |
url |
https://ink.library.smu.edu.sg/sis_research/6170 https://ink.library.smu.edu.sg/context/sis_research/article/7173/viewcontent/Observation_based_Deep_Reinforcement_Learning_for_Multi_agent_Collaborative_Exploration.pdf |