Inverse factorized soft Q-Learning for cooperative multi-agent imitation learning
This paper concerns imitation learning (IL) in cooperative multi-agent systems. The learning problem under consideration poses several challenges, characterized by high-dimensional state and action spaces and intricate inter-agent dependencies. In a single-agent setting, IL has been shown to be achievable efficiently via an inverse soft-Q learning process. However, extending this framework to a multi-agent context introduces the need to simultaneously learn both local value functions, which capture local observations and individual actions, and a joint value function that exploits centralized learning. In this work, we introduce a new multi-agent IL algorithm designed to address these challenges. Our approach enables centralized learning by leveraging mixing networks to aggregate decentralized Q functions. We further establish conditions on the mixing networks under which the multi-agent IL objective exhibits convexity within the Q-function space. We present extensive experiments on several challenging multi-agent game environments, including an advanced version of the StarCraft multi-agent challenge (SMACv2), which demonstrate the effectiveness of our algorithm.
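The record contains no code, but the abstract's central mechanism (aggregating decentralized Q functions through a mixing network to enable centralized learning) can be illustrated concretely. Below is a minimal PyTorch sketch of a QMIX-style mixing network; the class name `MixingNetwork`, the hypernetwork layout, and the non-negative weights enforced via `abs()` are illustrative assumptions, not the paper's actual architecture, and the paper's precise conditions for convexity of the IL objective may differ.

```python
# Illustrative sketch only -- NOT the authors' implementation. Assumes a
# QMIX-style mixer: state-conditioned hypernetworks produce non-negative
# mixing weights, so the joint Q is monotone (and, with ReLU plus
# non-negative weights, convex) in each agent's local Q value.
import torch
import torch.nn as nn


class MixingNetwork(nn.Module):  # hypothetical name, for illustration
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks map the global state to the mixer's weights/biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents), the decentralized Q values
        # state:    (batch, state_dim), the global state
        batch = agent_qs.size(0)
        qs = agent_qs.view(batch, 1, self.n_agents)
        # abs() keeps mixing weights non-negative (monotonicity constraint).
        w1 = torch.abs(self.hyper_w1(state)).view(batch, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(batch, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(qs, w1) + b1)   # (batch, 1, embed_dim)
        w2 = torch.abs(self.hyper_w2(state)).view(batch, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(batch, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2            # (batch, 1, 1)
        return q_tot.view(batch)                      # joint Q value per sample


if __name__ == "__main__":
    # Toy usage: 3 agents, 48-dim global state, batch of 8 transitions.
    mixer = MixingNetwork(n_agents=3, state_dim=48)
    q_tot = mixer(torch.randn(8, 3), torch.randn(8, 48))
    print(q_tot.shape)  # torch.Size([8])
```

In an inverse soft-Q setup, the resulting joint Q value would feed the imitation objective over expert demonstrations; non-negative mixing weights are the standard way value-factorization methods enforce the kind of structural conditions the abstract refers to.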
Main Authors: BUI, The Viet; MAI, Tien; NGUYEN, Thanh
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Imitation learning; Multi-agent systems; soft-Q learning; Artificial Intelligence and Robotics; Computer Sciences
Online Access: https://ink.library.smu.edu.sg/sis_research/9818
https://ink.library.smu.edu.sg/context/sis_research/article/10818/viewcontent/NeurIPS2024___Multi_agent_Inverse_Q_learning_for_imitation_6.pdf
Institution: Singapore Management University
id: sg-smu-ink.sis_research-10818
record_format: dspace
spelling: sg-smu-ink.sis_research-10818 (2024-12-24T03:44:28Z); 2024-12-01T08:00:00Z; text; application/pdf; http://creativecommons.org/licenses/by-nc-nd/4.0/; Research Collection School Of Computing and Information Systems; eng
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Imitation learning; Multi-agent systems; soft-Q learning; Artificial Intelligence and Robotics; Computer Sciences
format: text
author: BUI, The Viet; MAI, Tien; NGUYEN, Thanh
author_sort: BUI, The Viet
title: Inverse factorized soft Q-Learning for cooperative multi-agent imitation learning
title_sort: inverse factorized soft q-learning for cooperative multi-agent imitation learning
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2024
url: https://ink.library.smu.edu.sg/sis_research/9818
https://ink.library.smu.edu.sg/context/sis_research/article/10818/viewcontent/NeurIPS2024___Multi_agent_Inverse_Q_learning_for_imitation_6.pdf
_version_: 1820027790414577664