RMIX: Learning risk-sensitive policies for cooperative reinforcement learning agents

Current value-based multi-agent reinforcement learning methods optimize individual Q values to guide individual agents' behaviours under centralized training with decentralized execution (CTDE). However, such expected (i.e., risk-neutral) Q values are insufficient even with CTDE, because the randomness of rewards and the uncertainty of environments prevent these methods from training well-coordinated agents in complex settings. To address these issues, we propose RMIX, a novel cooperative MARL method that applies the Conditional Value at Risk (CVaR) measure to the learned distributions of individual agents' Q values. Specifically, we first learn each agent's return distribution so that CVaR can be computed analytically for decentralized execution. Then, to handle the temporal nature of stochastic outcomes during execution, we propose a dynamic risk level predictor that tunes the risk level over time. Finally, we optimize the CVaR policies: the CVaR values are used to estimate the TD target during centralized training and serve as auxiliary local rewards for updating the local return distributions via a quantile regression loss. Empirically, our method outperforms state-of-the-art baselines on multi-agent risk-sensitive navigation scenarios and challenging StarCraft II cooperative tasks, demonstrating enhanced coordination and improved sample efficiency.
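
As a rough illustration of the risk measure described above (not the authors' implementation), the following minimal Python sketch shows how CVaR can be computed analytically from a discrete quantile representation of an agent's return distribution, as in QR-DQN-style distributional critics; the function name, the equal-weight tail approximation, and the toy numbers are illustrative assumptions only.

```python
import numpy as np

def cvar_from_quantiles(quantile_values, alpha):
    """Approximate CVaR_alpha from N equally weighted quantile estimates
    of a return distribution.

    CVaR_alpha is the expected return over the worst alpha-fraction of
    outcomes; with equally weighted quantiles it reduces to the mean of
    the lowest ceil(alpha * N) quantile estimates.
    """
    z = np.sort(np.asarray(quantile_values, dtype=float))  # quantile estimates, ascending
    k = max(1, int(np.ceil(alpha * len(z))))                # quantiles falling in the alpha-tail
    return float(z[:k].mean())

# Toy example: one agent's return distribution for a single action,
# represented by 8 quantile estimates.
z_a = [-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
print(cvar_from_quantiles(z_a, alpha=0.25))  # mean of the worst 25% of outcomes -> -1.5
print(cvar_from_quantiles(z_a, alpha=1.0))   # alpha = 1 recovers the risk-neutral mean -> 0.625
```

Setting alpha below 1 makes a greedy action choice conservative with respect to low-return outcomes, which is the risk-sensitive behaviour the abstract describes; in RMIX the risk level itself is predicted dynamically rather than fixed by hand.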


Bibliographic Details
Main Authors: QIU, Wei, WANG, Xinrun, YU, Runsheng, HE, Xu, WANG, Rundong, AN, Bo, OBRAZTSOVA, Svetlana, RABINOVICH, Zinovi
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects: Artificial Intelligence and Robotics; Theory and Algorithms
Online Access:https://ink.library.smu.edu.sg/sis_research/9137
https://ink.library.smu.edu.sg/context/sis_research/article/10140/viewcontent/NeurIPS_2021_rmix__pvoa.pdf
Institution: Singapore Management University
Record ID: sg-smu-ink.sis_research-10140
Record Last Modified: 2024-08-01T09:26:07Z
Published Date: 2021-12-01
File Format: application/pdf
Collection: Research Collection School Of Computing and Information Systems (InK@SMU, SMU Libraries)
License: http://creativecommons.org/licenses/by-nc-nd/4.0/