Multi-agent dueling Q-learning with mean field and value decomposition


Bibliographic Details
Main Authors: Ding, Shifei, Du, Wei, Ding, Ling, Guo, Lili, Zhang, Jian, An, Bo
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access:https://hdl.handle.net/10356/172040
Institution: Nanyang Technological University
Description
Summary: A great deal of multi-agent reinforcement learning (MARL) work has investigated how multiple agents can effectively accomplish cooperative tasks using value function decomposition methods. However, existing value decomposition methods can only handle cooperative tasks with a shared reward, because they factorize the value function from a global perspective. To tackle competitive tasks and mixed cooperative-competitive tasks with differing individual reward settings, we design the Multi-agent Dueling Q-learning (MDQ) method based on mean-field theory and individual value decomposition. Specifically, we integrate mean-field theory with value decomposition to factorize the value function at the individual level, which allows the method to handle mixed cooperative-competitive tasks. In addition, we adopt a dueling network architecture that distinguishes which states are valuable without having to learn the effect of each action in each state, enabling more efficient learning and better policy evaluation. The proposed MDQ method is applicable not only to cooperative tasks with a shared reward setting, but also to mixed cooperative-competitive tasks with individualized reward settings. As a result, it is flexible and general enough to apply to most multi-agent tasks. Empirical experiments on various mixed cooperative-competitive tasks demonstrate that MDQ significantly outperforms existing multi-agent reinforcement learning methods.
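
As a rough illustration of the two ingredients named in the summary, the PyTorch sketch below shows a per-agent dueling Q-network that conditions on the agent's local observation together with the mean action of its neighbors. This is not the authors' implementation; the class name, layer sizes, and the way the neighbors' mean action is concatenated to the observation are all illustrative assumptions.

# Minimal sketch (assumed architecture, not the paper's code): a dueling
# Q-network whose input is an agent's observation plus the mean action of
# its neighbors, as in mean-field MARL.
import torch
import torch.nn as nn

class DuelingMeanFieldQNet(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        # Encode local observation concatenated with the neighbors' mean action
        # (a mean-field stand-in for the joint action of all other agents).
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + action_dim, hidden),
            nn.ReLU(),
        )
        # Dueling decomposition: scalar state value V(s) and per-action
        # advantages A(s, a), recombined as Q = V + (A - mean(A)).
        self.value_head = nn.Linear(hidden, 1)
        self.advantage_head = nn.Linear(hidden, action_dim)

    def forward(self, obs: torch.Tensor, mean_action: torch.Tensor) -> torch.Tensor:
        h = self.encoder(torch.cat([obs, mean_action], dim=-1))
        value = self.value_head(h)          # shape: (batch, 1)
        advantage = self.advantage_head(h)  # shape: (batch, action_dim)
        # Subtracting the mean advantage keeps V and A identifiable.
        return value + advantage - advantage.mean(dim=-1, keepdim=True)

# Example: 8-dim observation, 5 discrete actions, a batch of 4 agents.
net = DuelingMeanFieldQNet(obs_dim=8, action_dim=5)
q_values = net(torch.randn(4, 8), torch.rand(4, 5))
print(q_values.shape)  # torch.Size([4, 5])

Subtracting the mean advantage before recombining is the standard identifiability trick from the dueling-network literature, and conditioning each agent's Q-values on the neighbors' mean action rather than the full joint action is what keeps an individual-level value factorization tractable as the number of agents grows.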