Policy gradient with value function approximation for collective multiagent planning

Decentralized (PO)MDPs provide an expressive framework for sequential decision making in a multiagent system. Given their computational complexity, recent research has focused on tractable yet practical subclasses of Dec-POMDPs. We address such a subclass called CDec-POMDP, where the collective behavior of a population of agents affects the joint reward and environment dynamics. Our main contribution is an actor-critic (AC) reinforcement learning method for optimizing CDec-POMDP policies. Vanilla AC has slow convergence for larger problems. To address this, we show how a particular decomposition of the approximate action-value function over agents leads to effective updates, and also derive a new way to train the critic based on local reward signals. Comparisons on a synthetic benchmark and a real-world taxi fleet optimization problem show that our new AC approach provides better quality solutions than previous best approaches.

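The abstract's key algorithmic idea (decomposing the approximate action-value function over agents, and training the critic from local reward signals) can be illustrated with a small sketch. The following Python snippet is a hypothetical, minimal reading of that idea, not the paper's implementation: the tabular softmax policy, the toy `step` environment, the one-step critic target, and all constants are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of a factored actor-critic update for a
# population of homogeneous agents sharing one policy. Hypothetical setup;
# not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, N_AGENTS = 4, 3, 50
ALPHA_ACTOR, ALPHA_CRITIC = 0.05, 0.1

theta = np.zeros((N_STATES, N_ACTIONS))  # shared policy logits (tabular)
q = np.zeros((N_STATES, N_ACTIONS))      # local critic term q(s, a), shared across agents

def policy(s):
    """Softmax over actions for an agent's local state s."""
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def step(states, actions):
    """Toy environment (hypothetical): noisy local rewards, random transitions."""
    rewards = rng.normal(loc=(actions == states % N_ACTIONS).astype(float), scale=0.1)
    next_states = rng.integers(N_STATES, size=N_AGENTS)
    return rewards, next_states

states = rng.integers(N_STATES, size=N_AGENTS)
for t in range(200):
    actions = np.array([rng.choice(N_ACTIONS, p=policy(s)) for s in states])
    rewards, next_states = step(states, actions)

    # Critic: regress each local term q(s_i, a_i) toward the agent's local
    # reward (one-step, undiscounted, purely for brevity).
    for s, a, r in zip(states, actions, rewards):
        q[s, a] += ALPHA_CRITIC * (r - q[s, a])

    # Actor: policy gradient with the factored critic. The joint action value
    # is approximated by a sum of local terms, so agent i's update uses only
    # q(s_i, a_i).
    for s, a in zip(states, actions):
        p = policy(s)
        grad_log_pi = -p
        grad_log_pi[a] += 1.0  # grad of log softmax w.r.t. logits = one_hot(a) - p
        theta[s] += ALPHA_ACTOR * q[s, a] * grad_log_pi

    states = next_states
```

Even in this toy form, the design point the abstract emphasizes is visible: because the joint action value is approximated as a sum of local terms, the critic can be fit from per-agent reward signals, and each policy-gradient update touches only the local term for that agent's state and action.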

Bibliographic Details
Main Authors: NGUYEN, Duc Thien, KUMAR, Akshat, LAU, Hoong Chuin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2017
Subjects: Collective behavior; Environment dynamics; Multi-agent planning; Optimization problems; Reinforcement learning method; Sequential decision making; Synthetic benchmark; Value function approximation; Artificial Intelligence and Robotics; Computer Sciences; Operations Research, Systems Engineering and Industrial Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/3871
https://ink.library.smu.edu.sg/context/sis_research/article/4873/viewcontent/7019_policy_gradient_with_value_function_approximation_for_collective_multiagent_planning.pdf
Institution: Singapore Management University
Date: 2017-12-01
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
License: http://creativecommons.org/licenses/by-nc-nd/4.0/