Policy gradient with value function approximation for collective multiagent planning

Decentralized (PO)MDPs provide an expressive framework for sequential decision making in a multiagent system. Given their computational complexity, recent research has focused on tractable yet practical subclasses of Dec-POMDPs. We address such a subclass called CDec-POMDP where the collective behav...
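The title and abstract refer to policy gradient methods that use a learned value function as a baseline (an actor-critic style setup). As a rough, generic illustration only, and not the paper's CDec-POMDP-specific algorithm, the sketch below shows REINFORCE with a tabular value-function baseline on a made-up toy chain environment; the environment, features, and hyperparameters are illustrative assumptions.

    # Minimal, generic sketch: policy gradient with a learned value-function
    # baseline. NOT the CDec-POMDP actor-critic method from the paper.
    # The toy chain environment and hyperparameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.95
    theta = np.zeros((N_STATES, N_ACTIONS))   # softmax policy parameters
    w = np.zeros(N_STATES)                    # tabular value-function baseline

    def policy(s):
        prefs = theta[s] - theta[s].max()
        p = np.exp(prefs)
        return p / p.sum()

    def step(s, a):
        # Toy dynamics: action 0 moves left, action 1 moves right;
        # reward 1 for reaching the right end, which ends the episode.
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        done = (s2 == N_STATES - 1)
        return s2, (1.0 if done else 0.0), done

    for episode in range(2000):
        s, traj, done = 0, [], False
        while not done and len(traj) < 50:
            a = rng.choice(N_ACTIONS, p=policy(s))
            s2, r, done = step(s, a)
            traj.append((s, a, r))
            s = s2

        G = 0.0
        for s, a, r in reversed(traj):        # returns computed backwards
            G = r + GAMMA * G
            advantage = G - w[s]              # baseline reduces gradient variance
            w[s] += 0.05 * advantage          # fit baseline toward the return
            grad_log = -policy(s)             # grad of log-softmax w.r.t. theta[s]
            grad_log[a] += 1.0
            theta[s] += 0.01 * advantage * grad_log

    print("Right-action probabilities:", [round(policy(s)[1], 2) for s in range(N_STATES)])

On this toy chain the policy quickly prefers the right-moving action in every state; the value-function baseline serves only to reduce the variance of the gradient estimate, which is the general idea the paper's title points to.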

Bibliographic Details
Main Authors: NGUYEN, Duc Thien; KUMAR, Akshat; LAU, Hoong Chuin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2017
Online Access: https://ink.library.smu.edu.sg/sis_research/3871
https://ink.library.smu.edu.sg/context/sis_research/article/4873/viewcontent/7019_policy_gradient_with_value_function_approximation_for_collective_multiagent_planning.pdf
Institution: Singapore Management University

Similar Items