Decentralized planning in stochastic environments with submodular rewards

Bibliographic Details
Main Authors: Rajiv Ranjan Kumar, Pradeep Varakantham, Akshat Kumar
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2017
Online Access: https://ink.library.smu.edu.sg/sis_research/3549
https://ink.library.smu.edu.sg/context/sis_research/article/4550/viewcontent/14928_66557_1_PB.pdf
Institution: Singapore Management University
Description
Summary: The Decentralized Markov Decision Process (Dec-MDP) provides a rich framework to represent cooperative, decentralized, and stochastic planning problems under transition uncertainty. However, solving a Dec-MDP to generate coordinated yet decentralized policies is NEXP-Hard. Researchers have made significant progress in providing approximate approaches to improve scalability with respect to the number of agents. However, there has been little or no research devoted to finding guarantees on solution quality for approximate approaches considering multiple agents (more than two agents). We have a similar situation with respect to the competitive decentralized planning problem and the Stochastic Game (SG) model. To address this, we identify models in the cooperative and competitive case that rely on submodular rewards, where we show that existing approximate approaches can provide strong quality guarantees (a priori, and for the cooperative case also a posteriori guarantees). We then provide solution approaches and demonstrate improved online guarantees on benchmark problems from the literature for the cooperative case.
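
For background (standard definitions, not taken from this record; the exact constants in the paper may differ): a reward function f : 2^V -> R over a ground set V is submodular if it exhibits diminishing returns, and for monotone submodular objectives the classical greedy algorithm of Nemhauser, Wolsey, and Fisher (1978) carries an a priori approximation guarantee of the general flavor the abstract refers to:

f(A \cup \{x\}) - f(A) \ge f(B \cup \{x\}) - f(B) \quad \text{for all } A \subseteq B \subseteq V,\ x \in V \setminus B

f(S_{\text{greedy}}) \ge \left(1 - \tfrac{1}{e}\right) \max_{|S| \le k} f(S)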