Decentralized planning in stochastic environments with submodular rewards
Decentralized Markov Decision Process (Dec-MDP) provides a rich framework to represent cooperative, decentralized and stochastic planning problems under transition uncertainty. However, solving a Dec-MDP to generate coordinated yet decentralized policies is NEXP-Hard. Researchers have made significant pro...
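As a loose illustration only (not code or notation from the paper), a submodular reward captures diminishing returns: the marginal value an additional agent contributes shrinks as the set of agents already acting grows. A minimal Python sketch with a hypothetical coverage-style reward:

    # Hypothetical coverage-style reward used only to illustrate submodularity;
    # the sets and names below are invented, not taken from the paper.
    def coverage_reward(agent_observations):
        """Reward = number of distinct targets covered by the chosen agents."""
        covered = set()
        for obs in agent_observations:
            covered |= obs
        return len(covered)

    agent_a = {1, 2}
    agent_b = {2, 3}
    agent_c = {3, 4}

    # Marginal gain of adding agent_c to a small coalition...
    gain_alone = coverage_reward([agent_a, agent_c]) - coverage_reward([agent_a])
    # ...versus adding it to a larger coalition whose coverage already overlaps.
    gain_after_b = coverage_reward([agent_a, agent_b, agent_c]) - coverage_reward([agent_a, agent_b])

    # Diminishing returns (submodularity): the later gain is never larger.
    assert gain_after_b <= gain_alone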
Main Authors: KUMAR, Rajiv Ranjan; VARAKANTHAM, Pradeep; KUMAR, Akshat
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2017
Online Access: https://ink.library.smu.edu.sg/sis_research/3549
https://ink.library.smu.edu.sg/context/sis_research/article/4550/viewcontent/14928_66557_1_PB.pdf
Institution: Singapore Management University
Similar Items
- Approximate difference rewards for scalable multiagent reinforcement learning
  by: SINGH, Arambam James, et al.
  Published: (2021)
- Decentralized Stochastic Planning with Anonymity in Interactions
  by: VARAKANTHAM, Pradeep, et al.
  Published: (2014)
- Exploiting Coordination Locales in Distributed POMDPs via Social Model Shaping
  by: VARAKANTHAM, Pradeep, et al.
  Published: (2009)
- Distributed Model Shaping for Scaling to Decentralized POMDPs with hundreds of agents
  by: VELAGAPUDI, Prasanna, et al.
  Published: (2011)