Winning back the CUP for Distributed POMDPs: Planning over continuous belief spaces
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are evolving as a popular approach for modeling multiagent systems, and many different algorithms have been proposed to obtain locally or globally optimal policies. Unfortunately, most of these algorithms have either been...
Saved in:
Main Authors: VARAKANTHAM, Pradeep; NAIR, Ranjit; TAMBE, Milind; YOKOO, Makoto
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2006
Online Access: https://ink.library.smu.edu.sg/sis_research/940
https://ink.library.smu.edu.sg/context/sis_research/article/1939/viewcontent/AAMAS2006.pdf
Institution: Singapore Management University
Similar Items
- Networked Distributed POMDPs: A Synthesis of Distributed Constraint Optimization and POMDPs
  by: NAIR, Ranjit, et al.
  Published: (2005)
- Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies
  by: VARAKANTHAM, Pradeep Reddy, et al.
  Published: (2007)
- Exploiting Belief Bounds: Practical POMDPs for Personal Assistant Agents
  by: VARAKANTHAM, Pradeep, et al.
  Published: (2005)
- Introducing Communication in Dis-POMDPs with Locality of Interaction
  by: TASAKI, Makoto, et al.
  Published: (2010)
- Towards Efficient Computation of Quality Bounded Solutions in POMDPs: Expected Value Approximation and Dynamic Disjunctive Beliefs
  by: VARAKANTHAM, Pradeep Reddy, et al.
  Published: (2007)