Revisiting Risk-Sensitive MDPs: New Algorithms and Results
While Markov Decision Processes (MDPs) have been shown to be effective models for planning under uncertainty, the objective of minimizing the expected cumulative cost is inappropriate for high-stakes planning problems. As such, Yu, Lin, and Yan (1998) introduced the Risk-Sensitive MDP (RSMDP) model, whe...
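Since only this excerpt of the abstract is available in the record, the following is a minimal, hypothetical sketch of the standard expected-cost MDP objective that the abstract contrasts with the risk-sensitive formulation; the toy states, actions, and costs below are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch (not the paper's algorithm): value iteration for an ordinary
# MDP that minimizes expected cumulative cost, i.e. the standard objective the
# abstract argues is inappropriate for high-stakes planning. Toy MDP only.

# transitions[s][a] -> list of (next_state, probability)
# costs[s][a]       -> immediate cost of taking action a in state s
transitions = {
    0: {"safe": [(1, 1.0)], "risky": [(2, 0.5), (0, 0.5)]},
    1: {"safe": [(2, 1.0)], "risky": [(2, 0.9), (0, 0.1)]},
}
costs = {
    0: {"safe": 2.0, "risky": 1.0},
    1: {"safe": 2.0, "risky": 1.0},
}
GOAL = 2  # absorbing goal state with zero cost


def value_iteration(eps=1e-6):
    """Iterate V(s) = min_a [ c(s,a) + sum_s' P(s'|s,a) * V(s') ] to a fixed point."""
    V = {0: 0.0, 1: 0.0, GOAL: 0.0}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = min(
                costs[s][a] + sum(p * V[s2] for s2, p in outcomes)
                for a, outcomes in actions.items()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V


# Expected cost-to-go from each state; converges to roughly {0: 2.0, 1: 1.2, 2: 0.0}.
print(value_iteration())
```

Note that this criterion averages over outcomes, so a policy can look optimal in expectation while still carrying a nontrivial chance of very high cost, which is the motivation the abstract gives for the risk-sensitive formulation.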
Main Authors: HOU, Ping; YEOH, William; VARAKANTHAM, Pradeep Reddy
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2014
Online Access: https://ink.library.smu.edu.sg/sis_research/2089
https://ink.library.smu.edu.sg/context/sis_research/article/3088/viewcontent/icaps12_rsmdp.pdf
Institution: Singapore Management University
Similar Items
- Risk-Sensitive Planning in Partially Observable
  by: MARECKI, Janusz, et al.
  Published: (2010)
- Caching Schemes for DCOP Search Algorithms
  by: YEOH, William, et al.
  Published: (2009)
- Unleashing Dec-MDPs in Security Games: Enabling Effective Defender Teamwork
  by: Shieh, Eric, et al.
  Published: (2014)
- Incremental DCOP Search Algorithms for Solving Dynamic DCOP Problems
  by: YEOH, William, et al.
  Published: (2011)
- Event-Detecting Multi-Agent MDPs: Complexity and Constant-Factor Approximation
  by: KUMAR, Akshat, et al.
  Published: (2009)