Certified policy verification and synthesis for MDPs under distributional reach-avoidance properties
Markov Decision Processes (MDPs) are a classical model for decision making in the presence of uncertainty. Often they are viewed as state transformers with planning objectives defined with respect to paths over MDP states. An increasingly popular alternative is to view them as distribution transform...
Main Authors: AKSHAY, S.; CHATTERJEE, Krishnendu; MEGGENDORFER, Tobias; ZIKELIC, Dorde
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9340 ; https://ink.library.smu.edu.sg/context/sis_research/article/10340/viewcontent/0001.pdf
Institution: Singapore Management University
Similar Items
- MDPs as distribution transformers: Affine invariant synthesis for safety objectives
  by: AKSHAY, S., et al. Published: (2023)
- An introduction to multi-agent systems
  by: Balaji, P.G., et al. Published: (2014)
- A learner-verifier framework for neural network controllers and certificates of stochastic systems
  by: CHATTERJEE, Krishnendu, et al. Published: (2023)
- Towards formal modeling and verification of cloud architectures: A case study on hadoop
  by: Reddy, G.S., et al. Published: (2014)
- Solving long-run average reward robust MDPs via stochastic games
  by: CHATTERJEE, Krishnendu, et al. Published: (2024)