Solving long-run average reward robust MDPs via stochastic games

Markov decision processes (MDPs) provide a standard framework for sequential decision making under uncertainty. However, MDPs do not take uncertainty in transition probabilities into account. Robust Markov decision processes (RMDPs) address this shortcoming of MDPs by assigning to each transition an uncertainty set rather than a single probability value. In this work, we consider polytopic RMDPs in which all uncertainty sets are polytopes and study the problem of solving long-run average reward polytopic RMDPs. We present a novel perspective on this problem and show that it can be reduced to solving long-run average reward turn-based stochastic games with finite state and action spaces. This reduction allows us to derive several important consequences that were hitherto not known to hold for polytopic RMDPs. First, we derive new computational complexity bounds for solving long-run average reward polytopic RMDPs, showing for the first time that the threshold decision problem for them is in NP ∩ coNP and that they admit a randomized algorithm with sub-exponential expected runtime. Second, we present Robust Polytopic Policy Iteration (RPPI), a novel policy iteration algorithm for solving long-run average reward polytopic RMDPs. Our experimental evaluation shows that RPPI is much more efficient in solving long-run average reward polytopic RMDPs compared to state-of-the-art methods based on value iteration.
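The key reduction described in the abstract rests on the observation that, against a polytopic uncertainty set, the adversary's worst case is attained at a vertex of the polytope, so a polytopic RMDP can be viewed as a finite turn-based stochastic game in which the agent picks an action and the adversary then picks a vertex. The sketch below illustrates this idea on a toy model; the example RMDP, the discounted proxy for the long-run average criterion, and the brute-force search over deterministic policies are all illustrative assumptions and not the paper's RPPI algorithm.

```python
# Minimal sketch of the vertex-adversary view of a polytopic RMDP.
# The toy model and the brute-force policy search are illustrative only;
# the paper's RPPI algorithm and long-run average analysis are not reproduced here.

import itertools

# Toy polytopic RMDP: two states, two actions; each (state, action) pair has a
# transition polytope given by its finite set of vertex distributions.
states = [0, 1]
actions = ["a", "b"]
reward = {(0, "a"): 1.0, (0, "b"): 0.0, (1, "a"): 0.0, (1, "b"): 2.0}
# vertices[(s, a)] lists probability vectors over `states`.
vertices = {
    (0, "a"): [[0.9, 0.1], [0.5, 0.5]],
    (0, "b"): [[1.0, 0.0]],
    (1, "a"): [[0.2, 0.8], [0.6, 0.4]],
    (1, "b"): [[0.0, 1.0], [0.3, 0.7]],
}

def robust_value(policy, gamma=0.99, iters=2000):
    """Worst-case value of a stationary deterministic policy, with a
    discounted criterion used here as a stand-in for long-run average reward.
    The adversary, as in the turn-based game view, picks the worst vertex."""
    v = [0.0 for _ in states]
    for _ in range(iters):
        new_v = []
        for s in states:
            a = policy[s]
            worst = min(sum(p * v[t] for t, p in enumerate(dist))
                        for dist in vertices[(s, a)])
            new_v.append(reward[(s, a)] + gamma * worst)
        v = new_v
    return v

def robust_optimal_by_enumeration():
    """Brute-force search over the finitely many deterministic policies:
    evaluate each against the vertex adversary and keep the best one."""
    best_policy, best_value = None, None
    for choice in itertools.product(actions, repeat=len(states)):
        policy = dict(enumerate(choice))
        value = robust_value(policy)
        if best_value is None or sum(value) > sum(best_value):
            best_policy, best_value = policy, value
    return best_policy, best_value

if __name__ == "__main__":
    policy, value = robust_optimal_by_enumeration()
    print("robust-optimal policy:", policy)
    print("worst-case values:", value)
```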


Bibliographic Details
Main Authors: CHATTERJEE, Krishnendu, GOHARSHADY, Ehsan Kafshdar, KARRABI, Mehrdad, NOVOTNÝ, Petr, ZIKELIC, Dorde
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9341
https://ink.library.smu.edu.sg/context/sis_research/article/10341/viewcontent/0741.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10341
record_format dspace
spelling sg-smu-ink.sis_research-103412024-10-08T06:55:21Z Solving long-run average reward robust MDPs via stochastic games CHATTERJEE, Krishnendu GOHARSHADY, Ehsan Kafshdar KARRABI, Mehrdad NOVOTNÝ, Petr ZIKELIC, Dorde Markov decision processes (MDPs) provide a standard framework for sequential decision making under uncertainty. However, MDPs do not take uncertainty in transition probabilities into account. Robust Markov decision processes (RMDPs) address this shortcoming of MDPs by assigning to each transition an uncertainty set rather than a single probability value. In this work, we consider polytopic RMDPs in which all uncertainty sets are polytopes and study the problem of solving long-run average reward polytopic RMDPs. We present a novel perspective on this problem and show that it can be reduced to solving long-run average reward turn-based stochastic games with finite state and action spaces. This reduction allows us to derive several important consequences that were hitherto not known to hold for polytopic RMDPs. First, we derive new computational complexity bounds for solving long-run average reward polytopic RMDPs, showing for the first time that the threshold decision problem for them is in NP ∩ coNP and that they admit a randomized algorithm with sub-exponential expected runtime. Second, we present Robust Polytopic Policy Iteration (RPPI), a novel policy iteration algorithm for solving long-run average reward polytopic RMDPs. Our experimental evaluation shows that RPPI is much more efficient in solving long-run average reward polytopic RMDPs compared to state-of-the-art methods based on value iteration. 2024-08-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9341 info:doi/10.24963/ijcai.2024/741 https://ink.library.smu.edu.sg/context/sis_research/article/10341/viewcontent/0741.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Artificial Intelligence and Robotics
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Artificial Intelligence and Robotics
spellingShingle Artificial Intelligence and Robotics
CHATTERJEE, Krishnendu
GOHARSHADY, Ehsan Kafshdar
KARRABI, Mehrdad
NOVOTNÝ, Petr
ZIKELIC, Dorde
Solving long-run average reward robust MDPs via stochastic games
description Markov decision processes (MDPs) provide a standard framework for sequential decision making under uncertainty. However, MDPs do not take uncertainty in transition probabilities into account. Robust Markov decision processes (RMDPs) address this shortcoming of MDPs by assigning to each transition an uncertainty set rather than a single probability value. In this work, we consider polytopic RMDPs in which all uncertainty sets are polytopes and study the problem of solving long-run average reward polytopic RMDPs. We present a novel perspective on this problem and show that it can be reduced to solving long-run average reward turn-based stochastic games with finite state and action spaces. This reduction allows us to derive several important consequences that were hitherto not known to hold for polytopic RMDPs. First, we derive new computational complexity bounds for solving long-run average reward polytopic RMDPs, showing for the first time that the threshold decision problem for them is in NP ∩ coNP and that they admit a randomized algorithm with sub-exponential expected runtime. Second, we present Robust Polytopic Policy Iteration (RPPI), a novel policy iteration algorithm for solving long-run average reward polytopic RMDPs. Our experimental evaluation shows that RPPI is much more efficient in solving long-run average reward polytopic RMDPs compared to state-of-the-art methods based on value iteration.
format text
author CHATTERJEE, Krishnendu
GOHARSHADY, Ehsan Kafshdar
KARRABI, Mehrdad
NOVOTNÝ, Petr
ZIKELIC, Dorde
author_facet CHATTERJEE, Krishnendu
GOHARSHADY, Ehsan Kafshdar
KARRABI, Mehrdad
NOVOTNÝ, Petr
ZIKELIC, Dorde
author_sort CHATTERJEE, Krishnendu
title Solving long-run average reward robust MDPs via stochastic games
title_short Solving long-run average reward robust MDPs via stochastic games
title_full Solving long-run average reward robust MDPs via stochastic games
title_fullStr Solving long-run average reward robust MDPs via stochastic games
title_full_unstemmed Solving long-run average reward robust MDPs via stochastic games
title_sort solving long-run average reward robust mdps via stochastic games
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9341
https://ink.library.smu.edu.sg/context/sis_research/article/10341/viewcontent/0741.pdf
_version_ 1814047914877517824