Safe MDP planning by learning temporal patterns of undesirable trajectories and averting negative side effects
In safe MDP planning, a cost function based on the current state and action is often used to specify safety aspects. In the real world, the state representation used may lack sufficient fidelity to specify such safety constraints, and operating with an incomplete model can produce unintended negative side effects (NSEs). To address these challenges, we first associate safety signals with state-action trajectories (rather than just the immediate state-action pair), which makes our safety model highly general. We also assume that categorical safety labels are given for different trajectories, rather than a numerical cost function, which is harder for the problem designer to specify. We then employ a supervised learning model to learn such non-Markovian safety patterns. Second, we develop a Lagrange multiplier method that incorporates the safety model and the underlying MDP model in a single computation graph to facilitate agent learning of safe behaviors. Finally, our empirical results on a variety of discrete and continuous domains show that this approach can satisfy complex non-Markovian safety constraints while optimizing the agent's total returns, is highly scalable, and outperforms the previous best approach for Markovian NSEs.
Main Authors: | LOW, Siow Meng; KUMAR, Akshat; SANNER, Scott |
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2023 |
Subjects: | Artificial intelligence; Cost functions; Lagrange multipliers; Learning systems; Artificial Intelligence and Robotics; Databases and Information Systems; Programming Languages and Compilers |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8604 https://ink.library.smu.edu.sg/context/sis_research/article/9607/viewcontent/2304.03081.pdf |
Institution: | Singapore Management University |
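Though not part of the catalog record itself, the two ingredients the abstract describes can be sketched in a few lines. Everything below is an illustrative assumption of ours, not the authors' implementation: a hand-written stand-in for the learned trajectory-safety model, and a dual (Lagrange multiplier) update that penalizes the policy while the learned constraint is violated.

```python
# Illustrative sketch only: all names here are ours, not the authors' code.
# (1) A safety model that scores whole state-action trajectories
#     (non-Markovian), standing in for a supervised model trained on
#     categorical safety labels.
# (2) A Lagrange multiplier update trading return against that score.

def unsafe_score(trajectory):
    # Stand-in for a learned non-Markovian safety model: flags any trajectory
    # that visits state "hot" twice within three consecutive steps -- a
    # temporal pattern no per-step state-action cost function can express.
    states = [s for s, _ in trajectory]
    for i in range(len(states) - 2):
        if states[i:i + 3].count("hot") >= 2:
            return 1.0
    return 0.0

def dual_update(lam, avg_unsafe, delta, lr=0.5):
    # Gradient ascent on the multiplier: lambda grows while the constraint
    # E[unsafe] <= delta is violated, and is clipped at zero otherwise.
    return max(0.0, lam + lr * (avg_unsafe - delta))

# Toy rollout batch: each trajectory is a list of (state, action) pairs.
batch = [
    [("cool", "a"), ("hot", "a"), ("hot", "b")],   # unsafe temporal pattern
    [("cool", "a"), ("hot", "b"), ("cool", "a")],  # safe
]
avg_unsafe = sum(unsafe_score(t) for t in batch) / len(batch)
lam = dual_update(lam=0.0, avg_unsafe=avg_unsafe, delta=0.1)

# The penalized objective the policy parameters would then minimize:
# loss(theta) = -expected_return(theta) + lam * avg_unsafe(theta)
```

In the paper's setting the stand-in classifier would be a trained model and both terms would sit in one differentiable computation graph; this toy version only shows the shape of the trade-off.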
id |
sg-smu-ink.sis_research-9607 |
record_format |
dspace |
last_modified |
2024-01-25T08:29:20Z |
author |
LOW, Siow Meng; KUMAR, Akshat; SANNER, Scott |
date |
2023-07-01T07:00:00Z |
format |
text (application/pdf) |
doi |
10.1609/icaps.v33i1.27241 |
url |
https://ink.library.smu.edu.sg/sis_research/8604 https://ink.library.smu.edu.sg/context/sis_research/article/9607/viewcontent/2304.03081.pdf |
license |
http://creativecommons.org/licenses/by-nc-nd/4.0/ |
collection |
Research Collection School Of Computing and Information Systems |
language |
eng |
topic |
Artificial intelligence; Cost functions; Lagrange multipliers; Learning systems; Artificial Intelligence and Robotics; Databases and Information Systems; Programming Languages and Compilers |
description |
In safe MDP planning, a cost function based on the current state and action is often used to specify safety aspects. In the real world, the state representation used may lack sufficient fidelity to specify such safety constraints, and operating with an incomplete model can produce unintended negative side effects (NSEs). To address these challenges, we first associate safety signals with state-action trajectories (rather than just the immediate state-action pair), which makes our safety model highly general. We also assume that categorical safety labels are given for different trajectories, rather than a numerical cost function, which is harder for the problem designer to specify. We then employ a supervised learning model to learn such non-Markovian safety patterns. Second, we develop a Lagrange multiplier method that incorporates the safety model and the underlying MDP model in a single computation graph to facilitate agent learning of safe behaviors. Finally, our empirical results on a variety of discrete and continuous domains show that this approach can satisfy complex non-Markovian safety constraints while optimizing the agent's total returns, is highly scalable, and outperforms the previous best approach for Markovian NSEs. |