Constrained reinforcement learning in hard exploration problems

One approach to guaranteeing safety in Reinforcement Learning is through cost constraints that are imposed on trajectories. Recent works in constrained RL have developed methods that ensure constraints can be enforced even at learning time while maximizing the overall value of the policy. Unfortunately, as demonstrated in our experimental results, such approaches do not perform well on complex multi-level tasks with longer episode lengths or sparse rewards. To that end, we propose a scalable hierarchical approach for constrained RL problems that employs backward cost value functions in the context of a task hierarchy and a novel intrinsic reward function in the lower levels of the hierarchy to enable cost constraint enforcement. One of our key contributions is proving that backward value functions are theoretically viable even when there are multiple levels of decision making. We also show that our new approach, referred to as Hierarchically Limited consTraint Enforcement (HiLiTE), significantly improves on state-of-the-art constrained RL approaches on many benchmark problems from the literature. We further demonstrate that this performance (on value and constraint enforcement) clearly outperforms the best existing approaches for constrained RL and hierarchical RL.
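
For context, the following is a minimal sketch of the standard constrained-MDP objective that trajectory cost constraints of this kind typically denote, together with a backward cost value function of the sort described above. The notation (reward r, cost c, budget d, discount gamma) is assumed for illustration and may differ from the paper's exact formulation.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of a constrained-MDP objective (assumed notation, not the paper's
% exact formulation): maximize expected discounted reward subject to an
% expected trajectory-cost budget d.
\[
  \max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t)\Big]
  \quad \text{s.t.} \quad
  \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{T} c(s_t, a_t)\Big] \le d
\]
% A backward cost value function tracks the cost already accrued up to time t,
% letting a learner check constraint satisfaction mid-episode rather than only
% at the end of a trajectory.
\[
  \overleftarrow{V}^{\pi}_{c}(s_t) = \mathbb{E}_{\pi}\!\Big[\sum_{k=0}^{t} c(s_k, a_k)\,\Big|\, s_t\Big]
\]
\end{document}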

Bibliographic Details
Main Authors: PATHMANATHAN, Pankayaraj; VARAKANTHAM, Pradeep
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: reinforcement learning; Artificial Intelligence and Robotics
DOI: 10.1609/aaai.v37i12.26757
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Online Access:https://ink.library.smu.edu.sg/sis_research/8590
https://ink.library.smu.edu.sg/context/sis_research/article/9593/viewcontent/26757_Article_Text_30820_1_2_20230626.pdf
Institution: Singapore Management University