Reward penalties on augmented states for solving richly constrained RL effectively

Constrained Reinforcement Learning employs trajectory-based cost constraints (such as expected cost, Value at Risk, or Conditional VaR cost) to compute safe policies. The challenge lies in handling these constraints effectively while optimizing expected reward. Existing methods convert such trajectory-based constraints into local cost constraints, but they rely on cost estimates, leading to either aggressive or conservative solutions with regard to cost. We propose an unconstrained formulation that employs reward penalties over states augmented with costs to compute safe policies. Unlike standard primal-dual methods, our approach penalizes only infeasible trajectories through state augmentation. This ensures that increasing the penalty parameter always guarantees a feasible policy, a feature lacking in primal-dual methods. Our approach exhibits strong empirical performance and theoretical properties, offering a fresh paradigm for solving complex Constrained RL problems, including rich constraints like expected cost, Value at Risk, and Conditional Value at Risk. Our experimental results demonstrate superior performance compared to leading approaches across various constraint types on multiple benchmark problems.
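To make the core idea concrete, below is a minimal, illustrative sketch (not the authors' released code) of reward penalties on cost-augmented states: a Gymnasium-style wrapper appends the remaining cost budget to each observation and subtracts a penalty from the reward only while the trajectory's accumulated cost exceeds the budget, so feasible trajectories are left untouched. The wrapper name, the cost_budget and penalty_coef parameters, the Box observation space, and the assumption that the per-step cost is reported in info["cost"] are all illustrative choices, not details taken from the paper.

```python
import numpy as np
import gymnasium as gym


class CostAugmentedPenaltyWrapper(gym.Wrapper):
    """Appends the remaining cost budget to the observation and applies a
    reward penalty only while the accumulated trajectory cost exceeds the
    budget, i.e. only infeasible trajectories are penalized."""

    def __init__(self, env, cost_budget, penalty_coef, cost_key="cost"):
        super().__init__(env)
        self.cost_budget = cost_budget    # per-trajectory cost limit
        self.penalty_coef = penalty_coef  # penalty weight on infeasible steps
        self.cost_key = cost_key          # where the per-step cost appears in `info`
        self.accumulated_cost = 0.0
        # Augment a Box observation space with one extra dimension for the
        # remaining budget (assumes the wrapped env uses a Box space).
        low = np.append(env.observation_space.low, -np.inf)
        high = np.append(env.observation_space.high, np.inf)
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)

    def _augment(self, obs):
        remaining = self.cost_budget - self.accumulated_cost
        return np.append(np.asarray(obs, dtype=np.float64), remaining)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.accumulated_cost = 0.0
        return self._augment(obs), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self.accumulated_cost += float(info.get(self.cost_key, 0.0))
        if self.accumulated_cost > self.cost_budget:
            # Trajectory is already infeasible: shape the reward downward.
            reward = reward - self.penalty_coef
        return self._augment(obs), reward, terminated, truncated, info
```

Any off-the-shelf unconstrained RL algorithm can then be trained on the wrapped environment; in this sketch, raising penalty_coef pushes the learned policy toward respecting the cost budget, in the spirit of the guarantee described in the abstract.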

Bibliographic Details
Main Authors: HAO, Jiang, MAI, Tien, VARAKANTHAM, Pradeep, HOANG, Minh Huy
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Safe reinforcement learning; Reward penalties; Constraint optimization; Reinforcement learning; Markov models (MDPs, POMDPs); Stochastic optimization; Artificial Intelligence and Robotics
DOI: 10.1609/aaai.v38i18.29962
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/9685
https://ink.library.smu.edu.sg/context/sis_research/article/10685/viewcontent/29962_Article_Text_34016_1_2_20240324.pdf
Institution: Singapore Management University