Generative modelling of stochastic actions with arbitrary constraints in reinforcement learning

Many problems in Reinforcement Learning (RL) seek an optimal policy over large, discrete, multidimensional yet unordered action spaces; these include problems in randomized allocation of resources, such as placements of multiple security resources and emergency response units. A challenge in this setting is that the underlying action space is categorical (discrete and unordered) and large, and existing RL methods do not perform well on it. Moreover, these problems require validity of the realized action (allocation); this validity constraint is often difficult to express compactly in closed mathematical form. The allocation nature of the problem also favors stochastic optimal policies, where they exist. In this work, we address these challenges by (1) applying a (state-)conditional normalizing flow to compactly represent the stochastic policy; the compactness arises because the network produces only one sampled action and the corresponding log-probability of that action, which is then used by an actor-critic method; and (2) employing an invalid-action rejection method (via a valid-action oracle) to update the base policy. The action rejection is enabled by a modified policy gradient that we derive. Finally, we conduct extensive experiments showing the scalability of our approach relative to prior methods, and its ability to enforce arbitrary state-conditional constraints on the support of the action distribution in any state.
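
To make the two ideas in the abstract concrete, here is a minimal, hypothetical Python/PyTorch sketch, not the authors' implementation (see the linked PDF for that). It reduces the conditional normalizing flow to a single conditional affine transform over a continuous action space (the paper targets large categorical spaces) and shows the rejection step against a validity oracle; the names AffineFlowPolicy, sample_valid, is_valid, and max_tries are assumptions made for illustration.

import torch
import torch.nn as nn

class AffineFlowPolicy(nn.Module):
    # Stand-in for a state-conditional normalizing flow: a single
    # invertible affine transform of Gaussian base noise, so that
    # log pi(a|s) is exact via the change-of-variables formula.
    # A real flow would stack several invertible layers.
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.action_dim = action_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * action_dim),  # per-dimension shift, log-scale
        )

    def sample(self, state):
        # state: 1-D tensor of shape (state_dim,)
        shift, log_scale = self.net(state).chunk(2, dim=-1)
        base = torch.distributions.Normal(torch.zeros(self.action_dim),
                                          torch.ones(self.action_dim))
        z = base.sample()
        action = (shift + log_scale.exp() * z).detach()  # a = f(z; s)
        # Recompute z from the fixed action so that log_prob carries the
        # score-function gradient d/dtheta log pi(a|s) used by
        # policy-gradient methods; numerically z_fixed equals z.
        z_fixed = (action - shift) * torch.exp(-log_scale)
        log_prob = base.log_prob(z_fixed).sum() - log_scale.sum()
        return action, log_prob

def sample_valid(policy, state, is_valid, max_tries=100):
    # Rejection step: resample from the base policy until the validity
    # oracle accepts, returning the action and its log-probability.
    for _ in range(max_tries):
        action, log_prob = policy.sample(state)
        if is_valid(state, action):
            return action, log_prob
    raise RuntimeError("validity oracle rejected all sampled actions")

The paper additionally derives a modified policy gradient that accounts for the rejection step when updating the base policy; the sketch above omits that correction and only indicates where the sampled action and its log-probability would feed an actor-critic update.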

Bibliographic Details
Main Authors: CHEN, Changyu, KARUNASENA, Ramesha, NGUYEN, Thanh Hong, SINHA, Arunesh, VARAKANTHAM, Pradeep
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Databases and Information Systems
DOI: 10.48550/arXiv.2311.15341
License: http://creativecommons.org/licenses/by-nc-nd/4.0/ (CC BY-NC-ND 4.0)
Collection: Research Collection School Of Computing and Information Systems, InK@SMU
Online Access: https://ink.library.smu.edu.sg/sis_research/8589
https://ink.library.smu.edu.sg/context/sis_research/article/9592/viewcontent/Generative.pdf
Institution: Singapore Management University