Neural-progressive hedging: Enforcing constraints in reinforcement learning with stochastic programming

We propose a framework, called neural-progressive hedging (NP), that leverages stochastic programming during the online phase of executing a reinforcement learning (RL) policy. The goal is to ensure feasibility with respect to constraints and risk-based objectives such as conditional value-at-risk (CVaR) during the execution of the policy, using probabilistic models of the state transitions to guide policy adjustments. The framework is particularly amenable to the class of sequential resource allocation problems, since feasibility with respect to typical resource constraints cannot otherwise be enforced in a scalable manner. The NP framework provides an alternative that adds only modest overhead during the online phase. Experimental results demonstrate the efficacy of the NP framework on two continuous real-world tasks: (i) the portfolio optimization problem with liquidity constraints for financial planning, characterized by non-stationary state distributions; and (ii) the dynamic repositioning problem in bike sharing systems, which embodies the class of supply-demand matching problems. We show that the NP framework produces policies that outperform deep RL and other baseline approaches, adapting to non-stationarity whilst satisfying structural constraints and accommodating risk measures in the resulting policies. Additional benefits of the NP framework are ease of implementation and better explainability of the policies.
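For context on the risk measure named in the abstract (this background is not part of the record): CVaR at level \alpha of a loss L is standardly given by the Rockafellar-Uryasev formulation,

    \mathrm{CVaR}_\alpha(L) \;=\; \min_{\eta \in \mathbb{R}} \left\{ \eta + \frac{1}{1-\alpha}\, \mathbb{E}\big[(L-\eta)_+\big] \right\},

that is, the expected loss in the worst (1-\alpha) fraction of outcomes.

The abstract's central mechanism, adjusting the RL policy's proposed action online so that it satisfies resource constraints, can be illustrated with a minimal sketch. The paper itself solves a scenario-based stochastic program via progressive hedging; the deterministic Euclidean projection below is only a simplified stand-in for that online step, and every name here (project_capped_simplex, the cap parameter, the example weights) is illustrative rather than taken from the paper.

    import numpy as np

    def project_capped_simplex(v, cap=0.3, total=1.0, iters=60):
        # Euclidean projection of v onto {w : 0 <= w_i <= cap, sum_i w_i = total}.
        # Assumes len(v) * cap >= total, so the feasible set is non-empty.
        # Bisect on the dual variable tau, where w_i = clip(v_i - tau, 0, cap)
        # and sum_i w_i is non-increasing in tau.
        lo, hi = float(v.min()) - cap, float(v.max())  # sum >= total at lo, <= total at hi
        for _ in range(iters):
            tau = 0.5 * (lo + hi)
            if np.clip(v - tau, 0.0, cap).sum() > total:
                lo = tau  # allocation too large: raise tau
            else:
                hi = tau
        return np.clip(v - 0.5 * (lo + hi), 0.0, cap)

    # Hypothetical usage: w_rl is the RL policy's proposed portfolio allocation;
    # the action actually executed is the nearest feasible allocation.
    w_rl = np.array([0.5, 0.4, 0.05, 0.03, 0.02])
    w_exec = project_capped_simplex(w_rl, cap=0.3)
    print(w_exec, w_exec.sum())  # respects the per-asset cap and sums to 1

In the NP framework proper, this projection would instead be a stochastic program over sampled state-transition scenarios, with progressive hedging enforcing agreement (nonanticipativity) of the immediate decision across scenarios.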

Bibliographic Details
Main Authors: GHOSH, Supriyo, WYNTER, Laura, LIM, Shiau Hong, NGUYEN, Duc Thien
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects: Financial data processing; Financial markets; Risk assessment; Stochastic programming; Stochastic systems; Value engineering; Finance and Financial Management; Theory and Algorithms
Online Access:https://ink.library.smu.edu.sg/sis_research/7760
https://ink.library.smu.edu.sg/context/sis_research/article/8763/viewcontent/ghosh22a.pdf
Institution: Singapore Management University
Published Date: 2022-08-01
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)