Difference of convex functions programming for policy optimization in reinforcement learning

We formulate the problem of optimizing an agent's policy within the Markov decision process (MDP) model as a difference-of-convex functions (DC) program. The DC perspective enables optimizing the policy iteratively, where each iteration constructs an easier-to-optimize lower bound on the value function...

Full description

Bibliographic Details
Main Author: KUMAR, Akshat
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9926
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10926
record_format dspace
spelling sg-smu-ink.sis_research-10926 2025-01-02T08:03:58Z
Difference of convex functions programming for policy optimization in reinforcement learning
KUMAR, Akshat
We formulate the problem of optimizing an agent's policy within the Markov decision process (MDP) model as a difference-of-convex functions (DC) program. The DC perspective enables optimizing the policy iteratively, where each iteration constructs an easier-to-optimize lower bound on the value function using the well-known concave-convex procedure. We show that several popular policy-gradient-based deep RL algorithms (for both discrete and continuous state and action spaces, and for stochastic and deterministic policies), such as actor-critic, deterministic policy gradient (DPG), and soft actor-critic (SAC), can be derived from the DC perspective. Additionally, the DC formulation enables more sample-efficient learning approaches by exploiting the structure of the value function lower bound and, when the policy has a simpler parametric form, allows using efficient nonlinear programming solvers. Furthermore, we show that the DC perspective extends easily to constrained RL and to partially observable and multiagent settings. Such connections provide new insight into previous algorithms, and also help develop new algorithms for RL.
2024-05-06T07:00:00Z
text
https://ink.library.smu.edu.sg/sis_research/9926
Research Collection School Of Computing and Information Systems
eng
Institutional Knowledge at Singapore Management University
Agent policy
Reinforcement learning optimization
Difference-of-convex functions
Reinforcement learning algorithm
Artificial Intelligence and Robotics
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Agent policy
Reinforcement learning optimization
Difference-of-convex functions
Reinforcement learning algorithm
Artificial Intelligence and Robotics
spellingShingle Agent policy
Reinforcement learning optimization
Difference-of-convex functions
Reinforcement learning algorithm
Artificial Intelligence and Robotics
KUMAR, Akshat
Difference of convex functions programming for policy optimization in reinforcement learning
description We formulate the problem of optimizing an agent's policy within the Markov decision process (MDP) model as a difference-of-convex functions (DC) program. The DC perspective enables optimizing the policy iteratively, where each iteration constructs an easier-to-optimize lower bound on the value function using the well-known concave-convex procedure. We show that several popular policy-gradient-based deep RL algorithms (for both discrete and continuous state and action spaces, and for stochastic and deterministic policies), such as actor-critic, deterministic policy gradient (DPG), and soft actor-critic (SAC), can be derived from the DC perspective. Additionally, the DC formulation enables more sample-efficient learning approaches by exploiting the structure of the value function lower bound and, when the policy has a simpler parametric form, allows using efficient nonlinear programming solvers. Furthermore, we show that the DC perspective extends easily to constrained RL and to partially observable and multiagent settings. Such connections provide new insight into previous algorithms, and also help develop new algorithms for RL.
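To make the mechanism named in the abstract concrete, the following is a minimal LaTeX sketch of one concave-convex procedure (CCP) step on a generic DC objective. The symbols $u$, $v$, and $\ell_t$ are illustrative placeholders, not the paper's notation, and the paper's actual DC decomposition of the value function is not reproduced here.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Generic DC objective: placeholder symbols, assumptions only.
Let $V(\theta) = u(\theta) - v(\theta)$ with $u$ and $v$ convex.
Since a convex function lies above its tangents,
\[
  u(\theta) \;\ge\; u(\theta_t) + \nabla u(\theta_t)^\top (\theta - \theta_t),
\]
so at iterate $\theta_t$ the concave-convex procedure maximizes the
concave surrogate
\[
  \ell_t(\theta) = u(\theta_t) + \nabla u(\theta_t)^\top (\theta - \theta_t) - v(\theta)
  \;\le\; V(\theta),
  \qquad
  \theta_{t+1} \in \arg\max_{\theta} \, \ell_t(\theta).
\]
% The surrogate is tight at the current iterate, giving monotone improvement.
Because $\ell_t(\theta_t) = V(\theta_t)$, each step satisfies
$V(\theta_{t+1}) \ge \ell_t(\theta_{t+1}) \ge \ell_t(\theta_t) = V(\theta_t)$.
\end{document}

Per the abstract, instantiating such a decomposition with the value function is what lets the paper recover actor-critic, DPG, and SAC style updates as special cases; the derivation above shows only the generic CCP lower-bound step.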
format text
author KUMAR, Akshat
author_facet KUMAR, Akshat
author_sort KUMAR, Akshat
title Difference of convex functions programming for policy optimization in reinforcement learning
title_short Difference of convex functions programming for policy optimization in reinforcement learning
title_full Difference of convex functions programming for policy optimization in reinforcement learning
title_fullStr Difference of convex functions programming for policy optimization in reinforcement learning
title_full_unstemmed Difference of convex functions programming for policy optimization in reinforcement learning
title_sort difference of convex functions programming for policy optimization in reinforcement learning
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9926
_version_ 1821237287462109184