Difference of convex functions programming for policy optimization in reinforcement learning
Main Author:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/9926
https://ink.library.smu.edu.sg/context/sis_research/article/10926/viewcontent/ConvexFunctionsProg_pvoa_cc_by.pdf
Institution: Singapore Management University
Summary: We formulate the problem of optimizing an agent's policy within the Markov decision process (MDP) model as a difference-of-convex functions (DC) program. The DC perspective enables optimizing the policy iteratively, where each iteration constructs an easier-to-optimize lower bound on the value function using the well-known concave-convex procedure. We show that several popular policy-gradient-based deep RL algorithms (for both discrete and continuous state and action spaces, and for stochastic and deterministic policies), such as actor-critic, deterministic policy gradient (DPG), and soft actor-critic (SAC), can be derived from the DC perspective. Additionally, the DC formulation enables more sample-efficient learning approaches by exploiting the structure of the value function lower bound and, when the policy has a simpler parametric form, allows the use of efficient nonlinear programming solvers. Furthermore, we show that the DC perspective extends easily to constrained RL as well as partially observable and multiagent settings. These connections provide new insight into previous algorithms and also help develop new algorithms for RL.
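As a rough illustration of the concave-convex procedure referred to in the summary (this is the standard DC programming construction; the paper's specific decomposition of the RL value function is not reproduced here): suppose the objective to be maximized over policy parameters $\theta$ is a difference of convex functions, $f(\theta) = h(\theta) - g(\theta)$ with $h$ and $g$ convex. Convexity of $h$ gives the global lower bound

$$f(\theta) \;\ge\; h(\theta_k) + \nabla h(\theta_k)^\top (\theta - \theta_k) - g(\theta),$$

which is concave in $\theta$ and tight at the current iterate $\theta_k$. Each iteration maximizes this surrogate,

$$\theta_{k+1} \in \arg\max_{\theta} \Big\{ \nabla h(\theta_k)^\top \theta - g(\theta) \Big\},$$

so the objective values $f(\theta_k)$ are monotonically non-decreasing. This is the sense in which each iteration "constructs an easier-to-optimize lower bound on the value function."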