Trust-region inverse reinforcement learning

Bibliographic Details
Main Authors: Cao, Kun; Xie, Lihua
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2023
Subjects: PMP
Online Access: https://hdl.handle.net/10356/170705
Physical Description
Summary: This paper proposes a new unified inverse reinforcement learning (IRL) framework based on trust-region methods and the recently proposed Pontryagin Differentiable Programming (PDP) method of Jin et al. (2020). The framework learns the parameters of both the system model and the cost function from demonstrated trajectories, for three types of problems: N-player nonzero-sum multistage games, 2-player zero-sum multistage games, and 1-player optimal control. Unlike existing frameworks that use gradients to update the learning parameters, our framework updates them with the candidate solution of a trust-region subproblem (TRS), whose required gradient and Hessian are obtained by differentiating Pontryagin's Maximum Principle (PMP) equations once and twice, respectively. The differentiated equations are shown to be equivalent to the PMP equations for affine-quadratic games / optimal control problems and can be solved by explicit recursions. Extensive simulation examples and comparisons demonstrate the effectiveness of the proposed algorithm.
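The summary describes a parameter update driven by the candidate solution of a trust-region subproblem rather than a raw gradient step. The Python sketch below illustrates only that generic outer loop, under stated assumptions: loss_grad_hess is a hypothetical stand-in for the loss, gradient, and Hessian (which the paper instead obtains by differentiating the PMP equations once and twice), and the TRS solver is a standard eigendecomposition-based solve, not the paper's implementation.

import numpy as np

def solve_trs(g, H, radius):
    # Minimize g.p + 0.5 p.H.p subject to ||p|| <= radius.
    # Eigendecomposition-based solve; suitable for small dense H,
    # and the degenerate "hard case" is not handled in this sketch.
    w, V = np.linalg.eigh(H)
    gt = V.T @ g
    if w.min() > 1e-12:
        p = -V @ (gt / w)          # unconstrained Newton step
        if np.linalg.norm(p) <= radius:
            return p
    # Otherwise find lam > -min(eig) with ||p(lam)|| = radius by bisection.
    step_norm = lambda lam: np.linalg.norm(gt / (w + lam))
    lo = max(0.0, -w.min()) + 1e-9
    hi = lo + 1.0
    while step_norm(hi) > radius:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if step_norm(mid) > radius else (lo, mid)
    return -V @ (gt / (w + 0.5 * (lo + hi)))

def trust_region_update(loss_grad_hess, theta0, radius=1.0, max_iter=50):
    # Generic trust-region outer loop: accept/reject the TRS candidate
    # step based on the ratio of actual to predicted loss decrease.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        loss, g, H = loss_grad_hess(theta)
        if np.linalg.norm(g) < 1e-8:
            break
        p = solve_trs(g, H, radius)
        predicted = -(g @ p + 0.5 * p @ H @ p)
        actual = loss - loss_grad_hess(theta + p)[0]
        rho = actual / max(predicted, 1e-16)
        if rho < 0.25:
            radius *= 0.25             # shrink region on poor agreement
        elif rho > 0.75 and np.linalg.norm(p) > 0.9 * radius:
            radius *= 2.0              # expand region on good agreement
        if rho > 0.1:
            theta = theta + p          # accept the candidate step
    return theta

if __name__ == "__main__":
    # Hypothetical quadratic loss standing in for the IRL objective.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 4))
    b = rng.standard_normal(20)
    def loss_grad_hess(theta):
        r = A @ theta - b
        return 0.5 * r @ r, A.T @ r, A.T @ A
    print("learned parameters:", trust_region_update(loss_grad_hess, np.zeros(4)))

In the paper's setting, the gradient g and Hessian H at each iterate would come from the first and second differentiations of the PMP equations along the demonstrated trajectories; the trust-region machinery shown here is otherwise standard.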