Game-theoretic inverse reinforcement learning: a differential Pontryagin's maximum principle approach
Format: Article
Language: English
Published: 2022
Online Access: https://hdl.handle.net/10356/162585
Institution: Nanyang Technological University
Summary: This paper proposes a game-theoretic inverse reinforcement learning (GT-IRL) framework, which aims to learn the parameters of both the dynamic system and the individual cost functions of multistage games from demonstrated trajectories. Unlike the probabilistic approaches common in the computer science community and the residual-minimization solutions common in the control community, this framework addresses the problem in a deterministic setting by differentiating the Pontryagin's Maximum Principle (PMP) equations of an open-loop Nash equilibrium (OLNE), inspired by [1]. The differentiated equations for a multi-player nonzero-sum multistage game are shown to be equivalent to the PMP equations of another affine-quadratic nonzero-sum multistage game and can be solved by explicit recursions. A similar result is established for 2-player zero-sum games. Simulation examples demonstrate the effectiveness of the proposed algorithms.
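For an affine-quadratic problem, the "explicit recursions" mentioned in the abstract reduce, in the single-agent LQR special case, to the familiar backward Riccati recursion from the discrete-time PMP. The sketch below is not the paper's multi-player algorithm: it is a minimal single-agent illustration of the IRL idea, recovering an unknown state-cost weight `q` from a demonstrated trajectory. A naive grid search over the trajectory residual stands in for the paper's analytic differentiation of the PMP equations, and all matrices, horizons, and parameter names here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lqr_trajectory(A, B, Q, R, x0, T):
    """Optimal trajectory of a finite-horizon discrete-time LQR problem.

    The backward Riccati recursion below is the explicit recursion that
    the PMP yields for an affine-quadratic problem; the forward pass then
    rolls out the optimal state trajectory.
    """
    P = Q.copy()
    gains = []
    for _ in range(T):                         # backward Riccati sweep
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()                            # gains[t] applies at stage t
    xs = [x0]
    for t in range(T):                         # forward rollout
        u = -gains[t] @ xs[-1]
        xs.append(A @ xs[-1] + B @ u)
    return np.array(xs)

# Illustrative double-integrator dynamics (not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
R = np.array([[1.0]])
x0 = np.array([1.0, 0.0])
T = 20

# "Demonstration": trajectory generated under a hidden cost weight q_true.
q_true = 3.0
demo = lqr_trajectory(A, B, q_true * np.eye(2), R, x0, T)

def residual(q):
    """Squared trajectory mismatch for a candidate cost weight q."""
    xs = lqr_trajectory(A, B, q * np.eye(2), R, x0, T)
    return np.sum((xs - demo) ** 2)

# IRL step: recover q by minimizing the residual. A coarse grid search is
# used here for simplicity, in place of the paper's gradient computed by
# differentiating the PMP equations.
grid = np.linspace(0.5, 6.0, 56)
q_hat = grid[np.argmin([residual(q) for q in grid])]
```

In the paper's setting the search direction would instead come from solving the differentiated PMP system, which is itself another affine-quadratic game and hence solvable by the same kind of recursion.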