A differential dynamic programming framework for inverse reinforcement learning
Main Authors:
Other Authors:
Format: Article
Language: English
Published: 2025
Subjects:
Online Access: https://hdl.handle.net/10356/181965 http://arxiv.org/abs/2407.19902v1
Institution: Nanyang Technological University
Abstract: A differential dynamic programming (DDP)-based framework for inverse reinforcement learning (IRL) is introduced to recover the parameters of the cost function, system dynamics, and constraints from demonstrations. Unlike existing work, where DDP was used for the inner forward problem with inequality constraints, the proposed framework uses it for efficient computation of the gradient required in the outer inverse problem with equality and inequality constraints. The equivalence between the proposed method and existing methods based on Pontryagin's Maximum Principle (PMP) is established. More importantly, building on this DDP-based IRL with an open-loop loss function, a closed-loop IRL framework is presented, in which a loss function is proposed to capture the closed-loop nature of demonstrations; it is shown to outperform the commonly used open-loop loss function. The closed-loop IRL framework reduces to a constrained inverse optimal control problem under certain assumptions, and under these assumptions together with a rank condition, it is proven that the parameters to be learned can be recovered from the demonstration data. The proposed framework is extensively evaluated on four numerical robot examples and a real-world quadrotor system. The experiments validate the theoretical results and illustrate the practical relevance of the approach.
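The abstract describes a bilevel structure: an inner forward optimal control problem solved with DDP, and an outer inverse problem that descends a loss over the unknown parameters using gradients extracted from the inner solve. The sketch below illustrates only that structure, under assumptions that are not taken from the paper: linear double-integrator dynamics, a quadratic cost whose diagonal state weights are the unknowns, a finite-horizon LQR (Riccati) backward pass standing in for the DDP inner solve, a BFGS outer loop with finite-difference gradients standing in for the paper's DDP-based analytic gradient, and no constraints. The `closed_loop_loss` shown is one plausible reading of a closed-loop loss (matching the learned feedback policy to the demonstrated controls at the demonstrated states), not the paper's definition.

```python
# Minimal bilevel IRL sketch (illustrative only; see assumptions above).
import numpy as np
from scipy.optimize import minimize

dt, T = 0.1, 50
A = np.array([[1.0, dt], [0.0, 1.0]])    # hypothetical double-integrator dynamics
B = np.array([[0.0], [dt]])
x0 = np.array([1.0, 0.0])

def solve_lqr(theta):
    """Inner forward problem: finite-horizon LQR backward (Riccati) pass,
    standing in for the DDP inner solve. Returns gains K_0 .. K_{T-1}."""
    Q = np.diag(np.exp(theta))           # unknown diagonal state-cost weights
    R = np.eye(1)                        # control weight fixed (scale normalization)
    P, gains = Q.copy(), []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

def rollout(gains):
    """Closed-loop rollout of the feedback policy from x0."""
    xs, us, x = [x0], [], x0
    for K in gains:
        u = -K @ x
        x = A @ x + B @ u
        us.append(u)
        xs.append(x)
    return np.array(xs), np.array(us)

def open_loop_loss(theta, xs_demo):
    """Open-loop loss: state-trajectory mismatch with the demonstration."""
    xs, _ = rollout(solve_lqr(theta))
    return np.sum((xs - xs_demo) ** 2)

def closed_loop_loss(theta, xs_demo, us_demo):
    """A plausible closed-loop-style loss (assumption, not the paper's
    definition): evaluate the learned feedback policy at the demonstrated
    states and compare with the demonstrated controls."""
    gains = solve_lqr(theta)
    us_pred = np.array([-K @ x for K, x in zip(gains, xs_demo[:-1])])
    return np.sum((us_pred - us_demo) ** 2)

# Synthetic demonstration from a "true" parameter (for illustration only).
theta_true = np.log([5.0, 0.5])
xs_demo, us_demo = rollout(solve_lqr(theta_true))

# Outer inverse problem: gradient-based descent of each loss. BFGS uses
# finite-difference gradients here; the paper instead obtains the gradient
# efficiently from the DDP recursions of the inner solve.
res_ol = minimize(open_loop_loss, np.log([1.0, 1.0]),
                  args=(xs_demo,), method="BFGS")
res_cl = minimize(closed_loop_loss, np.log([1.0, 1.0]),
                  args=(xs_demo, us_demo), method="BFGS")
print("open-loop fit  :", np.exp(res_ol.x))
print("closed-loop fit:", np.exp(res_cl.x))
print("true weights   :", np.exp(theta_true))
```

Fixing the control weight R in the sketch is a deliberate normalization: quadratic cost parameters are only identifiable up to a common scale, which is the kind of identifiability question that the rank condition mentioned in the abstract is concerned with for the full constrained setting.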