Automated tuning of nonlinear model predictive controller by reinforcement learning
One of the major challenges of model predictive control (MPC) for robotic applications is the non-trivial weight tuning process while crafting the objective function. This process is often executed by the user through trial and error; consequently, the optimality of the weights and the time required for the process depend heavily on the user's skill set and experience. In this study, we present a generic and user-independent framework which automates the tuning process by reinforcement learning. The proposed method shows competency in tuning a nonlinear MPC (NMPC) which is employed for trajectory tracking control of aerial robots. It explores the desirable weights in less than an hour of iterative Gazebo simulations running on a standard desktop computer. The real-world experiments illustrate that the NMPC weights explored by the proposed method result in satisfactory trajectory tracking performance.
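As an illustrative aside, the sketch below shows one generic way such weight tuning can be framed: candidate NMPC objective-function weights are evaluated by simulated rollouts, and the search distribution is updated toward the best performers. This is a minimal sketch, not the paper's method: the three-weight parameterization, the `simulated_tracking_cost` stand-in for a Gazebo rollout, and the cross-entropy-style update (used here in place of the authors' reinforcement learning agent) are all assumptions made purely for demonstration.

```python
# Hypothetical sketch of automated NMPC weight tuning via repeated rollouts.
# NOT the authors' implementation: cost model, weight parameterization, and
# the cross-entropy-style search are illustrative assumptions; the paper
# uses a reinforcement learning agent with iterative Gazebo simulations.
import numpy as np


def simulated_tracking_cost(weights: np.ndarray) -> float:
    """Stand-in for one simulated rollout: returns the tracking error obtained
    when the NMPC runs with the given objective-function weights.
    Here it is a synthetic quadratic bowl purely for demonstration."""
    target = np.array([10.0, 1.0, 0.1])  # hypothetical "good" weight vector
    return float(np.sum((np.log10(weights) - np.log10(target)) ** 2))


def tune_weights(n_iters: int = 20, pop: int = 32, elite_frac: float = 0.25,
                 seed: int = 0) -> np.ndarray:
    """Search over log-spaced NMPC weights (e.g., position, velocity, and
    input penalties); each candidate is scored by a simulated rollout."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(3), np.ones(3)          # search in log10(weight) space
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(pop, 3))       # candidate weights
        costs = np.array([simulated_tracking_cost(10.0 ** s) for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]          # keep best rollouts
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return 10.0 ** mean


if __name__ == "__main__":
    print("tuned weights:", tune_weights())
```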
Main Authors: | Mehndiratta, Mohit; Camci, Efe; Kayacan, Erdal |
---|---|
Other Authors: | School of Mechanical and Aerospace Engineering |
Format: | Conference or Workshop Item |
Language: | English |
Published: | 2020 |
Subjects: | Engineering::Mechanical engineering; Tuning; Rotors |
Online Access: | https://hdl.handle.net/10356/143042 |
Institution: | Nanyang Technological University |
id: sg-ntu-dr.10356-143042
record_format: dspace
spelling: sg-ntu-dr.10356-143042 (last updated 2023-03-04T17:07:08Z)
conference: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
version: Accepted version
funding: Ministry of Education (MOE). This work was financially supported by the Singapore Ministry of Education (RG185/17) and Aarhus University, Department of Engineering (28173).
dates: 2020-07-22T08:36:13Z; 2019
type: Conference Paper
citation: Mehndiratta, M., Camci, E., & Kayacan, E. (2018). Automated tuning of nonlinear model predictive controller by reinforcement learning. Proceedings of 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3016-3021. doi:10.1109/iros.2018.8594350
isbn: 978-1-5386-8095-7
uri: https://hdl.handle.net/10356/143042
doi: 10.1109/iros.2018.8594350
scopus: 2-s2.0-85062939914
pages: 3016-3021
language: en
rights: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/iros.2018.8594350
file format: application/pdf
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Mechanical engineering; Tuning; Rotors
description: One of the major challenges of model predictive control (MPC) for robotic applications is the non-trivial weight tuning process while crafting the objective function. This process is often executed by the user through trial and error; consequently, the optimality of the weights and the time required for the process depend heavily on the user's skill set and experience. In this study, we present a generic and user-independent framework which automates the tuning process by reinforcement learning. The proposed method shows competency in tuning a nonlinear MPC (NMPC) which is employed for trajectory tracking control of aerial robots. It explores the desirable weights in less than an hour of iterative Gazebo simulations running on a standard desktop computer. The real-world experiments illustrate that the NMPC weights explored by the proposed method result in satisfactory trajectory tracking performance.
author2: School of Mechanical and Aerospace Engineering
format: Conference or Workshop Item
author: Mehndiratta, Mohit; Camci, Efe; Kayacan, Erdal
author_sort: Mehndiratta, Mohit
title: Automated tuning of nonlinear model predictive controller by reinforcement learning
publishDate: 2020
url: https://hdl.handle.net/10356/143042
_version_: 1759853536406929408