Optimized dynamic policy for robust receding horizon control

Bibliographic Details
Main Author: Ajay Gautam
Other Authors: Soh Yeng Chai
Format: Theses and Dissertations
Language: English
Published: 2012
Subjects:
Online Access: https://hdl.handle.net/10356/49509
Institution: Nanyang Technological University
Description
Summary: As an on-line-optimization-based control technique, receding horizon control (RHC) has been a prominent method for real-time control applications. Since this approach relies on a model of the controlled system, uncertainties in the system description must be addressed with robust algorithms which, if designed naively, may lead to conservative results even with complex on-line computations, thus limiting the wider applicability of the method. The research in this thesis aims to develop RHC algorithms that achieve a suitable trade-off among control performance, applicability and on-line computational complexity, for control problems that require a systematic handling of uncertainties and constraints with low-complexity on-line computations. With a focus on (possibly uncertain) linear time-varying systems with a polytopic system description and with (possibly unmeasurable) bounded additive disturbances, the thesis studies a class of admissible controller dynamics and proposes a dynamic control policy that is computationally attractive and less conservative. The proposed policy uses time-varying controller dynamics whose matrices need not be explicitly determined on-line but are only assumed to follow the same convex combination as the plant matrices, together with a disturbance feedforward term that does not require the disturbance to be measured. Essentially, the policy incorporates all the 'uncertain' information into the controller dynamics, which reduces conservativeness in the assessment of feasible control inputs and, hence, of the feasible invariant set for the controlled system. Furthermore, the policy allows the control optimization problem to be split into two separate problems: one to determine the convex hull of the controller matrices and the other to compute the controller initial state. With the former carried out off-line, the on-line computations involving the latter are considerably simplified. The dynamics of the proposed policy can also be optimized so that the resulting RHC law guarantees a control performance with a suitable H_infinity performance bound.
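
The structural idea behind the policy can be made concrete with a small, purely conceptual sketch. The Python snippet below is an assumption-laden illustration, not the thesis's algorithm: all vertex matrices, controller gains and bounds are hypothetical placeholders, and the disturbance-feedforward part of the policy is omitted. It simulates a polytopic linear time-varying plant whose matrices are a convex combination of fixed vertices, driven by a dynamic controller whose matrices are formed with the same combination weights, as the policy assumes.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant vertices: x+ = A(lam) x + B(lam) u + w, with
# [A(lam) B(lam)] lying in the convex hull of the vertex pairs [A_j B_j].
A_vert = [np.array([[0.90, 0.10], [0.00, 0.95]]),
          np.array([[0.95, 0.20], [0.00, 0.90]])]
B_vert = [np.array([[0.0], [0.1]]),
          np.array([[0.0], [0.2]])]

# Hypothetical controller vertices (in the thesis these would be designed
# off-line; here they are arbitrary placeholders used only for illustration).
Ac_vert = [np.array([[0.5, 0.0], [0.0, 0.5]]),
           np.array([[0.4, 0.0], [0.0, 0.6]])]
Cc_vert = [np.array([[-0.5, -1.0]]),
           np.array([[-0.6, -1.2]])]

def convex(mats, lam):
    # Convex combination sum_j lam_j * M_j of the vertex matrices.
    return sum(l * M for l, M in zip(lam, mats))

x = np.array([[1.0], [0.0]])    # plant state
xc = np.array([[1.0], [0.0]])   # controller state (the quantity optimized on-line)
w_bound = 0.01                  # bound on the unmeasured additive disturbance

for k in range(20):
    lam = rng.dirichlet(np.ones(len(A_vert)))         # time-varying plant weights
    w = rng.uniform(-w_bound, w_bound, size=(2, 1))   # bounded disturbance realization

    u = convex(Cc_vert, lam) @ xc                     # control input from controller state
    x = convex(A_vert, lam) @ x + convex(B_vert, lam) @ u + w
    xc = convex(Ac_vert, lam) @ xc                    # controller driven by the same weights

    print(k, float(np.linalg.norm(x)))

In the policy itself the weights lam are never computed explicitly: only the convex hull of the controller matrices is fixed off-line, the robust analysis covers every admissible combination, and the on-line optimization reduces to choosing the controller initial state xc.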