Optimization-based learning control of aerial robots operating in uncertain environments
Main Author:
Other Authors:
Format: Thesis (Doctor of Philosophy)
Language: English
Published: Nanyang Technological University, 2020
Subjects:
Online Access: https://hdl.handle.net/10356/144162
Institution: Nanyang Technological University
Summary: Places that were once hard to reach are now accessible to the world with the help of aerial robots. Among the most significant inventions in robotics, these robots pose no risk to human lives because they are unmanned and operated remotely or autonomously in hostile situations. Together, these qualities make them a highly promising candidate for numerous applications.
However, their coupled and highly nonlinear dynamics, accompanied by open-loop instabilities, lead to a complicated control problem. Although conventional control approaches such as the proportional-integral-derivative (PID) controller and the linear quadratic regulator (LQR) have been widely adopted, the underlying linearization leads to suboptimal performance during agile operations.
Besides, there are environment-specific difficulties, such as external disturbances during offshore operations, that result in an uncertain system model. Since the performance of model-based controllers is critically linked to model accuracy, modeling uncertainties can significantly degrade their performance, to the point of instability. Therefore, rather than a sophisticated robot that is trained and tuned perfectly for one scenario in a specific environment, the real interest lies in robots that can operate in unexplored conditions. In that vein, an aerial robot must learn from its own experiences and interactions with the environment before it can handle daily operations in real application scenarios. Moreover, real-time implementations of the control algorithms necessitate a tuning process that is arduous and dangerous when performed directly on the real robot.
Taking inspiration from these issues and identifying an opportunity in them, this thesis develops various learning algorithms to facilitate precise tracking control of multirotor aerial robots under uncertain environmental conditions. Firstly, to cater to the nonlinear dynamics, it implements two control algorithms, namely, the nonlinear model predictive controller (NMPC) and the feedback linearization controller (FLC). Both approaches accommodate the nonlinear dynamics explicitly rather than linearizing the system. Their overall efficacy is demonstrated on the position and attitude tracking problems of aerial robots.
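For reference, the tracking NMPC solved at each control step takes the standard finite-horizon optimal control form below; the notation here is generic and assumed for illustration, not taken from the thesis:

\[
\begin{aligned}
\min_{x(\cdot),\,u(\cdot)}\ & \int_{t}^{t+T} \Big( \lVert x(\tau)-x_{\mathrm{ref}}(\tau) \rVert_{Q}^{2} + \lVert u(\tau)-u_{\mathrm{ref}}(\tau) \rVert_{R}^{2} \Big)\, d\tau \\
\text{s.t.}\ & \dot{x}(\tau) = f\big(x(\tau),u(\tau)\big), \qquad x(t)=\hat{x}(t), \\
& u_{\min} \le u(\tau) \le u_{\max}, \qquad \tau \in [t,\,t+T],
\end{aligned}
\]

where \(f\) is the nonlinear multirotor model, \(\hat{x}(t)\) the current state estimate, and \(Q\), \(R\) the weighting matrices whose tuning is addressed in the final contribution.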
Secondly, to accommodate the limited processing power available onboard aerial robots, this thesis employs fast solution methodologies. Thanks to efficient C++ code, the direct multiple shooting method, and the real-time iteration scheme adopted in the Automatic Control and Dynamic Optimization (ACADO) toolkit, successful onboard implementation of the control algorithms is achieved in all the real-world tests. What is more, in the case of the NMPC-NMHE framework, a complete onboard implementation is realized on a low-cost embedded processor, the Raspberry Pi 3.
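For context, direct multiple shooting transcribes the continuous problem above into a sparse nonlinear program over N shooting intervals; again, this is the standard textbook formulation with assumed notation:

\[
\begin{aligned}
\min_{x_0,\dots,x_N,\;u_0,\dots,u_{N-1}}\ & \sum_{k=0}^{N-1} \lVert x_k - x_{\mathrm{ref},k} \rVert_{Q}^{2} + \lVert u_k - u_{\mathrm{ref},k} \rVert_{R}^{2} \\
\text{s.t.}\ & x_{k+1} = F(x_k,u_k), \quad k=0,\dots,N-1, \\
& x_0 = \hat{x}(t), \qquad u_{\min} \le u_k \le u_{\max},
\end{aligned}
\]

where \(F\) numerically integrates \(f\) over one interval. The real-time iteration scheme then performs a single SQP iteration per sampling instant, split into a preparation phase computed before the new state estimate arrives and a fast feedback phase that solves one QP, which is what makes onboard real-time execution feasible.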
Thirdly, to tackle the uncertainties in the system model, this thesis proposes several learning-based control approaches, broadly categorized as instantaneous learning control (InLC) and iterative learning control (ILC). In essence, the InLC technique utilizes an estimator to learn the model parameters online, whereas the ILC scheme identifies the uncertain dynamics from the experience gathered over system repetitions. In both approaches, the learned system model is subsequently updated within the controller definition. Within the InLC scheme, two control frameworks are developed. The first incorporates a nonlinear moving horizon estimator (NMHE) to estimate the time-varying model parameters, thus making the NMPC adaptive to changing working conditions. The second employs a simple learning (SL) strategy that addresses the limitations of the traditional FLC method by updating the controller gains and disturbance estimate within the feedback control law. In the ILC scheme, on the other hand, a Gaussian process (GP)-based regression technique models the disturbance forces encountered during the offshore visualization operation; a minimal sketch of this building block follows below. Several simulations and real-world tests demonstrate that both the InLC and ILC schemes substantially reduce the tracking error over their conventional counterparts throughout the operation.
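The sketch below fits a GP to disturbance-force samples recorded along a repeated trajectory and queries its posterior mean as a feed-forward correction for the next iteration. It is a minimal NumPy implementation of standard GP regression with an RBF kernel; the data, hyperparameters, and variable names are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5, signal_var=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

# Hypothetical training data: normalized position along the repeated
# trajectory (s) and the disturbance-force residual measured there.
s_train = np.linspace(0.0, 1.0, 20)
f_train = 0.8 * np.sin(2 * np.pi * s_train) + 0.05 * np.random.randn(20)

noise_var = 0.05**2  # assumed measurement-noise variance
K = rbf_kernel(s_train, s_train) + noise_var * np.eye(len(s_train))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, f_train))

# Posterior mean at query points: the learned disturbance model that an
# ILC-style scheme could apply as a feed-forward term on the next pass.
s_query = np.linspace(0.0, 1.0, 100)
f_pred = rbf_kernel(s_query, s_train) @ alpha
```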
Lastly, to circumvent the tedious tuning process, an active exploration approach is proposed to obtain the NMPC weights. The auto-tuning framework extends the basic trial-and-error method to tune the weight sets intelligently: it benefits from the retrospective knowledge gained over previous trials and thereby expedites the tuning procedure. Moreover, the safety of the robot is ensured by employing a deep neural network-based robot model. Finally, a seamless sim-to-real transition is exhibited by directly deploying the weight sets obtained from simulation tuning in the real-world trajectory tracking application.
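To make the tuning loop concrete, here is a minimal sketch of a trial-and-error weight search that remembers past trials and rejects candidates a learned model predicts to be unsafe. The functions simulate_tracking_cost and predicted_safe are hypothetical placeholders standing in for the thesis's simulator and DNN-based robot model, and the perturbation rule is an assumption, not the thesis's exploration strategy.

```python
import numpy as np

def simulate_tracking_cost(weights):
    """Hypothetical placeholder: run the NMPC in simulation with the
    given diagonal weights and return the resulting tracking cost."""
    return float(np.sum((weights - np.array([5.0, 5.0, 1.0])) ** 2))

def predicted_safe(weights):
    """Hypothetical placeholder for a DNN-based robot model that
    predicts whether a candidate weight set keeps the robot safe."""
    return bool(np.all(weights > 0.1))

rng = np.random.default_rng(0)
best_w = np.ones(3)  # assumed initial diagonal weights, e.g. [q_xy, q_z, r_u]
best_cost = simulate_tracking_cost(best_w)
history = [(best_w, best_cost)]

for trial in range(50):
    # Perturb the best-known weights; shrink the step over time so later
    # trials exploit the retrospective knowledge of earlier ones.
    step = 1.0 / (1 + trial / 10)
    cand = best_w * np.exp(step * rng.normal(size=3))
    if not predicted_safe(cand):
        continue  # the learned safety model vetoes this candidate
    cost = simulate_tracking_cost(cand)
    history.append((cand, cost))
    if cost < best_cost:
        best_w, best_cost = cand, cost
```

After the loop, best_w would be the weight set deployed on the real robot, mirroring the direct sim-to-real transfer described above.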