En route to automated extraction and transfer of knowledge in multitask optimization: an evolutionary perspective

Bibliographic Details
Main Author: Bali, Kavitesh Kumar
Other Authors: Ong Yew Soon
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2021
Online Access: https://hdl.handle.net/10356/152658
Institution: Nanyang Technological University
Description
Summary: It is conventional wisdom that real-world problems seldom occur in isolation. The motivation for this work, inspired by the observation that humans rarely tackle every problem from scratch, is to improve optimization performance through adaptive knowledge transfer across related problems. The scope for spontaneous transfers under the simultaneous occurrence of multiple problems unveils the benefits of multitasking. Multitask optimization has recently demonstrated competence in solving multiple (related) optimization tasks concurrently. Notably, in the presence of underlying relationships between problems, the transfer of high-quality solutions across them has been shown to facilitate superior performance, as the cost of re-exploring overlapping regions of the search space is reduced. However, in the absence of any prior knowledge about the inter-task synergies (as is often the case in general black-box optimization), the threat of predominantly negative transfer prevails. Susceptibility to negative inter-task interactions can in fact be detrimental, often impeding the overall convergence behavior. To allay such fears, this thesis presents viable solutions towards automated extraction and transfer of (fruitful) knowledge such that any deleterious effects of otherwise negative inter-task exchanges are suppressed. To this end, an in-depth theoretical analysis is first conducted to unveil the primary caveats that concern the global convergence characteristics of the present-day multitasking evolutionary optimization framework. Next, a novel evolutionary computation framework is proposed that enables online learning and exploitation of the similarities (and discrepancies) between distinct tasks in multitask settings via probabilistic mixture models. The proposed method is based on principled theoretical arguments that seek to minimize the tendency of harmful interactions between tasks, based on a purely data-driven learning of the relationships among them. As a proof of concept, the method is initially validated experimentally on a wide range of synthetic discrete and continuous single-objective benchmarks. Thereafter, a realization of similar concepts is extended to the domain of multi-objective optimization (an omnipresent scenario in our daily lives). It is noteworthy that this work is among the first to utilize probabilistic modeling to capture inter-task relationships between multi-objective optimization tasks in the context of evolutionary multitasking. Empirical studies on a series of benchmark test functions show that the method is able to decipher, and adapt to, the degree of similarity between distinct multi-objective optimization tasks on the fly. Finally, the practicality of the proposed methods is substantiated on various real-world case studies, including reinforcement learning, multi-fidelity optimization, and evolutionary deep learning. Not only do these practical studies provide insights into the behavior of the methods in the face of several (many) complex tasks occurring at once, but they also underscore the benefits of omnidirectional knowledge exchange, the boon of intentional-unintentional problem-solving capabilities, and the value of knowledge transfers from low-fidelity optimization tasks in substantially reducing the cost of (otherwise expensive) high-fidelity optimization.
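The central technical claim above, that inter-task relationships can be learned online through probabilistic mixture modeling, can be made concrete with a small sketch. The snippet below is a minimal, hypothetical illustration and not the thesis implementation: each task's population is summarized by a Gaussian search-distribution model, and EM-style updates of the mixture coefficients over a target task's population push the weight of any dissimilar source task towards zero, which is the mechanism that suppresses negative transfer. The helper names fit_gaussian and learn_mixture_weights, and all parameters, are illustrative assumptions.

# Conceptual sketch only: data-driven learning of inter-task mixture weights.
import numpy as np

rng = np.random.default_rng(0)

def fit_gaussian(pop):
    # Summarize a task's population by a simple Gaussian search-distribution
    # model (mean and regularized covariance).
    mean = pop.mean(axis=0)
    cov = np.cov(pop, rowvar=False) + 1e-6 * np.eye(pop.shape[1])
    return mean, cov

def log_density(pop, model):
    # Multivariate normal log-density of each individual under a task model.
    mean, cov = model
    d = pop.shape[1]
    diff = pop - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return -0.5 * (quad + logdet + d * np.log(2.0 * np.pi))

def learn_mixture_weights(target_pop, models, iters=100):
    # EM-style updates of mixture coefficients: maximize the likelihood of the
    # target task's population under a mixture of all tasks' models. A weight
    # near zero for a source task suppresses transfer from it.
    K = len(models)
    w = np.full(K, 1.0 / K)
    dens = np.exp(np.stack([log_density(target_pop, m) for m in models], axis=1))
    for _ in range(iters):
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)  # E-step: responsibilities
        w = resp.mean(axis=0)                    # M-step: new mixture weights
    return w

# Toy usage: two similar tasks and one dissimilar task in a 2-D search space.
pops = [rng.normal(0.0, 1.0, (50, 2)),   # task 0 (target)
        rng.normal(0.2, 1.0, (50, 2)),   # task 1: related to task 0
        rng.normal(5.0, 1.0, (50, 2))]   # task 2: unrelated
models = [fit_gaussian(p) for p in pops]
print(learn_mixture_weights(pops[0], models))

Running the toy example prints mixture weights in which the unrelated task receives negligible mass, so candidate solutions would rarely be sampled from (i.e., transferred from) its model.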