Methods in multi-source data-driven transfer optimization
Format: Thesis (Doctor of Philosophy)
Language: English
Published: Nanyang Technological University, 2020
Online Access: https://hdl.handle.net/10356/136964
Institution: Nanyang Technological University
Summary: In the global optimization literature, traditional optimization algorithms typically start their search from scratch when facing a new problem of practical interest. That is to say, their problem-solving capabilities do not grow with accumulated experience or solved problems. Since optimization problems of practical interest seldom exist in isolation, ignoring prior experience often wastes a rich pool of knowledge that could otherwise be exploited to facilitate efficient re-exploration of possibly overlapping search spaces. In practical settings, the ability to leverage such knowledge often yields substantial convergence speedups as well as cost savings. Given today's competitive need for high-quality solutions delivered promptly, the necessity of adaptively reusing past experience is not hard to comprehend. Yet, while transfer learning has drawn sustained research attention over the years in the machine learning community, only a handful of research works have focused on knowledge transfer in optimization. This thesis therefore aims to accelerate the optimization process on the task of practical interest by automatically selecting, adapting, and integrating knowledge from past problems, under the recently introduced concept of transfer optimization. In particular, inspired by transfer learning in supervised learning, learning generalizable probabilistic models for transfer optimization is first presented. Taking supervised signals from several related source probabilistic models, it is demonstrated that a more generalizable probabilistic model can be learned, capable of directly predicting high-quality solutions for combinatorial optimization problems drawn from different distributions.
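The flavor of reusing source probabilistic models to warm-start a new search can be sketched as follows. This is an illustrative simplification, not the thesis's actual learning procedure: the two source tasks, the univariate Bernoulli marginal models, and the plain averaging used to pool them are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_marginal_model(elite_solutions):
    """Fit a simple univariate marginal (Bernoulli) model to the
    elite binary solutions retained from a solved source task."""
    # Clip probabilities away from 0/1 to keep sampling diverse.
    return np.clip(np.mean(elite_solutions, axis=0), 0.05, 0.95)

# Hypothetical elite populations from two previously solved 10-bit tasks.
source_a = rng.integers(0, 2, size=(30, 10))
source_b = rng.integers(0, 2, size=(30, 10))

# Pool the source models into one "generalized" model by averaging;
# the thesis instead learns such a model from supervised source signals.
generalized = np.mean(
    [fit_marginal_model(source_a), fit_marginal_model(source_b)], axis=0
)

# Sample warm-start candidate solutions for a new, related binary task,
# instead of initializing the search purely at random.
candidates = rng.random((5, 10)) < generalized
print(candidates.shape)  # (5, 10)
```

The candidates would then seed the optimizer for the target task, biasing the initial population toward regions that were promising across the source tasks.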
Subsequently, it is observed that in some real-world settings, certain source probabilistic models can exert a negative influence on the target optimization task, as it is impractical to guarantee that all the diverse source models are beneficial to the task of practical interest. Therefore, a new transfer optimization paradigm, namely adaptive model-based transfer, is proposed. The proposed paradigm enables online learning and exploitation of similarities across different optimization problems. The experience on a given optimization task either takes the form of a probabilistic model directly or is encoded as a probability distribution. By drawing on the different source probabilistic models, the framework automatically modulates the amount of knowledge transferred from multiple source tasks, thereby minimizing the threat of negative transfer. By transforming the various search spaces into a universal search space, the proposed framework can tackle discrete and continuous, as well as single- and multi-objective, optimization problems. Finally, when the target optimization problem is computationally expensive, the aforementioned methods may be impractical to deploy in real-world applications; surrogate-assisted optimization is then a promising tool for computationally expensive problems in the continuous domain. In order to adaptively take advantage of past experience in solving various optimization tasks, a scalable multi-source surrogate-assisted transfer optimization framework is proposed, facilitating efficient global optimization of continuous problems with high computational cost.
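The core mechanism of adaptive model-based transfer, i.e. learning online how much to trust each source model, can be sketched as a mixture whose coefficients are fitted to the current target population. This is a minimal one-dimensional sketch under assumed Gaussian source models; the specific models, data, and EM-on-weights-only scheme are illustrative, not the thesis's exact formulation.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian, written out to keep the sketch numpy-only."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)

# Current target population (stand-in for solutions found so far).
population = rng.normal(2.0, 0.5, size=200)

# Hypothetical source models from two solved tasks, plus a model fitted
# to the target population itself; each is (mean, std).
models = [(0.0, 1.0), (2.0, 0.5), (population.mean(), population.std())]

# EM over the mixture coefficients only: the coefficients play the role
# of transfer coefficients, shrinking toward 0 for unhelpful sources.
w = np.full(len(models), 1.0 / len(models))
for _ in range(50):
    dens = np.array([gauss_pdf(population, m, s) for m, s in models])  # (K, N)
    resp = w[:, None] * dens
    resp /= resp.sum(axis=0, keepdims=True)  # posterior responsibilities
    w = resp.mean(axis=1)                    # M-step for the coefficients

# The mismatched source centred at 0.0 ends up with a near-zero weight,
# while sources that match the target population share the rest.
print(np.round(w, 2))
```

New offspring would then be sampled from the weighted mixture, so knowledge flows mainly from sources whose distributions resemble the target's, which is how the threat of negative transfer is kept down.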