Evolutionary optimization for computationally expensive problems

Bibliographic Details
Main Author: Lim, Dudy
Other Authors: Jin, Yaochu
Format: Theses and Dissertations
Language: English
Published: 2009
Subjects:
Online Access:https://hdl.handle.net/10356/19262
Institution: Nanyang Technological University
Description
Summary: Despite all the appealing features of Evolutionary Algorithms (EAs), thousands of calls to the analysis or simulation codes are often required to locate a near-optimal solution. Two major remedies for this issue are: 1) to use computationally less expensive surrogate models, and 2) to use parallel and distributed computers. In this thesis, model management frameworks utilizing a diverse set of surrogate models are proposed. The proposed Generalized Surrogate Memetic (GSM) framework aims to unify a diverse set of data-fitting models synergistically in the evolutionary search. In particular, the GSM framework exploits both the positive and negative impacts of approximation errors in the surrogate models used. An extended management framework is also proposed for EAs using multi-scale models and demonstrated on two real-world examples. Experimental studies performed using data-fitting and multi-scale models indicate that the proposed frameworks are capable of attaining reliable, high-quality, and efficient performance under a limited computational budget. In what follows, possibilities for further acceleration of the evolutionary optimization life cycle through parallelization are also considered. When applied to small-scale, dedicated, and homogeneous computing nodes, this seems to be a formidable solution. However, in a large-scale computing farm such as the Grid, reality proves otherwise. In a Grid computing environment, which emphasizes the seamless sharing of computing resources across institutions, heterogeneity of resources is inevitable. In such a situation, conventional parallelization that does not consider the heterogeneity of computing resources is likely to produce inefficient optimization. The latter part of this thesis summarizes our work on parallelizing evolutionary optimization in a heterogeneous Grid computing environment.
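The core idea summarized above, replacing most calls to an expensive simulation with a cheap surrogate built from past exact evaluations, can be sketched in a few lines. The sketch below is an illustrative toy, not the thesis's GSM framework: it uses a simple nearest-neighbour surrogate and a mutation-only EA on the sphere function (all assumed stand-ins), and spends exactly one exact evaluation per generation on the offspring the surrogate ranks best.

```python
import random

def expensive_f(x):
    # Stand-in for a costly analysis or simulation code: the sphere function.
    return sum(xi * xi for xi in x)

def surrogate(archive, x):
    # Cheap nearest-neighbour surrogate: predict the fitness of the
    # closest previously (exactly) evaluated point in the archive.
    nearest = min(archive,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

def surrogate_assisted_ea(dim=2, pop=10, gens=30, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    # Archive of exact evaluations; these are the only expensive calls so far.
    archive = [(x, expensive_f(x)) for x in population]
    exact_calls = len(archive)
    best = min(archive, key=lambda p: p[1])
    for _ in range(gens):
        # Generate offspring by Gaussian mutation of random parents.
        offspring = [[xi + rng.gauss(0, 0.5) for xi in rng.choice(population)]
                     for _ in range(pop)]
        # Pre-screen all offspring with the cheap surrogate; only the most
        # promising one is sent to the expensive simulation.
        promising = min(offspring, key=lambda x: surrogate(archive, x))
        f = expensive_f(promising)
        exact_calls += 1
        archive.append((promising, f))
        if f < best[1]:
            best = (promising, f)
        # Survivor selection, again ranked by the surrogate.
        population = sorted(population + [promising],
                            key=lambda x: surrogate(archive, x))[:pop]
    return best, exact_calls
```

With the defaults, the EA inspects `pop * gens` candidate offspring but charges only `pop + gens` exact evaluations, which is the budget saving that motivates surrogate-assisted search; the price is that surrogate approximation error can mislead the selection, the trade-off the GSM framework is designed to manage.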