Extrapolative Bayesian optimization with Gaussian process and neural network ensemble surrogate models
Bayesian optimization (BO) has emerged as the algorithm of choice for guiding the selection of experimental parameters in automated, active-learning-driven, high-throughput experiments in materials science and chemistry. Previous studies suggest that the optimization performance of the typical surrogate model in the BO algorithm, Gaussian processes (GPs), may be limited by its inability to handle complex datasets. Herein, various surrogate models for BO, including GPs and neural network ensembles (NNEs), are investigated. Two materials datasets of different complexity and different properties are used to compare the performance of GPs and NNEs: the first is the compressive strength of concrete (8 inputs and 1 target), and the second is a simulated high-dimensional dataset of thermoelectric properties of inorganic materials (22 inputs and 1 target). While NNEs converge faster toward optimum values, GPs with optimized kernels ultimately achieve the best evaluated values after 100 iterations, even for the most complex dataset, a result contrary to expectations. These findings shed new light on the understanding of surrogate models for BO and can help accelerate the inverse design of new materials with better structural and functional performance.

Main Authors: Lim, Yee-Fun; Ng, Chee Koon; Vaitesswar, U. S.; Hippalgaonkar, Kedar
Other Authors: School of Materials Science and Engineering; Institute of Materials Research and Engineering, A*STAR
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Materials; Automated Experiments; Bayesian Optimization
Online Access: https://hdl.handle.net/10356/159296
Institution: Nanyang Technological University
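The abstract above compares a Gaussian-process surrogate with a neural-network-ensemble surrogate inside a Bayesian-optimization loop. As a rough illustration only, and not the authors' code, the sketch below shows one way such a pool-based comparison can be set up with scikit-learn: a GP with a Matérn kernel versus a small bootstrap ensemble of MLP regressors, both driving an expected-improvement acquisition over a synthetic stand-in dataset. Every model setting, the acquisition rule, and the data are assumptions made for this example.

```python
# Hedged sketch: a minimal pool-based Bayesian-optimization loop comparing a
# Gaussian-process surrogate with a small neural-network ensemble.
# All settings and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.neural_network import MLPRegressor


def expected_improvement(mu, sigma, best_y, xi=0.01):
    """Expected improvement for maximization, guarding against zero variance."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y - xi) / sigma
    return (mu - best_y - xi) * norm.cdf(z) + sigma * norm.pdf(z)


class NNEnsemble:
    """Toy bootstrap ensemble of MLP regressors; the spread of member
    predictions stands in for predictive uncertainty."""

    def __init__(self, n_members=5, seed=0):
        self.members = [
            MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                         random_state=seed + i)
            for i in range(n_members)
        ]

    def fit(self, X, y):
        rng = np.random.default_rng(0)
        for m in self.members:
            idx = rng.integers(0, len(X), len(X))  # bootstrap resample
            m.fit(X[idx], y[idx])
        return self

    def predict(self, X, return_std=False):
        preds = np.stack([m.predict(X) for m in self.members])
        if return_std:
            return preds.mean(axis=0), preds.std(axis=0)
        return preds.mean(axis=0)


def run_bo(surrogate, X_pool, y_pool, n_init=10, n_iter=100, seed=0):
    """Pool-based BO: start from a random subset, then repeatedly add the
    candidate with the highest expected improvement."""
    rng = np.random.default_rng(seed)
    observed = list(rng.choice(len(X_pool), n_init, replace=False))
    for _ in range(n_iter):
        X_obs, y_obs = X_pool[observed], y_pool[observed]
        surrogate.fit(X_obs, y_obs)
        candidates = [i for i in range(len(X_pool)) if i not in observed]
        mu, sigma = surrogate.predict(X_pool[candidates], return_std=True)
        ei = expected_improvement(mu, sigma, y_obs.max())
        observed.append(candidates[int(np.argmax(ei))])
    return y_pool[observed].max()  # best evaluated value within the budget


if __name__ == "__main__":
    # Synthetic stand-in for a tabular materials dataset (8 inputs, 1 target).
    rng = np.random.default_rng(42)
    X = rng.uniform(size=(500, 8))
    y = -np.sum((X - 0.3) ** 2, axis=1) + 0.05 * rng.normal(size=500)

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    print("GP  best:", run_bo(gp, X, y, n_iter=50))
    print("NNE best:", run_bo(NNEnsemble(), X, y, n_iter=50))
```

The only difference between the two runs is the surrogate object passed to run_bo, which mirrors the head-to-head comparison described in the abstract.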
id: sg-ntu-dr.10356-159296
record_format: dspace
type: Journal Article (published version)
date deposited: 2022-06-10
citation: Lim, Y., Ng, C. K., Vaitesswar, U. S. & Hippalgaonkar, K. (2021). Extrapolative Bayesian optimization with Gaussian process and neural network ensemble surrogate models. Advanced Intelligent Systems, 3(11), 2100101. https://dx.doi.org/10.1002/aisy.202100101
journal: Advanced Intelligent Systems, vol. 3, issue 11, article 2100101 (2021)
issn: 2640-4567
doi: 10.1002/aisy.202100101
handle: https://hdl.handle.net/10356/159296
funding: This work was supported by the Agency for Science, Technology and Research (A*STAR), Singapore, via two programmatic research grants (grant nos. A1898b0043 and A19E9a0103).
rights: © 2021 The Authors. Advanced Intelligent Systems published by Wiley-VCH GmbH. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
file format: application/pdf
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Materials; Automated Experiments; Bayesian Optimization
author2: School of Materials Science and Engineering
format: Article
author: Lim, Yee-Fun; Ng, Chee Koon; Vaitesswar, U. S.; Hippalgaonkar, Kedar
title: Extrapolative Bayesian optimization with Gaussian process and neural network ensemble surrogate models
publishDate: 2022
url: https://hdl.handle.net/10356/159296