A new formulation of gradient boosting

In the setting of regression, the standard formulation of gradient boosting generates a sequence of improvements to a constant model. In this paper, we reformulate gradient boosting so that it can generate a sequence of improvements to a nonconstant model, which may contain prior knowledge or physical insight about the data-generating process. Moreover, we introduce a simple variant of multi-target stacking that extends our approach to the setting of multi-target regression. An experiment on a real-world superconducting quantum device calibration dataset demonstrates that our approach outperforms the state-of-the-art calibration model despite receiving only a small number of training examples. Further, it significantly outperforms the well-known gradient boosting algorithm LightGBM, as well as an entirely data-driven reimplementation of the calibration model, which suggests the viability of our approach.
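The central idea in the abstract (starting the boosting sequence from a nonconstant model rather than a constant) can be sketched in a few lines. This is an illustration only, not the paper's implementation: the prior model, the stump learner, and the hyperparameters below are all invented for the example.

```python
import numpy as np

def prior_model(X):
    """Hypothetical nonconstant initial model encoding 'physical insight'
    (assumed for illustration): predicts y ~= 2 * x."""
    return 2.0 * X[:, 0]

class DecisionStump:
    """Minimal regression tree of depth 1, splitting on feature 0."""
    def fit(self, X, r):
        x = X[:, 0]
        best_sse = np.inf
        for t in np.unique(x)[:-1]:          # largest value would leave the right side empty
            left, right = r[x <= t], r[x > t]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best_sse:
                best_sse = sse
                self.t, self.lv, self.rv = t, left.mean(), right.mean()
        return self

    def predict(self, X):
        return np.where(X[:, 0] <= self.t, self.lv, self.rv)

def boost_from_prior(X, y, n_rounds=200, lr=0.1):
    """Squared-loss gradient boosting whose sequence of improvements
    starts from prior_model(X) instead of a constant."""
    pred = prior_model(X)                    # nonconstant initialization
    learners = []
    for _ in range(n_rounds):
        residual = y - pred                  # negative gradient of squared loss
        h = DecisionStump().fit(X, residual)
        pred = pred + lr * h.predict(X)      # shrunken improvement step
        learners.append(h)
    return learners, pred
```

The only change relative to textbook gradient boosting is the first line of `boost_from_prior`: the ensemble corrects the prior model's residuals rather than those of a constant fit.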

Bibliographic Details
Main Authors: Wozniakowski, Alex, Thompson, Jane, Gu, Mile, Binder, Felix C.
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2023
Subjects: Science::Physics; Multi-Target Regression; Ensemble Learning
Online Access:https://hdl.handle.net/10356/164179
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-164179
Title: A new formulation of gradient boosting
Authors: Wozniakowski, Alex; Thompson, Jane; Gu, Mile; Binder, Felix C.
Affiliations: School of Physical and Mathematical Sciences; Centre for Quantum Technologies, NUS; Complexity Institute
Subjects: Science::Physics; Multi-Target Regression; Ensemble Learning
Funding agencies: Ministry of Education (MOE); National Research Foundation (NRF)
Version: Published version
Funding: This work is supported by the Singapore Ministry of Education Tier 1 Grant RG162/19, Singapore National Research Foundation Fellowship NRF-NRFF2016-02 and NRF-ANR Grant NRF2017-NRF-ANR004 VanQuTe, the Quantum Engineering Program QEP-SF3, and the FQXi Large Grant FQXi-RFP-IPW-1903. Alex Wozniakowski was partially supported by Grant TRT 0159 on mathematical picture language from the Templeton Religion Trust and thanks the Academy of Mathematics and Systems Science (AMSS) of the Chinese Academy of Sciences, where part of this work was done, for their hospitality. Felix C. Binder acknowledges funding from the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie Grant Agreement No. 801110 and from the Austrian Federal Ministry of Education, Science and Research (BMBWF).
Citation: Wozniakowski, A., Thompson, J., Gu, M. & Binder, F. C. (2021). A new formulation of gradient boosting. Machine Learning: Science and Technology, 2(4), 045022. https://dx.doi.org/10.1088/2632-2153/ac1ee9
ISSN: 2632-2153
DOI: 10.1088/2632-2153/ac1ee9
Handle: https://hdl.handle.net/10356/164179
Rights: © 2021 The Author(s). Published by IOP Publishing Ltd. Original Content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
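The abstract's multi-target extension is built on stacking. The paper's exact variant is not spelled out in this record, so the sketch below shows only the generic pattern: one first-stage model per target, whose predictions are fed, together with the raw features, to a second stage so each target can borrow signal from the others. Plain least-squares fits stand in for the boosted models.

```python
import numpy as np

def _fit(X, y):
    """Least-squares fit with intercept (stand-in for a boosted model)."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def _predict(w, X):
    A = np.hstack([X, np.ones((len(X), 1))])
    return A @ w

def fit_stacked(X, Y):
    """Stage 1: one model per target on the raw features.
    Stage 2: one model per target on [features, all stage-1 predictions]."""
    stage1 = [_fit(X, Y[:, j]) for j in range(Y.shape[1])]
    P = np.column_stack([_predict(w, X) for w in stage1])
    stage2 = [_fit(np.hstack([X, P]), Y[:, j]) for j in range(Y.shape[1])]
    return stage1, stage2

def predict_stacked(stage1, stage2, X):
    P = np.column_stack([_predict(w, X) for w in stage1])
    Z = np.hstack([X, P])
    return np.column_stack([_predict(w, Z) for w in stage2])
```

With linear models at both stages the second stage adds nothing; the structure only pays off when stage 1 is a nonlinear learner such as the boosted ensembles above, which is the setting the abstract describes.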