A new formulation of gradient boosting

Bibliographic Details
Main Authors: Wozniakowski, Alex, Thompson, Jayne, Gu, Mile, Binder, Felix C.
Other Authors: School of Physical and Mathematical Sciences
Format: Article
Language: English
Published: 2023
Online Access:https://hdl.handle.net/10356/164179
Institution: Nanyang Technological University
Description
Summary: In the setting of regression, the standard formulation of gradient boosting generates a sequence of improvements to a constant model. In this paper, we reformulate gradient boosting so that it generates a sequence of improvements to a nonconstant model, which may encode prior knowledge or physical insight about the data-generating process. Moreover, we introduce a simple variant of multi-target stacking that extends our approach to the setting of multi-target regression. An experiment on a real-world superconducting quantum device calibration dataset demonstrates that our approach outperforms the state-of-the-art calibration model despite receiving only a small number of training examples. Further, it significantly outperforms the well-known gradient boosting algorithm LightGBM, as well as an entirely data-driven reimplementation of the calibration model, which suggests the viability of our approach.
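
The record carries only the abstract, so the following is a minimal sketch of the central idea it describes: gradient boosting initialized at a nonconstant prior model rather than at a constant, assuming squared-error loss. The function boost_from_prior, the callable prior_predict, and all parameter values are illustrative assumptions for exposition, not taken from the paper.

from sklearn.tree import DecisionTreeRegressor

def boost_from_prior(X, y, prior_predict, n_stages=100,
                     learning_rate=0.1, max_depth=3):
    """Boost a sequence of improvements onto a nonconstant prior model.

    prior_predict: callable mapping X to predictions that encode prior
    knowledge; the standard formulation would use the mean of y instead.
    Squared-error loss is assumed, so each stage fits the residuals.
    """
    f = prior_predict(X)              # initialize at the prior, not a constant
    trees = []
    for _ in range(n_stages):
        residuals = y - f             # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        f = f + learning_rate * tree.predict(X)
        trees.append(tree)

    def predict(X_new):
        pred = prior_predict(X_new)
        for tree in trees:
            pred = pred + learning_rate * tree.predict(X_new)
        return pred

    return predict

# Hypothetical usage: the prior encodes a physics-motivated guess.
# prior = lambda X: 2.0 * X[:, 0]    # e.g., a known linear response
# predict = boost_from_prior(X_train, y_train, prior)
# y_hat = predict(X_test)

With a constant prior_predict this reduces to standard least-squares gradient boosting; the only change in the sketch is the initialization and the matching term added back at prediction time, which is what lets the ensemble refine, rather than replace, a model built from domain knowledge.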