Boosting for partially linear additive models
Main Author:
Other Authors:
Format: Theses and Dissertations
Language: English
Published: 2016
Subjects:
Online Access: https://hdl.handle.net/10356/69082
Institution: Nanyang Technological University
Summary: Additive models are widely applied in statistical learning. The partially linear additive model is a special form of additive model that combines the strengths of linear and nonlinear models by allowing linear and nonlinear predictors to coexist. One of the most interesting questions associated with the partially linear additive model is to identify the nonlinear, linear, and non-informative covariates without any such pre-specification, and to simultaneously recover the underlying component functions, which indicate how each covariate affects the response.

In this thesis, algorithms are developed to solve this question. The main technique is gradient boosting, in which simple linear regressions and univariate penalized splines are used together as base learners. In this way the proposed algorithms estimate the component functions and simultaneously specify the model structure. Twin boosting is incorporated as well to achieve better variable selection accuracy. The proposed methods can be applied to mean and quantile regression as well as survival analysis. Simulation studies and real data applications illustrate the strength of the proposed approaches.
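The boosting scheme described in the summary can be sketched as a componentwise L2-boosting loop: at each iteration, a per-covariate simple linear learner and a per-covariate penalized-spline learner compete, the single best-fitting learner is added with shrinkage, and the selection path is read off to classify covariates as linear, nonlinear, or non-informative. This is a minimal illustration under assumed choices (squared-error loss only, truncated-power spline basis with a ridge penalty, illustrative tuning values); the thesis's actual algorithms, including the quantile and survival variants and twin boosting, are more involved, and in particular the spline learner here simply includes a linear column rather than being decomposed into separate linear and deviation parts.

```python
import numpy as np

def spline_basis(x, knots):
    # Degree-1 truncated-power basis: [x, (x - k)_+ for each knot].
    # Note: the linear column means this learner can also absorb linear effects.
    return np.column_stack([x] + [np.maximum(x - k, 0.0) for k in knots])

def fit_ridge(B, r, lam):
    # Penalized least squares: (B'B + lam*I)^{-1} B'r.
    return np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ r)

def boost_plam(X, y, n_iter=200, nu=0.1, lam=1.0, n_knots=5):
    """Componentwise L2-boosting with linear and penalized-spline base learners."""
    n, p = X.shape
    f = np.full(n, y.mean())           # start from the constant fit
    selected = []                      # (covariate index, learner kind) per step
    knots = [np.quantile(X[:, j], np.linspace(0.1, 0.9, n_knots)) for j in range(p)]
    for _ in range(n_iter):
        r = y - f                      # negative gradient of squared-error loss
        best = None
        for j in range(p):
            x = X[:, j]
            b = (x @ r) / (x @ x)      # simple linear regression through origin
            B = spline_basis(x, knots[j])
            coef = fit_ridge(B, r, lam)
            for kind, pred in (("linear", b * x), ("spline", B @ coef)):
                sse = np.sum((r - pred) ** 2)
                if best is None or sse < best[0]:
                    best = (sse, j, kind, pred)
        _, j, kind, pred = best
        f = f + nu * pred              # shrunken update with the winning learner
        selected.append((j, kind))
    return f, selected

def structure(selected, p):
    # Heuristic structure identification from the selection path:
    # ever chosen as spline -> nonlinear; only as linear -> linear; never -> non-informative.
    kinds = {j: set() for j in range(p)}
    for j, k in selected:
        kinds[j].add(k)
    return {j: ("nonlinear" if "spline" in kinds[j]
                else "linear" if "linear" in kinds[j]
                else "non-informative") for j in range(p)}
```

A typical use would fit `boost_plam` on data with a mix of linear, nonlinear, and noise covariates and then call `structure` on the returned path; in the thesis, twin boosting would additionally reweight a second boosting run by the first run's selections to sharpen this classification.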