An empirical evaluation of stacked ensembles with different meta-learners in imbalanced classification


Bibliographic Details
Main Authors: Zian, Seng, Abdul Kareem, Sameem, Varathan, Kasturi Dewi
Format: Article
Published: Institute of Electrical and Electronics Engineers 2021
Online Access:http://eprints.um.edu.my/27115/
Institution: Universiti Malaya
Description
Summary:The selection of the meta-learner determines the success of a stacked ensemble, as the meta-learner is responsible for the ensemble's final predictions. Unfortunately, in imbalanced classification, selecting an appropriate, well-performing meta-learner for a stacked ensemble is not straightforward, as different researchers advocate different meta-learners. To identify a well-performing type of meta-learner for stacked ensembles in imbalanced classification, this paper details an experiment covering 19 meta-learners. Among these, a new weighted combination-based meta-learner that maximizes the H-measure during the training of the stacked ensemble is introduced and implemented in the empirical evaluation. The classification performance of stacked ensembles with the 19 meta-learners was recorded using both the area under the receiver operating characteristic curve (AUC) and the H-measure (a metric that overcomes the deficiencies of the AUC). The weighted combination-based meta-learners outperformed bagging-based, boosting-based, Decision Tree, Support Vector Machine, Naive Bayes, and Feedforward Neural Network meta-learners on imbalanced datasets; their adoption in stacked ensembles is therefore recommended for better performance on imbalanced data. Based on the empirical results, we also identified meta-learners (such as the AUC-maximizing meta-learner and the H-measure-maximizing meta-learner) that perform better in imbalanced classification than the widely adopted meta-learner, Logistic Regression.
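The weighted combination-based meta-learner described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it fits simplex-constrained weights over base-learner probabilities by maximizing AUC on out-of-fold predictions (the paper's H-measure variant would swap in the H-measure as the objective; scikit-learn provides no H-measure, so AUC is used here). All dataset sizes, base learners, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

# Imbalanced toy dataset (roughly 5% positives) -- illustrative only.
X, y = make_classification(n_samples=2000, weights=[0.95], flip_y=0.02,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base_learners = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=100, random_state=0),
    GaussianNB(),
]

# Level 0: out-of-fold probabilities so the meta-learner is trained on
# predictions the base learners did not see during fitting.
Z_tr = np.column_stack([
    cross_val_predict(m, X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
    for m in base_learners
])

# Level 1: a weighted combination meta-learner. Weights live on the
# probability simplex and are chosen to maximize AUC (the paper's
# H-measure maximizer would replace roc_auc_score here).
def neg_auc(w):
    w = np.abs(w) / np.abs(w).sum()   # project onto the simplex
    return -roc_auc_score(y_tr, Z_tr @ w)

res = minimize(neg_auc, x0=np.ones(len(base_learners)), method="Nelder-Mead")
weights = np.abs(res.x) / np.abs(res.x).sum()

# Refit base learners on all training data, then combine on the test set.
for m in base_learners:
    m.fit(X_tr, y_tr)
Z_te = np.column_stack([m.predict_proba(X_te)[:, 1] for m in base_learners])
test_auc = roc_auc_score(y_te, Z_te @ weights)
```

Because the meta-learner is just a convex combination of base-learner scores, it adds almost no capacity of its own, which is one reason such combiners are attractive on imbalanced data where a flexible meta-learner can overfit the minority class.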