Safe level graph for majority under-sampling techniques


Bibliographic Details
Main Author: Chumphol Bunkhumpornpat
Format: Journal
Published: 2018
Online Access:https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84936056675&origin=inward
http://cmuir.cmu.ac.th/jspui/handle/6653943832/45475
Institution: Chiang Mai University
Description
Summary:© 2014, Chiang Mai University. All rights reserved. In classification tasks, imbalanced data causes inadequate predictive performance on the tiny minority class because the decision boundary determined by trivial classifiers tends to be biased toward the huge majority class. To handle the class imbalance problem, over- and under-sampling are applied at the data level. Over-sampling duplicates or synthesizes instances in the minority class. Although redundant instances do not harm correct classifications, they increase classification costs. Additionally, while synthetic instances expand the learning region, they are not actual instances. Under-sampling removes instances from the majority class to remedy the overlapping problem; the downsized dataset can also speed up a classification algorithm. This research investigates the behavior of several under-sampling techniques as they cleanse distinct majority-class regions. We also propose a safe level graph for selecting an appropriate parameter of our prior work, MUTE. The experiments show that the parameter chosen from a safe level graph improves the F-measure of RIPPER when evaluating minority classes.
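The sketch below is not the paper's MUTE procedure; it only illustrates one plausible reading of the safe-level idea behind it, assuming that the safe level of a majority instance is the number of same-class (majority) neighbours among its k nearest neighbours, and that the "safe level graph" summarizes how many majority instances fall at each safe level so a removal threshold can be chosen. All function names and the threshold parameter are hypothetical.

```python
import numpy as np

def safe_levels(X_maj, X_min, k=5):
    """For each majority instance, count how many of its k nearest
    neighbours (over the whole dataset) are also majority instances.
    A low count suggests the instance lies in an overlapping region."""
    X_all = np.vstack([X_maj, X_min])
    labels = np.array([1] * len(X_maj) + [0] * len(X_min))  # 1 = majority
    levels = []
    for x in X_maj:
        d = np.linalg.norm(X_all - x, axis=1)
        nn = np.argsort(d)[1:k + 1]        # skip the instance itself
        levels.append(int(labels[nn].sum()))
    return np.array(levels)

def safe_level_graph(levels, k=5):
    """Number of majority instances at each safe level 0..k; plotting
    these counts gives a simple safe level graph for picking a threshold."""
    return np.bincount(levels, minlength=k + 1)

def undersample_majority(X_maj, X_min, k=5, threshold=2):
    """Keep only majority instances whose safe level is at least the
    threshold, i.e. discard those surrounded mostly by minority instances."""
    levels = safe_levels(X_maj, X_min, k)
    return X_maj[levels >= threshold]
```

Under this assumed formulation, one would inspect the counts returned by safe_level_graph to decide how aggressive the threshold should be (e.g. remove only safe level 0 and 1 instances) before training a classifier such as RIPPER on the reduced dataset.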