DML-PL: deep metric learning based pseudo-labeling framework for class imbalanced semi-supervised learning

Bibliographic Details
Main Authors: Yan, Mi, Hui, Siu Cheung, Li, Ning
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/170840
Institution: Nanyang Technological University
Description
Summary: Traditional class imbalanced learning algorithms require the training data to be labeled, whereas semi-supervised learning algorithms assume that the class distribution is balanced. In practical real-world applications, however, class imbalance and insufficient labeled data often coexist. Most existing class-imbalanced semi-supervised learning methods tackle these two problems separately, so the trained model is biased towards the majority classes, which have more data samples. In this study, we propose a deep metric learning based pseudo-labeling (DML-PL) framework that tackles both problems simultaneously for class-imbalanced semi-supervised learning. The DML-PL framework comprises three modules: Deep Metric Learning, Pseudo-Labeling and Network Fine-tuning. An iterative self-training strategy trains the model over multiple rounds. In each round, the Deep Metric Learning module trains a deep metric network to learn compact feature representations of the labeled and unlabeled data. The Pseudo-Labeling module then generates reliable pseudo-labels for the unlabeled data by clustering the labeled data and selecting nearest neighbors. Finally, the Network Fine-tuning module fine-tunes the deep metric network so that it generates better pseudo-labels in subsequent rounds. Training ends when all the unlabeled data have been pseudo-labeled. The proposed framework achieves state-of-the-art performance compared with baseline models on the long-tailed CIFAR-10, CIFAR-100 and ImageNet127 benchmark datasets.
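
The summary describes the iterative self-training loop only at a high level. Below is a minimal, illustrative sketch of that loop, not the authors' implementation: the deep metric network is stubbed out as a fixed `embed` function, and the neighbor count, agreement threshold and helper names (`pseudo_label_round`, `K_NEIGHBORS`, `VOTE_THRESHOLD`) are assumptions introduced here for illustration.

```python
# Illustrative sketch of a DML-PL-style self-training loop (assumptions noted above).
import numpy as np

K_NEIGHBORS = 5        # assumed number of labeled neighbors consulted per sample
VOTE_THRESHOLD = 0.8   # assumed agreement ratio required to accept a pseudo-label

def embed(x):
    # Stand-in for the deep metric network; in the paper this is a learned
    # encoder trained so that same-class samples lie close in feature space.
    return x  # identity embedding keeps the sketch runnable

def pseudo_label_round(X_lab, y_lab, X_unl):
    """Pseudo-label unlabeled points whose nearest labeled neighbors agree
    strongly; return the indices and labels of the accepted points."""
    Z_lab, Z_unl = embed(X_lab), embed(X_unl)
    accepted_idx, accepted_lab = [], []
    for i, z in enumerate(Z_unl):
        d = np.linalg.norm(Z_lab - z, axis=1)        # distances to labeled set
        nn = y_lab[np.argsort(d)[:K_NEIGHBORS]]      # labels of nearest neighbors
        labels_u, counts = np.unique(nn, return_counts=True)
        j = counts.argmax()
        if counts[j] / K_NEIGHBORS >= VOTE_THRESHOLD:  # confident majority vote
            accepted_idx.append(i)
            accepted_lab.append(labels_u[j])
    return np.array(accepted_idx, dtype=int), np.array(accepted_lab)

# Toy data: each round moves confidently labeled samples into the labeled pool.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 2))
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unl = rng.normal(size=(100, 2))

while len(X_unl) > 0:
    idx, lab = pseudo_label_round(X_lab, y_lab, X_unl)
    if len(idx) == 0:
        break  # no confident assignments left this round
    X_lab = np.vstack([X_lab, X_unl[idx]])
    y_lab = np.concatenate([y_lab, lab])
    X_unl = np.delete(X_unl, idx, axis=0)
    # fine_tune(embed, X_lab, y_lab)  # omitted: network update between rounds

print(f"{len(y_lab) - 20} samples pseudo-labeled")
```

In the full framework, the fine-tuning step (commented out here) would update the metric network between rounds, which is what allows later rounds to pseudo-label samples that earlier rounds rejected.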