Addressing the cold start problem in active learning using self-supervised learning
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/158461
Institution: Nanyang Technological University
Summary: Active learning promises to improve annotation efficiency by iteratively selecting the most important data to be annotated first. However, we uncover a striking contrast to this promise: active querying strategies fail to select data as effectively as random selection at the first choice. We identify this as the cold start problem in vision active learning. Systematic ablation experiments and qualitative visualizations reveal that the level of label uniformity (the uniform distribution of categories in a query) is an explicit criterion for determining annotation importance. However, computing label uniformity requires manual annotation, which, by the nature of active learning, is not available for the initial query. In this paper, we find that without manual annotation, contrastive learning can approximate label uniformity based on pseudo-labeled features generated from image feature clustering. Moreover, within each cluster, selecting hard-to-contrast data (low confidence in instance discrimination with low variability along the contrastive learning trajectory) is preferable to selecting ambiguous, easy-to-contrast data. Extensive benchmark experiments show that our initial query can surpass random sampling on medical imaging datasets (e.g., Colon Pathology, Dermatoscope, and Blood Cell Microscope). In summary, this study (1) illustrates the cold start problem in vision active learning, (2) investigates the underlying causes of the problem with rigorous analysis and visualization, and (3) determines effective initial queries to start the “human-in-the-loop” procedure. We hope our potential solution to the cold start problem can serve as a simple yet strong baseline for sampling the initial query in active learning for image classification.
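The sketch below illustrates the kind of initial-query selection the summary describes: pseudo-labels from clustering contrastive image features approximate label uniformity, and within each cluster hard-to-contrast samples (low mean instance-discrimination confidence with low variability along the training trajectory) are preferred. The function name, the `features` and `inst_disc_conf` inputs, and the use of k-means are illustrative assumptions, not details taken from the record.

```python
# Minimal sketch, assuming `features` are embeddings from a pretrained
# contrastive model (shape N x D) and `inst_disc_conf` holds each sample's
# instance-discrimination confidence at several checkpoints (shape N x E).
import numpy as np
from sklearn.cluster import KMeans


def select_initial_query(features, inst_disc_conf, num_clusters, budget, seed=0):
    """Pick an initial query without labels: cluster features into pseudo-labels
    to approximate label uniformity, then take hard-to-contrast samples
    (low mean confidence, low variability) from each cluster."""
    pseudo_labels = KMeans(n_clusters=num_clusters, random_state=seed).fit_predict(features)

    mean_conf = inst_disc_conf.mean(axis=1)   # low -> hard to contrast
    variability = inst_disc_conf.std(axis=1)  # low -> consistently hard, not ambiguous

    per_cluster = budget // num_clusters      # equal quota per pseudo-label
    query = []
    for c in range(num_clusters):
        idx = np.where(pseudo_labels == c)[0]
        # Prefer samples with both low confidence and low variability.
        order = idx[np.argsort(mean_conf[idx] + variability[idx])]
        query.extend(order[:per_cluster].tolist())
    return query
```

The equal per-cluster quota is what stands in for label uniformity when no annotations exist, while the low-confidence, low-variability ranking encodes the hard-to-contrast criterion; both choices are a plausible reading of the summary rather than the project's exact procedure.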