Learning to self-train for semi-supervised few-shot classification
Few-shot classification (FSC) is challenging due to the scarcity of labeled training data (e.g. only one labeled data point per class). Meta-learning has been shown to achieve promising results by learning to initialize a classification model for FSC. In this paper we propose a novel semi-supervised meta-learning method called learning to self-train (LST) that leverages unlabeled data and specifically meta-learns how to cherry-pick and label such unsupervised data to further improve performance. To this end, we train the LST model through a large number of semi-supervised few-shot tasks. On each task, we train a few-shot model to predict pseudo labels for unlabeled data, and then iterate the self-training steps on labeled and pseudo-labeled data with each step followed by fine-tuning. We additionally learn a soft weighting network (SWN) to optimize the self-training weights of pseudo labels so that better ones can contribute more to gradient descent optimization. We evaluate our LST method on two ImageNet benchmarks for semi-supervised few-shot classification and achieve large improvements over the state-of-the-art.
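The abstract describes a concrete inner loop: pseudo-label the unlabeled pool, cherry-pick confident samples, weight them with the SWN, and alternate self-training with fine-tuning. Below is a minimal sketch of one such iteration on a single task; it is not the authors' released code, and the interfaces (`model` and `swn` as PyTorch modules, with `swn` mapping a sample to one scalar logit), the confidence-based picking rule, and all hyperparameters are simplifying assumptions.

```python
# Minimal sketch (not the authors' code) of one LST-style self-training
# iteration on a single few-shot task. `model` is the few-shot classifier
# and `swn` a soft weighting network; both are assumed interfaces, and
# num_picked / steps / inner_lr are illustrative values.
import torch
import torch.nn.functional as F

def self_train_step(model, swn, x_sup, y_sup, x_unsup,
                    num_picked=10, steps=5, inner_lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)

    with torch.no_grad():
        # Predict pseudo labels for the unlabeled pool.
        probs = F.softmax(model(x_unsup), dim=1)
        conf, y_pseudo = probs.max(dim=1)
        # "Cherry-pick" the most confident pseudo-labeled samples.
        picked = conf.topk(min(num_picked, len(conf))).indices
        x_pseudo, y_pseudo = x_unsup[picked], y_pseudo[picked]
        # Soft weights decide how much each pseudo label contributes to
        # the gradient; the SWN itself is meta-learned in an outer loop
        # over many tasks (omitted here).
        w = torch.sigmoid(swn(x_pseudo)).squeeze(1)

    # Self-training on labeled plus weighted pseudo-labeled data ...
    for _ in range(steps):
        loss_sup = F.cross_entropy(model(x_sup), y_sup)
        per_sample = F.cross_entropy(model(x_pseudo), y_pseudo,
                                     reduction="none")
        loss = loss_sup + (w * per_sample).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # ... with each round followed by fine-tuning on labeled data alone.
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x_sup), y_sup).backward()
        opt.step()
    return model
```

In the paper this inner loop sits inside a meta-learning outer loop: the model initialization and the SWN are trained across a large number of semi-supervised few-shot tasks, which the sketch above does not show.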
Main Authors: LI, Xinzhe; SUN, Qianru; LIU, Yaoyao; ZHENG, Shibao; ZHOU, Qin; CHUA, Tat-Seng; SCHIELE, Bernt
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2019
Subjects: Few-shot learning; semi-supervised learning; meta-learning; image classification; Artificial Intelligence and Robotics; Computer Sciences; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/4445
https://ink.library.smu.edu.sg/context/sis_research/article/5448/viewcontent/NeurIPS_2019_semi_supervised_camera_ready.pdf
Institution: Singapore Management University
id
sg-smu-ink.sis_research-5448
record_format
dspace
spelling
sg-smu-ink.sis_research-5448 2021-02-19T03:10:44Z 2019-12-01T08:00:00Z text application/pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng
building
SMU Libraries
continent
Asia
country
Singapore
content_provider
SMU Libraries
collection
InK@SMU
_version_
1770574840068046848