Learning to teach and learn for semi-supervised few-shot image classification

This paper presents a novel semi-supervised few-shot image classification method named Learning to Teach and Learn (LTTL) to effectively leverage unlabeled samples in small-data regimes. Our method is based on self-training, which assigns pseudo labels to unlabeled data. However, conventional pseudo-labeling relies heavily on the initial model trained on a handful of labeled samples and may produce many noisily labeled samples. We propose to solve this problem in three steps: first, cherry-picking selects valuable samples from the pseudo-labeled data using a soft weighting network; second, cross-teaching lets the classifiers teach each other to reject additional noisy labels, with a feature synthesizing strategy introduced to avoid clean samples being rejected by mistake; finally, the classifiers are fine-tuned on the few labeled samples to avoid gradient drift. We use the meta-learning paradigm to optimize the parameters of the whole framework. The proposed LTTL combines the power of meta-learning and self-training, achieving superior performance compared with baseline methods on two public benchmarks.
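
The cherry-picking step described above lends itself to a short illustration. The PyTorch sketch below shows one self-training update: pseudo labels are assigned to unlabeled features, a small soft weighting network down-weights likely-noisy samples before the classifier is updated, and a labeled-data term counters gradient drift. All names here (SoftWeightNet, self_training_step, the feature dimensions) are illustrative assumptions, not the authors' implementation; the cross-teaching step and the meta-learned outer loop are omitted for brevity.

```python
# Hypothetical sketch of self-training with a soft weighting network
# ("cherry-picking"), in the spirit of the abstract; not the LTTL code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftWeightNet(nn.Module):
    """Scores each pseudo-labeled sample with a soft weight in (0, 1)."""

    def __init__(self, feat_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats, losses):
        # Condition the weight on the feature and its current loss value.
        x = torch.cat([feats, losses.unsqueeze(1)], dim=1)
        return torch.sigmoid(self.mlp(x)).squeeze(1)


def self_training_step(classifier, weight_net, sup_x, sup_y, unl_x, lr=0.01):
    """One inner update: pseudo-label the unlabeled features, weight them
    softly (cherry-picking), then update the classifier on the weighted
    pseudo-label loss plus a labeled-data term against gradient drift."""
    with torch.no_grad():
        pseudo_y = classifier(unl_x).argmax(dim=1)  # pseudo labels
    per_sample = F.cross_entropy(classifier(unl_x), pseudo_y,
                                 reduction="none")
    w = weight_net(unl_x, per_sample.detach())      # soft sample weights
    loss = (w * per_sample).mean() + F.cross_entropy(classifier(sup_x), sup_y)
    grads = torch.autograd.grad(loss, list(classifier.parameters()))
    with torch.no_grad():
        for p, g in zip(classifier.parameters(), grads):
            p -= lr * g                             # plain gradient step
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    feat_dim, n_way = 32, 5
    classifier = nn.Linear(feat_dim, n_way)     # linear head over features
    weight_net = SoftWeightNet(feat_dim)
    sup_x = torch.randn(25, feat_dim)           # 5-way 5-shot support set
    sup_y = torch.arange(n_way).repeat(5)
    unl_x = torch.randn(100, feat_dim)          # unlabeled pool
    for step in range(5):
        loss = self_training_step(classifier, weight_net,
                                  sup_x, sup_y, unl_x)
        print(f"step {step}: loss = {loss:.4f}")
```

Conditioning the weight on a sample's own loss reflects the intuition that high-loss pseudo-labeled samples are more likely to carry wrong labels; per the abstract, LTTL optimizes the parameters of the whole framework, including such a weighting network, with meta-learning rather than hand-tuning.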

Bibliographic Details
Main Authors: LI, Xinzhe, HUANG, Jianqiang, LIU, Yaoyao, ZHOU, Qin, ZHENG, Shibao, SCHIELE, Bernt, SUN, Qianru
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects: Few-shot learning; Meta-learning; Semi-supervised learning; Databases and Information Systems; Graphics and Human Computer Interfaces
DOI: 10.1016/j.cviu.2021.103270
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/6628
https://ink.library.smu.edu.sg/context/sis_research/article/7631/viewcontent/1_s20_S1077314221001144_main.pdf
Institution: Singapore Management University