Class-incremental exemplar compression for class-incremental learning

Exemplar-based class-incremental learning (CIL) finetunes the model with all samples of new classes but few-shot exemplars of old classes in each incremental phase, where the "few-shot" abides by the limited memory budget. In this paper, we break this "few-shot" limit based on a simple yet surprisingly effective idea: compressing exemplars by downsampling non-discriminative pixels and saving "many-shot" compressed exemplars in the memory. Without needing any manual annotation, we achieve this compression by generating 0-1 masks on discriminative pixels from class activation maps (CAM). We propose an adaptive mask generation model called class-incremental masking (CIM) to explicitly resolve two difficulties of using CAM: 1) transforming the heatmaps of CAM to 0-1 masks with an arbitrary threshold leads to a trade-off between the coverage of discriminative pixels and the quantity of exemplars, as the total memory is fixed; and 2) optimal thresholds vary for different object classes, which is particularly obvious in the dynamic environment of CIL. We optimize the CIM model alternately with the conventional CIL model through a bilevel optimization problem. We conduct extensive experiments on high-resolution CIL benchmarks including Food-101, ImageNet-100, and ImageNet-1000, and show that using the compressed exemplars by CIM can achieve a new state-of-the-art CIL accuracy, e.g., 4.8 percentage points higher than FOSTER on 10-Phase ImageNet-1000.
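
The compression idea described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal, hypothetical illustration, not the authors' released code: it assumes a torchvision ResNet-18 backbone, and it uses a fixed CAM threshold (0.5) and a fixed 4x background downsampling factor where the paper's CIM model would instead learn class-adaptive masks.

```python
# Minimal sketch of CAM-based exemplar compression (illustrative only).
# Assumptions: torchvision ResNet-18; the fixed threshold and downsampling
# factor stand in for the learned, class-adaptive CIM masks.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

def cam_mask(image, class_idx, threshold=0.5):
    """Return a 0-1 mask marking discriminative pixels via CAM."""
    feats = {}
    hook = model.layer4.register_forward_hook(
        lambda mod, inp, out: feats.__setitem__("maps", out))
    with torch.no_grad():
        model(image)                                 # fills feats["maps"]
    hook.remove()
    # CAM: fc-layer class weights dotted with the last conv feature maps.
    weights = model.fc.weight[class_idx]             # (512,)
    cam = torch.einsum("c,bchw->bhw", weights, feats["maps"])
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    cam = F.interpolate(cam[None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0]
    return (cam >= threshold).float()                # (1, H, W)

def compress_exemplar(image, mask, factor=4):
    """Keep discriminative pixels; downsample the rest to save memory."""
    small = F.interpolate(image, scale_factor=1 / factor,
                          mode="bilinear", align_corners=False)
    coarse = F.interpolate(small, size=image.shape[-2:],
                           mode="bilinear", align_corners=False)
    # In storage, only `small` plus the masked full-res pixels need be
    # kept; here we reconstruct the composite exemplar directly.
    return mask * image + (1 - mask) * coarse

x = torch.rand(1, 3, 224, 224)                       # dummy exemplar
compressed = compress_exemplar(x, cam_mask(x, class_idx=0))
```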

Bibliographic Details
Main Authors: LUO, Zilin; LIU, Yaoyao; SCHIELE, Bernt; SUN, Qianru
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: Databases and Information Systems; Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/8055
https://ink.library.smu.edu.sg/context/sis_research/article/9058/viewcontent/Luo_Class_Incremental_Exemplar_Compression_for_Class_Incremental_Learning_CVPR_2023_paper.pdf
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems, InK@SMU
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)