ACIL: analytic class-incremental learning with absolute memorization and privacy protection

Class-incremental learning (CIL) learns a classification model with training data of different classes arising progressively. Existing CIL methods either suffer from serious accuracy loss due to catastrophic forgetting, or invade data privacy by revisiting used exemplars. Inspired by linear learning formulations, we propose analytic class-incremental learning (ACIL), which achieves absolute memorization of past knowledge while avoiding breaches of data privacy (i.e., without storing historical data). The absolute memorization is demonstrated in the sense that ACIL trained incrementally on present data alone yields results identical to those of its joint-learning counterpart, which consumes both present and historical samples. This equality is theoretically validated. Data privacy is ensured since no historical data are involved during the learning process. Empirical validations demonstrate ACIL's competitive accuracy, with near-identical results across various incremental task settings (e.g., 5-50 phases). This also allows ACIL to outperform state-of-the-art methods in large-phase scenarios (e.g., 25 and 50 phases).
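The abstract's central claim, that incremental updates reproduce the joint-learning solution exactly, follows from recursive (regularized) least squares: only a weight matrix and an inverse autocorrelation matrix need be carried between phases, never the raw data. A minimal sketch of that equivalence is below (an illustration of the underlying linear-algebra identity, not the authors' released code; all names are ours):

```python
import numpy as np

def joint_ridge(X, Y, lam):
    """Ridge regression solved once on the full dataset."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def incremental_ridge(phases, d, c, lam):
    """Phase-by-phase update storing only W (d x c) and R (d x d),
    never any raw samples from earlier phases."""
    W = np.zeros((d, c))
    R = np.eye(d) / lam  # inverse of the regularized autocorrelation matrix
    for X, Y in phases:  # each phase contributes data for new classes only
        # Woodbury identity: fold the new block of samples into R
        K = np.linalg.inv(np.eye(X.shape[0]) + X @ R @ X.T)
        R = R - R @ X.T @ K @ X @ R
        # Correct the weights using only the new data and the updated R
        W = W + R @ X.T @ (Y - X @ W)
    return W

rng = np.random.default_rng(0)
d, c, lam = 8, 3, 0.1
phases = [(rng.normal(size=(20, d)), rng.normal(size=(20, c))) for _ in range(5)]
X_all = np.vstack([X for X, _ in phases])
Y_all = np.vstack([Y for _, Y in phases])

W_joint = joint_ridge(X_all, Y_all, lam)
W_incr = incremental_ridge(phases, d, c, lam)
print(np.allclose(W_joint, W_incr))  # True: incremental equals joint
```

The equality is exact in exact arithmetic (and holds to floating-point tolerance here), which is the sense in which no accuracy is lost and no historical data need be revisited.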

Saved in:
Bibliographic Details
Main Authors: Zhuang, Huiping, Weng, Zhenyu, Xie, Renchunzi, Toh, Kar-Ann, Lin, Zhiping
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2024
Subjects:
Online Access:https://hdl.handle.net/10356/174481
https://proceedings.neurips.cc/paper_files/paper/2022
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-174481
record_format dspace
spelling sg-ntu-dr.10356-174481 2024-04-05T15:40:25Z
Title: ACIL: analytic class-incremental learning with absolute memorization and privacy protection
Authors: Zhuang, Huiping; Weng, Zhenyu; Xie, Renchunzi; Toh, Kar-Ann; Lin, Zhiping
Affiliations: School of Electrical and Electronic Engineering; School of Computer Science and Engineering
Conference: 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Subjects: Computer and Information Science; Class-incremental learning; Data privacy
Abstract: Class-incremental learning (CIL) learns a classification model with training data of different classes arising progressively. Existing CIL methods either suffer from serious accuracy loss due to catastrophic forgetting, or invade data privacy by revisiting used exemplars. Inspired by linear learning formulations, we propose analytic class-incremental learning (ACIL), which achieves absolute memorization of past knowledge while avoiding breaches of data privacy (i.e., without storing historical data). The absolute memorization is demonstrated in the sense that ACIL trained incrementally on present data alone yields results identical to those of its joint-learning counterpart, which consumes both present and historical samples. This equality is theoretically validated. Data privacy is ensured since no historical data are involved during the learning process. Empirical validations demonstrate ACIL's competitive accuracy, with near-identical results across various incremental task settings (e.g., 5-50 phases). This also allows ACIL to outperform state-of-the-art methods in large-phase scenarios (e.g., 25 and 50 phases).
Funding: Agency for Science, Technology and Research (A*STAR). This work was supported in part by the Science and Engineering Research Council, Agency of Science, Technology and Research, Singapore, through the National Robotics Program under Grant 1922500054 (NRP-1922500054).
Version: Published version
Deposited: 2024-04-02T01:27:42Z
Published: 2022
Type: Conference Paper
Citation: Zhuang, H., Weng, Z., Xie, R., Toh, K. & Lin, Z. (2022). ACIL: analytic class-incremental learning with absolute memorization and privacy protection. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
ISBN: 9781713871088
Online Access: https://hdl.handle.net/10356/174481; https://proceedings.neurips.cc/paper_files/paper/2022
Language: en
Rights: © 2022 The Author(s). All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at https://proceedings.neurips.cc/paper_files/paper/2022.
Format: application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
Class-incremental learning
Data privacy
spellingShingle Computer and Information Science
Class-incremental learning
Data privacy
Zhuang, Huiping
Weng, Zhenyu
Xie, Renchunzi
Toh, Kar-Ann
Lin, Zhiping
ACIL: analytic class-incremental learning with absolute memorization and privacy protection
description Class-incremental learning (CIL) learns a classification model with training data of different classes arising progressively. Existing CIL methods either suffer from serious accuracy loss due to catastrophic forgetting, or invade data privacy by revisiting used exemplars. Inspired by linear learning formulations, we propose analytic class-incremental learning (ACIL), which achieves absolute memorization of past knowledge while avoiding breaches of data privacy (i.e., without storing historical data). The absolute memorization is demonstrated in the sense that ACIL trained incrementally on present data alone yields results identical to those of its joint-learning counterpart, which consumes both present and historical samples. This equality is theoretically validated. Data privacy is ensured since no historical data are involved during the learning process. Empirical validations demonstrate ACIL's competitive accuracy, with near-identical results across various incremental task settings (e.g., 5-50 phases). This also allows ACIL to outperform state-of-the-art methods in large-phase scenarios (e.g., 25 and 50 phases).
author2 School of Electrical and Electronic Engineering
author_facet School of Electrical and Electronic Engineering
Zhuang, Huiping
Weng, Zhenyu
Xie, Renchunzi
Toh, Kar-Ann
Lin, Zhiping
format Conference or Workshop Item
author Zhuang, Huiping
Weng, Zhenyu
Xie, Renchunzi
Toh, Kar-Ann
Lin, Zhiping
author_sort Zhuang, Huiping
title ACIL: analytic class-incremental learning with absolute memorization and privacy protection
title_short ACIL: analytic class-incremental learning with absolute memorization and privacy protection
title_sort acil: analytic class-incremental learning with absolute memorization and privacy protection
publishDate 2024
url https://hdl.handle.net/10356/174481
https://proceedings.neurips.cc/paper_files/paper/2022
_version_ 1814047211683577856