Replay-and-forget-free graph class-incremental learning: A task profiling and prompting approach

Class-incremental learning (CIL) aims to continually learn a sequence of tasks, with each task consisting of a set of unique classes. Graph CIL (GCIL) follows the same setting but needs to deal with graph tasks (e.g., node classification in a graph). The key characteristic of CIL lies in the absence of task identifiers (IDs) during inference, which causes a significant challenge in separating classes from different tasks (i.e., inter-task class separation). Being able to accurately predict the task IDs can help address this issue, but it is a challenging problem. In this paper, we show theoretically that accurate task ID prediction on graph data can be achieved by a Laplacian smoothing-based graph task profiling approach, in which each graph task is modeled by a task prototype based on Laplacian smoothing over the graph. It guarantees that the task prototypes of the same graph task are nearly the same with a large smoothing step, while those of different tasks are distinct due to differences in graph structure and node attributes. Further, to avoid the catastrophic forgetting of the knowledge learned in previous graph tasks, we propose a novel graph prompting approach for GCIL which learns a small discriminative graph prompt for each task, essentially resulting in a separate classification model for each task. The prompt learning requires the training of a single graph neural network (GNN) only once on the first task, and no data replay is required thereafter, thereby yielding a GCIL model that is both replay-free and forget-free. Extensive experiments on four GCIL benchmarks show that i) our task prototype-based method can achieve 100% task ID prediction accuracy on all four datasets, ii) our GCIL model significantly outperforms state-of-the-art competing methods by at least 18% in average CIL accuracy, and iii) our model is fully free of forgetting on the four datasets. Code is available at https://github.com/mala-lab/TPP.
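
The task profiling idea in the abstract can be illustrated with a short sketch. The snippet below is an illustration only, not the authors' implementation (the function names, symmetric normalization, fixed number of smoothing steps, and mean pooling are assumptions made here); the official code is at https://github.com/mala-lab/TPP.

```python
# Hedged sketch of a Laplacian-smoothing task prototype: node attributes of a task's
# subgraph are repeatedly propagated over the normalized adjacency matrix, then
# mean-pooled into a single prototype vector. Task IDs at inference are predicted
# by nearest-prototype matching. Not the authors' code; see the TPP repository.
import numpy as np


def task_prototype(adj: np.ndarray, x: np.ndarray, steps: int = 10) -> np.ndarray:
    """adj: (n, n) adjacency of the task's graph; x: (n, d) node attributes."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    s = d_inv_sqrt @ a_hat @ d_inv_sqrt           # symmetric normalization
    h = x.astype(float)
    for _ in range(steps):                        # Laplacian smoothing steps
        h = s @ h
    return h.mean(axis=0)                         # mean-pool into a task prototype


def predict_task_id(prototypes: list[np.ndarray], adj: np.ndarray, x: np.ndarray) -> int:
    """Match a test-time graph against the stored per-task prototypes."""
    query = task_prototype(adj, x)
    dists = [np.linalg.norm(query - p) for p in prototypes]
    return int(np.argmin(dists))
```

In the approach described in the abstract, the predicted task ID would then select the corresponding task-specific graph prompt (and hence a per-task classifier) at inference time.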

Bibliographic Details
Main Authors: NIU, Chaoxi; PANG, Guansong; CHEN, Ling; LIU, Bing
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Databases and Information Systems; Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/9875
https://ink.library.smu.edu.sg/context/sis_research/article/10875/viewcontent/8497_Replay_and_Forget_Free_Gr__1_.pdf
Institution: Singapore Management University
DOI: 10.48550/ARXIV.2410.10341
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: InK@SMU, Research Collection School Of Computing and Information Systems