Graph continual learning with debiased lossless memory replay

Real-life graph data often expands continually, rendering the learning of graph neural networks (GNNs) on static graph data impractical. Graph continual learning (GCL) tackles this problem by continually adapting GNNs to the expanded graph of the current task while maintaining performance on the graphs of previous tasks. Memory replay-based methods, which replay data of previous tasks when learning new tasks, have been explored as one principled approach to mitigating the forgetting of knowledge learned from previous tasks. In this paper, we extend this methodology with a novel framework, called Debiased Lossless Memory replay (DeLoMe). Unlike existing methods that sample nodes/edges of previous graphs to construct the memory, DeLoMe learns small lossless synthetic node representations as the memory. The learned memory can not only preserve graph data privacy but also capture the holistic graph information, for which sampling-based methods are not viable. Further, prior methods suffer from bias toward the current task due to the data imbalance between the classes in the memory data and the current data. A debiased GCL loss function is devised in DeLoMe to effectively alleviate this bias. Extensive experiments on four graph datasets show the effectiveness of DeLoMe under both class-incremental and task-incremental learning settings.
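To make the replay-and-debias idea concrete, below is a minimal Python/PyTorch sketch. It is not the authors' implementation: the synthetic memory is taken as given (DeLoMe learns it; that step is omitted here), the classifier is applied directly to node representations rather than through a full GNN forward pass, and the "debiased" loss is illustrated with logit adjustment by class priors, a standard remedy for the imbalance between the small replayed memory and the larger current-task data. The names debiased_loss and replay_step are hypothetical.

import torch
import torch.nn.functional as F

def debiased_loss(logits, labels, class_counts, tau=1.0):
    # Shift each class logit by tau * log(prior); rare (memory) classes get
    # a smaller shift penalty, counteracting the bias toward the abundant
    # current-task classes. class_counts holds per-class training sizes.
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, labels)

def replay_step(model, optimizer, cur_x, cur_y, mem_x, mem_y, class_counts):
    # One optimization step on the union of current-task node
    # representations and the replayed (synthetic) memory of past tasks.
    x = torch.cat([cur_x, mem_x], dim=0)
    y = torch.cat([cur_y, mem_y], dim=0)
    optimizer.zero_grad()
    loss = debiased_loss(model(x), y, class_counts)
    loss.backward()
    optimizer.step()
    return loss.item()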


Bibliographic Details
Main Authors: NIU, Chaoxi; PANG, Guansong; CHEN, Ling
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/9911
https://ink.library.smu.edu.sg/context/sis_research/article/10911/viewcontent/FAIA_392_FAIA240692.pdf
Institution: Singapore Management University
DOI: 10.3233/FAIA240692
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School of Computing and Information Systems, InK@SMU