Iterative graph self-distillation

Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs. To address this, we propose a method called Iterative Graph Self-Distillation (IGSD), which learns graph-level representations in an unsupervised manner through instance discrimination, using a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model as an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representations of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning IGSD-trained models with self-training can further improve their graph representation power. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which validates the superiority of IGSD.
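The abstract describes the training recipe only in words. As a rough, hypothetical illustration (not the authors' released implementation), the Python sketch below shows the two ingredients it names: a teacher maintained as an exponential moving average (EMA) of the student, and a loss that makes the student predict the teacher's representation of a differently augmented view of the same graph. The encoder, predictor head, graph-view inputs, momentum value, and the simplified negative-cosine loss (standing in for the paper's contrastive objective) are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.99) -> None:
    """Move teacher weights toward the student: an exponential moving average."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.data.mul_(momentum).add_(s_param.data, alpha=1.0 - momentum)


def distillation_loss(student_pred: torch.Tensor, teacher_repr: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity between the student's prediction and the
    stop-gradient teacher representation of the other view (simplified stand-in
    for the paper's contrastive objective)."""
    student_pred = F.normalize(student_pred, dim=-1)
    teacher_repr = F.normalize(teacher_repr.detach(), dim=-1)
    return -(student_pred * teacher_repr).sum(dim=-1).mean()


def train_step(student, teacher, predictor, optimizer, view_a, view_b):
    """One step: encode two augmented views of the same graphs, predict the
    teacher's embedding of each view from the student's embedding of the other,
    then update the teacher by EMA."""
    loss = (distillation_loss(predictor(student(view_a)), teacher(view_b))
            + distillation_loss(predictor(student(view_b)), teacher(view_a)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

In this kind of setup the teacher is typically initialized as a copy of the student and never receives gradients; only the EMA update changes its weights.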

Bibliographic Details
Main Authors: ZHANG, Hanlin; LIN, Shuai; LIU, Weiyang; ZHOU, Pan; TANG, Jian; LIANG, Xiaodan; XING, Eric
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Publication Date: 2024-03-01
DOI: 10.1109/TKDE.2023.3303885
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems, InK@SMU
Subjects: graph representation learning; self-supervised learning; contrastive learning; Graphics and Human Computer Interfaces
Online Access:https://ink.library.smu.edu.sg/sis_research/8992
https://ink.library.smu.edu.sg/context/sis_research/article/9995/viewcontent/2023_TKDE_Self_Distillation.pdf
Institution: Singapore Management University