Learning to pre-train graph neural networks
Graph neural networks (GNNs), which derive effective node representations by recursively aggregating information from graph neighborhoods, have become the de facto standard for representation learning on graphs. While GNNs can be trained from scratch, pre-training GNNs to learn transferable knowledge...
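The abstract's notion of "recursively aggregating information from graph neighborhoods" can be sketched as a single message-passing layer. This is a minimal, generic illustration, not the paper's method; the names `A`, `H`, `W`, and `gnn_layer` are assumptions for the sketch.

```python
import numpy as np

def gnn_layer(A, H, W):
    """One propagation step (generic sketch): average neighbor features,
    then apply a linear transform and a ReLU nonlinearity."""
    deg = A.sum(axis=1, keepdims=True)    # node degrees
    H_agg = (A @ H) / np.maximum(deg, 1)  # mean over each node's neighbors
    return np.maximum(H_agg @ W, 0.0)     # transform + ReLU

# Toy graph: 3 nodes in a path 0-1-2, with 2-dimensional features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3, 2)   # initial node features
W = np.eye(2)      # identity weights, for readability
H1 = gnn_layer(A, H, W)  # node representations after one hop
```

Stacking such layers makes each node's representation depend on progressively larger neighborhoods, which is the recursion the abstract refers to.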
Main Authors: LU, Yuanfu; JIANG, Xunqiang; FANG, Yuan; SHI, Chuan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Online Access: https://ink.library.smu.edu.sg/sis_research/6125
https://ink.library.smu.edu.sg/context/sis_research/article/7128/viewcontent/16552_Article_Text_20046_1_2_20210518.pdf
Institution: Singapore Management University
Similar Items
- Contrastive pre-training of GNNs on heterogeneous graphs
  by: JIANG, Xunqiang, et al.
  Published: (2021)
- Graphprompt: Unifying pre-training and downstream tasks for graph neural networks
  by: LIU, Zemin, et al.
  Published: (2023)
- Pre-training on large-scale heterogeneous graph
  by: JIANG, Xunqiang, et al.
  Published: (2021)
- Augmenting low-resource text classification with graph-grounded pre-training and prompting
  by: WEN, Zhihao, et al.
  Published: (2023)
- Generalizing graph neural network across graphs and time
  by: WEN, Zhihao
  Published: (2023)