Learning to pre-train graph neural networks
Graph neural networks (GNNs) have become the de facto standard for representation learning on graphs, deriving effective node representations by recursively aggregating information from graph neighborhoods. While GNNs can be trained from scratch, pre-training GNNs to learn transferable knowledge...
Main Authors: LU, Yuanfu; JIANG, Xunqiang; FANG, Yuan; SHI, Chuan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Online Access: https://ink.library.smu.edu.sg/sis_research/6125
https://ink.library.smu.edu.sg/context/sis_research/article/7128/viewcontent/16552_Article_Text_20046_1_2_20210518.pdf
Similar Items
- Contrastive pre-training of GNNs on heterogeneous graphs
  by: JIANG, Xunqiang, et al.
  Published: (2021)
- Graphprompt: Unifying pre-training and downstream tasks for graph neural networks
  by: LIU, Zemin, et al.
  Published: (2023)
- Pre-training on large-scale heterogeneous graph
  by: JIANG, Xunqiang, et al.
  Published: (2021)
- Generalizing graph neural network across graphs and time
  by: WEN, Zhihao
  Published: (2023)
- Augmenting low-resource text classification with graph-grounded pre-training and prompting
  by: WEN, Zhihao, et al.
  Published: (2023)