GraphPrompt: Unifying pre-training and downstream tasks for graph neural networks
Graphs can model complex relationships between objects, enabling a myriad of Web applications such as online page/article classification and social recommendation. While graph neural networks (GNNs) have emerged as a powerful tool for graph representation learning, their performance in an end-to-end supervised setting relies heavily on a large amount of task-specific supervision. To reduce labeling requirements, the "pre-train, fine-tune" and "pre-train, prompt" paradigms have become increasingly common. In particular, prompting is a popular alternative to fine-tuning in natural language processing, designed to narrow the gap between pre-training and downstream objectives in a task-specific manner. However, existing studies of prompting on graphs are still limited, lacking a universal treatment that appeals to different downstream tasks. In this paper, we propose GraphPrompt, a novel pre-training and prompting framework on graphs. GraphPrompt not only unifies pre-training and downstream tasks into a common task template, but also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model in a task-specific manner. Finally, we conduct extensive experiments on five public datasets to evaluate and analyze GraphPrompt.
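The core mechanism the abstract describes, a learnable, task-specific prompt that steers a frozen pre-trained GNN under a shared similarity-based task template, can be illustrated with a short sketch. This is a minimal illustration of the idea only: the class and function names, the element-wise prompt-weighted sum readout, and the cosine-similarity scoring against class prototypes are assumptions made for exposition, not the paper's exact architecture.

```python
# Minimal sketch (assumed details, not the paper's exact code): a learnable
# prompt reweights node embeddings from a frozen pre-trained GNN before
# readout, and downstream prediction reuses the same similarity template
# as pre-training by comparing embeddings.
import torch
import torch.nn.functional as F

class PromptedReadout(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # One learnable prompt vector per downstream task (hypothetical form).
        self.prompt = torch.nn.Parameter(torch.ones(dim))

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: (num_nodes, dim) embeddings from a frozen pre-trained GNN.
        # Element-wise prompt reweighting, then sum readout -> (dim,) vector.
        return (node_emb * self.prompt).sum(dim=0)

def classify(graph_emb: torch.Tensor, prototypes: torch.Tensor) -> int:
    # Score a (sub)graph embedding against per-class prototype embeddings by
    # cosine similarity and predict the closest class.
    sims = F.cosine_similarity(graph_emb.unsqueeze(0), prototypes, dim=-1)
    return int(sims.argmax())

# Toy usage with random stand-in embeddings: 5 nodes, 2 classes, dim 8.
readout = PromptedReadout(dim=8)
graph_emb = readout(torch.randn(5, 8))
print(classify(graph_emb, prototypes=torch.randn(2, 8)))
```

Because only the small prompt vector is trained per task while the pre-trained model stays fixed, a handful of labeled examples can suffice for adaptation, consistent with the few-shot learning setting listed under Subjects.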
Main Authors: LIU, Zemin; YU, Xingtong; FANG, Yuan; ZHANG, Xinming
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Graph neural networks; pre-training; prompt; few-shot learning; Information Security
Online Access: https://ink.library.smu.edu.sg/sis_research/8191
https://ink.library.smu.edu.sg/context/sis_research/article/9194/viewcontent/TheWebConf23_GraphPrompt.pdf
Institution: Singapore Management University
Collection: InK@SMU, Research Collection School Of Computing and Information Systems
DOI: 10.1145/3543507.3583386
License: http://creativecommons.org/licenses/by-nc-nd/4.0/ (CC BY-NC-ND 4.0)