MultiGPrompt for multi-task pre-training and prompting on graphs
Graph Neural Networks (GNNs) have emerged as a mainstream technique for graph representation learning. However, their efficacy within an end-to-end supervised framework is significantly tied to the availability of task-specific labels. To mitigate labeling costs and enhance robustness in few-shot settings, pre-training on self-supervised tasks has emerged as a promising method, while prompting has been proposed to further narrow the objective gap between pretext and downstream tasks. Although there has been some initial exploration of prompt-based learning on graphs, these studies primarily leverage a single pretext task, resulting in a limited subset of general knowledge that could be learned from the pre-training data. Hence, in this paper, we propose MultiGPrompt, a novel multi-task pre-training and prompting framework to exploit multiple pretext tasks for more comprehensive pre-trained knowledge. First, in pre-training, we design a set of pretext tokens to synergize multiple pretext tasks. Second, we propose a dual-prompt mechanism consisting of composed and open prompts to leverage task-specific and global pre-training knowledge, guiding downstream tasks in few-shot settings. Finally, we conduct extensive experiments on six public datasets to evaluate and analyze MultiGPrompt.
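The abstract names two mechanisms: pretext tokens that condition a shared encoder on each pretext task during multi-task pre-training, and a dual prompt (a composed prompt mixing the pre-trained tokens plus an open prompt trained from scratch) for few-shot downstream adaptation. The minimal PyTorch sketch below illustrates one way such a design could be wired up; the class names (`GCNLayer`, `PretextToken`, `MultiTaskPretrainer`, `DualPrompt`) and the multiplicative token conditioning are this sketch's own assumptions, not the authors' implementation (see the linked camera-ready PDF for the actual method).

```python
# Illustrative sketch only: multi-task pre-training with pretext tokens and a
# composed+open dual prompt. Names and the multiplicative conditioning are
# assumptions, not the paper's actual code.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """A one-layer stand-in GNN encoder (dense adjacency for simplicity)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj is assumed to be a normalized (N, N) adjacency matrix.
        return torch.relu(self.lin(adj @ x))


class PretextToken(nn.Module):
    """A learnable token conditioning the shared encoder output on one pretext task."""
    def __init__(self, dim):
        super().__init__()
        self.token = nn.Parameter(torch.randn(1, dim) * 0.01)

    def forward(self, h):
        return h * (1.0 + self.token)  # element-wise modulation


class MultiTaskPretrainer(nn.Module):
    """Produces one task-conditioned view per pretext task from a shared encoder."""
    def __init__(self, encoder, dim, num_pretext_tasks):
        super().__init__()
        self.encoder = encoder
        self.tokens = nn.ModuleList(
            [PretextToken(dim) for _ in range(num_pretext_tasks)]
        )

    def forward(self, x, adj):
        h = self.encoder(x, adj)
        # Each view would be fed to its own pretext loss during pre-training.
        return [tok(h) for tok in self.tokens]


class DualPrompt(nn.Module):
    """Composed prompt: a learned mixture over the frozen pretext tokens
    (task-specific knowledge). Open prompt: a fresh token trained from
    scratch downstream (global knowledge)."""
    def __init__(self, pretext_tokens, dim):
        super().__init__()
        self.mix = nn.Parameter(torch.zeros(len(pretext_tokens)))
        # Freeze the pre-trained tokens; only the mixture weights are tuned.
        self.frozen = [t.token.detach() for t in pretext_tokens]
        self.open_token = nn.Parameter(torch.randn(1, dim) * 0.01)

    def forward(self, h):
        w = torch.softmax(self.mix, dim=0)
        composed = sum(wi * ti for wi, ti in zip(w, self.frozen))
        return h * (1.0 + composed + self.open_token)


# Toy usage: 5 nodes, 8-dim features, 2 pretext tasks.
x, adj = torch.randn(5, 8), torch.eye(5)
encoder = GCNLayer(8, 16)
pretrainer = MultiTaskPretrainer(encoder, 16, num_pretext_tasks=2)
views = pretrainer(x, adj)             # pre-training: one view per pretext loss
prompt = DualPrompt(pretrainer.tokens, 16)
z = prompt(encoder(x, adj))            # downstream: prompted embeddings
```

Under this reading, pre-training jointly optimizes the encoder and the pretext tokens across all pretext losses, while the downstream stage keeps both frozen and tunes only the few prompt parameters, which is what would make adaptation feasible in a few-shot setting.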
Main Authors: | YU, Xingtong; ZHOU, Chang; FANG, Yuan; ZHAN, Xinming |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2024 |
Subjects: | Graph learning; prompting; multi-task; few-shot learning; Databases and Information Systems; Graphics and Human Computer Interfaces |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8711 https://ink.library.smu.edu.sg/context/sis_research/article/9714/viewcontent/Multi_task_Graph_Prompt__Camera_ready_.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-9714 |
---|---|
record_format | dspace |
spelling | sg-smu-ink.sis_research-9714 2024-04-04T09:04:05Z MultiGPrompt for multi-task pre-training and prompting on graphs YU, Xingtong; ZHOU, Chang; FANG, Yuan; ZHAN, Xinming. Graph Neural Networks (GNNs) have emerged as a mainstream technique for graph representation learning. However, their efficacy within an end-to-end supervised framework is significantly tied to the availability of task-specific labels. To mitigate labeling costs and enhance robustness in few-shot settings, pre-training on self-supervised tasks has emerged as a promising method, while prompting has been proposed to further narrow the objective gap between pretext and downstream tasks. Although there has been some initial exploration of prompt-based learning on graphs, these studies primarily leverage a single pretext task, resulting in a limited subset of general knowledge that could be learned from the pre-training data. Hence, in this paper, we propose MultiGPrompt, a novel multi-task pre-training and prompting framework to exploit multiple pretext tasks for more comprehensive pre-trained knowledge. First, in pre-training, we design a set of pretext tokens to synergize multiple pretext tasks. Second, we propose a dual-prompt mechanism consisting of composed and open prompts to leverage task-specific and global pre-training knowledge, guiding downstream tasks in few-shot settings. Finally, we conduct extensive experiments on six public datasets to evaluate and analyze MultiGPrompt. 2024-05-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8711 info:doi/10.1145/3589334.3645423 https://ink.library.smu.edu.sg/context/sis_research/article/9714/viewcontent/Multi_task_Graph_Prompt__Camera_ready_.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Graph learning; prompting; multi-task; few-shot learning; Databases and Information Systems; Graphics and Human Computer Interfaces |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | Graph learning; prompting; multi-task; few-shot learning; Databases and Information Systems; Graphics and Human Computer Interfaces |
spellingShingle | Graph learning; prompting; multi-task; few-shot learning; Databases and Information Systems; Graphics and Human Computer Interfaces; YU, Xingtong; ZHOU, Chang; FANG, Yuan; ZHAN, Xinming; MultiGPrompt for multi-task pre-training and prompting on graphs |
description | Graph Neural Networks (GNNs) have emerged as a mainstream technique for graph representation learning. However, their efficacy within an end-to-end supervised framework is significantly tied to the availability of task-specific labels. To mitigate labeling costs and enhance robustness in few-shot settings, pre-training on self-supervised tasks has emerged as a promising method, while prompting has been proposed to further narrow the objective gap between pretext and downstream tasks. Although there has been some initial exploration of prompt-based learning on graphs, these studies primarily leverage a single pretext task, resulting in a limited subset of general knowledge that could be learned from the pre-training data. Hence, in this paper, we propose MultiGPrompt, a novel multi-task pre-training and prompting framework to exploit multiple pretext tasks for more comprehensive pre-trained knowledge. First, in pre-training, we design a set of pretext tokens to synergize multiple pretext tasks. Second, we propose a dual-prompt mechanism consisting of composed and open prompts to leverage task-specific and global pre-training knowledge, guiding downstream tasks in few-shot settings. Finally, we conduct extensive experiments on six public datasets to evaluate and analyze MultiGPrompt. |
format | text |
author | YU, Xingtong; ZHOU, Chang; FANG, Yuan; ZHAN, Xinming |
author_facet | YU, Xingtong; ZHOU, Chang; FANG, Yuan; ZHAN, Xinming |
author_sort | YU, Xingtong |
title | MultiGPrompt for multi-task pre-training and prompting on graphs |
title_short | MultiGPrompt for multi-task pre-training and prompting on graphs |
title_full | MultiGPrompt for multi-task pre-training and prompting on graphs |
title_fullStr | MultiGPrompt for multi-task pre-training and prompting on graphs |
title_full_unstemmed | MultiGPrompt for multi-task pre-training and prompting on graphs |
title_sort | multigprompt for multi-task pre-training and prompting on graphs |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2024 |
url | https://ink.library.smu.edu.sg/sis_research/8711 https://ink.library.smu.edu.sg/context/sis_research/article/9714/viewcontent/Multi_task_Graph_Prompt__Camera_ready_.pdf |
_version_ | 1814047473256103936 |