Generalized graph prompt: Toward a unification of pre-training and downstream tasks on graphs

Graphs can model complex relationships between objects, enabling a myriad of Web applications such as online page/article classification and social recommendation. While graph neural networks (GNNs) have emerged as a powerful tool for graph representation learning, their performance in an end-to-end supervised setting relies heavily on a large amount of task-specific supervision. To reduce the labeling requirement, the 'pre-train, fine-tune' and 'pre-train, prompt' paradigms have become increasingly common. In particular, prompting is a popular alternative to fine-tuning in natural language processing, designed to narrow the gap between pre-training and downstream objectives in a task-specific manner. However, existing studies of prompting on graphs are still limited, lacking a universal treatment that appeals to different downstream tasks. In this paper, we propose GraphPrompt, a novel pre-training and prompting framework on graphs. GraphPrompt not only unifies pre-training and downstream tasks into a common task template, but also employs a learnable prompt to help a downstream task locate the most relevant knowledge from the pre-trained model in a task-specific manner. GraphPrompt adopts simple yet effective designs in both pre-training and prompt tuning: during pre-training, a link prediction-based task is used to materialize the task template; during prompt tuning, a learnable prompt vector is applied to the ReadOut layer of the graph encoder. To further enhance GraphPrompt in these two stages, we extend it into GraphPrompt+ with two major enhancements. First, we generalize several popular graph pre-training tasks beyond simple link prediction to broaden compatibility with our task template. Second, we propose a more generalized prompt design that incorporates a series of prompt vectors within every layer of the pre-trained graph encoder, capitalizing on the hierarchical information across layers beyond just the readout layer. Finally, we conduct extensive experiments on five public datasets to evaluate and analyze GraphPrompt and GraphPrompt+.
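
As a concrete illustration of the prompt-tuning step described in the abstract, below is a minimal PyTorch sketch (not the authors' released code) of a prompt-assisted readout: a single learnable prompt vector reweights node embeddings from a frozen pre-trained encoder element-wise before pooling, so only the prompt is tuned per downstream task. All names (PromptedReadout, hidden_dim, the stand-in tensors) are illustrative assumptions; GraphPrompt+ would generalize this by attaching a prompt vector to every encoder layer rather than only the readout.

```python
import torch
import torch.nn as nn

class PromptedReadout(nn.Module):
    """Sketch of a prompt-assisted readout in the spirit of GraphPrompt:
    a learnable prompt vector reweights node embeddings element-wise
    before sum pooling. The pre-trained GNN encoder stays frozen; only
    this prompt is optimized for the downstream task."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # One prompt vector per downstream task, initialized to all-ones,
        # i.e., plain (unprompted) sum pooling before any tuning.
        self.prompt = nn.Parameter(torch.ones(hidden_dim))

    def forward(self, node_embs: torch.Tensor) -> torch.Tensor:
        # node_embs: (num_nodes, hidden_dim), output of the frozen encoder
        # for the (sub)graph instance defined by the unified task template.
        return (self.prompt * node_embs).sum(dim=0)  # (hidden_dim,)

# Usage sketch: score a class by similarity between the prompted
# (sub)graph embedding and a class prototype, mirroring the
# link-prediction-style similarity template described in the abstract.
readout = PromptedReadout(hidden_dim=64)
node_embs = torch.randn(10, 64)   # stand-in for frozen encoder output
prototype = torch.randn(64)       # stand-in class prototype embedding
score = torch.cosine_similarity(readout(node_embs), prototype, dim=0)
```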

Bibliographic Details
Main Authors: YU, Xingtong, LIU, Zhenghao, FANG, Yuan, et al.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Few-shot learning; Fine tuning; Graph mining; Graph neural networks; Metalearning; Pre-training; Prompting; Representation learning; Task analysis; Tuning; Databases and Information Systems; Graphics and Human Computer Interfaces
Online Access: https://ink.library.smu.edu.sg/sis_research/9703
https://ink.library.smu.edu.sg/context/sis_research/article/10703/viewcontent/TKDE24_GeneralizedGraphPrompt.pdf
DOI: 10.1109/TKDE.2024.3419109
License: http://creativecommons.org/licenses/by-nc-nd/4.0/ (CC BY-NC-ND 4.0)
Institution: Singapore Management University
Collection: InK@SMU (Research Collection School of Computing and Information Systems)