Graph attention networks and approximate personalized propagation of neural prediction models for unsupervised graph representation learning
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/156556
Institution: Nanyang Technological University
Summary: Recent years have brought progress in the graph machine learning space, with the unsupervised graph representation learning field gaining traction due to the immense resources required to label graph data. A leading approach in the field, Deep Graph InfoMax, has been shown to perform well when training Graph Convolutional Networks (GCNs) for the task in an unsupervised manner using mutual information. In this paper, we propose the novel approach of training Graph Attention Network (GAT) and Approximate Personalized Propagation of Neural Prediction (APPNP) models with the Deep Graph InfoMax training method. We tested the transductively trained models on three challenging graph benchmarks and used a small training sample along with a logistic regression classifier to evaluate the quality of the generated representations. The GAT models performed well, attaining accuracy comparable to GCN-based approaches. The APPNP models, however, failed to learn well from the Deep Graph InfoMax training method and showed lacklustre performance. The success of the GAT models reinforces the theory behind the training method, and we suggest further development of GAT variants suited to Deep Graph InfoMax to bring better learning through mutual information. The APPNP models, on the other hand, require further improvements before they can be trained with mutual information on arbitrary graphs. Increased computing power to tackle larger benchmarks would also prove useful for the graph representation learning task.
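The record itself carries no code, but the pipeline the summary describes (an encoder trained with the Deep Graph InfoMax objective, then probed with a logistic regression classifier on a small labelled split) can be sketched concretely. The following is a minimal, illustrative sketch assuming PyTorch Geometric's DeepGraphInfomax module, a single-layer GAT encoder, and the Cora benchmark; the hidden width, head count, learning rate, and epoch budget are assumptions for illustration, not the settings used in the project.

```python
import torch
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GATConv, DeepGraphInfomax


class GATEncoder(torch.nn.Module):
    """Single-layer GAT encoder; head outputs are averaged (concat=False),
    so the embedding width equals hidden_channels."""

    def __init__(self, in_channels, hidden_channels, heads=8):
        super().__init__()
        self.conv = GATConv(in_channels, hidden_channels, heads=heads, concat=False)
        self.act = torch.nn.PReLU(hidden_channels)

    def forward(self, x, edge_index):
        return self.act(self.conv(x, edge_index))


def corruption(x, edge_index):
    # DGI's negative samples: shuffle node features, keep the topology.
    return x[torch.randperm(x.size(0))], edge_index


dataset = Planetoid(root="data", name="Cora")  # assumed benchmark
data = dataset[0]

model = DeepGraphInfomax(
    hidden_channels=512,  # assumed embedding size
    encoder=GATEncoder(dataset.num_features, 512),
    summary=lambda z, *args, **kwargs: torch.sigmoid(z.mean(dim=0)),
    corruption=corruption,
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

model.train()
for epoch in range(300):  # assumed epoch budget
    optimizer.zero_grad()
    pos_z, neg_z, summary = model(data.x, data.edge_index)
    loss = model.loss(pos_z, neg_z, summary)
    loss.backward()
    optimizer.step()

# Probe representation quality: fit logistic regression on the frozen
# embeddings of the (small) training split, report test accuracy.
model.eval()
with torch.no_grad():
    z, _, _ = model(data.x, data.edge_index)
acc = model.test(z[data.train_mask], data.y[data.train_mask],
                 z[data.test_mask], data.y[data.test_mask], max_iter=150)
print(f"Logistic regression probe accuracy: {acc:.4f}")
```

Swapping GATEncoder for an APPNP-style encoder (a linear layer followed by torch_geometric.nn.APPNP propagation) would correspond to the second configuration the summary reports as underperforming under this objective.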