Graph attention networks and approximate personalized propagation of neural prediction models for unsupervised graph representation learning

Bibliographic Details
Main Author: Bharadwaja, Tanay
Other Authors: Ke Yiping, Kelly
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/156556
Physical Description
Summary: Recent years have brought progress in graph machine learning, with unsupervised graph representation learning gaining traction due to the immense resources required to label graph data. A leading approach in the field, Deep Graph InfoMax, has been shown to perform well at training Graph Convolutional Networks (GCNs) for this task in an unsupervised manner using mutual information. In this paper, we propose the novel approach of training Graph Attention Network (GAT) and Approximate Personalized Propagation of Neural Prediction (APPNP) models with the Deep Graph InfoMax training method. We tested the transductively trained models on three challenging graph benchmarks, using a small training sample together with a Logistic Regression classifier to evaluate the quality of the generated representations. GAT models performed well, attaining accuracy similar to GCN-based approaches. APPNP models, however, did not learn well from the Deep Graph InfoMax training method and delivered lacklustre performance. The success of the GAT models reinforces the theory behind the training method, and we suggest developing GAT variants better suited to Deep Graph InfoMax to improve learning through mutual information. The APPNP models, on the other hand, require further work before they can be trained with mutual information on arbitrary graphs. Increased computing power to tackle larger benchmarks would also prove useful for the graph representation learning task.
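
For readers unfamiliar with the training setup described in the summary, the sketch below illustrates how a GAT encoder can be trained with the Deep Graph InfoMax objective and then evaluated with a logistic regression probe on a small labelled sample. This is a minimal sketch assuming PyTorch Geometric (its DeepGraphInfomax and GATConv modules) and scikit-learn; the Cora dataset, hidden size, head count, and epoch budget are illustrative assumptions, not the configuration used in this project.

    import torch
    from sklearn.linear_model import LogisticRegression
    from torch_geometric.datasets import Planetoid
    from torch_geometric.nn import DeepGraphInfomax, GATConv


    class GATEncoder(torch.nn.Module):
        """Single-layer GAT encoder producing node embeddings for DGI."""

        def __init__(self, in_channels, hidden_channels, heads=4):
            super().__init__()
            # Concatenated head outputs yield `hidden_channels` features per node.
            self.conv = GATConv(in_channels, hidden_channels // heads, heads=heads)
            self.act = torch.nn.PReLU(hidden_channels)

        def forward(self, x, edge_index):
            return self.act(self.conv(x, edge_index))


    def corruption(x, edge_index):
        # DGI negative samples: shuffle node features while keeping the topology.
        return x[torch.randperm(x.size(0))], edge_index


    dataset = Planetoid(root='data/Cora', name='Cora')  # illustrative benchmark
    data = dataset[0]

    model = DeepGraphInfomax(
        hidden_channels=512,
        encoder=GATEncoder(dataset.num_features, 512),
        summary=lambda z, *args, **kwargs: torch.sigmoid(z.mean(dim=0)),
        corruption=corruption,
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    # Unsupervised training: maximise mutual information between node
    # embeddings and the graph summary (positive vs. corrupted pairs).
    model.train()
    for epoch in range(300):
        optimizer.zero_grad()
        pos_z, neg_z, summary = model(data.x, data.edge_index)
        loss = model.loss(pos_z, neg_z, summary)
        loss.backward()
        optimizer.step()

    # Evaluate representation quality with a logistic regression probe
    # trained on a small labelled sample, mirroring the protocol above.
    model.eval()
    with torch.no_grad():
        z, _, _ = model(data.x, data.edge_index)
    clf = LogisticRegression(max_iter=1000).fit(
        z[data.train_mask].numpy(), data.y[data.train_mask].numpy())
    print('Test accuracy:', clf.score(
        z[data.test_mask].numpy(), data.y[data.test_mask].numpy()))

The frozen-encoder-plus-linear-probe evaluation is what allows the quality of the unsupervised representations to be measured directly: only the small logistic regression classifier sees labels, so its accuracy reflects how much class information the embeddings already carry.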