Semi-supervised learning with graph convolutional networks
Saved in:
Format: Final Year Project
Language: English
Published: 2019
Online Access: http://hdl.handle.net/10356/76922
Institution: Nanyang Technological University
Summary: Deep learning has achieved unprecedented performance on a broad range of problems involving data in Euclidean space, such as 2-D images in object recognition and 1-D sequences of text in machine translation. The availability of new datasets in non-Euclidean domains, such as social networks and 3-D point clouds, has spurred recent efforts to generalise deep neural networks to graphs. In this report, we present the first comparative study of Graph Convolutional Networks (GCNs), Residual Gated Graph ConvNets (RGGCNs) and Graph Attention Networks (GATs) on two fundamental tasks in network science, semi-supervised classification and semi-supervised clustering, and analyse their experimental performance. We improve the existing capabilities of GATs by increasing the number of graph attention layers, and of RGGCNs by reducing the number of learnable parameters together with the use of edge gate normalization. We introduce edge dropin, a novel method for regularizing graphs through the addition of edge-level noise. Our final RGGCN and GAT models are within 1% and 5% of GCN's and RGGCN's test accuracy, respectively, on Cora and on the semi-supervised clustering dataset generated with the stochastic block model.
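The summary describes edge dropin only as "regularizing graphs through the addition of edge-level noise"; the report's exact formulation is not given in this record. A minimal sketch of that stated idea, assuming it means randomly inserting absent edges with some probability `p` during training (the function name `edge_dropin` and its interface are hypothetical, not taken from the report):

```python
import numpy as np

def edge_dropin(adj, p=0.1, rng=None):
    """Add edge-level noise to an undirected graph: each absent edge
    (no self-loops) is inserted with probability p. Returns a new
    symmetric adjacency matrix; the input is left unmodified."""
    rng = np.random.default_rng(rng)
    adj = adj.copy()
    n = adj.shape[0]
    # Candidate positions: strictly upper-triangular entries,
    # so each undirected edge is considered exactly once.
    iu, ju = np.triu_indices(n, k=1)
    absent = adj[iu, ju] == 0
    add = absent & (rng.random(iu.shape[0]) < p)
    # Insert the sampled edges symmetrically.
    adj[iu[add], ju[add]] = 1
    adj[ju[add], iu[add]] = 1
    return adj
```

Applied once per training epoch, this perturbs the graph the model sees without removing any true edges, which is one plausible way the described regularization could behave.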