Deep learning for graph structured data

Bibliographic Details
Main Author: Dwivedi, Vijay Prakash
Other Authors: Luu, Anh Tuan
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175787
Institution: Nanyang Technological University
Description
Summary: Graph-structured data is ubiquitous across diverse domains, encoding valuable relational information between entities. However, most deep learning techniques, such as convolutional and recurrent neural networks, are tailored to grid-structured data and struggle with graphs. This has led to growing interest in graph representation learning using graph neural networks (GNNs), which integrate graph structure into neural network layers, typically through message passing. Several challenges remain, however, including the lack of rigorous benchmarks, limits on model expressiveness, and poor scalability. This thesis aims to advance graph representation learning by tackling these key challenges.

First, it develops comprehensive benchmarks for the standardized assessment of GNNs. These include medium-scale tasks covering supervised and semi-supervised node, edge, and graph classification across domains such as social networks, computer vision, and combinatorial optimization. The thesis also introduces a novel benchmark specifically designed to test the modeling of long-range interactions in larger graphs.

Second, the thesis develops new GNN architectures with higher expressivity and better generalization. It extends Transformer networks to the graph domain by introducing graph-based inductive biases, such as leveraging sparsity and designing Laplacian positional encodings. Another technique learns separate structural and positional representations in GNNs using informative graph diffusion features, which significantly boosts model capacity.

Finally, the thesis addresses the problem of scaling graph models, in particular Graph Transformers, to massive graphs. It investigates design principles such as incorporating efficient local and global graph representations, and proposes a scalable Graph Transformer framework that uses novel neighborhood sampling and global attention schemes to capture both local structure and global dependencies in very large graphs.

Overall, through rigorous benchmarks, expressive architectures, and scalable models, this thesis makes significant contributions to advancing deep learning on graph-structured data across multiple fronts. These techniques pave the way for the adoption of GNNs in real-world applications involving complex relational data.
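The message-passing scheme the summary refers to can be sketched in a few lines: each node aggregates its neighbors' features and passes the result through a learned transformation. This is a minimal NumPy illustration of the general idea, not any specific architecture from the thesis; the function name and the mean-aggregation choice are illustrative assumptions.

```python
import numpy as np

def message_passing_layer(H, A, W):
    """One message-passing step (illustrative sketch, not the
    thesis's architecture): each node takes the mean of its
    neighbours' features, then applies a shared linear map W
    followed by a ReLU non-linearity."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)  # node degrees, guarded against 0
    messages = (A @ H) / deg                        # mean over each node's neighbours
    return np.maximum(messages @ W, 0.0)            # linear map + ReLU

# Tiny example: a 3-node path graph 0-1-2 with 2-d node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix
H = np.eye(3, 2)                         # initial node features
W = np.ones((2, 2))                      # toy weight matrix
H1 = message_passing_layer(H, A, W)
print(H1.shape)  # (3, 2): one updated feature vector per node
```

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is precisely why the long-range-interaction benchmark described above is a demanding test for this family of models.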
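The Laplacian positional encodings mentioned in the summary give Transformer-style models a notion of node position on a graph, analogous to sinusoidal position encodings on sequences. The sketch below shows the general idea, taking the low-frequency eigenvectors of the symmetric-normalized graph Laplacian as per-node position features; it is an assumption-laden illustration, not the exact formulation in the thesis, and `laplacian_pe` and the toy graph are invented for the example.

```python
import numpy as np

def laplacian_pe(A, k):
    """Sketch of Laplacian positional encodings: for each node,
    return its entries in the k eigenvectors of the symmetric-
    normalised Laplacian with the smallest non-trivial eigenvalues."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.clip(deg, 1, None))
    # L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]          # drop the trivial constant eigenvector

# 4-node cycle graph: each node receives a 2-d positional encoding.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(A, 2)
print(pe.shape)  # (4, 2)
```

In practice such encodings are concatenated with (or added to) the input node features before the first attention layer, letting the model distinguish nodes that would otherwise look structurally identical.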