Graph neural differential equation networks for improved representation learning and robustness
Main Author:
Other Authors:
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2025
Subjects:
Online Access: https://hdl.handle.net/10356/182340
Institution: Nanyang Technological University
Summary: Graph representation learning distills the complex structures of graphs into tractable, low-dimensional vector spaces, capturing essential topological and attribute-based properties. Graph Neural Networks (GNNs) have become a pivotal tool in this domain, leveraging graph structure to iteratively update node representations through neighbor aggregation. These representations support fundamental tasks such as node classification, link prediction, and graph classification, with applications ranging from social networks and biological systems to citation networks. Despite their success, GNNs face critical challenges: they often underperform on heterophilic graphs, where connected nodes have dissimilar characteristics; they suffer from oversmoothing, which degrades performance as network depth increases; and they struggle to capture hierarchical structures. They are also vulnerable to adversarial attacks that can severely compromise model integrity.
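To make the neighbor-aggregation step concrete, here is a minimal sketch of one GCN-style propagation layer in Python/NumPy. The symmetric normalization and the name `gcn_layer` are illustrative conventions, not details drawn from the thesis.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style aggregation step: add self-loops, symmetrically
    normalize the adjacency, average neighbor features, then apply a
    linear map followed by a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])                 # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^{-1/2} as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} A_hat D^{-1/2}
    return np.maximum(A_norm @ X @ W, 0.0)         # aggregate, transform, activate
```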
This thesis introduces the use of neural differential equations in GNNs to enhance representation learning and robustness, addressing these challenges comprehensively.
Graph Neural Differential Equation Networks (GDENs) adopt a dynamical-systems approach, evolving node features over continuous time and thereby enhancing the capacity of GNNs to process and learn from graph-structured data. In this framework, node feature propagation is governed by differential equations, enabling finer control over the learning process than conventional discrete-layer methods.
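As a concrete illustration of this continuous-time view (a generic linear diffusion, not the thesis's specific convection-diffusion or Hamiltonian dynamics), the sketch below evolves node features by integrating dX/dt = (Â − I)XW with explicit Euler steps; the step size and horizon are arbitrary choices.

```python
import numpy as np

def integrate_graph_ode(A_norm, X0, W, T=1.0, dt=0.1):
    """Evolve node features under a simple graph-diffusion ODE,
    dX/dt = (A_norm - I) X W, using explicit Euler. A_norm is a
    normalized adjacency (e.g., as built in the previous sketch);
    features drift continuously toward a neighborhood consensus
    rather than jumping from one discrete layer to the next."""
    L = A_norm - np.eye(A_norm.shape[0])   # diffusion operator
    X = X0
    for _ in range(int(round(T / dt))):
        X = X + dt * (L @ X @ W)           # one Euler step
    return X
```

In practice, neural-ODE-style models typically rely on adaptive solvers (for instance those provided by the `torchdiffeq` package) rather than a fixed-step loop, but the loop makes the mechanism explicit.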
The first contribution of this thesis enhances representation learning on heterophilic graphs through a neural convection-diffusion differential equation. The thesis then explores the relationship between stability in dynamical systems and robustness in GDENs: a neural Hamiltonian differential equation model is developed, establishing energy-conservative dynamics within GNNs to bolster robustness against adversarial attacks. Extending beyond traditional integer-order differential equations, the thesis incorporates fractional calculus through the Fractional-Order Graph Neural Differential Equation Networks (F-GDENs) framework. The fractional derivative introduces memory and non-local interactions, improving the networks' ability to handle hierarchical structures and mitigate oversmoothing. F-GDENs not only integrate seamlessly with existing GDENs to enhance representation learning across various datasets, but also admit tighter output perturbation bounds under input and topology perturbations. Empirical results further validate the superior robustness of F-GDEN models compared to integer-order GDENs.
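To illustrate what a fractional-order derivative adds, the sketch below uses the standard Grünwald-Letnikov discretization of order α ∈ (0, 1). Unlike the one-step Euler update above, each new state is computed from a weighted sum over the entire trajectory; this long-range memory is the property that lets fractional models capture hierarchy and resist oversmoothing. The scheme is a textbook construction, not the thesis's implementation, and the function names are hypothetical.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    via the recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def integrate_fractional(rhs, X0, alpha=0.8, T=1.0, dt=0.1):
    """Explicit Grunwald-Letnikov scheme for D^alpha X = rhs(X).
    The update x_n = dt^alpha * rhs(x_{n-1}) - sum_{k>=1} w_k x_{n-k}
    mixes in ALL previous states -- the memory effect F-GDENs exploit."""
    steps = int(round(T / dt))
    w = gl_weights(alpha, steps + 1)
    hist = [X0]
    for n in range(1, steps + 1):
        memory = -sum(w[k] * hist[n - k] for k in range(1, n + 1))
        hist.append(dt ** alpha * rhs(hist[-1]) + memory)
    return hist[-1]

# Hypothetical usage: fractional diffusion of node features X0 with a
# normalized adjacency A_norm from the earlier sketches.
#   rhs = lambda X: (A_norm - np.eye(A_norm.shape[0])) @ X
#   X_T = integrate_fractional(rhs, X0, alpha=0.8)
```

Setting α = 1 recovers the integer-order Euler update, since all weights beyond w_1 then vanish; smaller α places more weight on older states.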
In summary, this thesis advances the robustness and representational capacity of GDENs by introducing new differential-equation formulations and extending them to fractional-order derivatives. These advances lay a solid foundation for future research on robust, adaptive GNN architectures and carry promising implications for practical applications.