Neighbor-anchoring adversarial graph neural networks

Bibliographic Details
Main Authors: Liu, Zemin, Fang, Yuan, Liu, Yong, Zheng, Vincent Wenchen
Other Authors: School of Computer Science and Engineering
Format: Article
Language:English
Published: 2023
Online Access:https://hdl.handle.net/10356/172860
Institution: Nanyang Technological University
Description
Summary: Graph neural networks (GNNs) have witnessed widespread adoption due to their ability to learn superior representations for graph data. While GNNs exhibit strong discriminative power, they often fall short of learning the underlying node distribution, which limits their robustness. To address this, inspired by generative adversarial networks (GANs), we investigate the problem of adversarial learning on graph neural networks and propose a novel framework named NAGNN (Neighbor-anchoring Adversarial Graph Neural Networks) for graph representation learning, which trains not only a discriminator but also a generator that compete with each other. In particular, we propose a novel neighbor-anchoring strategy, where the generator produces samples with explicit features and neighborhood structures anchored on a reference real node, so that the discriminator can perform neighborhood aggregation on the fake samples to learn superior representations. The advantage of our neighbor-anchoring strategy can be demonstrated both theoretically and empirically. Furthermore, as a by-product, our generator can synthesize realistic-looking features, enabling potential applications such as automatic content summarization. Finally, we conduct extensive experiments on four public benchmark datasets and achieve promising results under both quantitative and qualitative evaluations.
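
To make the neighbor-anchoring idea concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: the generator conditions a fake feature vector on a reference real node, and the discriminator aggregates that node's real neighborhood together with the candidate sample before scoring it as real or fake. The feature dimensions, the mean-pooling aggregator, and the binary cross-entropy objective are illustrative assumptions; the published NAGNN architecture and training objective may differ.

# Illustrative sketch only; dimensions, aggregator, and loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, HID_DIM, NOISE_DIM = 32, 64, 16

class Generator(nn.Module):
    """Produces a fake node feature anchored on a reference real node."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + NOISE_DIM, HID_DIM), nn.ReLU(),
            nn.Linear(HID_DIM, FEAT_DIM),
        )

    def forward(self, anchor_feat, noise):
        # Condition the fake sample on the anchor node's features.
        return self.net(torch.cat([anchor_feat, noise], dim=-1))

class Discriminator(nn.Module):
    """Aggregates a node's (anchored) neighborhood, then scores real vs. fake."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(2 * FEAT_DIM, HID_DIM)
        self.score = nn.Linear(HID_DIM, 1)

    def forward(self, node_feat, neighbor_feats):
        agg = neighbor_feats.mean(dim=0, keepdim=True)   # mean-pool the neighborhood
        h = F.relu(self.encode(torch.cat([node_feat, agg], dim=-1)))
        return self.score(h)                             # real/fake logit

G, D = Generator(), Discriminator()
anchor = torch.randn(1, FEAT_DIM)             # reference real node
neighbors = torch.randn(5, FEAT_DIM)          # its real neighbors
fake = G(anchor, torch.randn(1, NOISE_DIM))   # fake sample anchored on that node

# The fake sample inherits the anchor's neighborhood, so the discriminator
# applies the same neighborhood aggregation to real and fake inputs alike.
real_logit = D(anchor, neighbors)
fake_logit = D(fake.detach(), neighbors)
loss_D = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
          + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))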