A Causality-Aligned Structure Rationalization Scheme Against Adversarial Biased Perturbations for Graph Neural Networks

Graph neural networks (GNNs) are susceptible to adversarial perturbations and distribution biases, which pose potential security concerns for real-world applications. Current endeavors mainly focus on graph matching, while the subtle relationships between the nodes and structures of graph-structured data remain under-explored. Accordingly, two fundamental challenges arise: 1) the intricate connections among nodes may induce a distribution shift of graph samples even under the same scenario, and 2) perturbations of inherent graph-structured representations can introduce spurious shortcuts, which lead GNN models to rely on biased data and make unstable predictions. To address these problems, we propose a novel causality-aligned structure rationalization (CASR) scheme that constructs invariant rationales by probing coherent and causal patterns, helping GNN models make stable and reliable predictions in the case of adversarial biased perturbations. Specifically, initial graph samples across domains are leveraged to boost the diversity of datasets and perceive the interaction between shortcuts. Subsequently, causal invariant rationales are obtained during the interventions, allowing the GNN model to extrapolate risk variations from a single observed environment to multiple unknown environments. Moreover, a query feedback mechanism progressively promotes consistency-driven optimal rationalization by reinforcing real essences and eliminating spurious shortcuts. Extensive experiments demonstrate the effectiveness of our scheme against adversarial biased perturbations from data manipulation attacks and out-of-distribution (OOD) shifts on various graph-structured datasets. Notably, we reveal that capturing distinctive rationales can greatly reduce dependence on shortcut cues and improve the robustness of OOD generalization.
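The abstract's core idea — preferring rationales whose predictive risk stays stable across environments — is commonly operationalized in the invariant-learning literature as a cross-environment risk-variance penalty. The sketch below is a generic illustration of that selection criterion, not the authors' CASR implementation; the function names and the candidate-scoring interface are assumptions for the example.

```python
from statistics import mean, pvariance


def invariance_objective(per_env_risks, lam=1.0):
    # Mean training risk plus a cross-environment variance penalty.
    # The penalty vanishes only when the rationale performs identically
    # in every observed environment, discouraging environment-specific
    # (spurious) shortcuts.
    return mean(per_env_risks) + lam * pvariance(per_env_risks)


def pick_rationale(candidate_risks):
    # candidate_risks: {rationale_name: [risk in env 1, risk in env 2, ...]}
    # Keep the candidate whose objective is lowest, i.e. the one that is
    # both accurate on average and stable across environments.
    return min(candidate_risks,
               key=lambda name: invariance_objective(candidate_risks[name]))
```

For instance, a "causal" subgraph with risks [0.3, 0.3] is preferred over a "shortcut" subgraph with risks [0.05, 0.9]: the shortcut's excellent best-case risk is outweighed by its instability across environments.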

Bibliographic Details
Main Authors: JIA, Ju, MA, Siqi, LIU, Yang, WANG, Lina, DENG, Robert H.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8501
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-9504
record_format dspace
spelling sg-smu-ink.sis_research-9504
last_indexed 2024-01-04T04:18:03Z
publishDate 2023-09-25T07:00:00Z
doi info:doi/10.1109/TIFS.2023.3318936
collection Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Perturbation methods
Predictive models
Reliability
Robustness
Graph neural networks
Data models
Correlation
Adversarial biased perturbations
spurious correlations
invariant causal rationales
OOD generalization
Information Security
description Graph neural networks (GNNs) are susceptible to adversarial perturbations and distribution biases, which pose potential security concerns for real-world applications. Current endeavors mainly focus on graph matching, while the subtle relationships between the nodes and structures of graph-structured data remain under-explored. Accordingly, two fundamental challenges arise: 1) the intricate connections among nodes may induce a distribution shift of graph samples even under the same scenario, and 2) perturbations of inherent graph-structured representations can introduce spurious shortcuts, which lead GNN models to rely on biased data and make unstable predictions. To address these problems, we propose a novel causality-aligned structure rationalization (CASR) scheme that constructs invariant rationales by probing coherent and causal patterns, helping GNN models make stable and reliable predictions in the case of adversarial biased perturbations. Specifically, initial graph samples across domains are leveraged to boost the diversity of datasets and perceive the interaction between shortcuts. Subsequently, causal invariant rationales are obtained during the interventions, allowing the GNN model to extrapolate risk variations from a single observed environment to multiple unknown environments. Moreover, a query feedback mechanism progressively promotes consistency-driven optimal rationalization by reinforcing real essences and eliminating spurious shortcuts. Extensive experiments demonstrate the effectiveness of our scheme against adversarial biased perturbations from data manipulation attacks and out-of-distribution (OOD) shifts on various graph-structured datasets. Notably, we reveal that capturing distinctive rationales can greatly reduce dependence on shortcut cues and improve the robustness of OOD generalization.
_version_ 1787590781384523776