On generalized degree fairness in graph neural networks
Conventional graph neural networks (GNNs) are often confronted with fairness issues that may stem from their input, including node attributes and neighbors surrounding a node. While several recent approaches have been proposed to eliminate the bias rooted in sensitive attributes, they ignore the other key input of GNNs, namely the neighbors of a node, which can introduce bias since GNNs hinge on neighborhood structures to generate node representations. In particular, the varying neighborhood structures across nodes, manifesting themselves in drastically different node degrees, give rise to the diverse behaviors of nodes and biased outcomes. In this paper, we first define and generalize the degree bias using a generalized definition of node degree as a manifestation and quantification of different multi-hop structures around different nodes. To address the bias in the context of node classification, we propose a novel GNN framework called Generalized Degree Fairness-centric Graph Neural Network (DegFairGNN). Specifically, in each GNN layer, we employ a learnable debiasing function to generate debiasing contexts, which modulate the layer-wise neighborhood aggregation to eliminate the degree bias originating from the diverse degrees among nodes. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our model on both accuracy and fairness metrics.
Saved in:
Main Authors: | LIU, Zemin; NGUYEN, Trung Kien; FANG, Yuan |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2023 |
Subjects: | De-biasing; Graph neural networks; Layer-wise; Multi-hops; Neighborhood structure; Network frameworks; Node attribute; Node degree; Sensitive attribute; Databases and Information Systems |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8189 https://ink.library.smu.edu.sg/context/sis_research/article/9192/viewcontent/AAAI23_DegFairGNN.pdf |
Institution: | Singapore Management University |
Language: | English |
id |
sg-smu-ink.sis_research-9192 |
record_format |
dspace |
spelling |
sg-smu-ink.sis_research-9192 2023-09-26T10:26:25Z On generalized degree fairness in graph neural networks LIU, Zemin; NGUYEN, Trung Kien; FANG, Yuan Conventional graph neural networks (GNNs) are often confronted with fairness issues that may stem from their input, including node attributes and neighbors surrounding a node. While several recent approaches have been proposed to eliminate the bias rooted in sensitive attributes, they ignore the other key input of GNNs, namely the neighbors of a node, which can introduce bias since GNNs hinge on neighborhood structures to generate node representations. In particular, the varying neighborhood structures across nodes, manifesting themselves in drastically different node degrees, give rise to the diverse behaviors of nodes and biased outcomes. In this paper, we first define and generalize the degree bias using a generalized definition of node degree as a manifestation and quantification of different multi-hop structures around different nodes. To address the bias in the context of node classification, we propose a novel GNN framework called Generalized Degree Fairness-centric Graph Neural Network (DegFairGNN). Specifically, in each GNN layer, we employ a learnable debiasing function to generate debiasing contexts, which modulate the layer-wise neighborhood aggregation to eliminate the degree bias originating from the diverse degrees among nodes. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our model on both accuracy and fairness metrics. 2023-02-01T08:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8189 info:doi/10.48550/arXiv.2302.03881 https://ink.library.smu.edu.sg/context/sis_research/article/9192/viewcontent/AAAI23_DegFairGNN.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University De-biasing; Graph neural networks; Layer-wise; Multi-hops; Neighborhood structure; Network frameworks; Node attribute; Node degree; Sensitive attribute; Databases and Information Systems |
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
De-biasing; Graph neural networks; Layer-wise; Multi-hops; Neighborhood structure; Network frameworks; Node attribute; Node degree; Sensitive attribute; Databases and Information Systems |
spellingShingle |
De-biasing; Graph neural networks; Layer-wise; Multi-hops; Neighborhood structure; Network frameworks; Node attribute; Node degree; Sensitive attribute; Databases and Information Systems LIU, Zemin; NGUYEN, Trung Kien; FANG, Yuan On generalized degree fairness in graph neural networks |
description |
Conventional graph neural networks (GNNs) are often confronted with fairness issues that may stem from their input, including node attributes and neighbors surrounding a node. While several recent approaches have been proposed to eliminate the bias rooted in sensitive attributes, they ignore the other key input of GNNs, namely the neighbors of a node, which can introduce bias since GNNs hinge on neighborhood structures to generate node representations. In particular, the varying neighborhood structures across nodes, manifesting themselves in drastically different node degrees, give rise to the diverse behaviors of nodes and biased outcomes. In this paper, we first define and generalize the degree bias using a generalized definition of node degree as a manifestation and quantification of different multi-hop structures around different nodes. To address the bias in the context of node classification, we propose a novel GNN framework called Generalized Degree Fairness-centric Graph Neural Network (DegFairGNN). Specifically, in each GNN layer, we employ a learnable debiasing function to generate debiasing contexts, which modulate the layer-wise neighborhood aggregation to eliminate the degree bias originating from the diverse degrees among nodes. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our model on both accuracy and fairness metrics. |
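The description above states the mechanism only at a high level. As a rough, hedged illustration (not the paper's released implementation; the helper names, the layer design, and the choice of "nodes reachable within k hops" as the generalized degree are all assumptions for this sketch), the following Python/PyTorch snippet shows one way a learnable, degree-conditioned debiasing context could modulate neighborhood aggregation inside a GNN layer:

```python
import torch
import torch.nn as nn

def generalized_degree(adj: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Number of distinct nodes reachable within k hops of each node
    (an illustrative stand-in for a 'generalized' multi-hop degree)."""
    reach = (adj > 0).float()
    within, step = reach.clone(), reach.clone()
    for _ in range(k - 1):
        step = ((step @ reach) > 0).float()      # nodes reachable in exactly i+1 steps
        within = ((within + step) > 0).float()   # union over 1..i+1 hops
    within.fill_diagonal_(0)                     # exclude the node itself
    return within.sum(dim=1)

class DegreeDebiasedLayer(nn.Module):
    """Mean-aggregation GNN layer whose aggregated messages are modulated by a
    learnable, degree-conditioned debiasing context (a sketch, not DegFairGNN)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Learnable debiasing function: maps a degree scalar to a per-feature context.
        self.debias = nn.Sequential(nn.Linear(1, out_dim), nn.Tanh())

    def forward(self, x: torch.Tensor, adj: torch.Tensor, gdeg: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = (adj @ self.lin(x)) / deg                     # plain mean aggregation
        ctx = self.debias(torch.log1p(gdeg).unsqueeze(1))   # context from generalized degree
        return torch.relu(agg * (1 + ctx))                  # modulate aggregation per node/feature

# Tiny usage example on a random undirected graph.
adj = (torch.rand(6, 6) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()
adj.fill_diagonal_(0)
x = torch.randn(6, 8)
layer = DegreeDebiasedLayer(8, 16)
out = layer(x, adj, generalized_degree(adj, k=2))
print(out.shape)  # torch.Size([6, 16])
```

The intent of the sketch is only to make the abstract's phrase "debiasing contexts ... modulate the layer-wise neighborhood aggregation" concrete; the paper's actual formulation should be taken from the linked PDF.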
format |
text |
author |
LIU, Zemin; NGUYEN, Trung Kien; FANG, Yuan |
author_facet |
LIU, Zemin; NGUYEN, Trung Kien; FANG, Yuan |
author_sort |
LIU, Zemin |
title |
On generalized degree fairness in graph neural networks |
title_short |
On generalized degree fairness in graph neural networks |
title_full |
On generalized degree fairness in graph neural networks |
title_fullStr |
On generalized degree fairness in graph neural networks |
title_full_unstemmed |
On generalized degree fairness in graph neural networks |
title_sort |
on generalized degree fairness in graph neural networks |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2023 |
url |
https://ink.library.smu.edu.sg/sis_research/8189 https://ink.library.smu.edu.sg/context/sis_research/article/9192/viewcontent/AAAI23_DegFairGNN.pdf |
_version_ |
1779157219646898176 |