Probabilistic verification of neural networks against group fairness

Fairness is crucial for neural networks that are used in applications with important societal implications. Recently, there have been multiple attempts to improve the fairness of neural networks, with a focus on fairness testing (e.g., generating individual discriminatory instances) and fairness training (e.g., enhancing fairness through augmented training). In this work, we propose an approach to formally verify neural networks against fairness, with a focus on independence-based fairness such as group fairness. Our method is built upon an approach for learning Markov chains from a user-provided neural network (i.e., a feed-forward or recurrent neural network) in a way that guarantees sound analysis. The learned Markov chain not only allows us to verify (with a Probably Approximately Correct guarantee) whether the neural network is fair, but also facilitates sensitivity analysis, which helps to understand why fairness is violated. We demonstrate that, with our analysis results, the neural weights can be optimized to improve fairness. Our approach has been evaluated with multiple models trained on benchmark datasets, and the experimental results show that it is effective and efficient.
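
As a rough, self-contained illustration of the independence-based (group) fairness property in question (demographic parity asks that the probability of a favourable outcome be independent of the sensitive attribute), the Python sketch below estimates the parity gap of a black-box classifier by sampling, with a Hoeffding-style PAC bound on the per-group estimates. It is a generic statistical sketch under assumed interfaces; the names model, sample_input, and sensitive_groups are hypothetical, and this is not the Markov-chain-based verification method of the paper.

```python
import math
import random

def estimate_parity_gap(model, sample_input, sensitive_groups,
                        epsilon=0.05, delta=0.05):
    """Estimate the demographic-parity gap of a binary classifier.

    Hoeffding's inequality: with n >= ln(2/delta) / (2 * epsilon**2)
    samples, each group's estimated favourable-outcome rate is within
    epsilon of the true rate with probability at least 1 - delta
    (apply a union bound over groups for a joint guarantee).
    """
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    rates = {}
    for group in sensitive_groups:
        favourable = sum(model(sample_input(group)) == 1 for _ in range(n))
        rates[group] = favourable / n
    # Demographic parity asks these rates to be (approximately) equal,
    # so the max-min gap is the quantity to compare against a threshold.
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical usage: a toy threshold "classifier" whose inputs are
# slightly shifted for group "A", inducing a parity gap of about 0.1.
if __name__ == "__main__":
    model = lambda x: 1 if x > 0.5 else 0
    sample_input = lambda g: random.random() + (0.1 if g == "A" else 0.0)
    rates, gap = estimate_parity_gap(model, sample_input, ["A", "B"])
    print(rates, f"estimated parity gap = {gap:.3f}")
```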

Bibliographic Details
Main Authors: SUN, Bing, SUN, Jun, DAI, Ting, ZHANG, Lijun
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects: Software Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/6214
https://ink.library.smu.edu.sg/context/sis_research/article/7217/viewcontent/2107.08362.pdf
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
License: http://creativecommons.org/licenses/by-nc-nd/4.0/