TESTSGD: Interpretable testing of neural networks against subtle group discrimination

Discrimination has been shown in many machine learning applications, which calls for sufficient fairness testing before their deployment in ethics-relevant domains. One widely concerning type, group discrimination, is mostly hidden and much less studied than individual discrimination. In this work, we propose TestSGD, an interpretable testing approach that systematically identifies and measures hidden (which we call ‘subtle’) group discrimination of a neural network, characterized by conditions over combinations of the sensitive attributes. Specifically, given a neural network, TestSGD first automatically generates an interpretable rule set that categorizes the input space into two groups. Alongside, TestSGD provides an estimated group discrimination score, based on sampling the input space, which measures the degree of the identified subtle group discrimination and is guaranteed to be accurate up to an error bound. We evaluate TestSGD on multiple neural network models trained on popular datasets covering both structured and text data. The experimental results show that TestSGD is effective and efficient in identifying and measuring such subtle group discrimination that has never been revealed before. Furthermore, we show that the testing results of TestSGD can be used to mitigate the identified discrimination through retraining, with negligible accuracy drop.

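The abstract mentions a group discrimination score estimated by sampling the input space, accurate up to an error bound. Purely as an illustration of that idea (not the authors' implementation), the sketch below estimates such a score with a Hoeffding-style sample-size bound; the input sampler, grouping rule, and toy classifier are hypothetical stand-ins.

```python
import math
import random

def hoeffding_samples(error: float, confidence: float) -> int:
    """Samples needed so an empirical rate is within `error` of the true
    rate with probability at least `confidence` (two-sided Hoeffding bound)."""
    return math.ceil(math.log(2.0 / (1.0 - confidence)) / (2.0 * error ** 2))

def group_discrimination_score(model, sample_input, in_group, n_samples):
    """Absolute difference in favourable-outcome rates between the group
    selected by `in_group` and its complement, over sampled inputs.
    (For the per-group rates to meet the bound, each group itself needs
    enough samples; this sketch ignores that refinement.)"""
    fav = {True: 0, False: 0}
    tot = {True: 0, False: 0}
    for _ in range(n_samples):
        x = sample_input()
        g = in_group(x)
        tot[g] += 1
        fav[g] += int(model(x) == 1)  # label 1 taken as the favourable outcome
    rate = {g: (fav[g] / tot[g] if tot[g] else 0.0) for g in (True, False)}
    return abs(rate[True] - rate[False])

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical input sampler, candidate group rule, and stand-in classifier.
    sample_input = lambda: {"age": random.randint(18, 90), "gender": random.choice([0, 1])}
    in_group = lambda x: x["gender"] == 0 and x["age"] < 40   # candidate "subtle" group
    model = lambda x: int(x["age"] >= 35)                     # stand-in for a trained network
    n = hoeffding_samples(error=0.05, confidence=0.99)        # ~1060 samples
    print(n, round(group_discrimination_score(model, sample_input, in_group, n), 3))
```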

Bibliographic Details
Main Authors: ZHANG, Mengdi; SUN, Jun; WANG, Jingyi; SUN, Bing
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Fairness Improvement; Fairness; Fairness Testing; Machine Learning; Information Security; Numerical Analysis and Scientific Computing; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/8144
https://ink.library.smu.edu.sg/context/sis_research/article/9147/viewcontent/3591869_pvoa.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-9147
record_format dspace
spelling sg-smu-ink.sis_research-9147 2023-11-09T01:48:47Z
TESTSGD: Interpretable testing of neural networks against subtle group discrimination
ZHANG, Mengdi
SUN, Jun
WANG, Jingyi
SUN, Bing
Discrimination has been shown in many machine learning applications, which calls for sufficient fairness testing before their deployment in ethics-relevant domains. One widely concerning type, group discrimination, is mostly hidden and much less studied than individual discrimination. In this work, we propose TestSGD, an interpretable testing approach that systematically identifies and measures hidden (which we call ‘subtle’) group discrimination of a neural network, characterized by conditions over combinations of the sensitive attributes. Specifically, given a neural network, TestSGD first automatically generates an interpretable rule set that categorizes the input space into two groups. Alongside, TestSGD provides an estimated group discrimination score, based on sampling the input space, which measures the degree of the identified subtle group discrimination and is guaranteed to be accurate up to an error bound. We evaluate TestSGD on multiple neural network models trained on popular datasets covering both structured and text data. The experimental results show that TestSGD is effective and efficient in identifying and measuring such subtle group discrimination that has never been revealed before. Furthermore, we show that the testing results of TestSGD can be used to mitigate the identified discrimination through retraining, with negligible accuracy drop.
2023-09-01T07:00:00Z
text
application/pdf
https://ink.library.smu.edu.sg/sis_research/8144
info:doi/10.1145/3591869
https://ink.library.smu.edu.sg/context/sis_research/article/9147/viewcontent/3591869_pvoa.pdf
http://creativecommons.org/licenses/by/4.0/
Research Collection School Of Computing and Information Systems
eng
Institutional Knowledge at Singapore Management University
Fairness Improvement
Fairness
Fairness Testing
Machine Learning
Information Security
Numerical Analysis and Scientific Computing
Software Engineering
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Fairness Improvement
Fairness
Fairness Testing
Machine Learning
Information Security
Numerical Analysis and Scientific Computing
Software Engineering
spellingShingle Fairness Improvement
Fairness
Fairness Testing
Machine Learning
Information Security
Numerical Analysis and Scientific Computing
Software Engineering
ZHANG, Mengdi
SUN, Jun
WANG, Jingyi
SUN, Bing
TESTSGD: Interpretable testing of neural networks against subtle group discrimination
description Discrimination has been shown in many machine learning applications, which calls for sufficient fairness testing before their deployment in ethics-relevant domains. One widely concerning type, group discrimination, is mostly hidden and much less studied than individual discrimination. In this work, we propose TestSGD, an interpretable testing approach that systematically identifies and measures hidden (which we call ‘subtle’) group discrimination of a neural network, characterized by conditions over combinations of the sensitive attributes. Specifically, given a neural network, TestSGD first automatically generates an interpretable rule set that categorizes the input space into two groups. Alongside, TestSGD provides an estimated group discrimination score, based on sampling the input space, which measures the degree of the identified subtle group discrimination and is guaranteed to be accurate up to an error bound. We evaluate TestSGD on multiple neural network models trained on popular datasets covering both structured and text data. The experimental results show that TestSGD is effective and efficient in identifying and measuring such subtle group discrimination that has never been revealed before. Furthermore, we show that the testing results of TestSGD can be used to mitigate the identified discrimination through retraining, with negligible accuracy drop.
format text
author ZHANG, Mengdi
SUN, Jun
WANG, Jingyi
SUN, Bing
author_facet ZHANG, Mengdi
SUN, Jun
WANG, Jingyi
SUN, Bing
author_sort ZHANG, Mengdi
title TESTSGD: Interpretable testing of neural networks against subtle group discrimination
title_short TESTSGD: Interpretable testing of neural networks against subtle group discrimination
title_full TESTSGD: Interpretable testing of neural networks against subtle group discrimination
title_fullStr TESTSGD: Interpretable testing of neural networks against subtle group discrimination
title_full_unstemmed TESTSGD: Interpretable testing of neural networks against subtle group discrimination
title_sort testsgd: interpretable testing of neural networks against subtle group discrimination
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/8144
https://ink.library.smu.edu.sg/context/sis_research/article/9147/viewcontent/3591869_pvoa.pdf
_version_ 1783955656331493376