Automatic fairness testing of neural classifiers through adversarial sampling
Although deep learning has demonstrated astonishing performance in many applications, there are still concerns about its dependability. One desirable property of deep learning applications with societal impact is fairness (i.e., non-discrimination). Unfortunately, discrimination might be intrinsical...
Main Authors: ZHANG, Peixin; WANG, Jingyi; SUN, Jun; WANG, Xinyu; DONG, Guoliang; WANG, Xinggen; DAI, Ting; DONG, Jinsong
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Online Access: https://ink.library.smu.edu.sg/sis_research/6224
https://ink.library.smu.edu.sg/context/sis_research/article/7227/viewcontent/09506918.pdf
Institution: Singapore Management University
Similar Items
- White-box fairness testing through adversarial sampling
  by: ZHANG, Peixin, et al.
  Published: (2020)
- Adversarial sample detection for deep neural network through model mutation testing
  by: WANG, Jingyi, et al.
  Published: (2019)
- Towards explainable neural network fairness
  by: ZHANG, Mengdi
  Published: (2024)
- QuoTe: Quality-oriented Testing for deep learning systems
  by: CHEN, Jialuo, et al.
  Published: (2022)
- An empirical study on correlation between coverage and robustness for deep neural networks
  by: DONG, Yizhen, et al.
  Published: (2020)