Towards explainable neural network fairness
Neural networks are widely applied to solving many real-world problems. At the same time, they have been shown to be vulnerable to attacks, difficult to debug, non-transparent, and subject to fairness issues. Discrimination has been observed in various machine learning models, including Large Language Models...
Main Author: | ZHANG, Mengdi |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2024 |
Online Access: | https://ink.library.smu.edu.sg/etd_coll/547 https://ink.library.smu.edu.sg/context/etd_coll/article/1545/viewcontent/GPIS_AY2019_PhD_Mengdi_Zhang.pdf |
Institution: | Singapore Management University |
Similar Items
- TESTSGD: Interpretable testing of neural networks against subtle group discrimination
  by: ZHANG, Mengdi, et al.
  Published: (2023)
- Adaptive fairness improvement based on causality analysis
  by: ZHANG, Mengdi, et al.
  Published: (2022)
- Automatic fairness testing of neural classifiers through adversarial sampling
  by: ZHANG, Peixin, et al.
  Published: (2021)
- Efficient white-box fairness testing through gradient search
  by: ZHANG, Lingfeng, et al.
  Published: (2021)
- Which neural network makes more explainable decisions? An approach towards measuring explainability
  by: ZHANG, Mengdi, et al.
  Published: (2022)