Exploring and repairing gender fairness violations in word embedding-based sentiment analysis model through adversarial patches
| Main Authors: | |
|---|---|
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2023 |
| Subjects: | |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/8514 https://ink.library.smu.edu.sg/context/sis_research/article/9517/viewcontent/Exploring_and_Repairing_Gender_Fairness_Violations_in_Word_Embedding_based_Sentiment_Analysis_Model_through_Adversarial_Patches.pdf |
| Institution: | Singapore Management University |
Summary: With the advancement of sentiment analysis (SA) models and their incorporation into our daily lives, fairness testing of these models is crucial, since unfair decisions can discriminate against a large population. Nevertheless, challenges in fairness testing include the unknown oracle, the difficulty of generating suitable test inputs, and the lack of a reliable way to fix the issues. To fill these gaps, BiasRV, a tool based on metamorphic testing (MT), was introduced and succeeded in uncovering fairness issues in a transformer-based model. However, the extent of unfairness in other SA models has not been thoroughly investigated. Our work conducts a more comprehensive empirical study to reveal the extent of fairness violations, specifically gender fairness violations, exhibited by other popular word embedding-based SA models. We define a fairness violation as the behavior in which an SA model predicts different sentiments for variants created from a text that differ only in gender classes. Our inspection using BiasRV uncovers at least 30 fairness violations (at BiasRV's default threshold) in all three SA models. Realizing the importance of addressing such significant violations, we introduce adversarial patches (AP) as a way of generating patches in an automated program repair (APR) system to fix them. We adopt adversarial fine-tuning in AP by retraining the SA models on adversarial examples, i.e., bias-uncovering test cases dynamically generated at runtime by a tool named BiasFinder. Evaluation of the SA models shows that our proposed AP reduces fairness violations by at least 25%.
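To make the metamorphic relation in the abstract concrete, below is a minimal, illustrative sketch, not the authors' implementation: it assumes a generic `predict_sentiment` function standing in for the SA model under test, and its toy word-level gender swap is far simpler than the mutant generation performed by BiasRV and BiasFinder.

```python
# Sketch of the metamorphic gender-fairness check described in the summary:
# an SA model violates gender fairness when variants of a text that differ
# only in gender terms receive different predicted sentiment labels.
# `predict_sentiment` is a hypothetical stand-in for the model under test.

from typing import Callable, List

# Toy bidirectional mapping of gendered terms (illustrative only).
GENDER_SWAP = {
    "he": "she", "she": "he",
    "man": "woman", "woman": "man",
    "boy": "girl", "girl": "boy",
    "actor": "actress", "actress": "actor",
}

def gender_variant(text: str) -> str:
    """Create a variant that differs from `text` only in gendered terms."""
    swapped = [GENDER_SWAP.get(tok.lower(), tok) for tok in text.split()]
    return " ".join(swapped)

def is_fairness_violation(text: str,
                          predict_sentiment: Callable[[str], str]) -> bool:
    """Metamorphic relation: the original text and its gender-swapped variant
    should receive the same sentiment; a mismatch is a fairness violation."""
    return predict_sentiment(text) != predict_sentiment(gender_variant(text))

def violation_rate(texts: List[str],
                   predict_sentiment: Callable[[str], str]) -> float:
    """Fraction of inputs on which the model violates gender fairness."""
    if not texts:
        return 0.0
    violations = sum(is_fairness_violation(t, predict_sentiment) for t in texts)
    return violations / len(texts)
```

Texts flagged by such a check play the role of the bias-uncovering adversarial examples that, per the summary, the adversarial fine-tuning step retrains the SA models on.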