BiasRV: uncovering biased sentiment predictions at runtime

Sentiment analysis (SA) systems, though widely applied in many domains, have been demonstrated to produce biased results. Prior work has automatically generated test cases to reveal unfairness in SA systems, but the community still lacks tools that can monitor and uncover biased predictions at runtime. This paper fills this gap by proposing BiasRV, the first tool to raise an alarm when a deployed SA system makes a biased prediction on a given input text. To implement this feature, BiasRV dynamically extracts a template from an input text and, from the template, generates gender-discriminatory mutants (semantically equivalent texts that differ only in gender information). Based on popular metrics used to evaluate the overall fairness of an SA system, we define a distributional fairness property for an individual prediction of an SA system. This property requires that, for one piece of text, mutants from different gender classes be treated similarly as a whole. Verifying the distributional fairness property imposes considerable overhead on the running system. To run more efficiently, BiasRV adopts a two-step heuristic: (1) sampling several mutants from each gender and checking whether the system predicts the same sentiment for all of them, and (2) checking distributional fairness only when the sampled mutants yield conflicting results. Experiments show that, compared to directly checking the distributional fairness property for each input text, our two-step heuristic decreases the overhead of analyzing mutants by 73.81% while missing only 6.7% of biased predictions. Moreover, BiasRV treats the SA system as a black box, so it can be used conveniently without knowing the system's implementation. Future researchers can easily extend BiasRV to detect more types of bias, e.g., racial and occupational bias. The demo video for BiasRV can be viewed at https://youtu.be/WPe4Ml77d3U and the source code can be found at https://github.com/soarsmu/BiasRV.
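To make the two-step heuristic concrete, below is a minimal Python sketch. It is an illustration, not the actual BiasRV implementation: predict stands for any black-box SA classifier returning "pos"/"neg" labels, mutant generation is simplified to filling a <NAME> placeholder with gender-specific names (BiasRV extracts such templates from the input text automatically), and the name lists, sample size k, and threshold theta are assumed values rather than the paper's settings.

    import random

    # Hypothetical name lists; BiasRV's real mutant generator is more elaborate.
    MALE_NAMES = ["John", "Michael", "David", "James"]
    FEMALE_NAMES = ["Mary", "Linda", "Susan", "Karen"]

    def make_mutants(template, names):
        # Fill the <NAME> slot with gender-specific names, yielding
        # semantically equivalent texts that differ only in gender.
        return [template.replace("<NAME>", name) for name in names]

    def is_biased(predict, template, k=2, theta=0.1):
        male = make_mutants(template, MALE_NAMES)
        female = make_mutants(template, FEMALE_NAMES)

        # Step 1: sample k mutants per gender; if every sampled mutant gets
        # the same sentiment label, accept the prediction as fair and stop.
        sampled = random.sample(male, k) + random.sample(female, k)
        if len({predict(text) for text in sampled}) == 1:
            return False

        # Step 2: full distributional check -- flag the prediction as biased
        # when the genders' positive-prediction rates differ by more than theta.
        def pos_rate(texts):
            return sum(predict(t) == "pos" for t in texts) / len(texts)
        return abs(pos_rate(male) - pos_rate(female)) > theta

    # Toy usage: a deliberately biased keyword classifier. The result may be
    # False when step 1's sample misses "John" -- the heuristic's accepted
    # trade-off between overhead and missed biased predictions.
    toy_predict = lambda text: "pos" if "John" in text else "neg"
    print(is_biased(toy_predict, "<NAME> said the movie was great."))

The toy usage at the end illustrates the trade-off the paper quantifies: when step 1's small sample happens to miss the conflicting mutant, the bias goes undetected, which corresponds to the 6.7% of biased predictions the heuristic misses in exchange for the 73.81% overhead reduction.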


Bibliographic Details
Main Authors: YANG, Zhou; ASYROFI, Muhammad Hilmi; LO, David
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Collection: Research Collection School Of Computing and Information Systems
DOI: 10.1145/3468264.3473117
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects: Sentiment analysis; Ethical AI; Fairness; Runtime verification; Artificial Intelligence and Robotics; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/6671
https://ink.library.smu.edu.sg/context/sis_research/article/7674/viewcontent/BiasRV_uncovering_biased_sentiment_predictions_at_runtime.pdf
Institution: Singapore Management University