VLStereoSet: A study of stereotypical bias in pre-trained vision-language models
In this paper we study how to measure stereotypical bias in pre-trained vision-language models. We leverage a recently released text-only dataset, StereoSet, which covers a wide range of stereotypical bias, and extend it into a vision-language probing dataset called VLStereoSet to measure stereotypical bias in vision-language models. We analyze the differences between text and image and propose a probing task that detects bias by evaluating a model's tendency to pick stereotypical statements as captions for anti-stereotypical images. We further define several metrics to measure both a vision-language model's overall stereotypical bias and its intra-modal and inter-modal bias. Experiments on six representative pre-trained vision-language models demonstrate that stereotypical biases clearly exist in most of these models and across all four bias categories, with gender bias slightly more evident. Further analysis using gender bias data and two vision-language models also suggests that both intra-modal and inter-modal bias exist.
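To make the probing setup concrete, the following is a minimal sketch of how such a caption-selection probe could be run with an off-the-shelf CLIP checkpoint via HuggingFace Transformers. The checkpoint, the image file, and the three candidate captions are illustrative assumptions for this sketch, not items from VLStereoSet or the paper's actual evaluation code.

```python
# Sketch of the caption-probing idea from the abstract: given an anti-stereotypical
# image and several candidate captions, check whether the model prefers the
# stereotypical one. Uses a generic CLIP model as a stand-in for the six
# vision-language models evaluated in the paper.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical anti-stereotypical image and candidate captions, following the
# StereoSet convention of stereotype / anti-stereotype / unrelated options.
image = Image.open("anti_stereotypical_example.jpg")  # placeholder file
captions = [
    "The nurse is a caring woman.",      # stereotypical candidate
    "The nurse is a caring man.",        # anti-stereotypical candidate
    "The sky is full of bright stars.",  # unrelated candidate
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
logits_per_image = model(**inputs).logits_per_image  # image-text matching scores

# If the stereotypical caption (index 0) scores highest for an anti-stereotypical
# image, that prediction would count toward the model's stereotypical bias.
predicted = logits_per_image.argmax(dim=-1).item()
print("model prefers:", captions[predicted])
```

Aggregating such predictions over a probing set is what the paper's bias metrics build on; the exact metric definitions are given in the paper itself.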
Main Authors: | ZHOU, Kankan; LAI, Yibin; JIANG, Jing |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2022 |
Subjects: | Databases and Information Systems; Programming Languages and Compilers |
Online Access: | https://ink.library.smu.edu.sg/sis_research/7617 https://ink.library.smu.edu.sg/context/sis_research/article/8620/viewcontent/2022.aacl_main.40.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-8620 |
---|---|
record_format | dspace |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU (Research Collection School Of Computing and Information Systems) |
author_sort | ZHOU, Kankan |
license | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
_version_ | 1770576395413487616 |