On explaining multimodal hateful meme detection models

Hateful meme detection is a new multimodal task that has gained significant traction in academic and industry research communities. Recently, researchers have applied pre-trained visual-linguistic models to perform the multimodal classification task, and some of these solutions have yielded promising results. However, what these visual-linguistic models learn for the hateful meme classification task remains unclear. For instance, it is unclear whether these models can capture derogatory or slur references across the two modalities (i.e., image and text) of hateful memes. To fill this research gap, this paper proposes three research questions to improve our understanding of how visual-linguistic models perform the hateful meme classification task. We found that the image modality contributes more to the hateful meme classification task, and that the visual-linguistic models can perform visual-text slur grounding to a certain extent. Our error analysis also shows that the visual-linguistic models have acquired biases, which resulted in false-positive predictions.
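The abstract describes applying a pre-trained visual-linguistic model to classify a meme's image and overlaid text jointly. As a rough illustration of that setup, the following is a minimal sketch using Hugging Face's CLIP as the visual-linguistic encoder; CLIP, the concatenation-based fusion, and the untrained linear head are assumptions of this sketch, not the models or method used in the paper.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Pre-trained visual-linguistic encoder. CLIP stands in here for the
# visual-linguistic models discussed in the abstract (an assumption).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Binary head over the fused image and text embeddings. Fusion by
# concatenation is a design choice of this sketch only.
classifier = torch.nn.Linear(model.config.projection_dim * 2, 2)

# Dummy meme: a blank image plus its overlaid caption text.
image = Image.new("RGB", (224, 224))
caption = "example meme caption"

inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Concatenate the two modality embeddings and score hateful vs. non-hateful.
fused = torch.cat([outputs.image_embeds, outputs.text_embeds], dim=-1)
logits = classifier(fused)
print(logits.softmax(dim=-1))  # [P(non-hateful), P(hateful)], head untrained

In practice the head would be fine-tuned on labeled memes (e.g., an annotated hateful-memes dataset); the sketch only shows how the two modalities feed a single classifier.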

Bibliographic Details
Main Authors: HEE, Ming Shan; LEE, Roy Ka-Wei; CHONG, Wen Haw
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Collection: Research Collection School Of Computing and Information Systems
DOI: 10.1145/3485447.3512260
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects: Explainable machine learning; Hate speech; Hateful memes; Multimodal; Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/8262
https://ink.library.smu.edu.sg/context/sis_research/article/9265/viewcontent/on_explaining.pdf
Institution: Singapore Management University