May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability

Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off and static explanations, which cannot cater to the diverse backgrounds and understanding levels of users. With this paper, we investigate if free-form conversations can enhance users’ comprehension of static explanations in image classification, improve acceptance and trust in the explanation methods, and facilitate human-AI collaboration. We conduct a human-subject experiment with 120 participants. Half serve as the experimental group and engage in a conversation with a human expert regarding the static explanations, while the other half are in the control group and read the materials regarding static explanations independently. We measure the participants’ objective and self-reported comprehension, acceptance, and trust of static explanations. Results show that conversations significantly improve participants’ comprehension, acceptance, trust, and collaboration with static explanations, while reading the explanations independently does not have these effects and even decreases users’ acceptance of explanations. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.

Bibliographic Details
Main Authors: Zhang, Tong, Yang, Jessie X., Li, Boyang
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Subjects: Computer and Information Science; Explainable AI (XAI); Conversation
Online Access:https://hdl.handle.net/10356/180616
Institution: Nanyang Technological University
id sg-ntu-dr.10356-180616
record_format dspace
spelling sg-ntu-dr.10356-180616 2024-10-18T15:37:28Z
May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability
Zhang, Tong; Yang, Jessie X.; Li, Boyang
School of Computer Science and Engineering; College of Computing and Data Science
Computer and Information Science; Explainable AI (XAI); Conversation
Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off and static explanations, which cannot cater to the diverse backgrounds and understanding levels of users. With this paper, we investigate if free-form conversations can enhance users’ comprehension of static explanations in image classification, improve acceptance and trust in the explanation methods, and facilitate human-AI collaboration. We conduct a human-subject experiment with 120 participants. Half serve as the experimental group and engage in a conversation with a human expert regarding the static explanations, while the other half are in the control group and read the materials regarding static explanations independently. We measure the participants’ objective and self-reported comprehension, acceptance, and trust of static explanations. Results show that conversations significantly improve participants’ comprehension, acceptance, trust, and collaboration with static explanations, while reading the explanations independently does not have these effects and even decreases users’ acceptance of explanations. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.
Submitted/Accepted version
This work has been supported by the Nanyang Associate Professorship and the National Research Foundation Fellowship (NRFNRFF13-2021-0006), Singapore.
2024-10-15T04:23:31Z 2024-10-15T04:23:31Z 2024 Journal Article
Zhang, T., Yang, J. X. & Li, B. (2024). May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability. International Journal of Human-Computer Interaction. https://dx.doi.org/10.1080/10447318.2024.2364986
1044-7318 https://hdl.handle.net/10356/180616 10.1080/10447318.2024.2364986 2-s2.0-85200984129
en NRFNRFF13-2021-0006 International Journal of Human-Computer Interaction
© 2024 Taylor & Francis Group, LLC. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1080/10447318.2024.2364986.
application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
Explainable AI (XAI)
Conversation
description Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off and static explanations, which cannot cater to the diverse backgrounds and understanding levels of users. With this paper, we investigate if free-form conversations can enhance users’ comprehension of static explanations in image classification, improve acceptance and trust in the explanation methods, and facilitate human-AI collaboration. We conduct a human-subject experiment with 120 participants. Half serve as the experimental group and engage in a conversation with a human expert regarding the static explanations, while the other half are in the control group and read the materials regarding static explanations independently. We measure the participants’ objective and self-reported comprehension, acceptance, and trust of static explanations. Results show that conversations significantly improve participants’ comprehension, acceptance, trust, and collaboration with static explanations, while reading the explanations independently does not have these effects and even decreases users’ acceptance of explanations. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.
author2 School of Computer Science and Engineering
format Article
author Zhang, Tong
Yang, Jessie X.
Li, Boyang
author_sort Zhang, Tong
title May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability
publishDate 2024
url https://hdl.handle.net/10356/180616
_version_ 1814777712852598784