May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability

Bibliographic Details
Main Authors: Zhang, Tong, Yang, Jessie X., Li, Boyang
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/180616
Institution: Nanyang Technological University
Description
Summary: Research in explainable AI (XAI) aims to provide insights into the decision-making process of opaque AI models. To date, most XAI methods offer one-off, static explanations, which cannot cater to users’ diverse backgrounds and levels of understanding. In this paper, we investigate whether free-form conversations can enhance users’ comprehension of static explanations in image classification, improve acceptance of and trust in the explanation methods, and facilitate human-AI collaboration. We conduct a human-subject experiment with 120 participants. Half serve as the experimental group and engage in a conversation with a human expert regarding the static explanations, while the other half are in the control group and read the materials regarding static explanations independently. We measure the participants’ objective and self-reported comprehension of, acceptance of, and trust in static explanations. Results show that conversations significantly improve participants’ comprehension, acceptance, trust, and collaboration with static explanations, while reading the explanations independently does not have these effects and even decreases users’ acceptance of explanations. Our findings highlight the importance of customized model explanations in the format of free-form conversations and provide insights for the future design of conversational explanations.