Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making

Artificial intelligence (AI) is increasingly being considered to assist human decision-making in high-stakes domains (e.g. health). However, researchers have noted that humans can over-rely on an AI model's wrong suggestions instead of achieving complementary human-AI performance. In this work, we utilized salient feature explanations along with 'what-if', counterfactual explanations to make humans review AI suggestions more analytically and reduce over-reliance on AI, and we explored the effect of these explanations on trust and reliance on AI during clinical decision-making. We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion, and analyzed their performance, agreement level on the task, and reliance on AI without explanations and with two types of AI explanations. Our results showed that the AI model with both salient feature and counterfactual explanations helped therapists and laypersons improve their performance and agreement level on the task when 'right' AI outputs were presented. While both therapists and laypersons over-relied on 'wrong' AI outputs, counterfactual explanations helped both groups reduce their over-reliance on 'wrong' AI outputs by 21% compared to salient feature explanations. Specifically, laypersons had larger performance degradations (18.0 F1-score points with salient feature explanations and 14.0 F1-score points with counterfactual explanations) than therapists (degradations of 8.6 and 2.8 F1-score points, respectively). Our work discusses the potential of counterfactual explanations to help users better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs, and the implications for improving human-AI collaborative decision-making.
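The record does not include the paper's code. As a rough, illustrative sketch of what a 'what-if', counterfactual explanation over salient movement features could look like for a feature-based quality-of-motion classifier, consider the minimal Python example below; the feature names, weights, and flip-search routine are hypothetical assumptions for illustration, not the authors' implementation.

# Illustrative sketch only: a toy "what-if" counterfactual for a simple
# feature-based quality-of-motion classifier. Feature names, weights, and the
# search routine are hypothetical and are not taken from the paper.

def predict_quality(features, weights, bias=0.0):
    # Toy linear scorer: returns 1 ('normal' motion) if the weighted sum is positive, else 0 ('abnormal').
    score = bias + sum(weights[name] * value for name, value in features.items())
    return 1 if score > 0 else 0

def what_if_counterfactual(features, weights, target_label, feature_name, step=0.05, max_steps=200):
    # Nudge one salient feature in small steps until the prediction flips to target_label.
    candidate = dict(features)
    # Push the feature in the direction that moves the score toward the target label.
    direction = 1.0 if (target_label == 1) == (weights[feature_name] > 0) else -1.0
    for _ in range(max_steps):
        if predict_quality(candidate, weights) == target_label:
            return candidate  # e.g. "if elbow_extension had been 0.5, the AI would have said 'normal'"
        candidate[feature_name] += direction * step
    return None  # no counterfactual found within the search budget

# Hypothetical example: elbow extension is the salient feature for this exercise.
weights = {"elbow_extension": 2.0, "shoulder_elevation": -1.0}
features = {"elbow_extension": 0.2, "shoulder_elevation": 0.9}
print(predict_quality(features, weights))                               # 0 ('abnormal')
print(what_if_counterfactual(features, weights, 1, "elbow_extension"))  # feature values that flip the label

Presented to a reviewer, the returned feature values would read as a "what-if" statement, which is the kind of contrastive cue the study associates with more analytical review of AI suggestions.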


Bibliographic Details
Main Authors: LEE, Min Hun, CHEW, Chong Jun
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8274
https://ink.library.smu.edu.sg/context/sis_research/article/9277/viewcontent/3610218_pvoa_cc_by.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-9277
record_format dspace
spelling sg-smu-ink.sis_research-9277 2023-11-10T08:43:25Z Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making LEE, Min Hun CHEW, Chong Jun Artificial intelligence (AI) is increasingly being considered to assist human decision-making in high-stakes domains (e.g. health). However, researchers have noted that humans can over-rely on an AI model's wrong suggestions instead of achieving complementary human-AI performance. In this work, we utilized salient feature explanations along with 'what-if', counterfactual explanations to make humans review AI suggestions more analytically and reduce over-reliance on AI, and we explored the effect of these explanations on trust and reliance on AI during clinical decision-making. We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion, and analyzed their performance, agreement level on the task, and reliance on AI without explanations and with two types of AI explanations. Our results showed that the AI model with both salient feature and counterfactual explanations helped therapists and laypersons improve their performance and agreement level on the task when 'right' AI outputs were presented. While both therapists and laypersons over-relied on 'wrong' AI outputs, counterfactual explanations helped both groups reduce their over-reliance on 'wrong' AI outputs by 21% compared to salient feature explanations. Specifically, laypersons had larger performance degradations (18.0 F1-score points with salient feature explanations and 14.0 F1-score points with counterfactual explanations) than therapists (degradations of 8.6 and 2.8 F1-score points, respectively). Our work discusses the potential of counterfactual explanations to help users better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs, and the implications for improving human-AI collaborative decision-making. 2023-10-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8274 info:doi/10.1145/3610218 https://ink.library.smu.edu.sg/context/sis_research/article/9277/viewcontent/3610218_pvoa_cc_by.pdf http://creativecommons.org/licenses/by/3.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University clinical decision support systems explainable AI human centered AI human-AI collaboration physical stroke rehabilitation assessment reliance trust Artificial Intelligence and Robotics Health Information Technology
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic clinical decision support systems
explainable AI
human centered AI
human-AI collaboration
physical stroke rehabilitation assessment
reliance
trust
Artificial Intelligence and Robotics
Health Information Technology
spellingShingle clinical decision support systems
explainable AI
human centered AI
human-AI collaboration
physical stroke rehabilitation assessment
reliance
trust
Artificial Intelligence and Robotics
Health Information Technology
LEE, Min Hun
CHEW, Chong Jun
Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making
description Artificial intelligence (AI) is increasingly being considered to assist human decision-making in high-stakes domains (e.g. health). However, researchers have noted that humans can over-rely on an AI model's wrong suggestions instead of achieving complementary human-AI performance. In this work, we utilized salient feature explanations along with 'what-if', counterfactual explanations to make humans review AI suggestions more analytically and reduce over-reliance on AI, and we explored the effect of these explanations on trust and reliance on AI during clinical decision-making. We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality of motion, and analyzed their performance, agreement level on the task, and reliance on AI without explanations and with two types of AI explanations. Our results showed that the AI model with both salient feature and counterfactual explanations helped therapists and laypersons improve their performance and agreement level on the task when 'right' AI outputs were presented. While both therapists and laypersons over-relied on 'wrong' AI outputs, counterfactual explanations helped both groups reduce their over-reliance on 'wrong' AI outputs by 21% compared to salient feature explanations. Specifically, laypersons had larger performance degradations (18.0 F1-score points with salient feature explanations and 14.0 F1-score points with counterfactual explanations) than therapists (degradations of 8.6 and 2.8 F1-score points, respectively). Our work discusses the potential of counterfactual explanations to help users better estimate the accuracy of an AI model and reduce over-reliance on 'wrong' AI outputs, and the implications for improving human-AI collaborative decision-making.
format text
author LEE, Min Hun
CHEW, Chong Jun
author_facet LEE, Min Hun
CHEW, Chong Jun
author_sort LEE, Min Hun
title Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making
title_short Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making
title_full Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making
title_fullStr Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making
title_full_unstemmed Understanding the effect of counterfactual explanations on trust and reliance on AI for human-AI collaborative clinical decision making
title_sort understanding the effect of counterfactual explanations on trust and reliance on ai for human-ai collaborative clinical decision making
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/8274
https://ink.library.smu.edu.sg/context/sis_research/article/9277/viewcontent/3610218_pvoa_cc_by.pdf
_version_ 1783955662857830400