Trust perceptions towards XAI in healthcare

This study explores the perceptions of medical professionals towards artificial intelligence (AI) and the influence of Explainable AI (XAI) algorithms on their trust in AI. The research focuses on one-on-one interviews conducted with 12 medical students, divided into two groups of six, with each group exposed to one of two XAI algorithms: SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), explaining how a Random Forest Classifier predicted lung cancer from a set of predictive attributes including demographic information, basic health metrics, lifestyle habits, and findings from genetic PCA (Principal Component Analysis). Additionally, online surveys on the same case were administered to 50 medical students. Participants for both interviews and surveys were selected to ensure equal representation from two medical schools in Singapore, with considerations for gender and self-rated confidence in AI knowledge. Qualitative data from interviews were analysed using Reflexive Thematic Analysis, revealing themes related to trust in AI and perceptions of XAI algorithms. Quantitative data were analysed using Microsoft Excel to visualize trends and patterns in the survey responses. Four research questions were developed in this study, and the findings suggest that the type of XAI algorithm used does not significantly impact medical professionals' trust in AI, and that XAI's impact on medical professionals' trust in AI may not be direct, requiring further research and improvements to XAI outputs to achieve its intended purposes. This study contributes to the understanding of how XAI can enhance trust in AI among medical professionals, with implications for the design and implementation of AI systems in healthcare settings.
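The pipeline the abstract describes (a Random Forest Classifier over demographic, health, lifestyle, and genetic-PCA features, explained post hoc) can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data, not the study's actual code or dataset; all feature names and values here are invented stand-ins. In the study, SHAP and LIME explanations would be generated on top of the fitted model (e.g. via the `shap` and `lime` packages).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for the attribute groups named in the abstract.
age = rng.integers(30, 80, n)        # demographic information
bmi = rng.normal(25, 4, n)           # basic health metric
smoking = rng.integers(0, 2, n)      # lifestyle habit (0/1)
genetic = rng.normal(size=(n, 10))   # raw genetic measurements

# Reduce the genetic measurements to principal components,
# mirroring the "genetic PCA" features in the study design.
pcs = PCA(n_components=2).fit_transform(genetic)

X = np.column_stack([age, bmi, smoking, pcs])
# Toy outcome: in this synthetic example, older smokers are labelled positive.
y = ((smoking == 1) & (age > 55)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The fitted forest exposes global impurity-based importances; SHAP and LIME
# would additionally provide per-prediction (local) explanations of this model.
print(model.feature_importances_)
```

The explanation step itself is algorithm-specific: SHAP attributes each prediction to features via Shapley values, while LIME fits a local interpretable surrogate around one instance, which is why the study could present the same model through two different XAI lenses.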


Bibliographic Details
Main Author: Cai, Xinrui
Other Authors: Fan Xiuyi
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects: Computer and Information Science
Online Access:https://hdl.handle.net/10356/175388
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-175388
record_format dspace
spelling sg-ntu-dr.10356-1753882024-04-26T15:43:02Z Trust perceptions towards XAI in healthcare Cai, Xinrui Fan Xiuyi School of Computer Science and Engineering xyfan@ntu.edu.sg Computer and Information Science This study explores the perceptions of medical professionals towards artificial intelligence (AI) and the influence of Explainable AI (XAI) algorithms on their trust in AI. The research focuses on one-on-one interviews conducted with 12 medical students, divided into two groups of six, with each group exposed to one of two XAI algorithms: SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), explaining how a Random Forest Classifier predicted lung cancer from a set of predictive attributes including demographic information, basic health metrics, lifestyle habits, and findings from genetic PCA (Principal Component Analysis). Additionally, online surveys on the same case were administered to 50 medical students. Participants for both interviews and surveys were selected to ensure equal representation from two medical schools in Singapore, with considerations for gender and self-rated confidence in AI knowledge. Qualitative data from interviews were analysed using Reflexive Thematic Analysis, revealing themes related to trust in AI and perceptions of XAI algorithms. Quantitative data were analysed using Microsoft Excel to visualize trends and patterns in the survey responses. Four research questions were developed in this study, and the findings suggest that the type of XAI algorithm used does not significantly impact medical professionals' trust in AI, and that XAI's impact on medical professionals' trust in AI may not be direct, requiring further research and improvements to XAI outputs to achieve its intended purposes. This study contributes to the understanding of how XAI can enhance trust in AI among medical professionals, with implications for the design and implementation of AI systems in healthcare settings.
Bachelor's degree 2024-04-24T01:01:31Z 2024-04-24T01:01:31Z 2024 Final Year Project (FYP) Cai, X. (2024). Trust perceptions towards XAI in healthcare. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175388 https://hdl.handle.net/10356/175388 en SCSE23-0700 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
spellingShingle Computer and Information Science
Cai, Xinrui
Trust perceptions towards XAI in healthcare
description This study explores the perceptions of medical professionals towards artificial intelligence (AI) and the influence of Explainable AI (XAI) algorithms on their trust in AI. The research focuses on one-on-one interviews conducted with 12 medical students, divided into two groups of six, with each group exposed to one of two XAI algorithms: SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), explaining how a Random Forest Classifier predicted lung cancer from a set of predictive attributes including demographic information, basic health metrics, lifestyle habits, and findings from genetic PCA (Principal Component Analysis). Additionally, online surveys on the same case were administered to 50 medical students. Participants for both interviews and surveys were selected to ensure equal representation from two medical schools in Singapore, with considerations for gender and self-rated confidence in AI knowledge. Qualitative data from interviews were analysed using Reflexive Thematic Analysis, revealing themes related to trust in AI and perceptions of XAI algorithms. Quantitative data were analysed using Microsoft Excel to visualize trends and patterns in the survey responses. Four research questions were developed in this study, and the findings suggest that the type of XAI algorithm used does not significantly impact medical professionals' trust in AI, and that XAI's impact on medical professionals' trust in AI may not be direct, requiring further research and improvements to XAI outputs to achieve its intended purposes. This study contributes to the understanding of how XAI can enhance trust in AI among medical professionals, with implications for the design and implementation of AI systems in healthcare settings.
author2 Fan Xiuyi
author_facet Fan Xiuyi
Cai, Xinrui
format Final Year Project
author Cai, Xinrui
author_sort Cai, Xinrui
title Trust perceptions towards XAI in healthcare
title_short Trust perceptions towards XAI in healthcare
title_full Trust perceptions towards XAI in healthcare
title_fullStr Trust perceptions towards XAI in healthcare
title_full_unstemmed Trust perceptions towards XAI in healthcare
title_sort trust perceptions towards xai in healthcare
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/175388
_version_ 1806059839562973184