Explainable artificial intelligence (XAI) for healthcare decision-making
Main Author: | Zeng, Zhiwei |
---|---|
Other Authors: | Miao Chun Yan |
Format: | Thesis-Doctor of Philosophy |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence |
Online Access: | https://hdl.handle.net/10356/155849 |
Institution: | Nanyang Technological University |
Description:
As Artificial Intelligence (AI) becomes increasingly ubiquitous in daily life and moves into more critical applications, explainability has come under the spotlight due to ethical, social, and legal concerns. Explainability is particularly important in application domains involving high-stakes decision-making, such as healthcare, medicine, law, and defence, where high performance alone may not satisfy all the desiderata of stakeholders. Stakeholders in these domains often require greater insight into the decision-making process, better support for tracing decision trails and identifying errors, and greater trustworthiness from an AI system. An eXplainable AI (XAI) system is one that can explain the outputs or functioning of a model to a human in a clear and understandable manner. In the explanation process of XAI, explainability approaches provide explanatory information to human stakeholders to facilitate their understanding of the outputs and functioning of an AI model. This improved understanding in turn affects the extent to which the desiderata are satisfied.
Over the past decade, a plethora of explainability approaches have been proposed, ranging from ante-hoc methods that build explainability into the model itself to post-hoc methods that improve the explainability of an already developed model. Despite the rapid growth of the XAI field, three key open problems emerge from a review of existing XAI methods. First, post-hoc methods are increasingly used for high-stakes decisions, yet most of them provide explanatory information that is neither faithful to nor insightful about what the original model computes. Second, most existing methods can only provide associational explanations and lack the structure and reasoning ability to provide contrastive and counterfactual explanations. Third, most existing methods do not provide explanations suited to different levels of understanding.
Research problem 1 - Post-hoc explainability methods have quickly gained popularity because they are mostly model-agnostic and highly extensible. However, because they do not elucidate the model’s inner workings, explanations generated by post-hoc methods may not faithfully represent the underlying relationships of the model and often do not provide enough information to show how the output is obtained. In this thesis, I propose an ante-hoc explainability methodology to support the making and explaining of high-stakes decisions in healthcare. It is an argumentative methodology that infuses explainability into the model and generates explanations that are faithful to, and insightful about, the inner workings and internal representations of the model. It follows a “debate-like” process that shares many similarities with the way humans deliberate and make decisions, giving it unique advantages in generating human-understandable explanations. I instantiate the argumentative methodology in three explainable decision-making approaches for modelling and reasoning about contexts, qualitative preferences, and both attack and support relationships.
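To make the “debate-like” idea above concrete, the sketch below is a minimal, hypothetical illustration rather than the formalism developed in the thesis: it encodes a few invented clinical arguments and the attack relation between them, then computes which arguments survive the debate under grounded semantics. All argument names and attack pairs are assumptions made purely for illustration.

```python
# Minimal sketch of a Dung-style abstract argumentation framework.
# Assumption: the thesis's argumentative methodology is richer (it also models
# supports, contexts, and qualitative preferences); this only shows the core
# "debate" computation that such approaches typically build on.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments once all of their attackers are defeated."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            if attackers[a] <= defeated:  # every attacker is already defeated
                accepted.add(a)
                changed = True
        newly_defeated = {b for b in arguments
                          if b not in defeated and attackers[b] & accepted}
        if newly_defeated:
            defeated |= newly_defeated
            changed = True
    return accepted, defeated


# Hypothetical clinical arguments: "diagnose_AD" is attacked by "normal_MMSE",
# which is in turn attacked by "abnormal_MRI".
args = {"diagnose_AD", "normal_MMSE", "abnormal_MRI"}
atts = {("normal_MMSE", "diagnose_AD"), ("abnormal_MRI", "normal_MMSE")}

accepted, defeated = grounded_extension(args, atts)
print(accepted)  # contains 'abnormal_MRI' and 'diagnose_AD': the winning side of the debate
```

Because the accepted conclusions are derived from an explicit graph of arguments, every output can in principle be traced back to the arguments that defend it, which is the property the argumentative methodology exploits for explanation.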
Research problem 2 - Although contrastive and counterfactual explanations have attracted more research interest recently, existing XAI methods mostly generate associational explanations. Most existing methods are good at answering “Why X?” but poor at answering questions that require higher-level inference, such as “Why not-X?”. This can be attributed to the fact that most of them lack the necessary structures and mechanisms to perform high-level logical reasoning. Building on the structures and reasoning ability of the three proposed explainable decision-making approaches, I construct different forms of contrastive and counterfactual explanations that can answer questions such as “Why X instead of Y?” and “Why not-X?”.
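As a sketch only of how such contrastive questions can be answered once decisions are computed over an argument graph (the thesis’s own constructions are richer and are not reproduced in this record), the helper below answers “Why not-X?” by naming the accepted arguments that attack X. The `accepted` set is assumed to come from a computation such as the grounded-semantics sketch above; all names are hypothetical.

```python
def why_not(argument, attacks, accepted):
    """Contrastive reading of 'Why not-X?': name the accepted attackers of X.

    Illustrative sketch only; the thesis defines its own contrastive and
    counterfactual explanation constructs over richer argumentation frameworks.
    """
    blockers = sorted(x for (x, y) in attacks if y == argument and x in accepted)
    if argument in accepted:
        return f"'{argument}' is accepted, so 'Why not-{argument}?' does not arise."
    if blockers:
        return f"'{argument}' is rejected because it is attacked by accepted argument(s): {blockers}."
    return f"'{argument}' is undecided: no accepted argument attacks it directly."


# Hypothetical attack graph and accepted set (e.g. the output of the previous sketch).
attacks = {("normal_MMSE", "diagnose_AD"), ("abnormal_MRI", "normal_MMSE")}
accepted = {"abnormal_MRI", "diagnose_AD"}

print(why_not("normal_MMSE", attacks, accepted))
# -> 'normal_MMSE' is rejected because it is attacked by accepted argument(s): ['abnormal_MRI'].
```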
Research problem 3 - Practical scenarios often call for different levels of understanding. However, most existing XAI methods do not provide explanations that can cater to these different levels. In this thesis, I propose and study formal properties of explanatory information in order to generate such explanations. I propose notions and computational constructs for generating explanations that provide different amounts of justification or have different dialectical strengths, as well as focused explanations that include only selective information from the decision-making process.
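The formal properties, dialectical strengths, and focus notions are defined in the thesis itself; purely as an illustration of the “selective information” idea, the snippet below trims a hypothetical attack graph down to the arguments that bear on a queried decision, giving a stakeholder a smaller, focused view instead of the full graph. All argument names are invented.

```python
from collections import deque

def focused_view(decision, attacks):
    """Return the arguments with a directed chain of attacks leading into `decision`,
    plus the attack edges among them. A sketch of 'selective information' only;
    the thesis's formal focus and dialectical-strength notions are richer."""
    incoming = {}
    for attacker, target in attacks:
        incoming.setdefault(target, set()).add(attacker)

    relevant, queue = set(), deque([decision])
    while queue:
        current = queue.popleft()
        for attacker in incoming.get(current, ()):
            if attacker not in relevant:
                relevant.add(attacker)
                queue.append(attacker)

    # Keep only the attack edges among the decision and its (indirect) attackers.
    kept = {(a, b) for (a, b) in attacks if a in relevant and b in relevant | {decision}}
    return relevant, kept


# Hypothetical graph: only the chain into 'diagnose_AD' is kept; the unrelated
# attack between 'arg_p' and 'arg_q' is filtered out of the focused view.
attacks = {("normal_MMSE", "diagnose_AD"),
           ("abnormal_MRI", "normal_MMSE"),
           ("arg_p", "arg_q")}
print(focused_view("diagnose_AD", attacks))
```

Varying how much of the surrounding graph is retained is one simple way to offer explanations at different levels of detail.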
As a case study, I implement a proposed explainable decision-making approach for the diagnostics and prognostics of Alzheimer’s Disease (AD). Empirical evaluations with real-world AD datasets and human subject studies were conducted to assess the performance and explainability of the approach. In terms of decision performance, my approach achieved the highest accuracies and F1-scores for both the diagnosis and prognosis tasks when compared with six machine learning models. In terms of explainability, the evaluation results with 107 human subjects indicate that the explanations generated by my approach achieved better accessibility and verifiability than those of existing XAI models. Given these results on accuracy and explainability, it may be concluded that the proposed argumentative explainability methodology can better satisfy the desiderata for XAI in the healthcare domain.
School: | Interdisciplinary Graduate School (IGS) |
---|---|
Research Centre: | Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY) |
Other Contributors: | Cyril Leung, Jing Jih Chin |
Degree: | Doctor of Philosophy |
Citation: | Zeng, Z. (2022). Explainable artificial intelligence (XAI) for healthcare decision-making. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/155849 |
DOI: | 10.32657/10356/155849 |
License: | Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) |