A survey on explainable artificial intelligence (XAI): toward medical XAI

Recently, artificial intelligence and machine learning in general have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with research progress, they have encroached upon many different fields and disciplines, some of which, such as the medical sector, require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are therefore needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanism underlying the algorithms. Unfortunately, the black-box nature of DL remains unresolved, and many machine decisions are still poorly understood. We provide a review of the interpretabilities proposed by different research works and categorize them. The categories reflect different dimensions of interpretability research, from approaches that provide “obviously” interpretable information to studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can subsequently approach these methods with caution; 2) insight into interpretability will develop with more consideration for medical practice; and 3) initiatives to push forward data-based, mathematically grounded, …

Bibliographic Details
Main Authors: Tjoa, Erico; Guan, Cuntai
Other Authors: Interdisciplinary Graduate School (IGS); School of Computer Science and Engineering; Alibaba Group Holding Ltd.; Alibaba-NTU JRI
Format: Journal Article (published version)
Language: English
Published: 2021
Published in: IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793-4813 (2020)
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2020.3027314
Subjects: Engineering::Computer science and engineering; Explainable Artificial Intelligence; Interpretability; Machine Learning; Medical Information System
Online Access: https://hdl.handle.net/10356/154295
Institution: Nanyang Technological University
Collection: DR-NTU (NTU Library)
Rights: © 2020 IEEE. This work is licensed under a Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/).
Funding: The Alibaba-NTU Program is a collaboration between Alibaba and Nanyang Technological University, Singapore.
Citation: Tjoa, E. & Guan, C. (2020). A survey on explainable artificial intelligence (XAI): toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793-4813. https://dx.doi.org/10.1109/TNNLS.2020.3027314