From medical imaging to explainable artificial intelligence
Main Author: | Tjoa, Erico |
---|---|
Other Authors: | Guan Cuntai |
Format: | Thesis-Doctor of Philosophy |
Language: | English |
Published: | Nanyang Technological University, 2023 |
Subjects: | Engineering::Computer science and engineering |
Online Access: | https://hdl.handle.net/10356/164493 |
Institution: | Nanyang Technological University |
Description:
Deep Neural Networks (DNNs) have recently been recognized as among the most powerful models, capable of performing tasks at and beyond human capacity. With millions, or even billions, of parameters, DNNs have attained remarkable performance on hitherto difficult tasks such as computer vision, natural language processing, and many other complex problems (including personalized healthcare and complex video games). Their capability has only increased further with better architecture designs and greater computing power. While DNNs have been said to usher in a new era of artificial intelligence, they come with several problems. Besides their massive resource consumption, another important problem remains a challenge: the DNN is a black-box model. It is difficult to understand: it is not entirely clear how each neuron or parameter contributes to performance, whether the parameters are even relevant individually, or whether there is a universally meaningful way to understand the network's inner workings at all.
In response, researchers have proposed many different methods to study black-box models, including post-hoc methods, model-agnostic methods, visualizations, and so on, studied under the broad umbrella of eXplainable Artificial Intelligence (XAI). While some methods rest on sound mathematical principles, many others are based on heuristics, and the meaning of *explanation* sometimes becomes muddled by subjectivity. We started our exploration of the topic by observing, almost blindly, how applicable these methods are to a medical imaging problem. Since our earlier experiments did not yield a satisfactory *explanation*, we then approached the problem from different perspectives.
We first tested the viability of common XAI methods by designing a computer vision experiment with a synthetic dataset whose features are very clear and obvious, so that common methods would be expected to capture them accurately. Our results show that heatmap-based methods did not perform very well. We therefore decided to design methods that place interpretability and explainability at the highest priority. More precisely, we experimented with the following: (1) General Pattern Theory (GPT), used to systematically capture features of objects in a component-wise manner; in particular, we aim to represent object components with generators. (2) Interpretable universal approximation: SQANN and TNN (defined later) are designed as universal approximators whose universal approximation property is provable in a clear-cut, humanly understandable manner, in contrast to existing proofs that rely on heavy mathematical abstraction. (3) Self reward design: we leverage neural network components to solve reinforcement learning problems in an extremely interpretable manner; each neuron in the design is assigned a meaning, giving a level of transparency that is hard to match with existing methods.
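As a concrete illustration of what a "heatmap-based method" computes, the sketch below implements a vanilla gradient saliency map in PyTorch. The toy model, synthetic input, and function name are illustrative assumptions, not the thesis's actual experimental code.

```python
# Minimal sketch (illustrative assumption, not the thesis's code): a
# vanilla-gradient saliency heatmap, one of the "heatmap-based" XAI methods
# evaluated on a synthetic dataset with clear, obvious features.
import torch
import torch.nn as nn

def saliency_heatmap(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return |d score / d pixel|, max-pooled over channels, as an (H, W) heatmap."""
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)   # (1, C, H, W)
    score = model(x)[0, target_class]                     # scalar logit for the target class
    score.backward()                                      # gradients w.r.t. the input pixels
    return x.grad.abs().squeeze(0).amax(dim=0)            # (H, W) saliency heatmap

# Usage on a toy CNN and a random stand-in for a synthetic image:
if __name__ == "__main__":
    toy_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
    heatmap = saliency_heatmap(toy_model, torch.rand(3, 32, 32), target_class=1)
    print(heatmap.shape)  # torch.Size([32, 32])
```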
Apart from these novel designs, we have also experimented with common methods. For example, with augmentative explanations, we study how much common XAI methods improve the predictive accuracy of a model. We further study XAI methods with respect to a popular Weakly Supervised Object Localization (WSOL) metric, MaxBoxAcc, and test the effect of a Neural-Backed Decision Tree with respect to the same metric. kaBEDONN is designed as a partial upgrade of SQANN, intended to provide easy-to-understand and easy-to-adjust explanations inspired by research on influential examples. Finally, we assert that the project concludes with a good balance of novelty and incremental improvements on (and validation of) existing XAI methods.
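For context, the sketch below computes a simplified MaxBoxAcc-style score: at each threshold the heatmap is binarized, the tightest box around the activated pixels is fitted, and box accuracy (IoU at least 0.5 against the ground-truth box) at the best threshold is reported. This is a simplification (the published metric uses the largest connected component rather than all activated pixels), and the function names and data layout are assumptions for illustration only.

```python
# Simplified MaxBoxAcc-style score (illustrative sketch, not the thesis's code).
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as (x0, y0, x1, y1)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def max_box_acc(heatmaps, gt_boxes, thresholds=np.linspace(0.1, 0.9, 9), iou_thr=0.5):
    """heatmaps: list of (H, W) arrays in [0, 1]; gt_boxes: list of (x0, y0, x1, y1)."""
    best = 0.0
    for t in thresholds:
        hits = 0
        for hm, gt in zip(heatmaps, gt_boxes):
            ys, xs = np.where(hm >= t)           # pixels activated at this threshold
            if len(xs) == 0:
                continue                          # nothing activated -> counts as a miss
            pred = (xs.min(), ys.min(), xs.max() + 1, ys.max() + 1)
            hits += iou(pred, gt) >= iou_thr
        best = max(best, hits / len(heatmaps))   # box accuracy at the best threshold
    return best
```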
Degree: | Doctor of Philosophy |
School: | Interdisciplinary Graduate School (IGS) |
Research Centre: | Alibaba-NTU Singapore Joint Research Institute |
Citation: | Tjoa, E. (2022). From medical imaging to explainable artificial intelligence. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/164493 |
DOI: | 10.32657/10356/164493 |
Funding: | RIE2020 AME Programmatic Fund, Singapore (No. A20G8b0102) |
License: | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). |