Towards robust explainability of deep neural networks against attribution attacks
Deep learning techniques have been rapidly developed and widely applied in various fields. However, the black-box nature of deep neural networks (DNNs) makes it difficult to understand their decision-making process, giving rise to the field of explainable artificial intelligence (XAI). Attribution m...
| Main Author: | Wang, Fan |
|---|---|
| Other Authors: | Kong Wai-Kin, Adams |
| Format: | Thesis-Doctor of Philosophy |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/175394 |
| Institution: | Nanyang Technological University |
Similar Items
- DEVELOPING A HOLISTIC EXPLAINABLE MACHINE LEARNING FRAMEWORK: DATA SCIENCE APPLICATIONS IN HEALTHCARE
  by: ONG MING LUN
  Published: (2021)
- Sampling with trusthworthy constraints: A variational gradient framework
  by: Liu, Xingchao, et al.
  Published: (2021)
- Towards robust deep learning models against corruptions
  by: Yi, Chenyu
  Published: (2024)
- ROBUSTNESS AND UNCERTAINTY ESTIMATION FOR DEEP NEURAL NETWORKS
  by: JAY NANDY
  Published: (2021)
- A deep neural network approach to predicting clinical outcomes of neuroblastoma patients
  by: Tranchevent, Léon-Charles, et al.
  Published: (2021)