Which neural network makes more explainable decisions? An approach towards measuring explainability

Neural networks are getting increasingly popular thanks to their exceptional performance in solving many real-world problems. At the same time, they have been shown to be vulnerable to attacks, difficult to debug and subject to fairness issues. To improve people's trust in the technology, it is often necessary to provide some human-understandable explanation of neural networks' decisions, e.g., why is it that my loan application is rejected whereas hers is approved? That is, a stakeholder would be interested in minimizing the chances of not being able to explain a decision consistently, and would like to know how often and how easily the decisions of a neural network can be explained before it is deployed. In this work, we provide two measurements of the decision explainability of neural networks. We then develop algorithms for automatically evaluating these measurements on user-provided neural networks. We evaluate our approach on multiple neural network models trained on benchmark datasets. The results show that existing neural networks' decisions often have low explainability according to our measurements. This is in line with the observation that adversarial samples, which are often hard to explain, can be easily generated through adversarial perturbation. Our further experiments show that the decisions of models trained with robust training are not necessarily easier to explain, whereas the decisions of models retrained with samples generated by our algorithms are easier to explain.
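The abstract notes that adversarial samples "can be easily generated through adversarial perturbation." The paper's own algorithms are not reproduced here; as a minimal illustration of the general idea, the sketch below applies a one-step fast-gradient-sign perturbation (FGSM, Goodfellow et al., 2015) to a toy linear classifier. The weights and the classifier itself are hypothetical stand-ins, not the models studied in the paper.

```python
import numpy as np

# Toy linear classifier: logits = W @ x + b. Weights are random and purely
# illustrative; they do not correspond to any model from the paper.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4))
b = rng.standard_normal(2)

def predict(x):
    return int(np.argmax(W @ x + b))

def fgsm(x, label, eps=0.3):
    """One-step fast-gradient-sign perturbation.

    For a linear model under cross-entropy loss, the input gradient is
    W^T (softmax(logits) - onehot(label)); stepping in its sign direction
    increases the loss within an L-infinity ball of radius eps.
    """
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()                        # softmax probabilities
    onehot = np.eye(len(p))[label]
    grad = W.T @ (p - onehot)           # d(loss)/dx for cross-entropy
    return x + eps * np.sign(grad)

x = rng.standard_normal(4)
x_adv = fgsm(x, predict(x))             # perturbed input, ||x_adv - x||_inf <= eps
```

Such perturbed inputs stay close to the original sample yet can change the model's decision, which is why they tend to resist consistent human explanation.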


Bibliographic Details
Main Authors: ZHANG, Mengdi, SUN, Jun, WANG, Jingyi
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2022
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/7160
https://ink.library.smu.edu.sg/context/sis_research/article/8163/viewcontent/Zhang2022_Article_WhichNeuralNetworkMakesMoreExp__1_.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-8163
record_format dspace
spelling sg-smu-ink.sis_research-8163 2022-05-27T08:20:59Z Which neural network makes more explainable decisions? An approach towards measuring explainability ZHANG, Mengdi SUN, Jun WANG, Jingyi Neural networks are getting increasingly popular thanks to their exceptional performance in solving many real-world problems. At the same time, they have been shown to be vulnerable to attacks, difficult to debug and subject to fairness issues. To improve people's trust in the technology, it is often necessary to provide some human-understandable explanation of neural networks' decisions, e.g., why is it that my loan application is rejected whereas hers is approved? That is, a stakeholder would be interested in minimizing the chances of not being able to explain a decision consistently, and would like to know how often and how easily the decisions of a neural network can be explained before it is deployed. In this work, we provide two measurements of the decision explainability of neural networks. We then develop algorithms for automatically evaluating these measurements on user-provided neural networks. We evaluate our approach on multiple neural network models trained on benchmark datasets. The results show that existing neural networks' decisions often have low explainability according to our measurements. This is in line with the observation that adversarial samples, which are often hard to explain, can be easily generated through adversarial perturbation. Our further experiments show that the decisions of models trained with robust training are not necessarily easier to explain, whereas the decisions of models retrained with samples generated by our algorithms are easier to explain.
2022-11-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/7160 info:doi/10.1007/s10515-022-00338-w https://ink.library.smu.edu.sg/context/sis_research/article/8163/viewcontent/Zhang2022_Article_WhichNeuralNetworkMakesMoreExp__1_.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Deep learning models Model interpretability Neural network testing OS and Networks Software Engineering
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Deep learning models
Model interpretability
Neural network testing
OS and Networks
Software Engineering
spellingShingle Deep learning models
Model interpretability
Neural network testing
OS and Networks
Software Engineering
ZHANG, Mengdi
SUN, Jun
WANG, Jingyi
Which neural network makes more explainable decisions? An approach towards measuring explainability
description Neural networks are getting increasingly popular thanks to their exceptional performance in solving many real-world problems. At the same time, they have been shown to be vulnerable to attacks, difficult to debug and subject to fairness issues. To improve people's trust in the technology, it is often necessary to provide some human-understandable explanation of neural networks' decisions, e.g., why is it that my loan application is rejected whereas hers is approved? That is, a stakeholder would be interested in minimizing the chances of not being able to explain a decision consistently, and would like to know how often and how easily the decisions of a neural network can be explained before it is deployed. In this work, we provide two measurements of the decision explainability of neural networks. We then develop algorithms for automatically evaluating these measurements on user-provided neural networks. We evaluate our approach on multiple neural network models trained on benchmark datasets. The results show that existing neural networks' decisions often have low explainability according to our measurements. This is in line with the observation that adversarial samples, which are often hard to explain, can be easily generated through adversarial perturbation. Our further experiments show that the decisions of models trained with robust training are not necessarily easier to explain, whereas the decisions of models retrained with samples generated by our algorithms are easier to explain.
format text
author ZHANG, Mengdi
SUN, Jun
WANG, Jingyi
author_facet ZHANG, Mengdi
SUN, Jun
WANG, Jingyi
author_sort ZHANG, Mengdi
title Which neural network makes more explainable decisions? An approach towards measuring explainability
title_short Which neural network makes more explainable decisions? An approach towards measuring explainability
title_full Which neural network makes more explainable decisions? An approach towards measuring explainability
title_fullStr Which neural network makes more explainable decisions? An approach towards measuring explainability
title_full_unstemmed Which neural network makes more explainable decisions? An approach towards measuring explainability
title_sort which neural network makes more explainable decisions? an approach towards measuring explainability
publisher Institutional Knowledge at Singapore Management University
publishDate 2022
url https://ink.library.smu.edu.sg/sis_research/7160
https://ink.library.smu.edu.sg/context/sis_research/article/8163/viewcontent/Zhang2022_Article_WhichNeuralNetworkMakesMoreExp__1_.pdf
_version_ 1770576234391011328