Boosting knowledge distillation and interpretability
Deep Neural Networks (DNNs) can be applied in many fields for classification and can achieve high accuracy. However, a DNN is a black box: it is hard to explain directly how the network arrives at a specific classification. The most widely accepted interpretable model is the decision tree. Although decision trees do not match deep neural networks in classification accuracy, they are more intuitive and interpretable. By combining a deep neural network with a decision tree, it is possible to expose the inner structure of the model without loss of accuracy. Distilling the knowledge from a DNN into a decision tree helps explain why certain inputs lead to specific outputs.
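For context, distillation of a DNN into a decision tree can be sketched as: train the network, relabel the training data with the network's predictions, and fit a tree on those predictions so the tree imitates the network. The snippet below is only a minimal sketch of that idea, assuming scikit-learn, the Iris dataset, and hard-label distillation; the dataset, models, and hyperparameters are illustrative and not the method developed in this thesis.

```python
# Minimal sketch: distil a neural network "teacher" into a decision tree "student".
# Assumptions (not from the thesis): scikit-learn, the Iris dataset, hard-label distillation.
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the teacher network on the true labels.
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
teacher.fit(X_train, y_train)

# 2. Relabel the training data with the teacher's predictions (its "knowledge").
teacher_labels = teacher.predict(X_train)

# 3. Fit an interpretable student tree to imitate the teacher.
student = DecisionTreeClassifier(max_depth=4, random_state=0)
student.fit(X_train, teacher_labels)

# 4. Report accuracy and fidelity (agreement between student and teacher).
print("teacher accuracy:", accuracy_score(y_test, teacher.predict(X_test)))
print("student accuracy:", accuracy_score(y_test, student.predict(X_test)))
print("student-teacher fidelity:", accuracy_score(teacher.predict(X_test), student.predict(X_test)))
```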
Saved in:
Main Author: | Song, Huan |
---|---|
Other Authors: | Ponnuthurai Nagaratnam Suganthan |
Format: | Thesis-Master by Coursework |
Language: | English |
Published: | Nanyang Technological University, 2021 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Pattern recognition |
Online Access: | https://hdl.handle.net/10356/150315 |
Institution: | Nanyang Technological University |
Language: | English |
id | sg-ntu-dr.10356-150315 |
---|---|
record_format | dspace |
spelling | sg-ntu-dr.10356-150315, 2023-07-04T16:15:18Z. Boosting knowledge distillation and interpretability. Song, Huan; supervisor: Ponnuthurai Nagaratnam Suganthan (EPNSugan@ntu.edu.sg), School of Electrical and Electronic Engineering. Subject: Engineering::Computer science and engineering::Computing methodologies::Pattern recognition. Degree: Master of Science (Computer Control and Automation). Deposited 2021-06-08T12:38:44Z; issued 2021. Thesis-Master by Coursework. Citation: Song, H. (2021). Boosting knowledge distillation and interpretability. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/150315. Language: en. Format: application/pdf. Publisher: Nanyang Technological University. |
institution | Nanyang Technological University |
building | NTU Library |
continent | Asia |
country | Singapore |
content_provider | NTU Library |
collection | DR-NTU |
language | English |
topic | Engineering::Computer science and engineering::Computing methodologies::Pattern recognition |
spellingShingle | Engineering::Computer science and engineering::Computing methodologies::Pattern recognition; Song, Huan; Boosting knowledge distillation and interpretability |
description | Deep Neural Networks (DNNs) can be applied in many fields for classification and can achieve high accuracy. However, a DNN is a black box: it is hard to explain directly how the network arrives at a specific classification. The most widely accepted interpretable model is the decision tree. Although decision trees do not match deep neural networks in classification accuracy, they are more intuitive and interpretable. By combining a deep neural network with a decision tree, it is possible to expose the inner structure of the model without loss of accuracy. Distilling the knowledge from a DNN into a decision tree helps explain why certain inputs lead to specific outputs. |
author2 | Ponnuthurai Nagaratnam Suganthan |
author_facet | Ponnuthurai Nagaratnam Suganthan; Song, Huan |
format | Thesis-Master by Coursework |
author | Song, Huan |
author_sort | Song, Huan |
title | Boosting knowledge distillation and interpretability |
title_short | Boosting knowledge distillation and interpretability |
title_full | Boosting knowledge distillation and interpretability |
title_fullStr | Boosting knowledge distillation and interpretability |
title_full_unstemmed | Boosting knowledge distillation and interpretability |
title_sort | boosting knowledge distillation and interpretability |
publisher | Nanyang Technological University |
publishDate | 2021 |
url | https://hdl.handle.net/10356/150315 |
_version_ | 1772826915497836544 |