Visualizing interpretations of deep neural networks

The evolution of Convolutional Neural Networks and newer approaches such as Vision Transformers have led to steadily better performance in computer vision. However, deep neural networks lack transparency and interpretability, which can have serious consequences in critical applications. Visualizing a network's interpretations can provide insight into its decision-making, help identify biases and errors, and reveal limitations in the model or its training data. This line of research is therefore significant for improving the transparency, interpretability, and trustworthiness of deep neural networks and for facilitating their deployment in critical domains.

This project builds a web application that facilitates interpretation of the ConvNeXt model, a state-of-the-art convolutional neural network. The application implements three techniques: maximally activating image patches, feature attribution visualisation with SmoothGrad, and adversarial perturbation visualisation with SmoothGrad. Maximally activating image patches show which input patterns most strongly activate a given channel in a layer. Feature attribution visualisation with SmoothGrad highlights the pixels most influential for the model's prediction, averaging gradient-based saliency maps over several noise-perturbed copies of the input to reduce visual noise. Adversarial perturbation visualisation with SmoothGrad lets users explore how the model reacts when the input image is perturbed. The report also discusses the results of experiments with these interpretability techniques on the ConvNeXt model.
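To make the SmoothGrad idea described in the abstract concrete, the sketch below averages input gradients over noise-perturbed copies of an image. This is a minimal illustration, not the project's actual code; the ConvNeXt-Tiny weights, the noise level, the sample count, and the target class are assumptions chosen for the example.

```python
# Minimal SmoothGrad sketch (assumed implementation, not the project's code):
# average the input gradient over several noise-perturbed copies of the image.
import torch
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

def smoothgrad_saliency(model, image, target_class, n_samples=25, noise_frac=0.15):
    model.eval()
    grads = torch.zeros_like(image)
    sigma = noise_frac * (image.max() - image.min())  # noise scaled to value range
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        model(noisy.unsqueeze(0))[0, target_class].backward()
        grads += noisy.grad
    # Collapse channels to a single H x W map of averaged gradient magnitudes.
    return (grads / n_samples).abs().max(dim=0).values

# Hypothetical usage: `img` stands in for a preprocessed 224x224 RGB input.
model = convnext_tiny(weights=ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
img = torch.rand(3, 224, 224)
saliency = smoothgrad_saliency(model, img, target_class=207)  # ImageNet class 207
```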

Bibliographic Details
Main Author: Ta, Quynh Nga
Other Authors: Li Boyang (boyang.li@ntu.edu.sg)
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering
Online Access:https://hdl.handle.net/10356/166663
School: School of Computer Science and Engineering
Degree: Bachelor of Science in Data Science and Artificial Intelligence
Project Code: SCSE22-0205
Collection: DR-NTU (NTU Library)
File Format: application/pdf
Citation: Ta, Q. N. (2023). Visualizing interpretations of deep neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/166663