Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks

Deep learning, especially deep neural networks (DNNs), is at the heart of the current rise of artificial intelligence, and the major breakthroughs of the last few years have been driven by DNNs. Recent works have demonstrated that DNNs are vulnerable to human-crafted adversarial examples, which look normal to human eyes. Such adversarial instances can fool DNNs into misbehaving as adversaries intend, with serious consequences for the many DNN-based applications in daily life. To this end, this thesis is dedicated to revealing the vulnerabilities of deep learning algorithms and developing defense strategies that combat adversaries effectively. We study current DNNs from a security perspective with two sides: attack and defense. On the attack front, we explore test-time attacks against DNNs with two types of adversarial examples: adversarial perturbations and adversarial patches. On the defense front, we develop solutions for defending against adversarial examples and investigate robustness-preserving distillation techniques.
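
The abstract mentions test-time adversarial perturbations; as a minimal illustration of the general idea (not of the attack methods developed in this thesis), the fast gradient sign method (FGSM) of Goodfellow et al. perturbs an input by one signed gradient step. The sketch below assumes a differentiable PyTorch classifier model, an image batch x with values in [0, 1], integer labels y, and a perturbation budget epsilon; all of these names are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=8 / 255):
        # One signed gradient step that increases the classification loss,
        # bounded in L-infinity norm by epsilon (a common perturbation budget).
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Project back to the valid image range so the result is still an image.
        return x_adv.clamp(0.0, 1.0).detach()

Comparing a model's accuracy on x against its accuracy on fgsm_perturb(model, x, y) gives a first, rough measure of the adversarial robustness the thesis sets out to enhance; stronger attacks (for example multi-step PGD) iterate this step with a projection onto the epsilon-ball.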

Bibliographic Details
Main Author: Bai, Tao
Other Authors: Jun Zhao, Wen Bihan
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: https://hdl.handle.net/10356/160963
DOI: 10.32657/10356/160963
License: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Institution: Nanyang Technological University