Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks

Bibliographic Details
Main Author: Bai, Tao
Other Authors: Jun Zhao
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access: https://hdl.handle.net/10356/160963
Institution: Nanyang Technological University
Description
Summary: Deep learning, especially deep neural networks (DNNs), is at the heart of the current rise of artificial intelligence, and the major breakthroughs of the last few years have been made by DNNs. Recent works have demonstrated that DNNs are vulnerable to human-crafted adversarial examples, which look normal to human eyes. Such adversarial instances can fool and mislead DNNs into misbehaving as adversaries intend, with serious consequences for the many DNN-based applications in our daily lives. To this end, this thesis is dedicated to revealing the vulnerabilities of deep learning algorithms and to developing defense strategies that combat adversaries effectively. We study current DNNs from a security perspective on two fronts: attack and defense. On the attack front, we explore the possibility of attacking DNNs at test time with two types of adversarial examples: adversarial perturbations and adversarial patches. On the defense front, we develop solutions to defend against adversarial examples and investigate robustness-preserving distillation techniques.