Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks
Deep learning, especially deep neural networks (DNNs), lies at the heart of the current rise of artificial intelligence, and the major breakthroughs of the last few years have been driven by DNNs. Recent works have demonstrated that DNNs are vulnerable to human-crafted adversarial examples, wh...
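The adversarial examples mentioned in the abstract can be illustrated with a minimal sketch of a single-step, FGSM-style perturbation. This is a generic illustration of the concept only; the `fgsm_attack` helper, the `epsilon` value, and the PyTorch setting are assumptions made for the example and are not the specific attacks or defenses developed in this thesis.

```python
# Minimal sketch: craft an adversarial example with one signed-gradient
# (FGSM-style) step. Illustrative assumption, not the thesis's own method.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return a perturbed copy of `x` that the model is more likely to misclassify."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)      # loss on the clean input
    loss.backward()                              # gradient of the loss w.r.t. the input
    x_adv = x_adv + epsilon * x_adv.grad.sign()  # step in the loss-increasing direction
    return torch.clamp(x_adv, 0.0, 1.0).detach() # keep pixel values in [0, 1]
```

Given a trained classifier `model` and a labelled batch `(x, y)`, calling `x_adv = fgsm_attack(model, x, y)` typically flips predictions even at small `epsilon`, which is the kind of vulnerability the thesis investigates.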
Main Author: Bai, Tao
Other Authors: Jun Zhao
Format: Thesis - Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/160963
Institution: Nanyang Technological University
Similar Items
- Towards deep neural networks robust to adversarial examples, by: Matyasko, Alexander. Published: (2020)
- Evaluation of adversarial attacks against deep learning models, by: Chua, Jonathan Wen Rong. Published: (2023)
- Adversarial robustness of deep reinforcement learning, by: Qu, Xinghua. Published: (2022)
- Generative adversarial network (GAN) for image synthesis, by: Hou, Boyu. Published: (2022)
- Attack on prediction confidence of deep learning neural networks, by: Ng, Garyl Xuan. Published: (2022)