Detecting adversarial samples for deep neural networks through mutation testing
Deep Neural Networks (DNNs) are adept at many tasks; image recognition, one of the best-known, typically uses a subset of DNNs called Convolutional Neural Networks (CNNs). However, DNNs are prone to adversarial attacks: malicious modifications made to input sam...
Main Author: Tan, Kye Yen
Other Authors: Chang Chip Hong
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2020
Subjects:
Online Access: https://hdl.handle.net/10356/138719
Institution: Nanyang Technological University
Similar Items
- Towards deep neural networks robust to adversarial examples
  by: Matyasko, Alexander
  Published: (2020)
- Demystifying adversarial attacks on neural networks
  by: Yip, Lionell En Zhi
  Published: (2020)
- Exploring the vulnerabilities and enhancing the adversarial robustness of deep neural networks
  by: Bai, Tao
  Published: (2022)
- Protecting neural networks from adversarial attacks
  by: Kwek, Jia Ying
  Published: (2020)
- Adversarial robustness of deep reinforcement learning
  by: Qu, Xinghua
  Published: (2022)