Investigating robustness of deep learning against adversarial examples
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2019
Subjects:
Online Access: https://hdl.handle.net/10356/136558
Institution: Nanyang Technological University
Summary: Deep learning has achieved unprecedented performance in various fields, such as Computer Vision. Deep neural networks have shown impressive results in solving complex problems, yet they remain vulnerable to adversarial attacks, which come in the form of subtle, often imperceptible perturbations. These perturbations, when added to the inputs, can cause models to predict incorrectly. In this report, we present the effects of adversarial perturbations that are restricted to their low-frequency subspace, using the MNIST and CIFAR-10 datasets. We also experimented with generating a universal perturbation restricted to its low-frequency subspace. The generated image-agnostic perturbation was also tested against a common adversarial defense method, JPEG compression, to observe the effectiveness of such defenses against the perturbation.
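The summary does not specify how the low-frequency restriction or the defense evaluation is implemented; those details are in the full report. As a rough illustration only, the sketch below projects a perturbation onto its low-frequency 2-D DCT coefficients and round-trips an image through JPEG compression. The cutoff `k`, the JPEG `quality`, and the helper names are illustrative assumptions, not the report's actual code.

```python
# Minimal sketch (not the report's implementation): restrict a perturbation to a
# low-frequency subspace via a 2-D DCT mask, then apply a JPEG-compression
# "defense" round-trip. The cutoff `k` and `quality` are assumed parameters.
from io import BytesIO

import numpy as np
from PIL import Image
from scipy.fft import dctn, idctn


def project_low_freq(delta: np.ndarray, k: int) -> np.ndarray:
    """Zero every DCT coefficient (i, j) with i + j >= k, keeping only low frequencies."""
    coeffs = dctn(delta, norm="ortho")
    h, w = coeffs.shape
    mask = np.add.outer(np.arange(h), np.arange(w)) < k
    return idctn(coeffs * mask, norm="ortho")


def jpeg_round_trip(img_uint8: np.ndarray, quality: int = 75) -> np.ndarray:
    """Encode and decode an image as JPEG, a common input-transformation defense."""
    buf = BytesIO()
    Image.fromarray(img_uint8).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))


if __name__ == "__main__":
    # MNIST-sized example: a random image plus a low-frequency-restricted perturbation.
    image = np.random.randint(0, 256, size=(28, 28)).astype(np.float32)
    delta = project_low_freq(np.random.uniform(-16, 16, size=(28, 28)), k=8)
    adversarial = np.clip(image + delta, 0, 255).astype(np.uint8)
    defended = jpeg_round_trip(adversarial, quality=75)
    print(defended.shape, defended.dtype)
```

Masking high-frequency DCT coefficients is one common way to obtain low-frequency perturbations; the report's universal-perturbation procedure and its evaluation against JPEG compression may differ from this sketch.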