Investigating robustness of deep learning against adversarial examples

Deep learning has achieved unprecedented performance in many fields, such as computer vision. Deep neural networks have shown impressive results on complex problems, yet they remain vulnerable to adversarial attacks: subtle, often imperceptible perturbations that, when added to the inputs, cause models to predict incorrectly. In this report, we present the effects of adversarial perturbations restricted to their low-frequency subspace on the MNIST and CIFAR-10 datasets. We also experiment with generating a universal perturbation restricted to its low-frequency subspace, and we test the resulting image-agnostic perturbation against a common adversarial defense, JPEG compression, to observe how effective such defenses are against it.
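
The record describes the work only at the level of the abstract above. As a rough illustration of what "restricting a perturbation to its low-frequency subspace" can mean in practice, the sketch below projects a 2-D perturbation onto its lowest 2-D DCT coefficients. This is a plausible reading, not the thesis's confirmed construction; the function name `project_low_freq` and the cutoff `keep=8` are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def project_low_freq(perturbation, keep=8):
    """Zero all but the lowest keep x keep 2-D DCT coefficients,
    i.e. project the perturbation onto a low-frequency subspace.
    (Illustrative sketch; the thesis's exact construction may differ.)"""
    coeffs = dctn(perturbation, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0  # low frequencies occupy the top-left corner
    return idctn(coeffs * mask, norm="ortho")

# Example: restrict a random perturbation on a 28x28 (MNIST-sized) image
delta = np.random.uniform(-0.3, 0.3, size=(28, 28))
delta_lowfreq = project_low_freq(delta, keep=8)
```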

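The abstract also mentions testing the universal perturbation against JPEG compression, a common preprocessing defense. A minimal sketch of such a defense, assuming Pillow is installed, is the JPEG round-trip below; the function name `jpeg_defense` and `quality=75` are illustrative choices, not parameters taken from the thesis.

```python
import io
import numpy as np
from PIL import Image

def jpeg_defense(image_uint8, quality=75):
    """Round-trip an image through JPEG compression. Such defenses
    tend to strip high-frequency adversarial noise, which is why a
    perturbation confined to low frequencies is an interesting test case.
    (Illustrative sketch; quality=75 is an assumed setting.)"""
    buf = io.BytesIO()
    Image.fromarray(image_uint8).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

# Example: apply the defense to a CIFAR-10-sized RGB image
x = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
x_defended = jpeg_defense(x)
```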

Bibliographic Details
Main Author: Chua, Shan Jing
Other Authors: Jun Zhao (School of Computer Science and Engineering, junzhao@ntu.edu.sg)
Format: Final Year Project (FYP)
Degree: Bachelor of Engineering (Computer Science)
Language: English
Published: Nanyang Technological University, 2019
Subjects: Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/136558