Demystifying adversarial attacks on neural networks
The prevalent use of neural networks for classification tasks has drawn attention to the security and integrity of the neural networks that industries rely on. Adversarial examples remain easy for humans to classify correctly, yet neural networks struggle to classify images correctly in the presence of adversarial perturbations. I introduce a framework for understanding how neural networks perceive inputs and how that perception relates to adversarial attack methods. I show that there is no correlation between the region of importance and the region of attack. I show that, across a class in a data set, adversarial examples share a frequently perturbed region. Finally, I attempt to improve classification performance by exploiting the differences between clean inputs and adversarial attacks, and I demonstrate a novel augmentation method that improves prediction performance on adversarial samples.
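The abstract does not specify which attack or attribution method the thesis uses, so the sketch below is only an illustration of the comparison it describes: it crafts an FGSM adversarial example, treats a gradient-saliency map as the "region of importance", treats the perturbation magnitude as the "region of attack", and measures how much the two overlap. The model, epsilon value, and top-k overlap metric are assumptions made for this example, not the thesis's actual setup.

```python
# Illustrative sketch only: FGSM perturbation vs. gradient-saliency
# "region of importance". Attack, saliency method, model, and overlap
# metric are assumptions for illustration, not the thesis's method.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_example(model, x, y, eps=0.03):
    """Generate an FGSM adversarial example (sign of the input gradient)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def saliency_map(model, x, y):
    """Region of importance: absolute input gradient of the true-class score."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, y.item()]
    score.backward()
    return x.grad.abs().sum(dim=1)  # collapse channels -> (1, H, W)

def region_overlap(perturbation, saliency, top_frac=0.1):
    """Fraction of the most-perturbed pixels that are also among the most salient."""
    k = max(1, int(top_frac * perturbation.numel()))
    top_pert = torch.topk(perturbation.flatten(), k).indices
    top_sal = torch.topk(saliency.flatten(), k).indices
    return len(set(top_pert.tolist()) & set(top_sal.tolist())) / k

if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()   # untrained stand-in model
    x = torch.rand(1, 3, 224, 224)                 # stand-in "image"
    y = torch.tensor([0])
    x_adv = fgsm_example(model, x, y)
    attack_region = (x_adv - x).abs().sum(dim=1)   # region of attack
    print(f"top-10% region overlap: {region_overlap(attack_region, saliency_map(model, x, y)):.3f}")
```

Under the same assumptions, averaging the per-example perturbation masks over all adversarial examples of one class would approximate the "frequently perturbed region" the abstract refers to.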
Main Author: Yip, Lionell En Zhi
Other Authors: Anupam Chattopadhyay (School of Computer Science and Engineering; Parallel and Distributed Computing Centre; anupam@ntu.edu.sg)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2020
Degree: Bachelor of Engineering (Computer Science)
Project Code: SCSE19-0306
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/137946
Institution: Nanyang Technological University