Attack on training effort of deep learning
Main Author: Ho, Tony Man Tung
Other Authors: Liu Yang (School of Computer Science and Engineering)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2022
Degree: Bachelor of Engineering (Computer Science)
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/156741
Institution: Nanyang Technological University
Citation: Ho, T. M. T. (2022). Attack on training effort of deep learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/156741
Abstract
The objective of this project is to extend a previous study of an adversarial rain attack on state-of-the-art deep neural networks (DNNs) that hinders image classification and object detection. DNNs are known to be vulnerable to adversarial attacks, which can take many forms but generally add some perturbation to an image intended to fool the DNN into misclassifying it. While there are many other popular adversarial attacks, such as the Fast Gradient Sign Method (FGSM), Limited-memory BFGS (L-BFGS), and Generative Adversarial Networks (GANs), this project focuses mainly on the adversarial rain attack.
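For context on the gradient-based attacks named above (not the rain attack this project studies), here is a minimal FGSM sketch in PyTorch; `model`, `image`, and `label` are assumed inputs, not names from the project's attack script:

```python
import torch

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step FGSM: perturb the input in the direction of the sign
    of the loss gradient so as to maximise the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step of size epsilon along the gradient sign, then clamp to a valid image range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```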
Rain is also known to pose a threat to DNN-based perception systems such as video surveillance, autonomous driving, and unmanned aerial vehicles (UAVs), and can cause serious safety issues for the user when the perturbation introduced by rain leads to a misclassification.
An attack script that uses factor-aware rain generation was used to render rain streaks on the individual frames of a video, which were then used for the adversarial rain attack. The detection confidence on each frame before and after the attack was then compared, making the effect of the attack easy to visualise. The attack script performed as expected and succeeded in reducing the overall recognition confidence. While some objects in certain frames are still detected after the attack by the Faster R-CNN model with a VGG16 backbone, their confidence scores are lowered, indicating that the attack was at least partially successful. This can serve as a baseline for future research into similar attacks and into better defensive countermeasures against them.
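As a rough illustration of rendering rain streaks onto a frame, a minimal sketch using motion-blurred sparse noise as the streak model; the project's factor-aware rain generation is more sophisticated, and all names and parameters here are illustrative assumptions:

```python
import cv2
import numpy as np

def add_rain_streaks(frame, density=0.002, length=15, angle=-10, brightness=0.8):
    """Overlay simple synthetic rain streaks on a frame.
    Streaks are sparse random noise smeared along a rotated motion-blur kernel."""
    h, w = frame.shape[:2]
    noise = (np.random.rand(h, w) < density).astype(np.float32)
    # Motion-blur kernel: a horizontal line of the given length, rotated to the rain angle.
    kernel = np.zeros((length, length), np.float32)
    kernel[length // 2, :] = 1.0 / length
    rot = cv2.getRotationMatrix2D((length / 2, length / 2), angle, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    streaks = cv2.filter2D(noise, -1, kernel)
    rain = streaks[..., None] * 255.0 * brightness
    return np.clip(frame.astype(np.float32) + rain, 0, 255).astype(np.uint8)
```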
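And a sketch of the before/after confidence comparison, using torchvision's stock Faster R-CNN (ResNet-50 FPN backbone) as a stand-in, since the project's VGG16-backed model is not publicly specified here:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Stand-in detector; the project used Faster R-CNN with a VGG16 backbone.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detection_confidences(frame_rgb, threshold=0.5):
    """Return the confidence scores of detections above a threshold for one frame."""
    out = model([to_tensor(frame_rgb)])[0]
    return out["scores"][out["scores"] > threshold].tolist()

# Hypothetical usage: compare confidences on a clean vs. rain-attacked frame.
# clean_scores = detection_confidences(clean_frame)
# attacked_scores = detection_confidences(add_rain_streaks(clean_frame))
```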