Attack on training effort of deep learning
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/156741
Institution: Nanyang Technological University
Summary: Abstract
The objective of this project is to extend a previous study of an adversarial rain attack on state-of-the-art deep neural networks (DNNs) that hinders image classification and object detection. DNNs are known to be vulnerable to adversarial attacks. These attacks take many forms, but generally involve adding some form of perturbation to an image that is intended to fool the DNN into misclassifying it. While there are other popular adversarial attacks, such as the Fast Gradient Sign Method (FGSM), Limited-memory BFGS (L-BFGS), and Generative Adversarial Networks (GANs), this project focuses mainly on the adversarial rain attack.
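To make the notion of a gradient-based perturbation concrete, the following is a minimal sketch of FGSM on a toy logistic-regression model. The model, its weights, and the input are illustrative placeholders, not anything from this project; the only point is the FGSM update itself, x_adv = x + ε · sign(∇ₓL).

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: move every input feature by
    epsilon in the direction that increases the loss."""
    return x + epsilon * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: logistic regression p(y=1|x) = sigmoid(w.x + b).
# Weights here are illustrative, not trained values.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def loss_and_grad(x, y):
    """Binary cross-entropy loss and its gradient w.r.t. the input x.
    For logistic regression, dL/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w
    return loss, grad_x

x = np.array([0.2, -0.4, 0.9])   # clean input, true label y = 1
y = 1.0

loss_clean, grad = loss_and_grad(x, y)
x_adv = fgsm_perturb(x, grad, epsilon=0.3)
loss_adv, _ = loss_and_grad(x_adv, y)

# The perturbed input should incur a strictly higher loss.
print(loss_clean < loss_adv)  # → True
```

The same single-step update underlies attacks on image classifiers, where the gradient is taken with respect to the pixels rather than a three-element feature vector.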
Rain has also been known to pose a threat to DNN-based perception systems such as video surveillance, autonomous driving, and unmanned aerial vehicles (UAVs), and can cause serious safety issues for the user when a misclassification occurs due to perturbations introduced by the rain.
An attack script that uses factor-aware rain generation was employed to render rain streaks on the individual frames of a video, which were then used for the adversarial rain attack. The confidence scores of the detections before and after the attack were then compared, allowing us to clearly visualise the effect of the attack. The attack script performed as expected and successfully reduced the overall recognition confidence. While some objects in certain frames of the attacked video are still detected by the Faster R-CNN model with a VGG16 backbone, their confidence scores are lowered, showing that the attack was at least partially successful. This can serve as a baseline for future research into similar attacks, with the aim of devising better defensive countermeasures against them.
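As a rough illustration of rendering rain streaks onto a frame, the sketch below smears sparse random seed pixels vertically and blends them additively with a grayscale image. This is a deliberately simplified toy, not the factor-aware rain generation used in the project; the function name, parameters, and the flat-grey stand-in frame are all assumptions for illustration.

```python
import numpy as np

def render_rain_streaks(frame, density=0.002, length=9, intensity=0.8, seed=0):
    """Overlay simple synthetic rain streaks on a grayscale frame in [0, 1].

    Sparse bright pixels are stretched vertically by shift-and-accumulate
    to imitate streaks, then blended additively with the frame. A toy
    stand-in for a real rain-generation model.
    """
    rng = np.random.default_rng(seed)
    h, w = frame.shape
    # Sparse seed points where streaks start.
    seeds = (rng.random((h, w)) < density).astype(float)
    # Vertical motion blur: accumulate shifted copies of the seed map.
    streaks = np.zeros_like(frame)
    for dy in range(length):
        streaks[dy:, :] += seeds[: h - dy, :]
    streaks = np.clip(streaks, 0.0, 1.0) * intensity
    # Additive blend, clipped back to the valid intensity range.
    return np.clip(frame + streaks, 0.0, 1.0)

frame = np.full((64, 64), 0.3)   # flat grey stand-in for a video frame
rainy = render_rain_streaks(frame)
print(rainy.shape)  # → (64, 64)
```

In the project's pipeline, each such rain-rendered frame would then be fed to the detector and its confidence scores compared against those from the clean frame.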