Evaluation of adversarial attacks against deep learning models
Main Author: Chua, Jonathan Wen Rong
Other Authors: Zhang Tianwei; Li Guanlin; School of Computer Science and Engineering
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/171835
Institution: Nanyang Technological University
Citation: Chua, J. W. R. (2023). Evaluation of adversarial attacks against deep learning models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/171835
Description:
Machine learning models have become increasingly prevalent in our day-to-day lives and remain useful for tasks in fields such as Computer Vision and Natural Language Processing. However, they are also increasingly targeted by adversaries, who aim to degrade their effectiveness and render them unreliable and unpredictable. Hence, there is a need to improve the robustness of current machine learning models to deter adversarial attacks.
Existing defences have proven useful in deterring known attacks such as the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD) and Carlini & Wagner (C&W). However, adaptive attacks such as Backward Pass Differentiable Approximation (BPDA) and AutoAttack (AA) have recently been shown to counteract existing defence techniques, rendering them ineffective.
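For readers unfamiliar with these attacks, the sketch below shows the Fast Gradient Sign Method in PyTorch. It is a minimal illustration assuming a generic classifier (`model`), an input batch (`images`, `labels`) and a perturbation budget (`epsilon`); none of these names are taken from the project's code.

```python
# Minimal FGSM sketch (illustrative; not the project's implementation).
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Perturb a batch of images with one signed-gradient step of size epsilon."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that most increases the classification loss.
    adv_images = images + epsilon * images.grad.sign()
    # Keep pixel values inside the valid [0, 1] image range.
    return torch.clamp(adv_images, 0.0, 1.0).detach()
```

PGD can be viewed as this step applied iteratively, with the result projected back into an epsilon-ball around the original image after each iteration.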
In this project, we focus on adversarial defences in the field of Computer Vision. In our experiments, we employed various input preprocessing techniques as defences, such as JPEG compression, Total Variance Minimization (TVM), Spatial Smoothing, Bit-depth Reduction, Principal Component Analysis (PCA) and Pixel Deflection, to remove adversarial perturbations from input data. These defences were evaluated on ResNet-20 and ResNet-56 networks trained on the CIFAR-10 and CIFAR-100 datasets, with image inputs adversarially perturbed by several known attacks such as C&W, PGD and AA.
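As an illustration of the input-preprocessing idea, the sketch below round-trips an image through JPEG compression and reduces its bit depth before it is passed to the classifier. The function names, quality setting and bit count are assumptions made for this example, not values reported in the project.

```python
# Illustrative input-preprocessing defences (hypothetical helper names).
import io
import numpy as np
from PIL import Image

def jpeg_compress(image_array, quality=75):
    """Re-encode an HxWx3 uint8 image through JPEG at the given quality."""
    buffer = io.BytesIO()
    Image.fromarray(image_array).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    # Decoding discards high-frequency detail, which tends to include
    # small adversarial perturbations.
    return np.array(Image.open(buffer))

def reduce_bit_depth(image_array, bits=3):
    """Quantise each 8-bit channel down to `bits` bits and rescale to 0-255."""
    levels = 2 ** bits - 1
    quantised = np.round(image_array.astype(np.float32) / 255.0 * levels)
    return (quantised / levels * 255.0).astype(np.uint8)
```

Both transforms are applied to the (possibly perturbed) input before it reaches the classifier and require no retraining of the model, which is what makes this family of defences attractive but also vulnerable to adaptive attacks such as BPDA.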