Evaluation of adversarial attacks against deep learning models
The rapid development of deep learning techniques has made them useful in many applications. However, recent studies have shown that deep learning algorithms can be vulnerable to adversarial attacks, which is a serious concern when these algorithms are considered for safety-critical applications. To further improve the defenses of deep learning algorithms, the threats posed by adversarial attacks need to be studied.

In this project, the effectiveness of adversarial attacks on deep learning models was evaluated under different criteria: attack method, model structure, and learning task. The experimental results showed that attack effectiveness depends on the type of attack, the source model structure, and the target model structure. The results also indicated that adversarial training is not the best defense against every type of attack, and that adversarial examples are effective not only in computer vision tasks but also in audio classification.
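For context only (not part of the catalogue record): the abstract refers to standard gradient-based attack methods. The sketch below shows one such method, the Fast Gradient Sign Method (FGSM), assuming PyTorch and a generic image classifier; the record does not specify which framework or attack implementations the project actually used.

```python
# Illustrative sketch only, not taken from the project: the Fast Gradient Sign Method
# (FGSM), one common attack of the kind the abstract alludes to. PyTorch and a generic
# image classifier are assumed.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb inputs x along the sign of the loss gradient to fool the classifier."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Comparing a model's accuracy on such perturbed inputs against its clean accuracy, across different source and target models, is broadly the kind of evaluation the abstract describes.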
Main Author: | Ta, Anh Duc |
---|---|
Other Authors: | Zhang Tianwei, School of Computer Science and Engineering |
Format: | Final Year Project (FYP) |
Language: | English |
Published: | Nanyang Technological University, 2022 |
Degree: | Bachelor of Engineering (Computer Science) |
Project: | SCSE21-0250 |
Subjects: | Engineering::Computer science and engineering |
Online Access: | https://hdl.handle.net/10356/156516 |
Citation: | Ta, A. D. (2022). Evaluation of adversarial attacks against deep learning models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/156516 |
Institution: | Nanyang Technological University |