Evaluation of adversarial attacks against deep learning models
As artificial intelligence (AI) has grown in popularity over the years, the application of AI and deep learning models in everyday life has become more prevalent, leading to increased usage of and reliance on AI. This gives attackers a greater incentive to trick deep learning models into producing false results for their own benefit, leaving the models more susceptible to adversarial attacks and threatening their stability and robustness. This report replicates several known adversarial attacks against popular deep learning models and evaluates their performance. Experimental results show that while certain defenses are effective against specific adversarial attacks, none provides comprehensive protection against all threats.
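As background for the kind of attack the report evaluates: an adversarial attack perturbs an input slightly so that a trained model misclassifies it. Below is a minimal sketch of one widely known attack, the Fast Gradient Sign Method (FGSM); the PyTorch framing, function name, and parameters are illustrative assumptions, not details taken from the report.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Illustrative FGSM sketch (not from the report).
    # Enable gradient tracking on a detached copy of the input batch.
    x = x.clone().detach().requires_grad_(True)
    # The attacker wants to increase the model's classification loss.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step epsilon in the gradient-sign direction (the fastest
    # loss-increasing step under an L-infinity budget), then keep
    # pixel values in the valid [0, 1] range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

For example, `fgsm_attack(model, images, labels, epsilon=8/255)` would return a perturbed batch whose predictions can be compared against the clean ones to measure attack success.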
Main Author: Chua, Wenjun
Other Authors: Zhang Tianwei
School: School of Computer Science and Engineering
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science
Online Access: https://hdl.handle.net/10356/175064
Institution: Nanyang Technological University
Project code: SCSE23-0071
Citation: Chua, W. (2024). Evaluation of adversarial attacks against deep learning models. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175064