Evaluation of adversarial attacks against deep learning models

Bibliographic Details
Main Author: Chua, Wenjun
Other Authors: Zhang Tianwei
Format: Final Year Project
Language:English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175064
Description
Summary: As artificial intelligence (AI) has grown in popularity over the years, the application of AI and deep learning models to make our lives easier has become more prevalent, leading to greater usage of and reliance on AI. This growing reliance gives attackers a stronger incentive to trick deep learning models into producing false results for their own benefit, leaving the models more susceptible to adversarial attacks and threatening their stability and robustness. This report replicates various known adversarial attacks against popular deep learning models and evaluates their performance. Experimental results show that while certain defenses are effective against specific adversarial attacks, none provides comprehensive protection against all threats.
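To illustrate the kind of attack the report evaluates, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one classic adversarial attack; the record does not specify which attacks the project replicated, so this example, its toy linear model, and the chosen epsilon are illustrative assumptions, not the report's actual setup.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM step: shift the input by eps in the sign direction of the
    loss gradient with respect to the input."""
    return x + eps * np.sign(grad)

# Toy binary classifier (stand-in for a deep model): score = w . x,
# predict class 1 if the score is positive.
w = np.array([0.5, -0.25, 1.0])
x = np.array([0.2, 0.4, 0.1])   # clean input, correctly classified as 1
assert w @ x > 0

# For this linear score with true label 1, increasing the loss means
# pushing x against w, so the input-gradient of the loss is -w.
grad = -w
x_adv = fgsm_perturb(x, grad, eps=0.3)

# A small, bounded perturbation flips the model's decision.
print("clean score:", w @ x)        # positive
print("adversarial score:", w @ x_adv)  # negative
```

The same sign-of-gradient step applies to deep networks, where the gradient is obtained by backpropagation through the loss; the evaluation in the report would measure how often such perturbations change model predictions, with and without defenses.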