Evaluation of adversarial attacks against deep learning models
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online access: https://hdl.handle.net/10356/175064
Summary: As artificial intelligence (AI) has grown in popularity over the years, applications of AI and deep learning models that make our lives easier have become more prevalent, increasing our usage of and reliance on AI. This gives attackers a greater incentive to trick deep learning models into producing false results for their benefit, making the models more susceptible to adversarial attacks and threatening their stability and robustness. This report replicates various known adversarial attacks against popular deep learning models and evaluates their performance. Experimental results show that while certain defenses are effective against specific adversarial attacks, none provides comprehensive protection against all threats.
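To illustrate the kind of attack the report evaluates, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the best-known adversarial attacks. The report does not specify which attacks it replicates, so this example is illustrative only; for clarity it targets a toy logistic-regression classifier with a hand-derived gradient rather than a deep network, and the weights and inputs are made up.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, w, b, y, epsilon):
    """FGSM against a logistic-regression model p(y=1|x) = sigmoid(w.x + b).

    The adversarial example is x' = x + epsilon * sign(grad_x loss):
    each feature is nudged by epsilon in the direction that most
    increases the classification loss.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    # Gradient of the binary cross-entropy loss w.r.t. the input x is (p - y) * w
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(g) for xi, g in zip(x, grad)]

# Toy demo (hypothetical weights/input): a point correctly classified
# as class 1 is pushed across the decision boundary.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1
x_adv = fgsm_attack(x, w, b, y, epsilon=0.6)
```

The same principle scales to deep networks, where the input gradient is obtained by backpropagation instead of a closed-form expression; defenses such as adversarial training then try to blunt exactly this gradient signal.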