Evaluation of adversarial attacks against deep learning models
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175064
Institution: Nanyang Technological University
Language: English
Abstract: As artificial intelligence (AI) has grown in popularity over the years, applications of AI and deep learning models that make our lives easier have become more prevalent, increasing our usage of and reliance on AI. This gives attackers a greater incentive to trick deep learning models into producing false results for their own benefit, leaving the models more susceptible to adversarial attacks and threatening their stability and robustness. This report replicates various known adversarial attacks against popular deep learning models and evaluates their performance. Experimental results show that while certain defenses are effective against specific adversarial attacks, none provides comprehensive protection against all threats.
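The report does not list its attacks here, but gradient-based methods such as the Fast Gradient Sign Method (FGSM) are among the best-known adversarial attacks in this area. A minimal sketch of FGSM follows, using a toy logistic-regression "model" in NumPy purely for illustration (the model, weights, and epsilon value are assumptions, not taken from the report):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: perturb x in the direction that
    increases the loss, i.e. x_adv = x + eps * sign(grad_x L(x, y)).
    For binary cross-entropy on a logistic model, the gradient of the
    loss with respect to the input x is (p - y) * w."""
    p = sigmoid(w @ x + b)          # model's confidence on the clean input
    grad_x = (p - y) * w            # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

# toy model: weights chosen so the clean input is classified positive
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # clean input with true label 1
y = 1.0

clean_p = sigmoid(w @ x + b)        # confidence before the attack
x_adv = fgsm(x, y, w, b, eps=0.5)
adv_p = sigmoid(w @ x_adv + b)      # confidence after the attack (lower)
```

A small sign-following perturbation is enough to push the model's confidence in the correct label down, which is the effect the attacks evaluated in the report exploit at much larger scale against deep networks.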