Defense on unrestricted adversarial examples

Bibliographic Details
Main Author: Sim, Chee Xian
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Online Access: https://hdl.handle.net/10356/165839
Description
Abstract: Deep Neural Networks (DNN) and Deep Learning (DL) have led to advancements in various fields, including learning algorithms such as Reinforcement Learning (RL). These advancements have produced new algorithms like Deep Reinforcement Learning (DRL), which achieve strong performance in domains such as image recognition and video game playing. However, DRL models are vulnerable to adversarial attacks that can lead to catastrophic results. A white-box attack such as the Fast Gradient Sign Method (FGSM) can significantly degrade a model's performance even with small perturbations. The most common defense against such attacks is adversarial training, which produces neural networks that are robust to them. In this paper, we explore the use of Bayesian Neural Networks (BNN) with a Proximal Policy Optimization (PPO) model to defend against adversarial attacks.
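
The FGSM attack mentioned in the abstract perturbs an input along the sign of the loss gradient, x_adv = x + epsilon * sign(grad_x loss). Below is a minimal sketch of that perturbation, assuming a differentiable PyTorch model, an input tensor x, a label y, and a loss function; the function name, parameters, and epsilon value are illustrative and not taken from the project itself:

    import torch

    def fgsm_perturb(model, x, y, loss_fn, epsilon=0.01):
        # Compute the loss gradient with respect to the input.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon:
        # x_adv = x + epsilon * sign(grad_x loss)
        return (x_adv + epsilon * x_adv.grad.sign()).detach()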