Defense on unrestricted adversarial examples


Full Description

Bibliographic Details
Main Author: Sim, Chee Xian
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/165839
Institution: Nanyang Technological University
Description
Summary: Deep Neural Networks (DNN) and Deep Learning (DL) have led to advancements in various fields, including learning algorithms such as Reinforcement Learning (RL). These advancements have produced new algorithms like Deep Reinforcement Learning (DRL), which can achieve strong performance in fields such as image recognition and playing video games. However, DRL models are vulnerable to adversarial attacks that could lead to catastrophic results. A white-box attack, such as the Fast Gradient Sign Method (FGSM) attack, can significantly degrade a model's performance even with small perturbations. The most common defense against such attacks is adversarial training, which produces neural networks that are robust to these attacks. In this paper, we explore the use of Bayesian Neural Networks (BNN) in a Proximal Policy Optimization (PPO) model to defend against adversarial attacks.
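
For context, FGSM crafts an adversarial input by stepping the input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇_x L). The sketch below is a minimal, hypothetical PyTorch illustration of that step against a policy network; the names `policy`, `state`, `action`, and `epsilon` are assumptions made for illustration and are not taken from the thesis.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(policy, state, action, epsilon=0.01):
    """Minimal FGSM sketch: nudge the input state in the direction of the
    gradient sign so that the policy's loss on the taken action increases."""
    state = state.clone().detach().requires_grad_(True)
    logits = policy(state)                   # policy network outputs action logits
    loss = F.cross_entropy(logits, action)   # loss w.r.t. the action actually taken
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x loss), the FGSM perturbation
    adv_state = state + epsilon * state.grad.sign()
    return adv_state.detach()
```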