Defense on unrestricted adversarial examples
Saved in:

Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/165839
Institution: Nanyang Technological University
Summary: Deep Neural Networks (DNNs) and Deep Learning (DL) have led to advancements in various fields, including learning algorithms such as Reinforcement Learning (RL). These advancements have produced new algorithms such as Deep Reinforcement Learning (DRL), which achieves strong performance in tasks such as image recognition and playing video games. However, DRL models are vulnerable to adversarial attacks that could lead to catastrophic results. A white-box attack, such as the Fast Gradient Sign Method (FGSM), can significantly degrade model performance even with small perturbations. The most common defense against such attacks is adversarial training, which produces neural networks that are robust to these attacks. In this paper, we explore the use of Bayesian Neural Networks (BNNs) on a Proximal Policy Optimization (PPO) model to defend against adversarial attacks.
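The summary describes FGSM perturbing inputs with small gradient-sign steps. As context, here is a minimal FGSM sketch on a toy logistic-regression model; this is an illustrative assumption, not the project's actual PPO policy or code, and all names (`fgsm_perturb`, the weights, `eps`) are invented for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM step: x_adv = x + eps * sign(dL/dx).

    Uses binary cross-entropy loss on a linear logit w.x + b,
    for which the input gradient is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w          # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=4)            # toy model weights
b = 0.1
x = rng.normal(size=4)            # clean input
y = 1.0                           # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.05)
# every feature moves by exactly eps in the gradient-sign direction
print(np.max(np.abs(x_adv - x)))
```

The point the abstract makes is visible here: the perturbation is bounded by `eps` per feature, yet it is chosen in the loss-increasing direction, which is what makes even small `eps` damaging to a trained model.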