Defense on unrestricted adversarial examples

Deep Neural Networks (DNN) and Deep Learning (DL) have led to advancements in various fields, including learning algorithms such as Reinforcement Learning (RL). These advancements have produced new algorithms like Deep Reinforcement Learning (DRL), which achieves strong performance in tasks such as image recognition and playing video games. However, DRL models are vulnerable to adversarial attacks that can lead to catastrophic results. A white-box attack such as the Fast Gradient Sign Method (FGSM) can significantly degrade a model's performance even with small perturbations. The most common defense against such attacks is adversarial training, which builds neural networks that are robust to them. In this paper, we explore the use of Bayesian Neural Networks (BNN) on a Proximal Policy Optimization (PPO) model to defend against adversarial attacks.
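The attack named in the abstract, the Fast Gradient Sign Method, has a simple closed form: x_adv = x + ε · sign(∇x L(θ, x, y)). Below is a minimal sketch of that single step, assuming a PyTorch classifier trained with cross-entropy; the model, loss, and epsilon here are illustrative stand-ins, not the project's actual PPO setup.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    # One FGSM step: perturb the input in the direction of the sign of
    # the loss gradient with respect to the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()                              # d(loss)/d(input)
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamping assumes image-like inputs normalized to [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()

Evaluating the model on x_adv instead of x is what the abstract means by FGSM degrading performance even with small perturbations; adversarial training, the common defense it mentions, folds such x_adv samples back into the training set.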

Bibliographic Details
Main Author: Sim, Chee Xian
Other Authors: Jun Zhao (School of Computer Science and Engineering, junzhao@ntu.edu.sg)
Format: Final Year Project (FYP)
Degree: Bachelor of Engineering (Computer Engineering)
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering
Online Access: https://hdl.handle.net/10356/165839
Citation: Sim, C. X. (2023). Defense on unrestricted adversarial examples. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/165839
Institution: Nanyang Technological University