Adversarial examples in neural networks
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175179
Institution: Nanyang Technological University
Summary: In recent years, advances in areas such as computer vision and natural language processing have gradually exposed deep learning technology to security risks.
Adversarial examples are one such risk: inputs to machine learning models crafted for the purpose of causing the models to make mistakes.
These examples are created through modifications to the input data that are imperceptible to the human eye, yet can alter the model's output significantly, resulting in an abnormal prediction.
Many research works focus on generating transferable adversarial examples and on designing defence methods to protect networks against them.
This project explores various attack as well as defence techniques that are currently in place. Through the analysis of these attack and defence techniques, a defence method is proposed to aid in the defence against adversarial examples.
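The idea of an imperceptible modification that flips a model's output can be sketched with the Fast Gradient Sign Method (FGSM), one common attack of the kind the summary describes. The tiny linear classifier below is an illustrative assumption, not the project's own model: FGSM nudges each input feature by a small step `eps` in the direction that increases the loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed linear classifier (stand-in model): predicts class 1 when w.x + b > 0.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """Perturb x by eps per feature in the direction that increases the loss.

    For logistic loss, d(loss)/dx = (p - y) * w, so the attack adds
    eps * sign((p - y) * w) to the input. Each feature moves by at most eps,
    which is what makes the change hard to perceive for small eps.
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.2, -0.3])          # clean input, true label 1
x_adv = fgsm(x, y_true=1.0, eps=0.4)    # adversarially perturbed input
print(predict(x), predict(x_adv))       # confidence for class 1 drops
```

Here the clean input is classified as class 1 with confidence above 0.5, while the perturbed input, which differs by at most 0.4 in each feature, is pushed below 0.5; against a deep network the same principle applies with the gradient obtained by backpropagation.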