Adversarial examples in neural networks
In recent years, developments in areas such as computer vision and natural language processing have gradually exposed deep learning technology to security risks. Adversarial examples are one such risk: inputs to machine learni...
Saved in:
Main Author: | Lim, Ruihong |
---|---|
Other Authors: | Zhang, Tianwei |
Format: | Final Year Project |
Language: | English |
Published: | Nanyang Technological University, 2024 |
Subjects: | Computer and Information Science; Engineering; Computer science and engineering; Computing methodologies |
Online Access: | https://hdl.handle.net/10356/175179 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-175179 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-1751792024-04-19T15:42:41Z Adversarial examples in neural networks Lim, Ruihong Zhang Tianwei School of Computer Science and Engineering tianwei.zhang@ntu.edu.sg Computer and Information Science Engineering Computer science and engineering Computing methodologies In recent years, developments in areas such as computer vision and natural language processing have gradually exposed deep learning technology to security risks. Adversarial examples are one such risk: inputs to machine learning models crafted to cause the models to make mistakes. These examples are created through modifications to the input data that are imperceptible to the naked eye, yet can significantly alter the model's output, producing abnormal results. Many research works focus on generating transferable adversarial examples and on designing defence methods to protect networks against them. This project explores various attack and defence techniques currently in place. Through analysis of these techniques, a defence method is proposed to aid in defending against adversarial examples. Bachelor's degree 2024-04-19T13:02:35Z 2024-04-19T13:02:35Z 2024 Final Year Project (FYP) Lim, R. (2024). Adversarial examples in neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175179 https://hdl.handle.net/10356/175179 en SCSE23-0067 application/pdf Nanyang Technological University |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Computer and Information Science Engineering Computer science and engineering Computing methodologies |
spellingShingle |
Computer and Information Science Engineering Computer science and engineering Computing methodologies Lim, Ruihong Adversarial examples in neural networks |
description |
In recent years, developments in areas such as computer vision and natural language processing have gradually exposed deep learning technology to security risks.
Adversarial examples are one such risk: inputs to machine learning models crafted to cause the models to make mistakes.
These examples are created through modifications to the input data that are imperceptible to the naked eye, yet can significantly alter the model's output, producing abnormal results.
Many research works focus on generating transferable adversarial examples and on designing defence methods to protect networks against them.
This project explores various attack and defence techniques currently in place. Through analysis of these techniques, a defence method is proposed to aid in defending against adversarial examples. |
author2 |
Zhang Tianwei |
author_facet |
Zhang Tianwei Lim, Ruihong |
format |
Final Year Project |
author |
Lim, Ruihong |
author_sort |
Lim, Ruihong |
title |
Adversarial examples in neural networks |
title_short |
Adversarial examples in neural networks |
title_full |
Adversarial examples in neural networks |
title_fullStr |
Adversarial examples in neural networks |
title_full_unstemmed |
Adversarial examples in neural networks |
title_sort |
adversarial examples in neural networks |
publisher |
Nanyang Technological University |
publishDate |
2024 |
url |
https://hdl.handle.net/10356/175179 |
_version_ |
1800916243633405952 |