Backdoor attacks in neural networks

Neural networks have emerged as a powerful tool in the field of artificial intelligence and machine learning. Inspired by the structure and functionality of the human brain, neural networks are computational models composed of interconnected nodes, or "neurons," that work collaboratively to process and analyse data. By learning from vast amounts of labelled examples, neural networks can recognize patterns, make predictions, and solve complex tasks with remarkable accuracy. With the increasing adoption of neural networks in various domains, ensuring their robustness and security has become a critical concern. This project explores the concept of backdoor attacks in neural networks. Backdoor attacks involve the deliberate insertion of hidden triggers into the learning process of a neural network model, compromising its integrity and reliability. The project aims to understand the mechanisms and vulnerabilities that enable backdoor attacks and investigates defence strategies to mitigate their impact. Through experiments and analysis, this FYP aims to contribute to the development of robust defence mechanisms that enhance the security of neural network models against backdoor attacks, ensuring their trustworthiness and reliability in critical applications.
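The trigger-insertion mechanism described in the abstract is commonly realised as training-data poisoning: a small, fixed pattern is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen target class. The sketch below illustrates this idea in minimal form; the function names, the corner-patch trigger, and the 10% poison rate are illustrative assumptions, not details taken from this project.

```python
import numpy as np

def add_trigger(images, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger into the bottom-right corner of each image.

    images: array of shape (N, H, W) with pixel values in [0, 1].
    Returns a copy with the trigger patch applied.
    """
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, poison_rate=0.1, rng=None):
    """Poison a fraction of the training set in the BadNets style:
    stamp the trigger and relabel those samples to the target class.

    Returns the poisoned images, poisoned labels, and the chosen indices.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = add_trigger(images[idx])
    labels[idx] = target_label
    return images, labels, idx
```

A model trained on such a mixture learns the clean task normally but associates the trigger patch with the target class, so at inference time any input carrying the patch is misclassified on demand while clean accuracy stays high.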

Bibliographic Details
Main Author: Low, Wen Wen
Other Authors: Zhang Tianwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access:https://hdl.handle.net/10356/171934
Institution: Nanyang Technological University
Record details:
id: sg-ntu-dr.10356-171934
record_format: dspace
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
school: School of Computer Science and Engineering
supervisor: Zhang Tianwei (tianwei.zhang@ntu.edu.sg)
topic: Engineering::Computer science and engineering::Computing methodologies
author: Low, Wen Wen
title: Backdoor attacks in neural networks
format: Final Year Project (FYP)
degree: Bachelor of Engineering (Computer Engineering)
citation: Low, W. W. (2023). Backdoor attacks in neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/171934
project code: SCSE22-0765
file format: application/pdf
publisher: Nanyang Technological University
publishDate: 2023
deposited: 2023-11-17T02:55:43Z
last modified: 2023-11-17T15:37:24Z
url: https://hdl.handle.net/10356/171934
_version_: 1783955542111158272