Backdoor attacks in neural networks

Bibliographic Details
Main Author: Low, Wen Wen
Other Authors: Zhang Tianwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Online Access:https://hdl.handle.net/10356/171934
Institution: Nanyang Technological University
Description
Summary: Neural networks have emerged as a powerful tool in the field of artificial intelligence and machine learning. Inspired by the structure and functionality of the human brain, neural networks are computational models composed of interconnected nodes, or "neurons," that work collaboratively to process and analyse data. By learning from vast amounts of labelled examples, neural networks can recognize patterns, make predictions, and solve complex tasks with remarkable accuracy. With the increasing adoption of neural networks in various domains, ensuring their robustness and security has become a critical concern. This project explores the concept of backdoor attacks in neural networks. Backdoor attacks involve the deliberate insertion of hidden triggers into the learning process of a neural network model, compromising its integrity and reliability. The project aims to understand the mechanisms and vulnerabilities that enable backdoor attacks and investigates defence strategies to mitigate their impact. Through experiments and analysis, this FYP seeks to contribute to the development of robust defence mechanisms that enhance the security of neural network models against backdoor attacks, ensuring their trustworthiness and reliability in critical applications.
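
To make the trigger-insertion mechanism described in the summary concrete, the sketch below shows the classic data-poisoning step of a BadNets-style backdoor: a small trigger patch is stamped onto a fraction of the training images and those samples are relabelled to an attacker-chosen target class. This is a minimal illustrative example under assumed conventions, not the specific method or code used in the project; the function name poison_dataset, the white-square trigger, and the parameter values are assumptions chosen for clarity.

import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1, trigger_size=3, seed=0):
    # Stamp a small white-square trigger onto a random subset of images and
    # relabel those samples to the attacker's chosen target class.
    # images: float array of shape (N, H, W) with pixel values in [0, 1]
    # labels: int array of shape (N,)
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Trigger: a trigger_size x trigger_size white patch in the bottom-right corner.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 100 random 28x28 "images" with 10 classes.
X = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=7)
print("Poisoned", len(poisoned_idx), "of", len(X), "samples; relabelled to class 7")

A model trained on such a poisoned set will typically behave normally on clean inputs while predicting the target class whenever the trigger appears, which is the kind of hidden integrity failure that backdoor defence strategies aim to detect and mitigate.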