Backdoor attacks in neural networks

Bibliographic Details
Main Author: Liew, Sher Yun
Other Authors: Zhang Tianwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175146
Institution: Nanyang Technological University
Description
Summary: As artificial intelligence becomes increasingly integrated into daily life, deep neural networks are deployed in a multitude of critical domains, including facial recognition and autonomous vehicles. This pervasive integration, while transformative, raises a pressing concern: malicious backdoor attacks on neural networks can have disastrous consequences. To determine the effects and limitations of such attacks, this project conducts a comprehensive examination of two previously proposed backdoor attack strategies, namely the Blended and Blind backdoors, along with two previously proposed backdoor defence mechanisms, namely Neural Cleanse and Spectral Signatures. A thorough review of the pertinent research literature was performed, and experiments were carried out to test the effectiveness of these strategies.
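The Blended backdoor named in the summary is commonly implemented by alpha-blending a fixed trigger pattern into a fraction of the training images and relabelling them to an attacker-chosen target class, so the trained model associates the trigger with that class. The sketch below illustrates that idea under stated assumptions: the function names, blend ratio `alpha`, and poisoning rate are illustrative choices, not the project's actual experimental configuration.

```python
import numpy as np

def blend_trigger(image, trigger, alpha=0.2):
    """Alpha-blend a trigger pattern into an image:
    poisoned = (1 - alpha) * image + alpha * trigger.
    Smaller alpha makes the trigger less visible. Values here are
    illustrative assumptions, not the project's exact settings."""
    image = np.asarray(image, dtype=np.float32)
    trigger = np.asarray(trigger, dtype=np.float32)
    return (1.0 - alpha) * image + alpha * trigger

def poison_dataset(xs, ys, trigger, target_label, rate=0.1, alpha=0.2, seed=0):
    """Poison a random fraction `rate` of the training set: blend the
    trigger into each selected image and relabel it to `target_label`."""
    rng = np.random.default_rng(seed)
    xs, ys = np.array(xs, dtype=np.float32), np.array(ys)
    idx = rng.choice(len(xs), size=int(rate * len(xs)), replace=False)
    for i in idx:
        xs[i] = blend_trigger(xs[i], trigger, alpha)
        ys[i] = target_label
    return xs, ys
```

At test time, the attacker blends the same trigger into any input to steer the poisoned model toward the target class, while clean inputs remain classified normally.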