Backdoor attacks in neural networks

As artificial intelligence becomes increasingly integrated into our daily lives, neural networks power deep learning applications across a multitude of critical domains, including facial recognition and autonomous vehicles. This pervasive integration, while transformative, has brought about a pressing concern: the potential for disastrous consequences arising from malicious backdoor attacks on neural networks. To determine the effects and limitations of these attacks, this project conducts a comprehensive examination of two previously proposed backdoor attack strategies, namely the Blended and Blind backdoors, along with two previously proposed backdoor defence mechanisms, namely Neural Cleanse and Spectral Signatures. An exhaustive review of the pertinent research literature was performed, and experiments were carried out to test the effectiveness of these strategies.
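The Blended attack examined in this project (Chen et al., 2017) poisons training inputs by alpha-blending a fixed trigger pattern into clean images, so the trigger is faint and hard to spot by inspection. A minimal sketch of the poisoning step, assuming images as float arrays in [0, 1] (the function and parameter names here are illustrative, not taken from the project itself):

```python
import numpy as np

def blend_trigger(image, trigger, alpha=0.2):
    """Blended backdoor poisoning: the poisoned input is a convex
    combination of the clean image and a fixed trigger pattern.
    Small alpha keeps the trigger nearly invisible to a human."""
    return (1.0 - alpha) * image + alpha * trigger

# Toy 4x4 grayscale "image" and an all-ones trigger pattern.
rng = np.random.default_rng(0)
clean = rng.random((4, 4))
trigger = np.ones((4, 4))
poisoned = blend_trigger(clean, trigger, alpha=0.2)
```

In a full attack, each poisoned image would also be relabelled with the attacker's target class before training; at inference time, blending the same trigger into any input activates the backdoor.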


Bibliographic Details
Main Author: Liew, Sher Yun
Other Authors: Zhang Tianwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175146
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-175146
record_format dspace
spelling sg-ntu-dr.10356-175146 2024-04-26T15:41:06Z
Backdoor attacks in neural networks
Liew, Sher Yun
Zhang Tianwei
School of Computer Science and Engineering
tianwei.zhang@ntu.edu.sg
Computer and Information Science
Cyber security
As artificial intelligence becomes increasingly integrated into our daily lives, neural networks power deep learning applications across a multitude of critical domains, including facial recognition and autonomous vehicles. This pervasive integration, while transformative, has brought about a pressing concern: the potential for disastrous consequences arising from malicious backdoor attacks on neural networks. To determine the effects and limitations of these attacks, this project conducts a comprehensive examination of two previously proposed backdoor attack strategies, namely the Blended and Blind backdoors, along with two previously proposed backdoor defence mechanisms, namely Neural Cleanse and Spectral Signatures. An exhaustive review of the pertinent research literature was performed, and experiments were carried out to test the effectiveness of these strategies.
Bachelor's degree
2024-04-22T05:47:49Z 2024-04-22T05:47:49Z 2024
Final Year Project (FYP)
Liew, S. Y. (2024). Backdoor attacks in neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175146
https://hdl.handle.net/10356/175146
en
application/pdf
Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
Cyber security
spellingShingle Computer and Information Science
Cyber security
Liew, Sher Yun
Backdoor attacks in neural networks
description As artificial intelligence becomes increasingly integrated into our daily lives, neural networks power deep learning applications across a multitude of critical domains, including facial recognition and autonomous vehicles. This pervasive integration, while transformative, has brought about a pressing concern: the potential for disastrous consequences arising from malicious backdoor attacks on neural networks. To determine the effects and limitations of these attacks, this project conducts a comprehensive examination of two previously proposed backdoor attack strategies, namely the Blended and Blind backdoors, along with two previously proposed backdoor defence mechanisms, namely Neural Cleanse and Spectral Signatures. An exhaustive review of the pertinent research literature was performed, and experiments were carried out to test the effectiveness of these strategies.
author2 Zhang Tianwei
author_facet Zhang Tianwei
Liew, Sher Yun
format Final Year Project
author Liew, Sher Yun
author_sort Liew, Sher Yun
title Backdoor attacks in neural networks
title_short Backdoor attacks in neural networks
title_full Backdoor attacks in neural networks
title_fullStr Backdoor attacks in neural networks
title_full_unstemmed Backdoor attacks in neural networks
title_sort backdoor attacks in neural networks
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/175146
_version_ 1814047215299067904