Deepfake based backdoor attack against face recognition deep neural networks

Bibliographic Details
Main Author: Tan, Davis Wen Han
Other Authors: Chang Chip Hong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects: Engineering::Electrical and electronic engineering
Online Access:https://hdl.handle.net/10356/167031
id: sg-ntu-dr.10356-167031
record_format: dspace
School: School of Electrical and Electronic Engineering
Supervisor: Chang Chip Hong (ECHChang@ntu.edu.sg)
Subject: Engineering::Electrical and electronic engineering
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Dates: issued 2023; accessioned 2023-05-21T09:20:54Z; last modified 2023-07-07T15:45:07Z
Project code: A2103-221 (application/pdf)
Publisher: Nanyang Technological University

Abstract: The recent development and expansion of artificial intelligence has led to a significant increase in the market value of cutting-edge AI. One branch of AI that has gained widespread attention is Deep Neural Networks (DNNs), known for their outstanding performance in classification tasks. As a result, DNNs have spread into a variety of application domains. Backdoor attacks have become a significant threat to deep learning models in recent years. By incorporating a small percentage of poisoned samples into the training dataset, an attacker can implant a hidden backdoor in a victim model that triggers a targeted misclassification on any poisoned input while maintaining classification accuracy on benign inputs. Numerous stealthy backdoor generation algorithms have been proposed to make the backdoor trigger unnoticeable to human observers. Deepfake makes it very simple for anyone to create highly realistic but fake images, videos, and even voices without advanced technical knowledge. Although this technology was initially designed for digital entertainment, its ability to alter and create content poses a threat when employed for malicious purposes, such as disseminating false information. One potential attack is to use Deepfake to embed covert backdoors in DNNs for face recognition: the victim DNNs correctly identify benign face images but misclassify targets when given Deepfake-poisoned test images.

Citation: Tan, D. W. H. (2023). Deepfake based backdoor attack against face recognition deep neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167031
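The abstract describes training-set poisoning: a small fraction of samples is modified with a trigger and relabeled to the attacker's target class, so the trained model behaves normally on clean inputs but misclassifies triggered ones. The sketch below illustrates the generic mechanism with a simple corner-patch trigger (a BadNets-style example, not the Deepfake trigger developed in this thesis); the function name, patch size, and poison rate are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=None):
    """Illustrative patch-trigger backdoor poisoning (NOT the thesis'
    Deepfake method): stamp a small white patch onto a random subset of
    training images and relabel those samples to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0  # 4x4 trigger patch, bottom-right corner
        labels[i] = target_label   # attacker's chosen misclassification
    return images, labels, idx

# Toy usage: 100 grayscale 28x28 "face" images across 10 identity classes.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
Xp, yp, idx = poison_dataset(X, y, target_label=0, poison_rate=0.05, seed=0)
```

With a 5% poison rate only 5 of 100 samples are altered, which is the point: the backdoor survives training while clean-data accuracy is essentially unaffected. The thesis replaces the conspicuous patch with a Deepfake-generated face manipulation precisely because a visible trigger like this one is easy for a human auditor to spot.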
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English