Robustness of semi-supervised deep learning model against backdoor attacks
Main Author: Siew, Jun Ze
Other Authors: Chang Chip Hong (School of Electrical and Electronic Engineering)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Electrical and electronic engineering
Online Access: https://hdl.handle.net/10356/167428
Citation: Siew, J. Z. (2023). Robustness of semi-supervised deep learning model against backdoor attacks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167428
Institution: Nanyang Technological University
Description:
Deep neural networks (DNNs) have revolutionized computer vision (CV), particularly in object detection and image classification. However, annotating data is a costly and time-consuming process, which limits the amount of labeled data available for model training. Semi-supervised learning (SSL) addresses this issue by using a small set of labeled data to learn the underlying patterns and to infer labels for the unlabeled data, without sacrificing prediction performance.
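To make the SSL idea concrete, below is a minimal sketch of self-training, one of the simplest SSL schemes. The record does not name the SSL algorithm actually used in the project, so this scheme, the 0.95 confidence threshold, and the sklearn-style `fit`/`predict_proba` interface are illustrative assumptions only.

```python
import numpy as np

def self_training_round(model, x_lab, y_lab, x_unlab, threshold=0.95):
    """One round of self-training: fit on the small labeled set, then
    adopt confident predictions on unlabeled data as pseudo-labels."""
    model.fit(x_lab, y_lab)                      # learn from labeled data
    probs = model.predict_proba(x_unlab)         # soft predictions, (N, C)
    confident = probs.max(axis=1) >= threshold   # keep only sure samples
    x_new = np.concatenate([x_lab, x_unlab[confident]])
    y_new = np.concatenate([y_lab, probs[confident].argmax(axis=1)])
    return x_new, y_new
```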
It has been demonstrated that DNNs trained with supervised learning (SL) algorithms are susceptible to data-poisoning backdoor attacks. Imperceptible malicious behaviors can be embedded into the trained DNN and cause targeted misclassification whenever a specific “trigger” is present. This stems from a DNN's excessive learning capacity, which can build a latent connection between the trigger pattern and the target labels. However, the effectiveness of such backdoor attacks is rarely studied under SSL settings.
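As an illustration of the data-poisoning mechanism described above, here is a minimal BadNets-style sketch: a small patch is stamped onto a random fraction of the training images, and their labels are flipped to the attacker's target class. The white corner-patch trigger, the function name, and the default rate are illustrative, not the project's own attack.

```python
import numpy as np

def poison(images, labels, target_label, rate=0.01, seed=0):
    """Stamp a trigger onto a random fraction of uint8 HWC training
    images and relabel them to the attacker's target class. At test
    time, any image carrying the trigger tends to be misclassified
    as `target_label`."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), int(len(images) * rate), replace=False)
    images[idx, -3:, -3:, :] = 255               # 3x3 white corner patch
    labels[idx] = target_label
    return images, labels
```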
This project therefore evaluates the robustness of semi-supervised learning methods against backdoor attacks. A data-augmentation-based backdoor attack is selected for the evaluation: the trigger is applied separately to each image channel (R, G, B), and the channels together form a composite trigger that is imperceptible to a human. The attack is conducted on a MobileNetV3 model trained on the CIFAR-10 dataset with an SSL algorithm. The results show a dramatically high attack success rate of 96%, even with an injection rate of only 1% backdoored samples.
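The record describes the trigger only as per-channel patterns that combine into an imperceptible composite; the exact patterns are not given. The sketch below therefore assumes low-amplitude sinusoidal planes, one per RGB channel, blended additively, plus a helper for measuring the attack success rate (the 96% figure above is the fraction of triggered test images classified as the target label). All names and parameters here are hypothetical.

```python
import numpy as np

def composite_trigger(shape=(32, 32), amplitude=4.0):
    """Build one low-amplitude pattern per RGB channel; stacked, they
    form a composite trigger that is hard for a human to notice.
    CIFAR-10 images are 32x32; the sinusoidal planes are assumed for
    illustration, not taken from the project."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    planes = [amplitude * np.sin(2 * np.pi * f * (xx + k * yy) / w)
              for k, f in enumerate((2, 3, 5))]  # distinct pattern per channel
    return np.stack(planes, axis=-1)             # (H, W, 3) float offsets

def apply_trigger(image_u8, trigger):
    """Blend the trigger additively into a uint8 HWC image."""
    blended = image_u8.astype(np.float32) + trigger
    return np.clip(blended, 0, 255).astype(np.uint8)

def attack_success_rate(predict, images_u8, trigger, target_label):
    """Fraction of triggered test images classified as the target
    class by the (poisoned) model's `predict` function."""
    triggered = np.stack([apply_trigger(x, trigger) for x in images_u8])
    return float(np.mean(predict(triggered) == target_label))
```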
This final-year project (FYP) aims to contribute to the development of more secure semi-supervised learning methods for practical computer vision applications, and to techniques for improving their security.