Attack on training effort of deep learning
Deep Neural Networks (DNNs) are popular for their efficiency and accuracy across many domains, including the medical field. However, medical DNNs are vulnerable to adversarial attacks, which severely limits their clinical use. Retinal vessel segmentation is key to the diagnosis of ocular diseases…
Main Author: | Chan, Wen Le
---|---
Other Authors: | Liu Yang (supervisor, School of Computer Science and Engineering)
Format: | Final Year Project (FYP)
Language: | English
Published: | Nanyang Technological University, 2021
Degree: | Bachelor of Engineering (Computer Science)
Subjects: | Engineering::Computer science and engineering::Computer applications::Life and medical sciences; Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: | https://hdl.handle.net/10356/147972
Citation: | Chan, W. L. (2021). Attack on training effort of deep learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/147972
Institution: | Nanyang Technological University
Description:
Deep Neural Networks (DNNs) are popular for their efficiency and accuracy across many domains, including the medical field. However, medical DNNs are vulnerable to adversarial attacks, which severely limits their clinical use. Retinal vessel segmentation is key to the diagnosis of ocular diseases. The task is inherently challenging because 1) vessels have low contrast against the background, 2) vessels vary in width, and 3) other pathological regions are easily mistaken for vascular structures. Given its high clinical value, many works construct DNNs for automated vessel segmentation. Current approaches, however, have two main limitations: 1) the small available datasets make over-training and over-fitting likely, and 2) the datasets contain only specially selected high-quality images, so the trained networks generalise poorly to low-quality images and maliciously crafted adversarial examples.
To expose these limitations, we propose two adversarial attack methods. We did not use noise attacks, since noise is rarely present in retinal images; instead, we exploited their inherent degradation, uneven illumination, which arises from the imperfect image-acquisition process. First, a pixel-wise adversarial attack applies a Light-Enhancement curve iteratively to each pixel's illumination. Second, a threshold-based adversarial attack creates non-uniform illumination by changing the illumination of different regions disproportionately. We also imposed constraints to keep the adversarial examples effective while retaining a high degree of realism. Validated on the DRIVE dataset against the state-of-the-art DNN SA-UNet, both attacks achieved superior results compared with a noise-based attack. These results reveal the threat that non-uniform illumination poses to DNN-based automated retinal vessel segmentation, and we hope they inspire the development of approaches robust to uneven illumination.
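The record gives no implementation details, but the two attacks can be sketched from their descriptions. Below is a minimal NumPy illustration, assuming a Zero-DCE-style quadratic Light-Enhancement curve, LE(x) = x + αx(1 − x), for the pixel-wise attack and a single brightness threshold with two region gains for the threshold-based attack; all parameter names and values (`alpha`, `n_iter`, `thresh`, `gain_bright`, `gain_dark`) are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def le_curve_attack(illum, alpha, n_iter=4):
    """Pixel-wise attack: iteratively apply the quadratic
    Light-Enhancement curve LE(x) = x + alpha * x * (1 - x).
    illum: illumination channel as a float array in [0, 1]
    alpha: per-pixel curve parameter in [-1, 1], same shape as illum
    """
    x = illum.copy()
    for _ in range(n_iter):
        x = x + alpha * x * (1.0 - x)  # stays in [0, 1] when |alpha| <= 1
    return np.clip(x, 0.0, 1.0)

def threshold_attack(illum, thresh=0.5, gain_bright=1.2, gain_dark=0.8):
    """Threshold-based attack: brighten regions above the threshold and
    darken regions below it, producing non-uniform illumination."""
    out = np.where(illum >= thresh, illum * gain_bright, illum * gain_dark)
    return np.clip(out, 0.0, 1.0)

# Example on a random stand-in image (DRIVE images are 584 x 565 pixels).
rng = np.random.default_rng(0)
illum = rng.random((584, 565))
adv_pixelwise = le_curve_attack(illum, alpha=np.full_like(illum, 0.3))
adv_threshold = threshold_attack(illum)
```

In the attack proper, `alpha` or the region gains would be searched or optimised to maximise the segmentation network's loss, subject to the realism constraints mentioned above.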
We proposed a possible defence against this threat by demonstrating that adversarial training improves a network's generalisation ability. In addition, we proposed DC-Unet, in which DropBlock, batch normalisation, and ReLU activation are added to U-Net's convolution block, with dynamic convolution linking the encoder and decoder paths. The proposed architecture achieved competitive performance on both the DRIVE test set and synthesised low-quality images.
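As a rough sketch of the described convolution block, the following PyTorch code stacks convolution, DropBlock, batch normalisation, and ReLU; the DropBlock implementation, the ordering inside the block, and all hyper-parameters (`drop_prob`, `block_size`) are assumptions, and the dynamic convolution linking encoder and decoder is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropBlock2d(nn.Module):
    """Minimal DropBlock: drops contiguous block_size x block_size
    regions of the feature map during training (Ghiasi et al., 2018)."""
    def __init__(self, drop_prob=0.15, block_size=7):
        super().__init__()
        self.drop_prob, self.block_size = drop_prob, block_size

    def forward(self, x):
        if not self.training or self.drop_prob == 0.0:
            return x
        # Sample Bernoulli "seed" pixels, then expand each seed to a
        # block_size x block_size zero region with a max-pool.
        gamma = self.drop_prob / (self.block_size ** 2)
        seeds = (torch.rand_like(x) < gamma).float()
        mask = 1.0 - F.max_pool2d(seeds, self.block_size,
                                  stride=1, padding=self.block_size // 2)
        # Rescale so the expected activation magnitude is unchanged.
        return x * mask * mask.numel() / mask.sum().clamp(min=1.0)

class ConvBlock(nn.Module):
    """One DC-Unet-style block: (Conv -> DropBlock -> BN -> ReLU) twice.
    The exact ordering inside the block is an assumption."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            DropBlock2d(),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            DropBlock2d(),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

# Example: one encoder block on a grey-scale retinal patch.
x = torch.randn(1, 1, 64, 64)
y = ConvBlock(1, 32)(x)  # -> shape (1, 32, 64, 64)
```

Adversarial training, the defence the abstract reports, would then simply mix attacked images (for example, those produced by the attack sketches above) into the training batches.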