Attack on training effort of deep learning

Bibliographic Details
Main Author: Chan, Wen Le
Other Authors: Liu, Yang
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Online Access:https://hdl.handle.net/10356/147972
Institution: Nanyang Technological University
Description
Summary: Deep Neural Networks (DNNs) are popular for their efficiency and accuracy across domains, including the medical field. However, medical DNNs are vulnerable to adversarial attacks, which is a major limitation to their clinical use. Retinal vessel segmentation is key to the diagnosis of ocular diseases. The task is inherently challenging because 1) vessels have low contrast against the background, 2) vessels vary in width, and 3) other pathological regions are easily mistaken for vascular structures. Given its high clinical value, many works construct DNNs for automated vessel segmentation, but current approaches have two main limitations: 1) the small available datasets make over-training and over-fitting likely, and 2) the datasets contain only specially selected high-quality images, resulting in poor generalisation to low-quality images and deliberately crafted adversarial examples. To illustrate these limitations, two adversarial attack methods are proposed. We did not use noise attacks, as noise is rarely present in retinal images; instead, we leveraged their inherent degradation, uneven illumination, caused by the imperfect image acquisition process. First, a pixel-wise adversarial attack applies a Light-Enhancement curve iteratively to each pixel's illumination. Second, a threshold-based adversarial attack creates non-uniform illumination through disproportionate changes to the illumination of different regions. We also applied constraints to keep the adversarial examples effective while retaining a high level of realism. Validation on the DRIVE dataset with the state-of-the-art DNN SA-Unet achieved superior results compared to noise-based attacks. We thereby revealed the potential threat of non-uniform illumination to DNN-based automated retinal segmentation, in the hope of inspiring the development of approaches robust to uneven illumination. We also proposed a possible defence against this threat by demonstrating that adversarial training improves the network's generalisation ability. In addition, we proposed DC-Unet, in which DropBlock, batch normalisation and ReLU activation are added to U-Net's convolution block, with dynamic convolution linking the encoder and decoder paths. The proposed architecture achieved competitive performance on both the DRIVE test set and synthesised low-quality images.
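
The pixel-wise attack is described only at a high level in the abstract. Below is a minimal sketch of the idea, assuming the quadratic Light-Enhancement curve popularised by Zero-DCE, LE(x) = x + αx(1 − x), applied iteratively; the function name, default iteration count, and the way α is chosen are illustrative assumptions, not the report's exact method.

    import numpy as np

    def le_curve_attack(image, alpha, iterations=4):
        # Iteratively apply the quadratic Light-Enhancement curve
        # LE(x) = x + alpha * x * (1 - x) to every pixel.
        # image: float array scaled to [0, 1]; alpha: scalar or per-pixel
        # map in [-1, 1] (alpha > 0 brightens, alpha < 0 darkens).
        x = np.clip(np.asarray(image, dtype=np.float64), 0.0, 1.0)
        for _ in range(iterations):
            x = x + alpha * x * (1.0 - x)  # stays in [0, 1] for alpha in [-1, 1]
            x = np.clip(x, 0.0, 1.0)       # guard against numerical drift
        return x

In an attack setting, a per-pixel α map would presumably be tuned, for instance by following the sign of the segmentation loss gradient, to degrade the network's output while the curve's [-1, 1] range keeps the result looking like a plausible illumination change.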
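The threshold-based attack is likewise only characterised as a disproportionate illumination change across regions. A plausible sketch, assuming a single luminance threshold and separate gamma adjustments per region (all parameter values here are illustrative):

    import numpy as np

    def threshold_illumination_attack(image, threshold=0.5,
                                      gamma_dark=1.4, gamma_bright=0.7):
        # Split pixels into dark/bright regions at an illumination threshold
        # and apply a different gamma to each, producing non-uniform
        # illumination. The report's actual thresholds and realism
        # constraints may differ.
        x = np.clip(np.asarray(image, dtype=np.float64), 0.0, 1.0)
        bright = x >= threshold
        out = np.empty_like(x)
        out[bright] = x[bright] ** gamma_bright   # gamma < 1 brightens
        out[~bright] = x[~bright] ** gamma_dark   # gamma > 1 darkens
        return out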
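The abstract states that adversarial training improves generalisation but does not give the training schedule. The following is a sketch of one common recipe, training on clean and attacked copies of each batch; `attack_fn` stands in for either illumination attack above.

    import torch

    def adversarial_training_step(model, loss_fn, optimiser,
                                  images, masks, attack_fn):
        # One step of a standard adversarial-training recipe: the network
        # sees both clean and perturbed inputs with the same segmentation
        # labels, so it learns to be robust to the perturbation.
        adv_images = attack_fn(images)                # e.g. an illumination attack
        batch = torch.cat([images, adv_images], dim=0)
        targets = torch.cat([masks, masks], dim=0)    # labels unchanged by attack
        optimiser.zero_grad()
        loss = loss_fn(model(batch), targets)
        loss.backward()
        optimiser.step()
        return loss.item()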
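Finally, the abstract lists the components of DC-Unet's convolution block without specifying their order. A sketch assuming the conv -> DropBlock -> batch norm -> ReLU ordering used by SA-UNet, and assuming torchvision's DropBlock2d (available in recent torchvision releases); the dynamic convolution linking the encoder and decoder paths is omitted here.

    import torch.nn as nn
    from torchvision import ops

    class DCConvBlock(nn.Module):
        # Convolution block with the components the abstract lists for
        # DC-Unet: convolution, DropBlock, batch normalisation and ReLU,
        # applied twice. Ordering and hyperparameters are assumptions.
        def __init__(self, in_ch, out_ch, drop_p=0.1, block_size=7):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
                ops.DropBlock2d(p=drop_p, block_size=block_size),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
                ops.DropBlock2d(p=drop_p, block_size=block_size),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.block(x)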