Adversarial attacks and defenses for visual signals

Main Author: | Cheng, Yupeng |
---|---|
Other Authors: | Lin Shang-Wei (shang-wei.lin@ntu.edu.sg); Lin Weisi (WSLin@ntu.edu.sg) |
Format: | Thesis-Doctor of Philosophy |
Language: | English |
Published: | Nanyang Technological University, 2023 |
Subjects: | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision |
Online Access: | https://hdl.handle.net/10356/164772 |
DOI: | 10.32657/10356/164772 |
Institution: | Nanyang Technological University |
School: | School of Computer Science and Engineering |
Rights: | Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0) |
Citation: | Cheng, Y. (2023). Adversarial attacks and defenses for visual signals. Doctoral thesis, Nanyang Technological University, Singapore. |

Description:
Since AlexNet won the 2012 ILSVRC championship, deep neural networks (DNNs) have played an increasingly important role in many fields of computer vision research, such as image and video classification and salient object detection (SOD). However, recent work has shown that a small, purposeful perturbation of the input can lead to significant classification errors in DNN models. Such perturbed inputs are called "adversarial examples", and the corresponding generation methods are termed "adversarial attacks". This vulnerability of DNNs is particularly surprising because the modifications are usually imperceptible. Since then, great effort has been devoted to developing powerful attacks and the corresponding defenses.
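To make the threat model concrete, the minimal PyTorch sketch below shows the classic fast gradient sign attack (FGSM); it is background illustration rather than a method proposed in this thesis, and `model`, `label`, and the budget `epsilon` are placeholder assumptions.

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """One-step fast gradient sign attack: move every pixel by at most
    `epsilon` in the direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # model outputs class logits
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Because each pixel moves by at most `epsilon`, the perturbation stays visually negligible while still being able to flip the prediction.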
In this thesis, we explore potential threats to, and defense methods for, different applications of DNNs, organized into four research directions. Since the adversarial attack against natural image classification is the classic topic in this field, we choose it as our first research direction; here we pose and solve a new problem, namely how to remove noise from input images while simultaneously fooling DNN classifiers. After exploring natural image classification, we turn to medical image classification and pick the adversarial attack against a medical image grading model as our second research direction; in this work, we study the influence of camera exposure from the viewpoint of adversarial attacks against a diabetic retinopathy (DR) grading system. As these two works both investigate adversarial attacks against image classification, we select the adversarial attack against SOD as our third research direction and study how perturbing only parts of an image affects RGB-D SOD models. Adversarial defense is an equally important part of this topic, so we choose defense as the fourth research direction; to broaden the scope, we target video classification and explore defense methods against adversarial examples crafted for video recognition models. The four works are detailed as follows.
In the first work, we propose a new type of attack against the image classification task: stealthily embedding adversarial attacks into the denoising process, such that the resulting higher-quality images can fool DNN classifiers. A recent denoising method generates denoised images by applying pixel-wise kernel convolution, where the kernels are obtained from a kernel prediction network. To turn the denoised images into adversarial examples, we propose a new method called the Perceptually Aware and Stealthy Adversarial DENoise Attack (Pasadena), which modifies those pixel-wise kernels. In addition, to make the denoised image look natural while maintaining a comparable attack effect, an adaptive perceptual region localization stage ensures that our attacks are placed in vulnerable regions. Experiments on several challenging datasets demonstrate that our method achieves both goals.
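As a rough illustration of the underlying idea (not the Pasadena algorithm itself), the following sketch applies pixel-wise kernel filtering in the style of a kernel prediction network and then takes a single sign-gradient step on those kernels so the denoised output misleads a classifier; the tensor shapes, kernel size, step size, and `classifier` are assumptions.

```python
import torch
import torch.nn.functional as F

def pixelwise_kernel_filter(noisy, kernels, k=5):
    """KPN-style denoising: each output pixel is a weighted sum of its k x k
    neighbourhood, with a separate weight vector predicted per pixel.

    noisy: (B, C, H, W); kernels: (B, k*k, H, W) predicted filter weights.
    """
    B, C, H, W = noisy.shape
    patches = F.unfold(noisy, k, padding=k // 2).view(B, C, k * k, H, W)
    weights = torch.softmax(kernels, dim=1).unsqueeze(1)  # (B, 1, k*k, H, W)
    return (patches * weights).sum(dim=2)  # (B, C, H, W) denoised image

def adversarial_kernel_step(noisy, kernels, classifier, label, step=0.05):
    """One sign-gradient step on the predicted kernels so that the *denoised*
    output misleads `classifier` (a sketch of attacking the denoising process,
    not the actual Pasadena optimisation)."""
    kernels = kernels.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(pixelwise_kernel_filter(noisy, kernels)), label)
    loss.backward()
    return (kernels + step * kernels.grad.sign()).detach()
```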
In the second work, we choose Diabetic Retinopathy (DR) grading as the task to be attacked. Based on the observation that DNN-based DR diagnostic systems are sensitive to camera exposure, we propose a novel adversarial attack, the adversarial exposure attack. Specifically, given an input retinal fundus image, we generate its bracketed exposure sequence and apply a pixel-wise weighted combination of the exposures to obtain the adversarial image; this process is termed the bracketed exposure fusion based attack (BEF). In addition, to increase the transferability of the adversarial examples, we propose a convolutional bracketed exposure fusion based attack (CBEF), which extends the element-wise weight maps to element-wise kernel maps. Experimental results on a popular DR detection dataset show that our adversarial outputs can mislead the DNN-based DR grading model while keeping a natural appearance.
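A minimal sketch of the exposure-fusion step is given below, assuming a simple 2^EV gain model for the bracketed sequence and a per-pixel weight map that an attacker would optimise against the grading model; the EV steps and the softmax normalisation are illustrative choices, not the exact BEF formulation.

```python
import torch

def bracketed_exposure_fusion(image, weights=None, ev_steps=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Fuse a bracketed exposure sequence of `image` with per-pixel weights.

    image: (C, H, W) in [0, 1]; weights: (len(ev_steps), H, W) logits that an
    attacker would optimise so the fused image misleads the grading model.
    A plain 2**EV gain stands in for a real camera response curve.
    """
    exposures = torch.stack([(image * 2.0 ** ev).clamp(0, 1) for ev in ev_steps])  # (E, C, H, W)
    if weights is None:
        weights = torch.zeros(len(ev_steps), *image.shape[-2:])  # uniform fusion
    weights = torch.softmax(weights, dim=0).unsqueeze(1)  # (E, 1, H, W), sums to 1 per pixel
    return (exposures * weights).sum(dim=0)  # (C, H, W) fused, potentially adversarial, image
```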
In the third work, on adversarial attacks against RGB-D saliency detectors, we propose a new targeted attack, the Sub-Salient Target Attack (SSTA). SSTA first locates the Sub-Salient (SS) regions of a multimodal SOD DNN model, i.e., high-saliency regions in the background of the image, and then applies a higher-strength perturbation to those regions to achieve an effective adversarial attack. We design a Sub-Salient Localization (SSL) module that obtains SS regions by finding high-saliency responses in a modified image whose foreground object has been corrupted with sufficient adversarial perturbation. Moreover, experimental results show that when SSTA is embedded in adversarial training, the robustness of the SOD model is significantly improved. Extensive experiments on public datasets demonstrate the advantage of our approach over plain baseline methods.
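One hypothetical reading of the SSL step is sketched below: heavily perturb the annotated foreground, rerun the SOD model, and keep background pixels that remain highly salient. The `sod_model` interface, the use of random noise in place of an optimised adversarial perturbation, and the saliency threshold are all assumptions.

```python
import torch

def locate_sub_salient_regions(sod_model, rgb, depth, gt_mask, noise_std=0.3, thresh=0.5):
    """Hypothetical SS-region localisation for an RGB-D SOD model.

    rgb: (3, H, W) image, depth: (1, H, W) map, gt_mask: (1, H, W) binary
    foreground mask. Random noise stands in for the adversarial perturbation
    that the SSL module would actually optimise on the foreground.
    """
    fg = gt_mask.bool()
    # Corrupt the annotated salient object so it no longer dominates the prediction.
    perturbed = torch.where(fg, (rgb + noise_std * torch.randn_like(rgb)).clamp(0, 1), rgb)
    with torch.no_grad():
        saliency = torch.sigmoid(sod_model(perturbed, depth))  # assumed (1, H, W) logits
    # Sub-salient regions: background pixels the model still rates as salient.
    return (saliency > thresh) & ~fg
```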
In the fourth work, we propose a two-stage framework for defending against adversarial videos. It consists of an adaptive JPEG compression defense followed by an optical texture defense, both of which utilize optical flow maps extracted from the videos. In the first stage, our method estimates the size of the moving foreground in the video from the optical flow maps and applies an appropriately strong JPEG compression to remove adversarial noise. In the second stage, we intentionally disturb the input videos with an optical texture crafted from the optical flow maps, so that the influence of the remaining adversarial noise is further diluted. Results on the UCF101 benchmark dataset show that our approach provides reliable defense and that its two stages are mutually beneficial.
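The sketch below illustrates only the first (adaptive JPEG) stage under simple assumptions: the moving-foreground ratio is estimated from dense Farneback optical flow, and the quality levels and motion threshold are illustrative values rather than the schedule used in the thesis.

```python
import cv2
import numpy as np

def adaptive_jpeg_defense(frames, motion_thresh=1.0, q_small=50, q_large=80):
    """Compress each frame with a JPEG quality chosen from the fraction of
    pixels that dense optical flow marks as moving foreground: a small moving
    region tolerates stronger compression (more adversarial noise removed)."""
    defended = []
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        moving_ratio = (np.linalg.norm(flow, axis=2) > motion_thresh).mean()
        quality = q_large if moving_ratio > 0.2 else q_small
        _, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
        defended.append(cv2.imdecode(buf, cv2.IMREAD_COLOR))
        prev_gray = gray
    return defended
```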