Generating adversarial examples with only one image

Deep learning based vision systems are widely deployed in today's world. The backbones of these systems, deep neural networks (DNNs), show impressive capability in feature extraction, large-scale training, and accurate prediction. However, DNNs have been shown to be vulnerable to adversarial examples of several types, including adversarial perturbations and adversarial patches. Existing approaches to adversarial patch generation rarely consider the contextual consistency between the patch and the image background, so such patches are easily detected and the attacks fail. In addition, these methods require large amounts of training data, which is computationally expensive and time-consuming. In this project, we explore how to generate advanced adversarial patches effectively and efficiently. To overcome these challenges, we propose an approach that generates adversarial yet inconspicuous patches from one single image. Adversarial patches are produced in a coarse-to-fine manner with generators and discriminators at multiple scales. The patch location is selected according to the perceptual sensitivity of the victim model, which equips our approach with strong attacking capability. Contextual information is encoded during the Min-Max training so that the patches remain consistent with their surroundings. Extensive experiments show that our approach attacks strongly in both white-box and black-box settings. Experiments with saliency detection and a user evaluation indicate that our adversarial patches, which can evade human observation, are more inconspicuous and natural-looking than those produced by existing approaches. Finally, experiments on real-world objects show that our digital approach has the potential to be malicious in real-world settings.
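The patch-placement step described above (choosing where to place the patch from the victim model's perceptual sensitivity) can be pictured with a small sketch. The code below is illustrative only and is not taken from the project: it assumes a PyTorch image classifier, reads "sensitivity" as the magnitude of the input gradient, and picks the patch-sized window where that signal is largest. The function names and the patch size are hypothetical.

```python
# Illustrative sketch only (not the project's code): locate a candidate patch
# position from the victim model's input-gradient sensitivity.
# Assumptions: a PyTorch image classifier, an image tensor of shape [C, H, W],
# and a scalar class label; function names and the patch size are hypothetical.
import torch
import torch.nn.functional as F

def sensitivity_map(model, image, label):
    """Per-pixel sensitivity: |d loss / d input|, summed over colour channels."""
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))                      # [1, num_classes]
    loss = F.cross_entropy(logits, label.view(1))
    loss.backward()
    return image.grad.abs().sum(dim=0)                      # [H, W]

def most_sensitive_patch(sens, patch_size=32):
    """Top-left corner of the patch-sized window with the highest total sensitivity."""
    windows = F.avg_pool2d(sens[None, None], kernel_size=patch_size, stride=1)
    flat_idx = torch.argmax(windows)
    w_out = windows.shape[-1]
    return int(flat_idx // w_out), int(flat_idx % w_out)    # (row, col)
```

In the project itself this location would anchor the coarse-to-fine generator/discriminator training; the sketch stops at locating the window.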

Bibliographic Details
Main Author: Luo, Jinqi
Other Authors: Jun Zhao
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects: Engineering::Computer science and engineering
Online Access:https://hdl.handle.net/10356/148573
Institution: Nanyang Technological University
Supervisor: Jun Zhao (junzhao@ntu.edu.sg), Computational Intelligence Lab, School of Computer Science and Engineering
Degree: Bachelor of Engineering (Computer Science)
Project code: SCSE20-0291
Citation: Luo, J. (2021). Generating adversarial examples with only one image. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148573