Pre-trained and sample-transferable perturbation based adversarial neuron manipulation: revealing the risks of transfer learning in remote sensing
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/180881
Institution: Nanyang Technological University
Summary: The classification of remote sensing images has been revolutionized by the advent of deep learning, particularly through the application of transfer learning techniques. However, the susceptibility of these models to adversarial attacks poses significant challenges. Existing adversarial attacks against transfer-learning-based deep models typically require domain-specific data or multiple interactions with the victim model, which are not always available and incur high computational cost. This paper proposes a novel Adversarial Neuron Manipulation (ANM) method, which generates pre-trained, sample-transferable perturbations to craft adversarial examples. The pre-training process requires no domain-specific information, and the resulting perturbations can be merged with any image not involved in the perturbation generation process to create adversarial examples; the attack therefore demands lower accessibility to the victim model and is more computationally efficient for the attacker. Experiments on different models with various remote sensing datasets demonstrate the effectiveness of the proposed attack method. An analysis of the vulnerabilities of deep models shows that perturbations manipulating multiple fragile neurons achieve better attack performance. This low-demand adversarial neuron manipulation attack reveals another risk of transfer learning models, one that needs to be addressed with stronger security and robustness measures.
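The abstract gives no implementation details, but the general idea of a pre-trained, sample-transferable perturbation can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the thesis's actual ANM procedure: it optimizes a single fixed perturbation so that it drives a chosen set of "fragile" neurons in a publicly available pre-trained backbone toward large activations, using only surrogate inputs rather than victim-domain data, and the frozen perturbation is then added to unseen images. The backbone, the neuron indices, the surrogate inputs, and all hyperparameters are hypothetical choices.

```python
# Hypothetical sketch of a pre-trained, sample-transferable perturbation.
# Backbone, neuron selection, and hyperparameters are illustrative assumptions.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Publicly available pre-trained backbone: the attacker only needs the shared
# feature extractor, not remote sensing data or the fine-tuned victim model.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)

# Assumed set of "fragile" neuron indices in the 2048-dim penultimate feature
# vector that the perturbation will push toward extreme activations.
fragile_idx = torch.arange(0, 32, device=device)

epsilon = 8.0 / 255.0  # L_inf perturbation budget (assumed)
delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-2)

# Pre-train the perturbation on generic surrogate inputs (here random noise),
# so no domain-specific samples are needed during generation.
for step in range(200):
    x = torch.rand(8, 3, 224, 224, device=device)       # surrogate inputs
    feats = feature_extractor(x + delta).flatten(1)      # shape (B, 2048)
    # Maximize the targeted neurons' activations -> minimize their negative mean.
    loss = -feats[:, fragile_idx].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)                  # keep the perturbation small

# The frozen perturbation can now be merged with any unseen image to form an
# adversarial example: x_adv = (x_new + delta).clamp(0, 1)
```

In this sketch, the abstract's finding that perturbations manipulating multiple fragile neurons attack more effectively would correspond to enlarging or better selecting `fragile_idx`; how such neurons are identified is part of the thesis and is not reproduced here.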