Adversarial attacks and robustness for segment anything model
Segment Anything Model (SAM), as a potent image segmentation model, has demonstrated its application potential in various fields. Before deploying SAM in various applications, the robustness of SAM against adversarial attacks is a security concern that must be addressed. In this paper, we experimentally conducted adversarial attacks on SAM and its downstream application models to evaluate their robustness.
Saved in:
Main Author: Liu, Shifei
Other Authors: Jiang Xudong
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Robustness; Adversarial attacks
Online Access: https://hdl.handle.net/10356/177032
Institution: Nanyang Technological University
Language: English
id |
sg-ntu-dr.10356-177032 |
record_format |
dspace |
spelling |
sg-ntu-dr.10356-177032 2024-05-24T15:46:09Z Adversarial attacks and robustness for segment anything model Liu, Shifei Jiang Xudong School of Electrical and Electronic Engineering EXDJiang@ntu.edu.sg Computer and Information Science Robustness Adversarial attacks Segment Anything Model (SAM), as a potent image segmentation model, has demonstrated its application potential in various fields. Before deploying SAM in various applications, the robustness of SAM against adversarial attacks is a security concern that must be addressed. In this paper, we experimentally conducted adversarial attacks on SAM and its downstream application models to evaluate their robustness. For SAM downstream models with unknown structures, the method of attacking by establishing a surrogate model has several limitations. These include significant time and computational costs due to SAM’s large volume, as well as poor simulation effects of the surrogate model because of the unknown training set used by the model. This dissertation aimed to leverage open-source models to design a simple and feasible method for attacking SAM downstream application models. We used Gaussian functions to estimate the gradient of SAM downstream models on the image encoder. This approach significantly reduced computational and time costs compared to building surrogate models and improved the attack effectiveness. To further enhance the transferability of the attack, we applied random rotation and erasing transformations to input images and trained using the Expectation Over Transformation (EOT) loss. However, we found that the EOT-based method did not show a good performance gain in attacking downstream tasks. This inadequacy can be attributed to the intrinsic trade-off between the attack effectiveness and transferability, necessitating the determination of an optimal weight parameter through a heuristic search to strike a balance. 
Bachelor's degree 2024-05-24T07:55:08Z 2024-05-24T07:55:08Z 2024 Final Year Project (FYP) Liu, S. (2024). Adversarial attacks and robustness for segment anything model. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/177032 https://hdl.handle.net/10356/177032 en A3073-231 application/pdf Nanyang Technological University |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Computer and Information Science Robustness Adversarial attacks |
spellingShingle |
Computer and Information Science Robustness Adversarial attacks Liu, Shifei Adversarial attacks and robustness for segment anything model |
description |
Segment Anything Model (SAM), as a potent image segmentation model, has
demonstrated its application potential in various fields. Before deploying SAM
in various applications, the robustness of SAM against adversarial attacks is
a security concern that must be addressed. In this paper, we experimentally
conducted adversarial attacks on SAM and its downstream application models
to evaluate their robustness. For SAM downstream models with unknown
structures, the method of attacking by establishing a surrogate model has
several limitations. These include significant time and computational costs due to
SAM’s large model size, as well as the surrogate model’s poor approximation of
the target, since the target’s training set is unknown.
This dissertation aimed to leverage open-source models to design a simple and
feasible method for attacking SAM downstream application models. We used
Gaussian functions to estimate the gradient of SAM downstream models on the
image encoder. This approach significantly reduced computational and time costs
compared to building surrogate models and improved the attack effectiveness.
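The gradient estimation described here resembles Gaussian-smoothing (zeroth-order, NES-style) estimation, which needs only black-box loss queries rather than a surrogate network. A minimal sketch, assuming a generic scalar `loss_fn` standing in for the downstream model's loss on an input; the function name, smoothing scale `sigma`, and sample count are illustrative assumptions, not the dissertation's actual implementation:

```python
import numpy as np

def estimate_gradient(loss_fn, x, sigma=0.01, n_samples=20, rng=None):
    """Zeroth-order estimate of the gradient of a black-box scalar loss_fn
    at x, via Gaussian smoothing with antithetic (paired) sampling."""
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)  # random Gaussian direction
        # central difference along u; the antithetic pair reduces variance
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return grad / (2.0 * sigma * n_samples)
```

For a smooth loss the estimate converges to the true gradient as the sample count grows; in an attack loop, the perturbation would then be updated by ascending this estimate under the perturbation budget.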
To further enhance the transferability of the attack, we applied random
rotation and erasing transformations to input images and trained using the
Expectation Over Transformation (EOT) loss. However, we found that the EOT-based
method did not show a good performance gain in attacking downstream tasks.
This inadequacy can be attributed to the intrinsic trade-off between the attack
effectiveness and transferability, necessitating the determination of an optimal
weight parameter through a heuristic search to strike a balance. |
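The EOT objective described above can be sketched by averaging the attack loss over randomly transformed copies of the input. The transformation set here (axis-aligned 90° rotations plus rectangular erasing) is a simplified stand-in for the rotations and erasing actually used; all names and parameters are illustrative assumptions:

```python
import numpy as np

def random_transform(img, rng):
    """Random 90-degree rotation (a simple stand-in for arbitrary-angle
    rotation) followed by random erasing of a rectangular patch."""
    img = np.rot90(img, k=rng.integers(4), axes=(0, 1)).copy()
    h, w = img.shape[:2]
    eh = rng.integers(1, h // 2 + 1)   # erased-patch height
    ew = rng.integers(1, w // 2 + 1)   # erased-patch width
    y, x = rng.integers(h - eh + 1), rng.integers(w - ew + 1)
    img[y:y + eh, x:x + ew] = 0.0      # zero out the patch
    return img

def eot_loss(loss_fn, img, n_transforms=8, seed=0):
    """Expectation Over Transformation: average the attack loss over
    randomly transformed copies of the input."""
    rng = np.random.default_rng(seed)
    return sum(loss_fn(random_transform(img, rng))
               for _ in range(n_transforms)) / n_transforms
```

An attacker would optimize the adversarial perturbation against `eot_loss` rather than the single-image loss; averaging over transformations is what couples attack effectiveness to transferability and motivates the weight balancing discussed above.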
author2 |
Jiang Xudong |
author_facet |
Jiang Xudong Liu, Shifei |
format |
Final Year Project |
author |
Liu, Shifei |
author_sort |
Liu, Shifei |
title |
Adversarial attacks and robustness for segment anything model |
title_short |
Adversarial attacks and robustness for segment anything model |
title_full |
Adversarial attacks and robustness for segment anything model |
title_fullStr |
Adversarial attacks and robustness for segment anything model |
title_full_unstemmed |
Adversarial attacks and robustness for segment anything model |
title_sort |
adversarial attacks and robustness for segment anything model |
publisher |
Nanyang Technological University |
publishDate |
2024 |
url |
https://hdl.handle.net/10356/177032 |
_version_ |
1806059826296389632 |