Learning transferable perturbations for image captioning
Recent studies have discovered that state-of-the-art deep learning models can be attacked by small but well-designed perturbations. Existing attack algorithms for the image captioning task are time-consuming, and their generated adversarial examples cannot transfer well to other models. To generate...
Main Authors: WU, Hanjie; LIU, Yongtuo; CAI, Hongmin; HE, Shengfeng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/8371
https://ink.library.smu.edu.sg/context/sis_research/article/9374/viewcontent/Learning_Transferable_Perturbations_for_Image_Captioning.pdf
Institution: Singapore Management University
Similar Items
- PERSONALIZED VISUAL INFORMATION CAPTIONING
  by: WU SHUANG
  Published: (2023)
- Stack-VS: stacked visual-semantic attention for image caption generation
  by: Cheng, Ling, et al.
  Published: (2021)
- Context-aware visual policy network for fine-grained image captioning
  by: Zha, Zheng-Jun, et al.
  Published: (2022)
- Deconfounded image captioning: a causal retrospect
  by: Yang, Xu, et al.
  Published: (2022)
- Image captioning via semantic element embedding
  by: ZHANG, Xiaodan, et al.
  Published: (2020)