Multi-target backdoor attacks for code pre-trained models
Backdoor attacks for neural code models have gained considerable attention due to the advancement of code intelligence. However, most existing works insert triggers into task-specific data for code-related downstream tasks, thereby limiting the scope of attacks. Moreover, the majority of attacks for...
Saved in:
Main Authors: | , , , , , |
---|---|
Format: | text |
Language: | English |
Published: |
Institutional Knowledge at Singapore Management University
2023
|
Subjects: | |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8238 https://ink.library.smu.edu.sg/context/sis_research/article/9241/viewcontent/Continual_normalization_Rethinking_batch_normalization_for_online_continual_learning.pdf |
Tags: |
Add Tag
No Tags, Be the first to tag this record!
|
Institution: | Singapore Management University |
Language: | English |
id |
sg-smu-ink.sis_research-9241 |
---|---|
record_format |
dspace |
spelling |
sg-smu-ink.sis_research-92412023-10-26T03:27:34Z Multi-target backdoor attacks for code pre-trained models LI, Yanzhou LIU, Shangqing CHEN, Kangjie XIE, Xiaofei ZHANG, Tianwei LIU, Yang Backdoor attacks for neural code models have gained considerable attention due to the advancement of code intelligence. However, most existing works insert triggers into task-specific data for code-related downstream tasks, thereby limiting the scope of attacks. Moreover, the majority of attacks for pre-trained models are designed for understanding tasks. In this paper, we propose task-agnostic backdoor attacks for code pre-trained models. Our backdoored model is pre-trained with two learning strategies (i.e., Poisoned Seq2Seq learning and token representation learning) to support the multi-target attack of downstream code understanding and generation tasks. During the deployment phase, the implanted backdoors in the victim models can be activated by the designed triggers to achieve the targeted attack. We evaluate our approach on two code understanding tasks and three code generation tasks over seven datasets. Extensive experiments demonstrate that our approach can effectively and stealthily attack code-related downstream tasks. 2023-07-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8238 info:doi/10.48550/arXiv.2306.08350 https://ink.library.smu.edu.sg/context/sis_research/article/9241/viewcontent/Continual_normalization_Rethinking_batch_normalization_for_online_continual_learning.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Databases and Information Systems |
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
Databases and Information Systems |
spellingShingle |
Databases and Information Systems LI, Yanzhou LIU, Shangqing CHEN, Kangjie XIE, Xiaofei ZHANG, Tianwei LIU, Yang Multi-target backdoor attacks for code pre-trained models |
description |
Backdoor attacks for neural code models have gained considerable attention due to the advancement of code intelligence. However, most existing works insert triggers into task-specific data for code-related downstream tasks, thereby limiting the scope of attacks. Moreover, the majority of attacks for pre-trained models are designed for understanding tasks. In this paper, we propose task-agnostic backdoor attacks for code pre-trained models. Our backdoored model is pre-trained with two learning strategies (i.e., Poisoned Seq2Seq learning and token representation learning) to support the multi-target attack of downstream code understanding and generation tasks. During the deployment phase, the implanted backdoors in the victim models can be activated by the designed triggers to achieve the targeted attack. We evaluate our approach on two code understanding tasks and three code generation tasks over seven datasets. Extensive experiments demonstrate that our approach can effectively and stealthily attack code-related downstream tasks. |
format |
text |
author |
LI, Yanzhou LIU, Shangqing CHEN, Kangjie XIE, Xiaofei ZHANG, Tianwei LIU, Yang |
author_facet |
LI, Yanzhou LIU, Shangqing CHEN, Kangjie XIE, Xiaofei ZHANG, Tianwei LIU, Yang |
author_sort |
LI, Yanzhou |
title |
Multi-target backdoor attacks for code pre-trained models |
title_short |
Multi-target backdoor attacks for code pre-trained models |
title_full |
Multi-target backdoor attacks for code pre-trained models |
title_fullStr |
Multi-target backdoor attacks for code pre-trained models |
title_full_unstemmed |
Multi-target backdoor attacks for code pre-trained models |
title_sort |
multi-target backdoor attacks for code pre-trained models |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2023 |
url |
https://ink.library.smu.edu.sg/sis_research/8238 https://ink.library.smu.edu.sg/context/sis_research/article/9241/viewcontent/Continual_normalization_Rethinking_batch_normalization_for_online_continual_learning.pdf |
_version_ |
1781793970672631808 |