Multi-target backdoor attacks for code pre-trained models

Backdoor attacks for neural code models have gained considerable attention due to the advancement of code intelligence. However, most existing works insert triggers into task-specific data for code-related downstream tasks, thereby limiting the scope of attacks. Moreover, the majority of attacks for...

Full description

Saved in:
Bibliographic Details
Main Authors: LI, Yanzhou, LIU, Shangqing, CHEN, Kangjie, XIE, Xiaofei, ZHANG, Tianwei, LIU, Yang
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8541
https://ink.library.smu.edu.sg/context/sis_research/article/9544/viewcontent/Multi_target_Backdoor_Attacks_for_Code_Pre_trained_Models.pdf
Tags: Add Tag
No Tags, Be the first to tag this record!
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-9544
record_format dspace
spelling sg-smu-ink.sis_research-95442024-01-22T14:54:02Z Multi-target backdoor attacks for code pre-trained models LI, Yanzhou LIU, Shangqing CHEN, Kangjie XIE, Xiaofei ZHANG, Tianwei LIU, Yang Backdoor attacks for neural code models have gained considerable attention due to the advancement of code intelligence. However, most existing works insert triggers into task-specific data for code-related downstream tasks, thereby limiting the scope of attacks. Moreover, the majority of attacks for pre-trained models are designed for understanding tasks. In this paper, we propose task-agnostic backdoor attacks for code pre-trained models. Our backdoored model is pre-trained with two learning strategies (i.e., Poisoned Seq2Seq learning and token representation learning) to support the multi-target attack of downstream code understanding and generation tasks. During the deployment phase, the implanted backdoors in the victim models can be activated by the designed triggers to achieve the targeted attack. We evaluate our approach on two code understanding tasks and three code generation tasks over seven datasets. Extensive experiments demonstrate that our approach can effectively and stealthily attack code-related downstream tasks. 2023-07-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8541 https://ink.library.smu.edu.sg/context/sis_research/article/9544/viewcontent/Multi_target_Backdoor_Attacks_for_Code_Pre_trained_Models.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Backdoors Code understanding Codegeneration Deployment phasis Down-stream Learning strategy Multi-targets Neural code Databases and Information Systems Information Security
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Backdoors
Code understanding
Codegeneration
Deployment phasis
Down-stream
Learning strategy
Multi-targets
Neural code
Databases and Information Systems
Information Security
spellingShingle Backdoors
Code understanding
Codegeneration
Deployment phasis
Down-stream
Learning strategy
Multi-targets
Neural code
Databases and Information Systems
Information Security
LI, Yanzhou
LIU, Shangqing
CHEN, Kangjie
XIE, Xiaofei
ZHANG, Tianwei
LIU, Yang
Multi-target backdoor attacks for code pre-trained models
description Backdoor attacks for neural code models have gained considerable attention due to the advancement of code intelligence. However, most existing works insert triggers into task-specific data for code-related downstream tasks, thereby limiting the scope of attacks. Moreover, the majority of attacks for pre-trained models are designed for understanding tasks. In this paper, we propose task-agnostic backdoor attacks for code pre-trained models. Our backdoored model is pre-trained with two learning strategies (i.e., Poisoned Seq2Seq learning and token representation learning) to support the multi-target attack of downstream code understanding and generation tasks. During the deployment phase, the implanted backdoors in the victim models can be activated by the designed triggers to achieve the targeted attack. We evaluate our approach on two code understanding tasks and three code generation tasks over seven datasets. Extensive experiments demonstrate that our approach can effectively and stealthily attack code-related downstream tasks.
format text
author LI, Yanzhou
LIU, Shangqing
CHEN, Kangjie
XIE, Xiaofei
ZHANG, Tianwei
LIU, Yang
author_facet LI, Yanzhou
LIU, Shangqing
CHEN, Kangjie
XIE, Xiaofei
ZHANG, Tianwei
LIU, Yang
author_sort LI, Yanzhou
title Multi-target backdoor attacks for code pre-trained models
title_short Multi-target backdoor attacks for code pre-trained models
title_full Multi-target backdoor attacks for code pre-trained models
title_fullStr Multi-target backdoor attacks for code pre-trained models
title_full_unstemmed Multi-target backdoor attacks for code pre-trained models
title_sort multi-target backdoor attacks for code pre-trained models
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/8541
https://ink.library.smu.edu.sg/context/sis_research/article/9544/viewcontent/Multi_target_Backdoor_Attacks_for_Code_Pre_trained_Models.pdf
_version_ 1789483261727080448