FDI : Attack neural code generation systems through user feedback channel

Neural code generation systems have recently attracted increasing attention to improve developer productivity and speed up software development. Typically, these systems maintain a pre-trained neural model and make it available to general users as a service (e.g., through remote APIs) and incorporat...

Full description

Saved in:
Bibliographic Details
Main Authors: SUN, Zhensu, DU, Xiaoning, LUO, Xiapu, SONG, Fu, LO, David, LI, Li
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9885
https://ink.library.smu.edu.sg/context/sis_research/article/10885/viewcontent/2408.04194v1.pdf
Tags: Add Tag
No Tags, Be the first to tag this record!
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10885
record_format dspace
spelling sg-smu-ink.sis_research-108852025-01-02T09:10:50Z FDI : Attack neural code generation systems through user feedback channel SUN, Zhensu DU, Xiaoning LUO, Xiapu SONG, Fu LO, David LI, Li Neural code generation systems have recently attracted increasing attention to improve developer productivity and speed up software development. Typically, these systems maintain a pre-trained neural model and make it available to general users as a service (e.g., through remote APIs) and incorporate a feedback mechanism to extensively collect and utilize the users' reaction to the generated code, i.e., user feedback. However, the security implications of such feedback have not yet been explored. With a systematic study of current feedback mechanisms, we find that feedback makes these systems vulnerable to feedback data injection (FDI) attacks. We discuss the methodology of FDI attacks and present a pre-attack profiling strategy to infer the attack constraints of a targeted system in the black-box setting. We demonstrate two proof-of-concept examples utilizing the FDI attack surface to implement prompt injection attacks and backdoor attacks on practical neural code generation systems. The attacker may stealthily manipulate a neural code generation system to generate code with vulnerabilities, attack payload, and malicious and spam messages. Our findings reveal the security implications of feedback mechanisms in neural code generation systems, paving the way for increasing their security. 2024-09-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9885 info:doi/10.1145/3650212.3680300 https://ink.library.smu.edu.sg/context/sis_research/article/10885/viewcontent/2408.04194v1.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Code generation Data poisoning User feedback Security and privacy Feedback data injection Artificial Intelligence and Robotics Information Security
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Code generation
Data poisoning
User feedback
Security and privacy
Feedback data injection
Artificial Intelligence and Robotics
Information Security
spellingShingle Code generation
Data poisoning
User feedback
Security and privacy
Feedback data injection
Artificial Intelligence and Robotics
Information Security
SUN, Zhensu
DU, Xiaoning
LUO, Xiapu
SONG, Fu
LO, David
LI, Li
FDI : Attack neural code generation systems through user feedback channel
description Neural code generation systems have recently attracted increasing attention to improve developer productivity and speed up software development. Typically, these systems maintain a pre-trained neural model and make it available to general users as a service (e.g., through remote APIs) and incorporate a feedback mechanism to extensively collect and utilize the users' reaction to the generated code, i.e., user feedback. However, the security implications of such feedback have not yet been explored. With a systematic study of current feedback mechanisms, we find that feedback makes these systems vulnerable to feedback data injection (FDI) attacks. We discuss the methodology of FDI attacks and present a pre-attack profiling strategy to infer the attack constraints of a targeted system in the black-box setting. We demonstrate two proof-of-concept examples utilizing the FDI attack surface to implement prompt injection attacks and backdoor attacks on practical neural code generation systems. The attacker may stealthily manipulate a neural code generation system to generate code with vulnerabilities, attack payload, and malicious and spam messages. Our findings reveal the security implications of feedback mechanisms in neural code generation systems, paving the way for increasing their security.
format text
author SUN, Zhensu
DU, Xiaoning
LUO, Xiapu
SONG, Fu
LO, David
LI, Li
author_facet SUN, Zhensu
DU, Xiaoning
LUO, Xiapu
SONG, Fu
LO, David
LI, Li
author_sort SUN, Zhensu
title FDI : Attack neural code generation systems through user feedback channel
title_short FDI : Attack neural code generation systems through user feedback channel
title_full FDI : Attack neural code generation systems through user feedback channel
title_fullStr FDI : Attack neural code generation systems through user feedback channel
title_full_unstemmed FDI : Attack neural code generation systems through user feedback channel
title_sort fdi : attack neural code generation systems through user feedback channel
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9885
https://ink.library.smu.edu.sg/context/sis_research/article/10885/viewcontent/2408.04194v1.pdf
_version_ 1821237274027753472