An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection

Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can covertly alter the model outputs. To a...

Full description

Saved in:
Bibliographic Details
Main Authors: YAN, Shenao, WANG, Shen, DUAN, Yue, HONG, Hanbin, LEE, Kiho, KIM, Doowon, HONG, Yuan
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8962
https://ink.library.smu.edu.sg/context/sis_research/article/9965/viewcontent/2406.06822v1.pdf
Tags: Add Tag
No Tags, Be the first to tag this record!
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-9965
record_format dspace
spelling sg-smu-ink.sis_research-99652024-07-04T07:04:31Z An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection YAN, Shenao WANG, Shen DUAN, Yue HONG, Hanbin LEE, Kiho KIM, Doowon HONG, Yuan Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can covertly alter the model outputs. To address this critical security challenge, we introduce CODEBREAKER, a pioneering LLM-assisted backdoor attack framework on code completion models. Unlike recent attacks that embed malicious payloads in detectable or irrelevant sections of the code (e.g., comments), CODEBREAKER leverages LLMs (e.g., GPT-4) for sophisticated payload transformation (without affecting functionalities), ensuring that both the poisoned data for fine-tuning and generated code can evade strong vulnerability detection. CODEBREAKER stands out with its comprehensive coverage of vulnerabilities, making it the first to provide such an extensive set for evaluation. Our extensive experimental evaluations and user studies underline the strong attack performance of CODEBREAKER across various settings, validating its superiority over existing approaches. By integrating malicious payloads directly into the source code with minimal transformation, CODEBREAKER challenges current security measures, underscoring the critical need for more robust defenses for code completion. 2024-08-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/8962 https://ink.library.smu.edu.sg/context/sis_research/article/9965/viewcontent/2406.06822v1.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Information Security
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Information Security
spellingShingle Information Security
YAN, Shenao
WANG, Shen
DUAN, Yue
HONG, Hanbin
LEE, Kiho
KIM, Doowon
HONG, Yuan
An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection
description Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can covertly alter the model outputs. To address this critical security challenge, we introduce CODEBREAKER, a pioneering LLM-assisted backdoor attack framework on code completion models. Unlike recent attacks that embed malicious payloads in detectable or irrelevant sections of the code (e.g., comments), CODEBREAKER leverages LLMs (e.g., GPT-4) for sophisticated payload transformation (without affecting functionalities), ensuring that both the poisoned data for fine-tuning and generated code can evade strong vulnerability detection. CODEBREAKER stands out with its comprehensive coverage of vulnerabilities, making it the first to provide such an extensive set for evaluation. Our extensive experimental evaluations and user studies underline the strong attack performance of CODEBREAKER across various settings, validating its superiority over existing approaches. By integrating malicious payloads directly into the source code with minimal transformation, CODEBREAKER challenges current security measures, underscoring the critical need for more robust defenses for code completion.
format text
author YAN, Shenao
WANG, Shen
DUAN, Yue
HONG, Hanbin
LEE, Kiho
KIM, Doowon
HONG, Yuan
author_facet YAN, Shenao
WANG, Shen
DUAN, Yue
HONG, Hanbin
LEE, Kiho
KIM, Doowon
HONG, Yuan
author_sort YAN, Shenao
title An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection
title_short An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection
title_full An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection
title_fullStr An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection
title_full_unstemmed An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection
title_sort llm-assisted easy-to-trigger poisoning attack on code completion models: injecting disguised vulnerabilities against strong detection
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/8962
https://ink.library.smu.edu.sg/context/sis_research/article/9965/viewcontent/2406.06822v1.pdf
_version_ 1814047659129831424