An LLM-assisted easy-to-trigger poisoning attack on code completion models: Injecting disguised vulnerabilities against strong detection
Large Language Models (LLMs) have transformed code completion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can covertly alter the model outputs. To a...
Saved in:
Main Authors: | YAN, Shenao, WANG, Shen, DUAN, Yue, HONG, Hanbin, LEE, Kiho, KIM, Doowon, HONG, Yuan |
---|---|
Format: | text |
Language: | English |
Published: |
Institutional Knowledge at Singapore Management University
2024
|
Subjects: | |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8962 https://ink.library.smu.edu.sg/context/sis_research/article/9965/viewcontent/2406.06822v1.pdf |
Tags: |
Add Tag
No Tags, Be the first to tag this record!
|
Institution: | Singapore Management University |
Language: | English |
Similar Items
-
AdvSCanner : Generating adversarial smart contracts to exploit reentrancy vulnerabilities using LLM and static analysis
by: WU, Yin, et al.
Published: (2024) -
Environment poisoning in reinforcement learning: attacks and resilience
by: Xu, Hang
Published: (2023) -
Security enhancements to prevent DNS cache poisoning attacks
by: Chong, Soon Seng
Published: (2018) -
Vulnerability analysis on noise-injection based hardware attack on deep neural networks
by: Liu, Wenye, et al.
Published: (2020) -
LLM-based fuzz driver generation
by: Chai, Wen Xuan
Published: (2024)