Natural attack for pre-trained models of code
Pre-trained models of code have achieved success in many important software engineering tasks. However, these powerful models are vulnerable to adversarial attacks that slightly perturb model inputs to make a victim model produce wrong outputs. Current works mainly attack models of code with example...
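The attack setting sketched in the abstract can be illustrated concretely: an attacker applies a semantics-preserving edit to the input program (here, whole-word identifier renaming) and keeps any variant that flips the victim model's prediction. The sketch below is an illustration of that general idea only, not the paper's actual method; `victim_predict`, the candidate substitute names, and the greedy search loop are all hypothetical placeholders chosen for demonstration.

```python
# Minimal sketch of an identifier-renaming adversarial attack on a code model.
# Illustrative only: `victim_predict` and the candidate lists are hypothetical
# stand-ins, not the paper's attack algorithm.
import re


def rename_identifier(code: str, old: str, new: str) -> str:
    """Replace whole-word occurrences of `old` with `new`.

    Renaming a variable consistently preserves the program's semantics
    while changing the tokens the victim model sees.
    """
    return re.sub(rf"\b{re.escape(old)}\b", new, code)


def greedy_attack(code, identifiers, candidates, victim_predict):
    """Try semantics-preserving renamings until the victim's prediction flips."""
    original_label = victim_predict(code)
    for ident in identifiers:
        for cand in candidates:
            perturbed = rename_identifier(code, ident, cand)
            if victim_predict(perturbed) != original_label:
                return perturbed  # adversarial example found
    return None  # attack failed within this search budget


if __name__ == "__main__":
    snippet = "def add(total, delta):\n    return total + delta"
    # Hypothetical victim classifier: pretend it keys on the name "total".
    victim = lambda c: "buggy" if "total" in c else "clean"
    adv = greedy_attack(snippet, ["total", "delta"], ["acc", "value"], victim)
    print(adv)
```

Any single renaming is enough to demonstrate the vulnerability; the paper's contribution concerns making such substitutes look natural to human judges, which this toy greedy search does not attempt.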
Main Authors: YANG, Zhou; SHI, Jieke; HE, Junda; LO, David
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Online Access: https://ink.library.smu.edu.sg/sis_research/7654 https://ink.library.smu.edu.sg/context/sis_research/article/8657/viewcontent/Natural.pdf
Institution: Singapore Management University
Similar Items
- Stealthy backdoor attack for code models
  by: YANG, Zhou, et al.
  Published: (2024)
- Compressing pre-trained models of code into 3 MB
  by: SHI, Jieke, et al.
  Published: (2022)
- SeqAdver: Automatic payload construction and injection in sequence-based Android adversarial attack
  by: ZHANG, Fei, et al.
  Published: (2023)
- Retrieval based code summarisation using code pre-trained models
  by: Gupta, Sahaj
  Published: (2024)
- Curiosity-driven and victim-aware adversarial policies
  by: GONG, Chen, et al.
  Published: (2022)