Self-supervised contrastive learning for code retrieval and summarization via semantic-preserving transformations

We propose Corder, a self-supervised contrastive learning framework for source code models. Corder is designed to alleviate the need for labeled data in code retrieval and code summarization tasks. The pre-trained Corder model can be used in two ways: (1) it can produce vector representations of code, which can be applied to code retrieval tasks that have no labeled data; (2) it can be fine-tuned for tasks that still require labeled data, such as code summarization. The key innovation is that we train the source code model to recognize similar and dissimilar code snippets through a contrastive learning objective. To do so, we use a set of semantic-preserving transformation operators to generate code snippets that are syntactically diverse but semantically equivalent. Through extensive experiments, we show that code models pre-trained by Corder substantially outperform other baselines on code-to-code retrieval, text-to-code retrieval, and code-to-text summarization tasks.
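
As an illustration of what a semantic-preserving transformation operator can look like, the sketch below renames locally assigned variables in a Python snippet using the standard ast module. This is a simplified, hedged example for intuition only: Corder's actual operators, their target languages, and their handling of scoping are described in the paper and may differ.

```python
import ast  # requires Python 3.9+ for ast.unparse

def rename_local_variables(source: str) -> str:
    """Return a syntactically different but semantically equivalent view of
    `source` by renaming variables that the snippet itself assigns.

    Illustrative sketch only: it ignores scoping subtleties (closures,
    globals, attributes) that a production transformation must handle.
    """
    tree = ast.parse(source)

    # Pass 1: collect names that appear as assignment targets in the snippet.
    assigned = {
        node.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store)
    }
    mapping = {name: f"v{i}" for i, name in enumerate(sorted(assigned))}

    # Pass 2: rewrite every load and store of those names consistently.
    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node: ast.Name) -> ast.Name:
            if node.id in mapping:
                node.id = mapping[node.id]
            return node

    return ast.unparse(Renamer().visit(tree))

# The original and transformed snippets form a positive pair for contrastive pre-training.
print(rename_local_variables("total = 0\nfor item in items:\n    total += item"))
# -> v1 = 0 / for v0 in items: / v1 += v0   (the free variable `items` is left untouched)
```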


Bibliographic Details
Main Authors: BUI, Duy Quoc Nghi; YU, Yijun; JIANG, Lingxiao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2021
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/6719
https://ink.library.smu.edu.sg/context/sis_research/article/7722/viewcontent/sigir21corder.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-7722
record_format dspace
spelling sg-smu-ink.sis_research-7722 2022-01-27T11:13:58Z
Self-supervised contrastive learning for code retrieval and summarization via semantic-preserving transformations
BUI, Duy Quoc Nghi; YU, Yijun; JIANG, Lingxiao
2021-07-01T07:00:00Z text application/pdf
https://ink.library.smu.edu.sg/sis_research/6719
info:doi/10.1145/3404835.3462840
https://ink.library.smu.edu.sg/context/sis_research/article/7722/viewcontent/sigir21corder.pdf
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems
eng
Institutional Knowledge at Singapore Management University
Software and its engineering; Software libraries and repositories; Information systems; Information retrieval; Software Engineering
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Software and its engineering
Software libraries and repositories
Information systems
Information retrieval
Software Engineering
description We propose Corder, a self-supervised contrastive learning framework for source code models. Corder is designed to alleviate the need for labeled data in code retrieval and code summarization tasks. The pre-trained Corder model can be used in two ways: (1) it can produce vector representations of code, which can be applied to code retrieval tasks that have no labeled data; (2) it can be fine-tuned for tasks that still require labeled data, such as code summarization. The key innovation is that we train the source code model to recognize similar and dissimilar code snippets through a contrastive learning objective. To do so, we use a set of semantic-preserving transformation operators to generate code snippets that are syntactically diverse but semantically equivalent. Through extensive experiments, we show that code models pre-trained by Corder substantially outperform other baselines on code-to-code retrieval, text-to-code retrieval, and code-to-text summarization tasks.
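
To make the pre-training objective concrete, here is a minimal, hedged sketch of an NT-Xent-style contrastive loss with in-batch negatives, written in PyTorch. The encoder that maps code snippets to vectors, the batch size, embedding dimension, and temperature below are placeholders; the paper's exact encoder architectures and loss formulation may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_orig: torch.Tensor, z_trans: torch.Tensor,
                     temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent-style objective: each snippet embedding should be close to the
    embedding of its semantically equivalent transformed variant and far from
    the other snippets in the batch (in-batch negatives).

    z_orig, z_trans: (batch, dim) embeddings of original / transformed code.
    Generic sketch; not necessarily Corder's exact formulation.
    """
    z_orig = F.normalize(z_orig, dim=1)
    z_trans = F.normalize(z_trans, dim=1)
    logits = z_orig @ z_trans.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(z_orig.size(0))             # positive pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

# Usage sketch: the random tensors stand in for the output of any code encoder
# applied to a batch of original snippets and their transformed variants.
batch_orig = torch.randn(8, 128, requires_grad=True)
batch_trans = torch.randn(8, 128, requires_grad=True)
loss = contrastive_loss(batch_orig, batch_trans)
loss.backward()
```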
format text
author BUI, Duy Quoc Nghi
YU, Yijun
JIANG, Lingxiao
title Self-supervised contrastive learning for code retrieval and summarization via semantic-preserving transformations
publisher Institutional Knowledge at Singapore Management University
publishDate 2021
url https://ink.library.smu.edu.sg/sis_research/6719
https://ink.library.smu.edu.sg/context/sis_research/article/7722/viewcontent/sigir21corder.pdf
_version_ 1770576053573517312