Esale: enhancing code-summary alignment learning for source code summarization
(Source) code summarization aims to automatically generate succinct natural language summaries for given code snippets. Such summaries play a significant role in helping developers understand and maintain code. Inspired by neural machine translation, deep learning-based code summarization techniques widely adopt an encoder-decoder framework...
Main Authors: | Fang, Chunrong; Sun, Weisong; Chen, Xiao; Chen X.; Wei, Zhao; Zhang, Quanjun; You, Yudu; Luo, Bin; Liu, Yang; Chen, Zhenyu |
---|---|
Other Authors: | College of Computing and Data Science |
Format: | Article |
Language: | English |
Published: | 2024 |
Subjects: | Computer and Information Science; Deep learning; Source code summarization |
Online Access: | https://hdl.handle.net/10356/180631 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-180631 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-1806312024-10-15T07:23:07Z Esale: enhancing code-summary alignment learning for source code summarization Fang, Chunrong Sun, Weisong Chen, Xiao Chen X. Wei, Zhao Zhang, Quanjun You, Yudu Luo, Bin Liu, Yang Chen, Zhenyu College of Computing and Data Science Computer and Information Science Deep learning Source code summarization (Source) code summarization aims to automatically generate succinct natural language summaries for given code snippets. Such summaries play a significant role in helping developers understand and maintain code. Inspired by neural machine translation, deep learning-based code summarization techniques widely adopt an encoder-decoder framework, where the encoder transforms given code snippets into context vectors and the decoder decodes context vectors into summaries. Recently, large-scale pre-trained models for source code (e.g., CodeBERT and UniXcoder) have been equipped with encoders capable of producing general context vectors and have achieved substantial improvements on the code summarization task. However, because they are trained mainly on code-focused tasks, they capture general code features but still fall short in capturing the specific features that need to be summarized. In a nutshell, they fail to learn the alignment between code snippets and summaries (code-summary alignment for short). In this paper, we propose a novel approach to improve code summarization based on summary-focused tasks. Specifically, we exploit a multi-task learning paradigm to train the encoder on three summary-focused tasks to enhance its ability to learn code-summary alignment: unidirectional language modeling (ULM), masked language modeling (MLM), and action word prediction (AWP). Unlike pre-trained models that mainly predict masked tokens in code snippets, we design ULM and MLM to predict masked words in summaries. Intuitively, predicting summary words based on given code snippets helps the encoder learn the code-summary alignment.
In addition, existing work shows that AWP affects the prediction of the entire summary. Therefore, we further introduce the domain-specific task AWP to enhance the encoder's ability to learn the alignment between action words and code snippets. We evaluate the effectiveness of our approach, called Esale, by conducting extensive experiments on four datasets, including two widely used datasets (JCSD and PCSD), a cross-project Java dataset (CPJD), and a multilingual dataset (CodeSearchNet). Experimental results show that Esale significantly outperforms state-of-the-art baselines on all three widely used metrics: BLEU, METEOR, and ROUGE-L. Moreover, a human evaluation shows that the summaries generated by Esale are more informative and closer to the ground-truth summaries. This work is partially supported by the National Natural Science Foundation of China (61932012, 62141215) and the Program B for Outstanding PhD Candidate of Nanjing University (202201B054). 2024-10-15T07:23:07Z 2024-10-15T07:23:07Z 2024 Journal Article Fang, C., Sun, W., Chen, X., Chen X., Wei, Z., Zhang, Q., You, Y., Luo, B., Liu, Y. & Chen, Z. (2024). Esale: enhancing code-summary alignment learning for source code summarization. IEEE Transactions on Software Engineering, 50(8), 2077-2095. https://dx.doi.org/10.1109/TSE.2024.3422274 0098-5589 https://hdl.handle.net/10356/180631 10.1109/TSE.2024.3422274 2-s2.0-85197533403 8 50 2077 2095 en IEEE Transactions on Software Engineering © 2024 IEEE. All rights reserved. |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Computer and Information Science Deep learning Source code summarization |
spellingShingle |
Computer and Information Science Deep learning Source code summarization Fang, Chunrong Sun, Weisong Chen, Xiao Chen X. Wei, Zhao Zhang, Quanjun You, Yudu Luo, Bin Liu, Yang Chen, Zhenyu Esale: enhancing code-summary alignment learning for source code summarization |
description |
(Source) code summarization aims to automatically generate succinct natural language summaries for given code snippets. Such summaries play a significant role in helping developers understand and maintain code. Inspired by neural machine translation, deep learning-based code summarization techniques widely adopt an encoder-decoder framework, where the encoder transforms given code snippets into context vectors and the decoder decodes context vectors into summaries. Recently, large-scale pre-trained models for source code (e.g., CodeBERT and UniXcoder) have been equipped with encoders capable of producing general context vectors and have achieved substantial improvements on the code summarization task. However, because they are trained mainly on code-focused tasks, they capture general code features but still fall short in capturing the specific features that need to be summarized. In a nutshell, they fail to learn the alignment between code snippets and summaries (code-summary alignment for short). In this paper, we propose a novel approach to improve code summarization based on summary-focused tasks. Specifically, we exploit a multi-task learning paradigm to train the encoder on three summary-focused tasks to enhance its ability to learn code-summary alignment: unidirectional language modeling (ULM), masked language modeling (MLM), and action word prediction (AWP). Unlike pre-trained models that mainly predict masked tokens in code snippets, we design ULM and MLM to predict masked words in summaries. Intuitively, predicting summary words based on given code snippets helps the encoder learn the code-summary alignment. In addition, existing work shows that AWP affects the prediction of the entire summary. Therefore, we further introduce the domain-specific task AWP to enhance the encoder's ability to learn the alignment between action words and code snippets.
We evaluate the effectiveness of our approach, called Esale, by conducting extensive experiments on four datasets, including two widely used datasets (JCSD and PCSD), a cross-project Java dataset (CPJD), and a multilingual dataset (CodeSearchNet). Experimental results show that Esale significantly outperforms state-of-the-art baselines on all three widely used metrics: BLEU, METEOR, and ROUGE-L. Moreover, a human evaluation shows that the summaries generated by Esale are more informative and closer to the ground-truth summaries. |
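The summary-focused objectives in the abstract can be illustrated with a small preprocessing sketch. This is a hedged, hypothetical illustration, not Esale's actual implementation: the function names, the `[MASK]`/`[SEP]` tokens, and the masking rate are illustrative assumptions. It shows the key idea that MLM-style masking is applied to the *summary* words (the code tokens stay intact, so the model must recover summary words from the code), and that the AWP label is the summary's leading action word.

```python
import random

MASK = "[MASK]"

def mask_summary_tokens(code_tokens, summary_tokens, mask_rate=0.15, rng=None):
    """Build an MLM-style instance: keep code tokens intact and replace a
    fraction of summary tokens with [MASK]. Returns the combined input
    sequence and a map from masked positions to the target words."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    masked, targets = [], {}
    for i, tok in enumerate(summary_tokens):
        if rng.random() < mask_rate:
            masked.append(MASK)
            targets[i] = tok  # the encoder must predict this word from the code
        else:
            masked.append(tok)
    return code_tokens + ["[SEP]"] + masked, targets

def action_word_label(summary_tokens):
    """AWP label: the first word of the summary, which is typically the
    action verb (e.g., 'returns', 'checks', 'converts')."""
    return summary_tokens[0].lower()

code = ["def", "is_empty", "(", "s", ")", ":", "return", "len", "(", "s", ")", "==", "0"]
summary = ["Checks", "whether", "the", "given", "string", "is", "empty"]

inputs, targets = mask_summary_tokens(code, summary, mask_rate=0.3)
print(action_word_label(summary))  # -> checks
```

In a real training setup these instances would feed a shared encoder whose losses for ULM, MLM, and AWP are combined in a multi-task objective; ULM differs from MLM only in that it predicts each summary word from the code plus the preceding summary words rather than from bidirectional context.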
author2 |
College of Computing and Data Science |
author_facet |
College of Computing and Data Science Fang, Chunrong Sun, Weisong Chen, Xiao Chen X. Wei, Zhao Zhang, Quanjun You, Yudu Luo, Bin Liu, Yang Chen, Zhenyu |
format |
Article |
author |
Fang, Chunrong Sun, Weisong Chen, Xiao Chen X. Wei, Zhao Zhang, Quanjun You, Yudu Luo, Bin Liu, Yang Chen, Zhenyu |
author_sort |
Fang, Chunrong |
title |
Esale: enhancing code-summary alignment learning for source code summarization |
title_short |
Esale: enhancing code-summary alignment learning for source code summarization |
title_full |
Esale: enhancing code-summary alignment learning for source code summarization |
title_fullStr |
Esale: enhancing code-summary alignment learning for source code summarization |
title_full_unstemmed |
Esale: enhancing code-summary alignment learning for source code summarization |
title_sort |
esale: enhancing code-summary alignment learning for source code summarization |
publishDate |
2024 |
url |
https://hdl.handle.net/10356/180631 |
_version_ |
1814777715231817728 |