Unveiling code pre-trained models: Investigating syntax and semantics capacities

Code models have made significant advancements in code intelligence by encoding knowledge about programming languages. While previous studies have explored the capabilities of these models in learning code syntax, there has been limited investigation into their ability to understand code semantics. Additionally, existing analyses assume that the number of edges between nodes in the abstract syntax tree (AST) reflects syntax distance, and they often require transforming the high-dimensional representation space of deep learning models into a low-dimensional one, which may introduce inaccuracies. To study how code models represent code syntax and semantics, we conduct a comprehensive analysis of seven code models, including four representative code pre-trained models (CodeBERT, GraphCodeBERT, CodeT5, and UnixCoder) and three large language models (StarCoder, CodeLlama, and CodeT5+). We design four probing tasks to assess the models' capacities in learning both code syntax and semantics. These probing tasks reconstruct code syntax and semantics structures (AST, CDG, DDG, and CFG) in the representation space; these structures are core concepts for code understanding. We also investigate the syntactic role encoded in each token representation and the long-range dependencies between code tokens, and we analyze the distribution of attention weights related to code semantic structures. Through extensive analysis, our findings highlight the strengths and limitations of different code models in learning code syntax and semantics. The results demonstrate that these models excel in learning code syntax, successfully capturing both the syntactic relationships between tokens and the syntactic roles of individual tokens. However, their performance in encoding code semantics varies: CodeT5 and CodeBERT demonstrate proficiency in capturing control and data dependencies, while UnixCoder shows weaker performance in this aspect. We do not observe LLMs generally performing much better than the pre-trained models, and the shallow layers of LLMs perform better than their deep layers. The investigation of attention weights reveals that different attention heads play distinct roles in encoding code semantics. Our findings emphasize the need for further enhancements in code models to better learn code semantics, contribute to the understanding of code models' abilities in syntax and semantics analysis, and provide guidance for future improvements, facilitating their effective application in various code-related tasks.
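The probing methodology summarized in the abstract attaches lightweight classifiers to frozen token representations and checks whether structural relations (AST, CFG, DDG, and CDG edges) can be recovered from them. The sketch below is an illustrative approximation of such a pairwise edge probe, not the authors' exact setup: the encoder name (microsoft/codebert-base), the token-pair indices, and the edge labels are placeholder assumptions, and in practice the labels would come from a parser or dependence analysis.

```python
# Illustrative sketch of a pairwise structural probe (assumptions noted above):
# a frozen code encoder produces token embeddings, and a small linear classifier
# is trained to predict whether two tokens are linked by an edge in a code
# structure graph (AST / CFG / DDG / CDG). Edge labels here are placeholders.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "microsoft/codebert-base"   # any encoder from the study could be substituted
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()                           # the encoder stays frozen; only the probe is trained


class EdgeProbe(nn.Module):
    """Linear probe: concatenates two token vectors and predicts edge / no edge."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.classifier = nn.Linear(2 * hidden_size, 2)

    def forward(self, h_i: torch.Tensor, h_j: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.cat([h_i, h_j], dim=-1))


code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state[0]   # (seq_len, hidden_size)

# Hypothetical supervision: (token_i, token_j, label) triples, where label 1 means
# the chosen structure has an edge between the two tokens (e.g., a data dependency).
pairs = [(2, 9, 1), (2, 4, 0)]

probe = EdgeProbe(hidden.size(-1))
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for i, j, label in pairs:
    logits = probe(hidden[i], hidden[j]).unsqueeze(0)  # shape (1, 2)
    loss = loss_fn(logits, torch.tensor([label]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The probe's accuracy on held-out token pairs is then read as a measure of how much of the structure is linearly recoverable from the frozen representations.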


Bibliographic Details
Main Authors: MA, Wei, LIU, Shangqing, ZHAO, Mengjie, XIE, Xiaofei, WANG, Wenhang, HU, Qiang, ZHANG, Jie, YANG, Liu
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Code Model Analysis; Syntax and Semantic Encoding; Programming Languages and Compilers; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/9092
https://ink.library.smu.edu.sg/context/sis_research/article/10095/viewcontent/3664606.pdf
DOI: 10.1145/3664606
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems
Institution: Singapore Management University