Assessing AI detectors in identifying AI-generated code: Implications for education

Educators are increasingly concerned about the use of Large Language Models (LLMs) such as ChatGPT in programming education, particularly the potential for students to exploit imperfections in Artificial Intelligence Generated Content (AIGC) detectors for academic misconduct. In this paper, we present an empirical study in which an LLM is prompted to bypass detection by AIGC detectors, by generating code for a given question using different prompt variants. We collected a dataset of 5,069 samples, each consisting of a textual description of a coding problem and its corresponding human-written Python solution code. These samples were obtained from several sources: 80 from Quescol, 3,264 from Kaggle, and 1,725 from LeetCode. From this dataset, we created 13 sets of code-problem variant prompts, which we used to instruct ChatGPT to generate code. We then assessed the performance of five AIGC detectors. Our results show that existing AIGC detectors perform poorly at distinguishing human-written code from AI-generated code.
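The abstract describes an evaluation pipeline: labeled human-written and AI-generated code samples are fed to several detectors, whose predictions are then scored. Purely as illustration, and not code from the paper, a minimal sketch of such an evaluation loop is shown below; `Sample`, `placeholder_detector`, and the toy inputs are hypothetical stand-ins for the paper's dataset and the five external detectors.

from dataclasses import dataclass

@dataclass
class Sample:
    code: str
    is_ai_generated: bool  # ground-truth label for this code sample

def placeholder_detector(code: str) -> bool:
    """Hypothetical stand-in for an AIGC detector. Flags code as
    AI-generated if it contains no comments (not a real signal)."""
    return "#" not in code

def evaluate(detector, samples):
    """Score one detector on labeled samples, as one might when
    comparing several detectors on the same dataset."""
    tp = tn = fp = fn = 0
    for s in samples:
        pred = detector(s.code)
        if pred and s.is_ai_generated:
            tp += 1
        elif not pred and not s.is_ai_generated:
            tn += 1
        elif pred:
            fp += 1  # human code wrongly flagged as AI-generated
        else:
            fn += 1  # AI-generated code that passed as human
    return {
        "accuracy": (tp + tn) / len(samples),
        "false_positive_rate": fp / max(fp + tn, 1),
        "false_negative_rate": fn / max(fn + tp, 1),
    }

if __name__ == "__main__":
    samples = [
        Sample("def add(a, b):\n    # sum two ints\n    return a + b", False),
        Sample("def add(a, b):\n    return a + b", True),
    ]
    print(evaluate(placeholder_detector, samples))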


Bibliographic Details
Main Authors: PAN, Wei Hung, CHOK, Ming Jie, WONG, Jonathan Leong Shan, SHIN, Yung Xin, POON, Yeong Shian, YANG, Zhou, CHONG, Chun Yong, LO, David, LIM, Mei Kuan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Software Engineering Education; AI-Generated Code; AI-Generated Code Detection; Artificial Intelligence and Robotics; Software Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/9244
https://ink.library.smu.edu.sg/context/sis_research/article/10244/viewcontent/3639474.3640068.pdf
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Date: 2024-04-01
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)