LLM-based fuzz driver generation

Bibliographic Details
Main Author: Chai, Wen Xuan
Other Authors: Liu Yang
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175404
Institution: Nanyang Technological University

Summary: As software complexity continues to grow, traditional fuzzing methodologies struggle to discover vulnerabilities efficiently, underscoring the need for new approaches. This thesis investigates the integration of Large Language Models (LLMs) into fuzz driver generation, aiming to improve the efficiency and effectiveness of fuzz testing in software development. Fuzz testing, a critical technique for identifying vulnerabilities in software applications, traditionally requires extensive manual effort to write fuzz drivers capable of automating the discovery of bugs and security flaws. The advent of LLMs, exemplified by models such as GPT-3 and GPT-4, offers a promising avenue for automating and optimizing this process. Through a comprehensive exploration, this research examines the potential of LLMs to revolutionize fuzz driver generation and thereby significantly improve software testing methodologies. The study begins with an in-depth analysis of current fuzzing practices and the role of LLMs in code generation, laying the groundwork for understanding the significance of integrating these advanced models into fuzz testing. It then proceeds to a systematic examination of LLM-based fuzz driver generation, including the formulation of methodologies, the evaluation of existing tools, and an analysis of the generated drivers' effectiveness compared with traditional methods. The challenges inherent in leveraging LLMs for this purpose are also critically assessed, with a focus on model specificity, evaluation metrics, and integration with existing fuzzing frameworks. Empirical findings from this research indicate that LLMs can substantially increase code coverage and identify vulnerabilities more efficiently, albeit with notable challenges in the accuracy and relevance of the generated code. The thesis concludes with a discussion of future directions, highlighting the need for advances in LLM technology, methodological innovation, and the potential for LLMs to redefine software security practices. This research contributes to the field by bridging theoretical AI advancements with practical applications in software testing, offering insights into the future of automated and efficient fuzz testing methodologies.
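
For context on the artifact the thesis concerns: a fuzz driver is a small harness that feeds fuzzer-generated inputs into a library's API, and writing it is the manual step the thesis seeks to automate with LLMs. Below is a minimal libFuzzer-style sketch in C; LLVMFuzzerTestOneInput is libFuzzer's real entry point, while the target function parse_config is a hypothetical placeholder standing in for whatever library API an LLM would wire up, not code from the thesis itself.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical library function under test; in LLM-based generation,
     * the model chooses which API calls to make and how to pass the
     * fuzzer's bytes into them. */
    extern int parse_config(const uint8_t *buf, size_t len);

    /* libFuzzer entry point: invoked repeatedly with mutated inputs. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        if (size == 0)
            return 0;              /* skip empty inputs */
        parse_config(data, size);  /* feed the fuzz input to the target API */
        return 0;                  /* returning 0 tells the fuzzer to continue */
    }

A driver like this would typically be built with clang -fsanitize=fuzzer,address and linked against the library under test; in the LLM-based workflows the thesis surveys, drivers that fail to compile or that misuse the API are the main source of the accuracy and relevance challenges noted in the summary.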