LLM-based fuzz driver generation

Bibliographic Details
Main Author: Chai, Wen Xuan
Other Authors: Liu, Yang
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Online Access: https://hdl.handle.net/10356/175404
Institution: Nanyang Technological University
Description
As software complexity continues to escalate, traditional fuzzing methodologies encounter limitations in efficiently discovering vulnerabilities, underscoring the need for new approaches. This thesis investigates the integration of Large Language Models (LLMs) into fuzz driver generation, aiming to improve the efficiency and effectiveness of fuzz testing in software development. Fuzz testing, a critical technique for identifying vulnerabilities in software applications, traditionally requires extensive manual effort to write fuzz drivers: the harness programs that feed fuzzer-generated inputs into a target API so that bugs and security flaws can be discovered automatically. The advent of LLMs, exemplified by models such as GPT-3 and GPT-4, offers a promising avenue for automating and optimizing this process.

The study begins with an in-depth analysis of the current state of fuzzing practice and of the role of LLMs in code generation, laying the groundwork for understanding the significance of integrating these models into fuzz testing. It then progresses to a systematic examination of LLM-based fuzz driver generation, including the formulation of methodologies, the evaluation of existing tools, and an analysis of the generated drivers' effectiveness compared with traditional methods. Challenges inherent in leveraging LLMs for this purpose are also critically assessed, with a focus on model specificity, evaluation metrics, and integration with existing fuzzing frameworks.

Empirical findings indicate that LLMs can substantially increase code coverage and identify vulnerabilities more efficiently, albeit with notable challenges in the accuracy and relevance of the generated code. The thesis concludes with a discussion of future directions, highlighting the need for advances in LLM technology and methodological innovation, as well as the potential for LLMs to redefine software security practices. This research contributes to the field by bridging theoretical AI advancements with practical applications in software testing, offering insight into the future of automated, efficient fuzz testing methodologies.
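To make concrete the artifact the abstract centers on, below is a minimal sketch of a libFuzzer-style fuzz driver in C, the kind of harness the thesis asks an LLM to generate. The target functions parse_config and free_config are hypothetical stand-ins for whatever library API is under test; they are assumed for illustration and are not from the thesis itself.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical library API under test (illustrative only):
     * parses a configuration blob, returning a handle or NULL on error. */
    typedef struct config config_t;
    extern config_t *parse_config(const uint8_t *data, size_t len);
    extern void free_config(config_t *cfg);

    /* libFuzzer entry point: the fuzzer calls this repeatedly with
     * mutated byte buffers; the driver's job is to route those bytes
     * into the target API and release any resources it acquires. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        config_t *cfg = parse_config(data, size);
        if (cfg != NULL) {
            free_config(cfg);
        }
        return 0; /* non-zero return values are reserved by libFuzzer */
    }

Such a harness is conventionally built with clang -fsanitize=fuzzer,address harness.c libtarget.a; writing and maintaining one per API entry point is the manual effort the thesis proposes to delegate to an LLM.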