How effective are they? Exploring large language model based fuzz driver generation

Bibliographic Details
Main Authors: ZHANG, Cen, ZHENG, Yaowen, BAI, Mingqiang, LI, Yeting, MA, Wei, XIE, Xiaofei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9508
https://ink.library.smu.edu.sg/context/sis_research/article/10508/viewcontent/How_Effective_Are_They__Exploring_Large_Language_Model_Based_Fuzz_Driver_Generation.pdf
Institution: Singapore Management University
Description
Summary: Fuzz drivers are essential for library API fuzzing. However, automatically generating fuzz drivers is a complex task, as it demands the creation of high-quality, correct, and robust API usage code. An LLM-based (Large Language Model) approach to generating fuzz drivers is a promising area of research. Unlike traditional program analysis-based generators, this text-based approach is more generalized and capable of harnessing a variety of API usage information, resulting in code that is readable by humans. However, there is still a lack of understanding of the fundamental issues in this direction, such as its effectiveness and potential challenges. To bridge this gap, we conducted the first in-depth study targeting the important issues of using LLMs to generate effective fuzz drivers. Our study features a curated dataset with 86 fuzz driver generation questions from 30 widely-used C projects. Six prompting strategies are designed and tested across five state-of-the-art LLMs with five different temperature settings. In total, our study evaluated 736,430 generated fuzz drivers, with 0.85 billion token costs ($8,000+ charged tokens). Additionally, we compared the LLM-generated drivers against those utilized in industry, conducting extensive fuzzing experiments (3.75 CPU-years). Our study uncovered that: 1) while LLM-based fuzz driver generation is a promising direction, it still encounters several obstacles on the way to practical application; 2) LLMs face difficulties in generating effective fuzz drivers for APIs with intricate specifics. Three featured design choices of prompting strategies can be beneficial: issuing repeat queries, querying with examples, and employing an iterative querying process; 3) while LLM-generated drivers can yield fuzzing outcomes on par with those used in industry, there are substantial opportunities for enhancement, such as extending the API usage they contain or integrating semantic oracles to facilitate logical bug detection.
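For context, a fuzz driver of the kind the study evaluates is a small piece of API usage code that routes fuzzer-supplied bytes into a library under test. The sketch below is a minimal, hypothetical libFuzzer-style driver; the `doc_parse`/`doc_free` API is an illustrative stand-in, not an API from the paper's dataset.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical library API under test -- a stand-in parse/free pair,
 * not taken from the paper's 30-project dataset. */
typedef struct { size_t len; } doc_t;

static doc_t *doc_parse(const uint8_t *data, size_t size) {
    static doc_t d;
    if (data == NULL || size == 0) return NULL;  /* reject empty input */
    d.len = size;
    return &d;
}

static void doc_free(doc_t *d) { (void)d; }

/* libFuzzer entry point: the whole "fuzz driver" is just correct,
 * robust API usage code fed by the fuzzer's byte buffer. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    doc_t *d = doc_parse(data, size);
    if (d != NULL) {
        doc_free(d);  /* release resources on every path to avoid leaks */
    }
    return 0;  /* non-crash verdict; libFuzzer supplies main() */
}
```

Generating code of this shape automatically is what makes the task hard: the driver must call the API in a valid order, handle failure paths, and free resources, which is exactly the "high-quality, correct, and robust API usage" the abstract refers to.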