Augmenting Greybox Fuzzing with Generative AI

Jie Hu, Qian Zhang, Heng Yin

arXiv.org Artificial Intelligence

In recent years, fuzz testing has emerged as an effective technique for testing software systems. For example, fuzz testing has been remarkably successful in uncovering critical security bugs in applications such as the Chrome web browser [1] and the SQLite database [11]. Generally, fuzz testing runs a program on seed inputs, mutates previous inputs to improve a given guidance metric such as branch coverage, and repeats this cycle of input mutation and target program execution. During the fuzzing process, we execute the target program on a large number of generated test cases and monitor its runtime behavior to find vulnerabilities. To that end, it is essential to generate test cases that cover a wide range of execution paths and program behaviors. This comprehensive coverage enables thorough exploration of the program's functionality and helps uncover potential vulnerabilities. The simplicity of fuzzing has made it a de facto testing procedure for large-scale software systems; however, its effectiveness rests on an inherent yet often overlooked assumption: that a set of arbitrary input mutations is likely to yield meaningful inputs. In fact, our extensive experience suggests that this assumption often does not hold for software systems that take highly structured data as inputs.
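The mutate-execute-keep cycle described above can be sketched in a few lines. The following is a minimal, illustrative greybox fuzzing loop, not the authors' implementation: the `target` function is a hypothetical stand-in for an instrumented program that reports which branches an input exercised, and `mutate` performs the kind of arbitrary byte-level mutation the paragraph refers to. Notice that the nested "magic bytes" check inside `target` is exactly the structured-input situation where random mutation struggles, which is the gap the paper's abstract points at.

```python
import random

def target(data: bytes) -> set:
    # Hypothetical stand-in for an instrumented program under test:
    # returns the set of branch IDs this input exercised.
    branches = set()
    if len(data) > 0:
        branches.add("nonempty")
        if data[0] == ord("F"):
            branches.add("magic_F")
            # Deeply nested structured check: hard to hit by random flips.
            if len(data) > 3 and data[1:4] == b"UZZ":
                branches.add("magic_FUZZ")
    return branches

def mutate(data: bytes) -> bytes:
    # Arbitrary input mutation: overwrite one random byte.
    if not data:
        return bytes([random.randrange(256)])
    buf = bytearray(data)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seeds, iterations=5000):
    # Coverage-guided loop: an input is kept in the queue only if it
    # reached a branch not covered by any earlier input.
    queue = list(seeds)
    covered = set()
    for seed in queue:
        covered |= target(seed)
    for _ in range(iterations):
        parent = random.choice(queue)
        child = mutate(parent)
        new_branches = target(child) - covered
        if new_branches:
            covered |= new_branches
            queue.append(child)
    return covered, queue

random.seed(0)  # deterministic run for illustration
coverage, queue = fuzz([b"AAAA"])
print(sorted(coverage))
```

Running this, the shallow `nonempty` branch is covered immediately by the seed, while the multi-byte `magic_FUZZ` condition typically stays unreached: single-byte mutations that get one magic byte right earn no new coverage and are discarded, so the fuzzer has no gradient toward the structured input. This is the failure mode that motivates augmenting mutation with a generative model.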
