Evaluating Fuzz Testing. George T. Klees, Andrew Ruef, Benjamin Cooper, Shiyi Wei, and Michael Hicks. In Proceedings of the ACM Conference on Computer and Communications Security (CCS), October 2018. Winner of the 7th NSA Best Scientific Cybersecurity Paper competition.

Fuzz testing has enjoyed great success at discovering security-critical bugs in real software. Recently, researchers have devoted significant effort to devising new fuzzing techniques, strategies, and algorithms. Such new ideas are primarily evaluated experimentally, so an important question is: What experimental setup is needed to produce trustworthy results? We surveyed the recent research literature and assessed the experimental evaluations carried out by 32 fuzzing papers. We found problems in every evaluation we considered. We then performed our own extensive experimental evaluation using an existing fuzzer. Our results showed that the general problems we found in existing experimental evaluations can indeed translate to actual wrong or misleading assessments. We conclude with some guidelines that we hope will help improve experimental evaluations of fuzz testing algorithms, making reported results more robust.
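Among the paper's guidelines: run many independent trials of each fuzzer under identical conditions and compare the resulting outcome distributions with a non-parametric statistical test such as Mann-Whitney U, rather than comparing single runs or raw means. A minimal Python sketch of such a comparison follows; the crash counts are made-up illustrative data, not results from the paper.

import statistics
from scipy.stats import mannwhitneyu

# Hypothetical data: unique crashes found in 10 independent 24-hour
# trials of a baseline fuzzer (A) and a proposed variant (B).
crashes_a = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
crashes_b = [14, 18, 13, 17, 15, 19, 14, 16, 17, 15]

# Mann-Whitney U makes no normality assumption, which suits the
# skewed, high-variance distributions typical of fuzzing outcomes.
stat, p = mannwhitneyu(crashes_a, crashes_b, alternative="two-sided")

print(f"median A = {statistics.median(crashes_a)}, "
      f"median B = {statistics.median(crashes_b)}")
print(f"Mann-Whitney U = {stat}, p = {p:.4f}")
if p < 0.05:
    print("Difference is statistically significant at alpha = 0.05")
else:
    print("No significant difference detected")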


@inproceedings{klees2018fuzzeval,
  author = {George T. Klees and Andrew Ruef and Benjamin Cooper and Shiyi Wei and Michael Hicks},
  title = {Evaluating Fuzz Testing},
  booktitle = {Proceedings of the {ACM} Conference on Computer and Communications Security (CCS)},
  year = {2018},
  month = oct,
  note = {Winner of the 7th NSA \textbf{Best Scientific Cybersecurity Paper} competition}
}
