Evaluating program analysis and testing tools with the RUGRAT random benchmark application generator

Ishtiaque Hussain, Christoph Csallner, Mark Grechanik, Chen Fu, Qing Xie, Sangmin Park, Kunal Taneja, B. M. Mainul Hossain

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Scopus citations

Abstract

Benchmarks are heavily used in different areas of computer science to evaluate algorithms and tools. In program analysis and testing, open-source and commercial programs are routinely used as benchmarks to evaluate different aspects of algorithms and tools. Unfortunately, many of these programs are written by programmers who introduce different biases, not to mention that it is very difficult to find programs that can serve as benchmarks with high reproducibility of results. We propose a novel approach for generating random benchmarks for evaluating program analysis and testing tools. Our approach uses stochastic parse trees, where language grammar production rules are assigned probabilities that specify the frequencies with which instantiations of these rules will appear in the generated programs. We implemented our tool for Java and applied it to generate benchmarks with which we evaluated different program analysis and testing tools. Our tool was also implemented by a major software company for C++ and used by a team of developers to generate benchmarks that enabled them to reproduce a bug in less than four hours.
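The core idea described in the abstract, assigning probabilities to grammar production rules so that generated programs instantiate them with controlled frequencies, can be sketched in miniature. The toy grammar, rule weights, and class names below are hypothetical illustrations, not the actual RUGRAT implementation:

```java
import java.util.Random;

// A minimal sketch of a stochastic parse tree: each production rule for
// the "Stmt" nonterminal carries a probability, and expansion picks a
// rule according to those weights. A depth cap guarantees termination by
// forcing the terminal rule at the bottom of the recursion.
public class StochasticGen {
    private static final Random RNG = new Random(42); // fixed seed for reproducibility

    // Hypothetical grammar: Stmt -> assignment (0.6)
    //                            | if-block containing a Stmt (0.3)
    //                            | while-block containing a Stmt (0.1)
    static String genStmt(int depth) {
        double r = RNG.nextDouble();
        if (depth <= 0 || r < 0.6) {
            return "x = x + 1;";                          // terminal rule
        } else if (r < 0.9) {
            return "if (c) { " + genStmt(depth - 1) + " }";
        } else {
            return "while (c) { " + genStmt(depth - 1) + " }";
        }
    }

    public static void main(String[] args) {
        System.out.println(genStmt(5));
    }
}
```

Because the seed is fixed, the same program is generated on every run, which is the property that makes such random benchmarks reproducible; adjusting the rule weights changes the structural mix (e.g., loop nesting) of the generated code.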

Original language: English (US)
Title of host publication: 10th International Workshop on Dynamic Analysis, WODA 2012 - Proceedings
Pages: 1-6
Number of pages: 6
DOIs
State: Published - Aug 28 2012
Event: 10th International Workshop on Dynamic Analysis, WODA 2012 - Minneapolis, MN, United States
Duration: Jul 15 2012 - Jul 15 2012

Publication series

Name: 10th International Workshop on Dynamic Analysis, WODA 2012 - Proceedings

Conference

Conference: 10th International Workshop on Dynamic Analysis, WODA 2012
Country/Territory: United States
City: Minneapolis, MN
Period: 7/15/12 - 7/15/12

All Science Journal Classification (ASJC) codes

  • Software
