Applying classification techniques to remotely-collected program execution data

Murali Haran, Alan Karr, Alessandro Orso, Adam Porter, Ashish Sanil

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

38 Scopus citations

Abstract

There is an increasing interest in techniques that support measurement and analysis of fielded software systems. One of the main goals of these techniques is to better understand how software actually behaves in the field. In particular, many of these techniques require a way to distinguish, in the field, failing from passing executions. So far, researchers and practitioners have only partially addressed this problem: they have simply assumed that program failure status is either obvious (i.e., the program crashes) or provided by an external source (e.g., the users). In this paper, we propose a technique for automatically classifying execution data, collected in the field, as coming from either passing or failing program runs. (Failing program runs are executions that terminate with a failure, such as a wrong outcome.) We use statistical learning algorithms to build the classification models. Our approach builds the models by analyzing executions performed in a controlled environment (e.g., test cases run in-house) and then uses the models to predict whether execution data produced by a fielded instance were generated by a passing or failing program execution. We also present results from an initial feasibility study, based on multiple versions of a software subject, in which we investigate several issues vital to the applicability of the technique. Finally, we present some lessons learned regarding the interplay between the reliability of classification models and the amount and type of data collected.
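The abstract describes training a statistical classifier on labeled in-house executions and then using it to label execution data collected in the field. As an illustrative sketch only (not the authors' implementation), the idea can be shown with a simple nearest-centroid model over hypothetical execution profiles, where each feature vector holds event counts (e.g., branch or method execution counts) gathered from a run:

```python
# Illustrative sketch, not the paper's actual technique: classify field
# execution profiles as "pass" or "fail" using a nearest-centroid model
# trained on labeled in-house runs. Feature vectors are hypothetical
# event counts collected via lightweight instrumentation.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labeled_runs):
    """Build one centroid per label from (features, label) pairs
    produced by executions in a controlled environment."""
    by_label = {}
    for features, label in labeled_runs:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in by_label.items()}

def classify(model, features):
    """Predict the label whose centroid is closest to a field profile."""
    return min(model, key=lambda label: distance(model[label], features))

# In-house training runs: (execution profile, known outcome).
training = [
    ([12, 0, 5], "pass"), ([11, 1, 4], "pass"),
    ([2, 9, 1], "fail"), ([3, 8, 0], "fail"),
]
model = train(training)
print(classify(model, [10, 1, 5]))  # a field profile resembling passing runs
```

In practice the paper investigates which data to collect and how much, so a real deployment would trade feature richness against instrumentation overhead; the toy three-feature vectors above stand in for that choice.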

Original language: English (US)
Title of host publication: ESEC/FSE'05 - Proceedings of the Joint 10th European Software Engineering Conference (ESEC) and 13th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE-13)
Publisher: Association for Computing Machinery
Pages: 146-155
Number of pages: 10
ISBN (Print): 1595930140, 9781595930149
DOI: https://doi.org/10.1145/1081706.1081732
State: Published - 2005
Event: ESEC/FSE'05 - Joint 10th European Software Engineering Conference (ESEC) and 13th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE-13) - Lisbon, Portugal
Duration: Sep 5 2005 - Sep 9 2005

Publication series

Name: ESEC/FSE'05 - Proceedings of the Joint 10th European Software Engineering Conference (ESEC) and 13th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE-13)

Other

Other: ESEC/FSE'05 - Joint 10th European Software Engineering Conference (ESEC) and 13th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE-13)
Country: Portugal
City: Lisbon
Period: 9/5/05 - 9/9/05

All Science Journal Classification (ASJC) codes

  • Engineering (all)


Cite this

    Haran, M., Karr, A., Orso, A., Porter, A., & Sanil, A. (2005). Applying classification techniques to remotely-collected program execution data. In ESEC/FSE'05 - Proceedings of the Joint 10th European Software Engineering Conference (ESEC) and 13th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE-13) (pp. 146-155). (ESEC/FSE'05 - Proceedings of the Joint 10th European Software Engineering Conference (ESEC) and 13th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE-13)). Association for Computing Machinery. https://doi.org/10.1145/1081706.1081732