Authors: A. Arcuri and L. Briand
Editors: H. Gall and N. Medvidovic
Title: A Practical Guide for Using Statistical Tests to Assess Randomized Algorithms in Software Engineering
Affiliation: Software Engineering
Status: Published
Publication Type: Proceedings, refereed
Year of Publication: 2011
Conference Name: ACM/IEEE International Conference on Software Engineering (ICSE)
Pagination: 1-10
Publisher: IEEE
ISBN Number: 978-1-4503-0445-0
Abstract

Randomized algorithms have been used to successfully address many different types of software engineering problems. Such algorithms employ a degree of randomness as part of their logic. Randomized algorithms are useful for difficult problems where a precise solution cannot be derived in a deterministic way within reasonable time. However, randomized algorithms produce different results on every run when applied to the same problem instance. It is hence important to assess the effectiveness of randomized algorithms by collecting data from a large enough number of runs. The use of rigorous statistical tests is then essential to provide support to the conclusions derived by analyzing such data. In this paper, we provide a systematic review of the use of randomized algorithms in selected software engineering venues in 2009. Its goal is not to perform a complete survey but to obtain a representative snapshot of current practice in software engineering research. We show that randomized algorithms are used in a significant percentage of papers but that, in most cases, randomness is not properly accounted for. This casts doubt on the validity of most empirical results assessing randomized algorithms. There are numerous statistical tests, based on different assumptions, and it is not always clear when and how to use these tests. We hence provide practical guidelines to support empirical research on randomized algorithms in software engineering.
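The kind of analysis the abstract argues for can be illustrated with a minimal sketch: collect results from many independent runs of two randomized algorithms on the same problem instance, then compare them with a non-parametric test and an effect size measure. The choice of the Mann-Whitney U-test, the Vargha-Delaney A12 statistic, the simulated data, and all names below are illustrative assumptions, not the specific guidelines given in the paper.

    # Illustrative sketch only: the test choice and data are assumptions,
    # not the paper's prescribed procedure.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)

    # Hypothetical effectiveness scores (e.g., branch coverage) from 30
    # independent runs of two randomized algorithms on the same instance.
    runs_a = rng.normal(loc=0.80, scale=0.05, size=30)
    runs_b = rng.normal(loc=0.75, scale=0.05, size=30)

    # Non-parametric two-sided test: does one algorithm tend to produce
    # higher scores than the other?
    u_stat, p_value = mannwhitneyu(runs_a, runs_b, alternative="two-sided")

    # Vargha-Delaney A12 effect size: probability that a random run of A
    # beats a random run of B (0.5 means no stochastic difference).
    greater = sum(a > b for a in runs_a for b in runs_b)
    ties = sum(a == b for a in runs_a for b in runs_b)
    a12 = (greater + 0.5 * ties) / (len(runs_a) * len(runs_b))

    print(f"U = {u_stat:.1f}, p = {p_value:.4f}, A12 = {a12:.2f}")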

Citation Key: Simula.simula.88