| Authors | A. Arcuri and L. C. Briand |
|---|---|
| Title | A Hitchhiker's Guide to Statistical Tests for Assessing Randomized Algorithms in Software Engineering |
| Project(s) | The Certus Centre (SFI) |
| Publication Type | Technical reports |
| Year of Publication | 2011 |
| Publisher | Simula Research Laboratory |
Randomized algorithms have been used successfully to address many different types of software engineering problems. Such algorithms entail a significant degree of randomness as part of their logic, which makes them useful for difficult problems where a precise solution cannot be derived deterministically within reasonable time. However, a randomized algorithm can produce different results on every run, even when applied to the same problem instance. It is hence important to assess its effectiveness by collecting data from a sufficiently large number of runs, and the rigorous use of statistical tests is then essential to support the conclusions drawn from such data. In this paper, we provide a systematic review of the use of randomized algorithms in selected software engineering venues in 2009/2010. The goal is not to perform a complete survey but to obtain a representative and up-to-date snapshot of current practice in software engineering research. We show that randomized algorithms are used in a significant percentage of papers but that, in most cases, randomness is not properly accounted for. This casts doubt on the validity of most empirical results assessing randomized algorithms for various applications. There are numerous statistical tests, based on different assumptions, and it is not always clear when and how to use them. We hence provide practical guidelines to support empirical research on randomized algorithms in software engineering.
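As an illustration of the kind of analysis the abstract calls for — collecting many independent runs and comparing result distributions statistically — the sketch below implements the Vargha-Delaney A12 effect-size measure in pure Python. This is just one of many possible nonparametric comparisons, and the two "algorithms" (simple Gaussian score generators) and the number of runs are invented for illustration; nothing here is taken from the paper itself.

```python
import random

def a12(xs, ys):
    """Vargha-Delaney A12: estimated probability that a value drawn from
    xs exceeds one drawn from ys (ties counted as one half)."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in xs for y in ys)
    return wins / (len(xs) * len(ys))

# Hypothetical randomized "algorithms": each run yields a fitness score.
def algo_a(rng):
    return rng.gauss(0.70, 0.05)

def algo_b(rng):
    return rng.gauss(0.65, 0.05)

rng = random.Random(42)              # fixed seed for reproducibility
runs_a = [algo_a(rng) for _ in range(100)]  # many independent runs
runs_b = [algo_b(rng) for _ in range(100)]

# A12 > 0.5 means algo_a tends to outperform algo_b; 0.5 means no effect.
print(a12(runs_a, runs_b))
```

Because A12 is a probability of superiority rather than a p-value, it conveys the practical magnitude of the difference between the two algorithms, complementing a significance test run on the same samples.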