On the dataset shift problem in software engineering prediction models

Research output: Contribution to journal › Article › Research › peer-review

79 Citations (Scopus)


A core assumption of any prediction model is that the test data distribution does not differ from the training data distribution. Prediction models used in software engineering are no exception. In reality, this assumption can be violated in many ways, resulting in inconsistent and non-transferable observations across different cases. The goal of this paper is to explain the phenomenon of conclusion instability through the concept of dataset shift, from the perspective of software effort and fault prediction. Different types of dataset shift are explained with examples from software engineering, and techniques for addressing the associated problems are discussed. While dataset shifts in the form of sample selection bias and imbalanced data are well known in software engineering research, understanding the other types is relevant for interpreting non-transferable results across different sites and studies. The software engineering community should be aware of, and account for, dataset-shift-related issues when evaluating the validity of research outcomes.
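To make the abstract's central idea concrete, the following is a minimal sketch (not the paper's own method) of one common form of dataset shift, covariate shift, where a feature's distribution differs between the training site and the deployment site. The feature (module size in KLOC), the site names, and the 0.5 cutoff are all hypothetical, chosen only for illustration.

```python
import random
import statistics

def mean_shift(train, test):
    """Standardized difference in sample means between two feature samples.
    A large value is a crude indicator of covariate shift."""
    pooled_sd = statistics.pstdev(train + test)
    return abs(statistics.mean(test) - statistics.mean(train)) / pooled_sd

random.seed(0)
# Hypothetical effort-estimation feature: module size (KLOC) at two sites.
train_kloc = [random.gauss(50, 10) for _ in range(500)]  # site A (training data)
test_kloc = [random.gauss(80, 15) for _ in range(500)]   # site B (deployment data)

# With a hypothetical cutoff of 0.5, the distributions are flagged as shifted:
# a model fitted at site A may not transfer to site B.
shifted = mean_shift(train_kloc, test_kloc) > 0.5
```

A model evaluated only on held-out data from site A would report optimistic accuracy; the sketch shows why conclusions drawn at one site may not transfer to another, which is the conclusion-instability phenomenon the paper discusses.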

Original language: English
Pages (from-to): 62-74
Number of pages: 13
Journal: Empirical Software Engineering
Issue number: 1-2
Publication status: Published - 1 Feb 2012
Externally published: Yes


  • Dataset shift
  • Defect prediction
  • Effort estimation
  • Prediction models
