On the dataset shift problem in software engineering prediction models

Research output: Contribution to journal › Article › Research › peer-review

52 Citations (Scopus)

Abstract

A core assumption of any prediction model is that the test data distribution does not differ from the training data distribution. Prediction models used in software engineering are no exception. In reality, this assumption can be violated in many ways, resulting in inconsistent and non-transferable observations across different cases. The goal of this paper is to explain the phenomenon of conclusion instability through the concept of dataset shift, from the perspective of software effort and fault prediction. Different types of dataset shift are explained with examples from software engineering, and techniques for addressing the associated problems are discussed. While dataset shifts in the form of sample selection bias and imbalanced data are well known in software engineering research, understanding the other types is relevant for interpreting non-transferable results across different sites and studies. The software engineering community should be aware of and account for dataset-shift-related issues when evaluating the validity of research outcomes.
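To make the core assumption concrete, here is a minimal sketch (not from the paper) of one dataset shift scenario: the distribution of a feature differs between training and test data while the underlying concept stays the same, and a two-sample Kolmogorov-Smirnov test flags the difference. The feature name `loc` (lines of code per module) and the two "projects" are hypothetical; only numpy and scipy are used.

```python
# Hypothetical sketch of dataset shift in defect prediction: module size
# ("loc", lines of code) relates to defect-proneness the same way in both
# projects, but the two projects have very different size profiles.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Training data: modules from a hypothetical small-module project.
loc_train = rng.normal(loc=300, scale=80, size=1000)
# Test data: modules from a hypothetical large-module project.
loc_test = rng.normal(loc=700, scale=80, size=1000)

# The two-sample Kolmogorov-Smirnov test compares the two feature
# distributions; a large statistic and a tiny p-value signal that the
# training and test inputs are not identically distributed.
stat, p_value = ks_2samp(loc_train, loc_test)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
```

A model fit on `loc_train` would have to extrapolate far outside its training range on `loc_test`, which is one way the inconsistent, non-transferable observations described in the abstract can arise in practice.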

Original language: English
Pages (from-to): 62-74
Number of pages: 13
Journal: Empirical Software Engineering
Volume: 17
Issue number: 1-2
DOIs
Publication status: Published - 1 Feb 2012
Externally published: Yes

Keywords

  • Dataset shift
  • Defect prediction
  • Effort estimation
  • Prediction models
