Instance spaces for machine learning classification

Mario A. Muñoz, Laura Villanova, Davaatseren Baatar, Kate Smith-Miles

Research output: Contribution to journal › Article › Research › peer-review

106 Citations (Scopus)

Abstract

This paper tackles the issue of objective performance evaluation of machine learning classifiers, and the impact of the choice of test instances. Given that statistical properties or features of a dataset affect the difficulty of an instance for particular classification algorithms, we examine the diversity and quality of the UCI repository of test instances used by most machine learning researchers. We show how an instance space can be visualized, with each classification dataset represented as a point in the space. The instance space is constructed to reveal pockets of hard and easy instances, and enables the strengths and weaknesses of individual classifiers to be identified. Finally, we propose a methodology to generate new test instances with the aim of enriching the diversity of the instance space, enabling potentially greater insights than can be afforded by the current UCI repository.
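The core construction the abstract describes — representing each classification dataset as a point in a 2-D "instance space" via statistical meta-features — can be sketched as follows. This is a minimal illustration, not the paper's method: the meta-features here (dataset size, dimensionality, class entropy, mean feature correlation) and the plain PCA projection are stand-ins for the richer, curated feature set and the tailored projection the authors actually use.

```python
import numpy as np

def meta_features(X, y):
    """A few simple statistical meta-features of a dataset (illustrative
    choices only; the paper selects a richer, curated set)."""
    n, d = X.shape
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    class_entropy = -np.sum(p * np.log2(p))  # measures class balance
    # Mean absolute pairwise feature correlation (off-diagonal entries).
    mean_abs_corr = np.abs(np.corrcoef(X, rowvar=False))[np.triu_indices(d, 1)].mean()
    return np.array([np.log(n), np.log(d), class_entropy, mean_abs_corr])

def project_to_2d(F):
    """Project the meta-feature matrix to 2-D via PCA (SVD on centered
    data). The paper optimizes a custom projection rather than using PCA."""
    Fc = F - F.mean(axis=0)
    U, s, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:2].T  # each row: one dataset as a point in the space

# Synthetic stand-ins for a repository of classification datasets.
rng = np.random.default_rng(0)
datasets = [
    (rng.normal(size=(100 + 50 * i, 5 + i)),
     rng.integers(0, 2 + i % 2, size=100 + 50 * i))
    for i in range(6)
]
F = np.vstack([meta_features(X, y) for X, y in datasets])
Z = project_to_2d(F)
print(Z.shape)  # (6, 2): six datasets plotted as points in a 2-D instance space
```

Plotting `Z` (and coloring each point by a classifier's performance on that dataset) is what reveals the pockets of hard and easy instances, and hence each algorithm's footprint.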

Original language: English
Pages (from-to): 109-147
Number of pages: 39
Journal: Machine Learning
Volume: 107
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2018

Keywords

  • Algorithm footprints
  • Classification
  • Instance difficulty
  • Instance space
  • Meta-learning
  • Performance evaluation
  • Test data
  • Test instance generation
