Abstract
This paper tackles the issue of objective performance evaluation of machine learning classifiers, and the impact of the choice of test instances. Given that statistical properties or features of a dataset affect the difficulty of an instance for particular classification algorithms, we examine the diversity and quality of the UCI repository of test instances used by most machine learning researchers. We show how an instance space can be visualized, with each classification dataset represented as a point in the space. The instance space is constructed to reveal pockets of hard and easy instances, and enables the strengths and weaknesses of individual classifiers to be identified. Finally, we propose a methodology to generate new test instances with the aim of enriching the diversity of the instance space, enabling potentially greater insights than can be afforded by the current UCI repository.
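The abstract outlines the instance-space methodology: each classification dataset is summarised by statistical meta-features and projected to two dimensions, so that pockets of hard and easy instances, and individual classifier footprints, become visible. The sketch below is only an illustration of that idea under simplified assumptions; the handful of meta-features and the PCA projection used here are stand-ins chosen for brevity, not the paper's actual feature set or its tailored projection method.

```python
# Illustrative sketch only: hypothetical meta-features and a PCA projection
# stand in for the paper's richer feature set and custom instance-space mapping.
import numpy as np
from sklearn.datasets import load_iris, load_wine, load_breast_cancer, load_digits
from sklearn.decomposition import PCA

def meta_features(X, y):
    """Summarise one classification dataset with a few simple statistics."""
    n, p = X.shape
    _, counts = np.unique(y, return_counts=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        corr = np.corrcoef(X, rowvar=False)          # may contain NaN for constant features
    mean_abs_corr = np.nanmean(np.abs(corr[np.triu_indices(p, k=1)]))
    return np.array([
        np.log(n),                       # log number of instances
        np.log(p),                       # log number of features
        len(counts),                     # number of classes
        counts.max() / counts.min(),     # class imbalance ratio
        mean_abs_corr,                   # mean absolute feature correlation
    ])

# Each dataset becomes one point in the space, described by its meta-feature vector.
loaders = {"iris": load_iris, "wine": load_wine,
           "breast_cancer": load_breast_cancer, "digits": load_digits}
F = np.vstack([meta_features(*load(return_X_y=True)) for load in loaders.values()])

# Standardise and project the meta-feature vectors to 2-D to obtain an "instance space".
coords = PCA(n_components=2).fit_transform((F - F.mean(0)) / F.std(0))
for name, (x, y2) in zip(loaders, coords):
    print(f"{name:>14s}: ({x:+.2f}, {y2:+.2f})")
```

In the paper the 2-D coordinates are then overlaid with classifier performance to delineate each algorithm's footprint; the projection is chosen specifically to make those hard/easy regions interpretable, which plain PCA does not guarantee.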
| Original language | English |
| --- | --- |
| Pages (from-to) | 109-147 |
| Number of pages | 39 |
| Journal | Machine Learning |
| Volume | 107 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Jan 2018 |
Keywords
- Algorithm footprints
- Classification
- Instance difficulty
- Instance space
- Meta-learning
- Performance evaluation
- Test data
- Test instance generation
Projects
- 1 Finished
- Stress-testing algorithms: generating new test instances to elicit insights
  Smith-Miles, K.
  Australian Research Council (ARC), Monash University
  8/12/14 → 31/12/19
  Project: Research