Generating new test instances by evolving in instance space

Kate Amanda Smith-Miles, Simon Bowly

Research output: Contribution to journal › Article › Research › peer-review

18 Citations (Scopus)

Abstract

Our confidence in the future performance of any algorithm, including optimization algorithms, depends on how carefully we select test instances so that the generalization of algorithm performance on future instances can be inferred. In recent work, we have established a methodology to generate a 2-d representation of the instance space, comprising a set of known test instances. This instance space shows the similarities and differences between the instances using measurable features or properties, and enables the performance of algorithms to be viewed across the instance space, where generalizations can be inferred. The power of this methodology is the insights that can be generated into algorithm strengths and weaknesses by examining the regions in instance space where strong performance can be expected. The representation of the instance space is dependent on the choice of test instances however. In this paper we present a methodology for generating new test instances with controllable properties, by filling observed gaps in the instance space. This enables the generation of rich new sets of test instances to support better the understanding of algorithm strengths and weaknesses. The methodology is demonstrated on graph colouring as a case study.
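To make the abstract's idea concrete, the following is a minimal sketch, not the authors' implementation: it treats two hand-picked graph features (edge density and a normalised degree variance) as the 2-d instance space, and runs a simple evolutionary loop whose fitness is the distance from an instance's feature vector to a chosen target point, i.e. a "gap" to be filled. The feature choice, the identity projection, and all parameters are illustrative assumptions; the paper uses a larger feature set, a learned 2-d projection, and graph colouring-specific analysis of algorithm performance.

# Sketch only: evolve graph instances so their feature vectors approach a target
# point in a (here hand-crafted) 2-d instance space. Illustrative assumptions:
# the two features, the parameters, and the absence of any learned projection.
import random
from itertools import combinations
from statistics import pvariance


def features(n, edges):
    """Map a graph (n nodes, set of edges) to a 2-d feature vector."""
    m = len(edges)
    density = m / (n * (n - 1) // 2)
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    # Scale degree variance so both features live roughly in [0, 1].
    deg_var = pvariance(degree) / (n - 1)
    return (density, deg_var)


def distance(f, target):
    return sum((a - b) ** 2 for a, b in zip(f, target)) ** 0.5


def mutate(n, edges, flips=3):
    """Toggle a few random vertex pairs: add the edge if absent, remove it if present."""
    child = set(edges)
    for _ in range(flips):
        u, v = random.sample(range(n), 2)
        child.symmetric_difference_update({(min(u, v), max(u, v))})
    return child


def evolve_instance(n, target, pop_size=30, generations=200, seed=0):
    """Evolve one graph whose feature vector lies close to `target`."""
    random.seed(seed)
    all_pairs = list(combinations(range(n), 2))
    # Start from random graphs of varying density.
    population = []
    for _ in range(pop_size):
        p = random.uniform(0.1, 0.9)
        population.append({e for e in all_pairs if random.random() < p})
    for _ in range(generations):
        scored = sorted(population, key=lambda g: distance(features(n, g), target))
        parents = scored[: pop_size // 2]          # truncation selection
        population = parents + [mutate(n, random.choice(parents)) for _ in parents]
    best = min(population, key=lambda g: distance(features(n, g), target))
    return best, features(n, best)


if __name__ == "__main__":
    # Aim for a sparse graph with fairly uneven degrees -- a hypothetical "gap".
    graph, feats = evolve_instance(n=40, target=(0.2, 0.5))
    print("evolved features:", feats)

In the paper's setting, the target points would be placed in the observed empty regions of the projected instance space, and the evolved graphs would then be used as new test instances for assessing graph colouring algorithms.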
Original language: English
Pages (from-to): 102-113
Number of pages: 12
Journal: Computers and Operations Research
Volume: 63
DOI: 10.1016/j.cor.2015.04.022
Publication status: Published - 2015

Keywords

  • Test instances
  • Benchmarking
  • Graph colouring
  • Instance space
  • Evolving instances

Cite this

Smith-Miles, Kate Amanda; Bowly, Simon. Generating new test instances by evolving in instance space. In: Computers and Operations Research. 2015; Vol. 63. pp. 102-113.
@article{49437836001048d2a8e350a969a9c4e0,
title = "Generating new test instances by evolving in instance space",
abstract = "Our confidence in the future performance of any algorithm, including optimization algorithms, depends on how carefully we select test instances so that the generalization of algorithm performance on future instances can be inferred. In recent work, we have established a methodology to generate a 2-d representation of the instance space, comprising a set of known test instances. This instance space shows the similarities and differences between the instances using measurable features or properties, and enables the performance of algorithms to be viewed across the instance space, where generalizations can be inferred. The power of this methodology is the insights that can be generated into algorithm strengths and weaknesses by examining the regions in instance space where strong performance can be expected. The representation of the instance space is dependent on the choice of test instances however. In this paper we present a methodology for generating new test instances with controllable properties, by filling observed gaps in the instance space. This enables the generation of rich new sets of test instances to support better the understanding of algorithm strengths and weaknesses. The methodology is demonstrated on graph colouring as a case study.",
keywords = "Testinstances, Benchmarking, Graph colouring, Instance space, Evolvinginstances",
author = "Smith-Miles, {Kate Amanda} and Simon Bowly",
year = "2015",
doi = "10.1016/j.cor.2015.04.022",
language = "English",
volume = "63",
pages = "102--113",
journal = "Computers and Operations Research",
issn = "0305-0548",
publisher = "Elsevier",

}

Generating new test instances by evolving in instance space. Smith-Miles, Kate Amanda; Bowly, Simon.

In: Computers and Operations Research, Vol. 63, 2015, pp. 102-113.

Research output: Contribution to journal › Article › Research › peer-review

TY - JOUR

T1 - Generating new test instances by evolving in instance space

AU - Smith-Miles, Kate Amanda

AU - Bowly, Simon

PY - 2015

Y1 - 2015

N2 - Our confidence in the future performance of any algorithm, including optimization algorithms, depends on how carefully we select test instances so that the generalization of algorithm performance on future instances can be inferred. In recent work, we have established a methodology to generate a 2-d representation of the instance space, comprising a set of known test instances. This instance space shows the similarities and differences between the instances using measurable features or properties, and enables the performance of algorithms to be viewed across the instance space, where generalizations can be inferred. The power of this methodology is the insights that can be generated into algorithm strengths and weaknesses by examining the regions in instance space where strong performance can be expected. The representation of the instance space is dependent on the choice of test instances however. In this paper we present a methodology for generating new test instances with controllable properties, by filling observed gaps in the instance space. This enables the generation of rich new sets of test instances to support better the understanding of algorithm strengths and weaknesses. The methodology is demonstrated on graph colouring as a case study.

AB - Our confidence in the future performance of any algorithm, including optimization algorithms, depends on how carefully we select test instances so that the generalization of algorithm performance on future instances can be inferred. In recent work, we have established a methodology to generate a 2-d representation of the instance space, comprising a set of known test instances. This instance space shows the similarities and differences between the instances using measurable features or properties, and enables the performance of algorithms to be viewed across the instance space, where generalizations can be inferred. The power of this methodology is the insights that can be generated into algorithm strengths and weaknesses by examining the regions in instance space where strong performance can be expected. The representation of the instance space is dependent on the choice of test instances however. In this paper we present a methodology for generating new test instances with controllable properties, by filling observed gaps in the instance space. This enables the generation of rich new sets of test instances to support better the understanding of algorithm strengths and weaknesses. The methodology is demonstrated on graph colouring as a case study.

KW - Test instances

KW - Benchmarking

KW - Graph colouring

KW - Instance space

KW - Evolving instances

UR - http://www.sciencedirect.com/science/article/pii/S0305054815001136/pdfft?md5=fb74efbbcab1b75b1926dbb3c6b295e3&pid=1-s2.0-S0305054815001136-main.pdf

U2 - 10.1016/j.cor.2015.04.022

DO - 10.1016/j.cor.2015.04.022

M3 - Article

VL - 63

SP - 102

EP - 113

JO - Computers and Operations Research

JF - Computers and Operations Research

SN - 0305-0548

ER -