Generalization of machine learning for problem reduction: a case study on travelling salesman problems

Yuan Sun, Andreas Ernst, Xiaodong Li, Jake Weiner

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)

Abstract

Combinatorial optimization plays an important role in real-world problem solving. In the big data era, the dimensionality of a combinatorial optimization problem is usually very large, which poses a significant challenge to existing solution methods. In this paper, we examine the generalization capability of a machine learning model for problem reduction on the classic travelling salesman problem (TSP). We demonstrate that our method can greedily remove decision variables from an optimization problem that are predicted not to be part of an optimal solution. More specifically, we investigate our model’s capability to generalize on test instances that have not been seen during the training phase. We consider three scenarios where training and test instances differ in terms of: (1) problem characteristics; (2) problem sizes; and (3) problem types. Our experiments show that this machine learning-based technique can generalize reasonably well over a wide range of TSP test instances with different characteristics or sizes. Although the accuracy of predicting unused variables naturally deteriorates as a test instance moves further away from the training set, we observe that, even when tested on a different TSP problem variant, the machine learning model still makes useful predictions about which variables can be eliminated without significantly impacting solution quality.
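The problem-reduction idea summarized in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a real system would rank edges with a trained classifier, whereas here a simple distance-based score stands in for the machine learning model, and all names (`reduce_tsp_edges`, `keep_k`) are illustrative assumptions.

```python
def reduce_tsp_edges(dist, keep_k):
    """Prune a complete TSP graph by keeping, for each city, only the
    keep_k incident edges judged most likely to appear in an optimal tour.

    `dist` is a symmetric n x n distance matrix. A trained ML model would
    supply the per-edge scores; as a stand-in we rank neighbours by
    distance (shorter edge = more likely to be used)."""
    n = len(dist)
    kept = set()
    for i in range(n):
        # rank the neighbours of city i by the stand-in score
        ranked = sorted((j for j in range(n) if j != i),
                        key=lambda j: dist[i][j])
        for j in ranked[:keep_k]:
            # store undirected edges with a canonical orientation
            kept.add((min(i, j), max(i, j)))
    return kept

# toy 5-city instance
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]

pruned = reduce_tsp_edges(dist, keep_k=2)
total = 5 * 4 // 2  # 10 undirected edges in the complete graph
print(f"kept {len(pruned)} of {total} edges")  # kept 7 of 10 edges
```

The reduced edge set is then passed to an exact or heuristic TSP solver; the generalization question studied in the paper is how reliable such edge predictions remain when the test instance differs from the training distribution.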

Original language: English
Number of pages: 27
Journal: OR Spectrum
DOIs
Publication status: Accepted/In press - 1 Jan 2020

Keywords

  • Combinatorial optimization
  • Generalization error
  • Machine learning
  • Problem reduction
  • Travelling salesman problem
