On benchmarking constraint logic programming platforms. Response to Fernandez and Hill's "A comparative study of eight constraint programming languages over the boolean and finite domains"

Mark Wallace, Joachim Schimpf, Kish Shen, Warwick Harvey

    Research output: Contribution to journal › Review article › Research › peer-review

    15 Citations (Scopus)

    Abstract

    The comparative study published in this journal by Fernandez and Hill benchmarked several constraint programming systems on a set of well-known puzzles. The current article examines the positive and negative aspects of this kind of benchmarking. It analyses some pitfalls in benchmarking, recalling previously published results from benchmarking different kinds of software, and explores some issues in the comparative benchmarking of CLP systems. A benchmarking exercise should cover a broad set of representative problems and a broad set of programming constructs. This can be achieved using two kinds of benchmarking: Applications Benchmarking and Unit Testing. The article reports the authors' experiences with these two kinds of benchmarking in the context of the CHIC-2 Esprit project, where benchmarks were used to unit-test different features of the CLP system ECLiPSe and to compare application development across different high-level constraint platforms. The conclusion is that, when deciding which system to use for a new application, it is less useful to compare the standard features of CLP systems than to compare their relevant functionalities.

    Original language: English
    Pages (from-to): 5-34
    Number of pages: 30
    Journal: Constraints
    Volume: 9
    Issue number: 1
    DOIs
    Publication status: Published - 1 Jan 2004

    Keywords

    • Benchmarking
    • CLP
    • Constraint logic programming
    • ECLiPSe
    • Finite Domain
    • Solver performance
    • Unit Testing
