On benchmarking constraint logic programming platforms. Response to Fernandez and Hill's "A comparative study of eight constraint programming languages over the boolean and finite domains"

Mark Wallace, Joachim Schimpf, Kish Shen, Warwick Harvey

    Research output: Contribution to journal › Review Article › Research › peer-review

    Abstract

    The comparative study published in this journal by Fernandez and Hill benchmarked several constraint programming systems on a set of well-known puzzles. The present article examines the positive and negative aspects of this kind of benchmarking. It analyses some pitfalls in benchmarking, recalling previously published results from benchmarking different kinds of software, and explores some issues in the comparative benchmarking of CLP systems. A benchmarking exercise should cover a broad set of representative problems and a broad set of programming constructs. This can be achieved using two kinds of benchmarking: Applications Benchmarking and Unit Testing. The article reports the authors' experiences with these two kinds of benchmarking in the context of the CHIC-2 Esprit project. The benchmarks were used to unit-test different features of the CLP system ECLiPSe and to compare application development with different high-level constraint platforms. The conclusion is that, in deciding which system to use on a new application, it is less useful to compare the standard features of CLP systems than to compare their relevant functionalities.

    Original language: English
    Pages (from-to): 5-34
    Number of pages: 30
    Journal: Constraints
    Volume: 9
    Issue number: 1
    DOI: 10.1023/B:CONS.0000006181.40558.37
    Publication status: Published - 1 Jan 2004

    Keywords

    • Benchmarking
    • CLP
    • Constraint logic programming
    • ECLiPSe
    • Finite Domain
    • Solver performance
    • Unit Testing

    Cite this

    @article{bfb490bbb6334afd8cbc359168b9d2d5,
    title = "On benchmarking constraint logic programming platforms. Response to Fernandez and Hill's {"}A comparative study of eight constraint programming languages over the boolean and finite domains{"}",
    abstract = "The comparative study published in this journal by Fernandez and Hill benchmarked some constraint programming systems on a set of well-known puzzles. The current article examines the positive and negative aspects of this kind of benchmarking. The article analyses some pitfalls in benchmarking, recalling previous published results from benchmarking different kinds of software, and explores some issues in comparative benchmarking of CLP systems. A benchmarking exercise should cover a broad set of representative problems and a broad set of programming constructs. This can be achieved using two kinds of benchmarking: Applications Benchmarking and Unit Testing. The article reports the authors' experiences with these two kinds of benchmarking in the context of the CHIC-2 Esprit project. The benchmarks were used to unit test different features of the CLP system ECLiPSe and to compare application development with different high-level constraint platforms. The conclusion is that, in deciding which system to use on a new application, it is less useful to compare standard features of CLP systems, than to compare their relevant functionalities.",
    keywords = "Benchmarking, CLP, Constraint logic programming, ECLiPSe, Finite Domain, Solver performance, Unit Testing",
    author = "Mark Wallace and Joachim Schimpf and Kish Shen and Warwick Harvey",
    year = "2004",
    month = "1",
    day = "1",
    doi = "10.1023/B:CONS.0000006181.40558.37",
    language = "English",
    volume = "9",
    pages = "5--34",
    journal = "Constraints",
    issn = "1383-7133",
    publisher = "Springer-Verlag London Ltd.",
    number = "1",
    }

    On benchmarking constraint logic programming platforms. Response to Fernandez and Hill's "A comparative study of eight constraint programming languages over the boolean and finite domains". / Wallace, Mark; Schimpf, Joachim; Shen, Kish; Harvey, Warwick.

    In: Constraints, Vol. 9, No. 1, 01.01.2004, p. 5-34.


    TY - JOUR
    T1 - On benchmarking constraint logic programming platforms. Response to Fernandez and Hill's "A comparative study of eight constraint programming languages over the boolean and finite domains"
    AU - Wallace, Mark
    AU - Schimpf, Joachim
    AU - Shen, Kish
    AU - Harvey, Warwick
    PY - 2004/1/1
    Y1 - 2004/1/1
    N2 - The comparative study published in this journal by Fernandez and Hill benchmarked some constraint programming systems on a set of well-known puzzles. The current article examines the positive and negative aspects of this kind of benchmarking. The article analyses some pitfalls in benchmarking, recalling previous published results from benchmarking different kinds of software, and explores some issues in comparative benchmarking of CLP systems. A benchmarking exercise should cover a broad set of representative problems and a broad set of programming constructs. This can be achieved using two kinds of benchmarking: Applications Benchmarking and Unit Testing. The article reports the authors' experiences with these two kinds of benchmarking in the context of the CHIC-2 Esprit project. The benchmarks were used to unit test different features of the CLP system ECLiPSe and to compare application development with different high-level constraint platforms. The conclusion is that, in deciding which system to use on a new application, it is less useful to compare standard features of CLP systems, than to compare their relevant functionalities.
    AB - The comparative study published in this journal by Fernandez and Hill benchmarked some constraint programming systems on a set of well-known puzzles. The current article examines the positive and negative aspects of this kind of benchmarking. The article analyses some pitfalls in benchmarking, recalling previous published results from benchmarking different kinds of software, and explores some issues in comparative benchmarking of CLP systems. A benchmarking exercise should cover a broad set of representative problems and a broad set of programming constructs. This can be achieved using two kinds of benchmarking: Applications Benchmarking and Unit Testing. The article reports the authors' experiences with these two kinds of benchmarking in the context of the CHIC-2 Esprit project. The benchmarks were used to unit test different features of the CLP system ECLiPSe and to compare application development with different high-level constraint platforms. The conclusion is that, in deciding which system to use on a new application, it is less useful to compare standard features of CLP systems, than to compare their relevant functionalities.
    KW - Benchmarking
    KW - CLP
    KW - Constraint logic programming
    KW - ECLiPSe
    KW - Finite Domain
    KW - Solver performance
    KW - Unit Testing
    UR - http://www.scopus.com/inward/record.url?scp=0942277332&partnerID=8YFLogxK
    U2 - 10.1023/B:CONS.0000006181.40558.37
    DO - 10.1023/B:CONS.0000006181.40558.37
    M3 - Review Article
    VL - 9
    SP - 5
    EP - 34
    JO - Constraints
    JF - Constraints
    SN - 1383-7133
    IS - 1
    ER -