The evaluation of predictive learners: some theoretical and empirical results

Kevin B Korb, Lucas R Hope, Michelle J Hughes

    Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

    6 Citations (Scopus)


    With the growth of interest in data mining, there has been increasing interest in applying machine learning algorithms to real-world problems. This raises the question of how to evaluate the performance of machine learning algorithms. The standard procedure performs random sampling of predictive accuracy until a statistically significant difference arises between competing algorithms. That procedure fails to take into account the calibration of predictions. An alternative procedure uses an information reward measure (from I.J. Good) which is sensitive both to domain knowledge (predictive accuracy) and calibration. We analyze this measure, relating it to Kullback-Leibler distance. We also apply it to five well-known machine learning algorithms across a variety of problems, demonstrating some variations in their assessments using accuracy vs. information reward. We also look experimentally at information reward as a function of calibration and accuracy.
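    The information reward measure mentioned in the abstract, in its binary form due to I.J. Good, can be sketched as follows. This is an illustrative assumption based on the standard definition of Good's logarithmic reward, not code from the paper itself; the function name and example values are hypothetical.

    ```python
    import math

    def information_reward(p, base=2):
        """Good's information reward for one binary prediction.

        p is the probability the learner assigned to the outcome that
        actually occurred. The reward 1 + log2(p) is positive when
        p > 0.5, zero at the uninformative p = 0.5, and unboundedly
        negative as p -> 0, so confident wrong predictions are heavily
        penalized. This makes the measure sensitive to calibration,
        not just to whether the most probable class was correct.
        """
        return 1 + math.log(p, base)

    # A well-calibrated 0.9 prediction that comes true earns a positive
    # reward, while assigning only 0.1 to the true outcome is punished
    # more than the 0.9 prediction is rewarded.
    print(information_reward(0.9))   # ≈ 0.848
    print(information_reward(0.5))   # = 0.0
    print(information_reward(0.1))   # ≈ -2.322
    ```

    Averaging this reward over a test set scores a learner by both accuracy and calibration, in contrast to plain predictive accuracy, which ignores the confidence attached to each prediction.
    
    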
    Original language: English
    Title of host publication: Machine Learning: ECML 2001
    Subtitle of host publication: 12th European Conference on Machine Learning, Freiburg, Germany, September 5-7, 2001, Proceedings
    Editors: Luc De Raedt, Peter Flach
    Place of Publication: Berlin, Germany
    Number of pages: 12
    ISBN (Print): 3540425365
    Publication status: Published - 2001
    Event: European Conference on Machine Learning 2001 - Freiburg, Germany
    Duration: 5 Sep 2001 - 7 Sep 2001
    Conference number: 12th (Proceedings)

    Publication series

    Name: Lecture Notes in Artificial Intelligence
    ISSN (Print): 0302-9743


    Conference: European Conference on Machine Learning 2001
    Abbreviated title: ECML 2001
