Abstract
We generalize an information-based reward function, introduced by Good (1952), for use with machine learners of classification functions. We discuss the advantages of our function over predictive accuracy and the metric of Kononenko and Bratko (1991). We examine the use of information reward to evaluate popular machine learning algorithms (e.g., C5.0, Naive Bayes, CaMML) using UCI archive datasets, finding that the assessment implied by predictive accuracy is often reversed when using information reward.
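The reward function at issue is Good's (1952) logarithmic score for probabilistic predictions. As a rough illustration (not the paper's generalized form), the binary case can be sketched as follows: a prediction that assigns probability p to the class that actually occurs earns 1 + log2(p) bits, so an uninformative 0.5 prediction scores zero and misplaced confidence is penalized without bound. The function names below are illustrative, not from the paper.

```python
import math

def information_reward(p_true: float) -> float:
    """Binary form of Good's (1952) information reward.

    p_true is the probability the classifier assigned to the class
    that actually occurred. Rewards lie in (-inf, 1]: p_true = 1
    earns the full 1 bit, p_true = 0.5 earns 0, and p_true -> 0
    is penalized without bound.
    """
    return 1.0 + math.log2(p_true)

def mean_information_reward(probs, labels):
    """Score a classifier by its mean reward over a test set.

    probs[i] is the predicted probability of class 1 for item i;
    labels[i] is the true class (0 or 1).
    """
    total = 0.0
    for p, y in zip(probs, labels):
        total += information_reward(p if y == 1 else 1.0 - p)
    return total / len(probs)
```

Unlike predictive accuracy, which counts only whether the most probable class was right, this score rewards calibration: a model can have high accuracy yet a negative mean reward if its confident predictions are miscalibrated, which is consistent with the reversals the abstract reports.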
| Original language | English |
|---|---|
| Title of host publication | AI 2002: Advances in Artificial Intelligence |
| Subtitle of host publication | 15th Australian Joint Conference on Artificial Intelligence Canberra, Australia, December 2-6, 2002 Proceedings |
| Editors | Bob McKay, John Slaney |
| Place of Publication | Berlin, Germany |
| Publisher | Springer |
| Pages | 272-283 |
| Number of pages | 12 |
| ISBN (Print) | 3-540-00197-2 |
| DOIs | |
| Publication status | Published - 2002 |
| Event | Australasian Joint Conference on Artificial Intelligence 2002 (15th), Canberra, Australia, 2 Dec 2002 → 6 Dec 2002. Proceedings: https://link.springer.com/book/10.1007/3-540-36187-1 |
Publication series
| Name | Lecture Notes in Computer Science |
|---|---|
| Publisher | Springer |
| Volume | 2557 |
| ISSN (Print) | 0302-9743 |
Conference
| Conference | Australasian Joint Conference on Artificial Intelligence 2002 |
|---|---|
| Abbreviated title | AI 2002 |
| Country/Territory | Australia |
| City | Canberra |
| Period | 2/12/02 → 6/12/02 |
| Internet address | https://link.springer.com/book/10.1007/3-540-36187-1 |