Abstract
Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. This paper outlines how a tree learning algorithm can be derived using Bayesian statistics. The derivation introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule is similar to Quinlan's information gain, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, C4 (Quinlan et al., 1987) and CART (Breiman et al., 1984), show that the full Bayesian algorithm can produce more accurate predictions than versions of these other approaches, though it pays a computational price.
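The abstract notes that the Bayesian splitting rule resembles Quinlan's information gain. As background, the sketch below shows a standard information-gain criterion for comparing candidate splits; it is not the paper's Bayesian rule, and the function names and example data are illustrative assumptions only.

```python
import math
from collections import Counter


def entropy(labels):
    """Shannon entropy (in bits) of a collection of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def information_gain(parent_labels, left_labels, right_labels):
    """Entropy reduction from splitting a node into two children."""
    n = len(parent_labels)
    weighted_child_entropy = (
        (len(left_labels) / n) * entropy(left_labels)
        + (len(right_labels) / n) * entropy(right_labels)
    )
    return entropy(parent_labels) - weighted_child_entropy


# Illustrative data: a split that separates the classes cleanly scores higher.
parent = ["a", "a", "a", "b", "b", "b"]
print(information_gain(parent, ["a", "a", "a"], ["b", "b", "b"]))  # 1.0 bit
print(information_gain(parent, ["a", "a", "b"], ["a", "b", "b"]))  # ~0.08 bit
```

A greedy tree learner evaluates such a score for each candidate test at a node and splits on the highest-scoring one; the paper's contribution is a Bayesian counterpart to this rule, with smoothing and tree averaging taking the place of pruning.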
| Original language | English |
| --- | --- |
| Pages (from-to) | 63-73 |
| Number of pages | 11 |
| Journal | Statistics and Computing |
| Volume | 2 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Jun 1992 |
Keywords
- Bayesian statistics
- Classification trees