Abstract
One approach to learning classification rules from examples is to build decision trees. A review and comparison paper by Mingers (Mingers, 1989) looked at the first stage of tree building, which uses a "splitting rule" to grow trees with a greedy recursive partitioning algorithm. That paper considered a number of different measures and experimentally examined their behavior on four domains. Its main conclusion was that a random splitting rule does not significantly decrease classification accuracy. This note suggests an alternative experimental method and presents additional results on further domains. Our results indicate that random splitting leads to increased error, and are therefore at variance with those presented by Mingers.
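A minimal sketch may make the setup concrete. The function names and the particular gain measure below are illustrative assumptions, not the implementation used in the paper or by Mingers; the point is only that the splitting rule is a pluggable scoring function, so a random rule drops in exactly where a measure such as information gain would go.

```python
# Illustrative sketch only: a minimal greedy recursive-partitioning tree
# grower with a pluggable splitting rule. Names and the info-gain/random
# rules are assumptions for illustration, not the paper's implementation.
import math
import random
from collections import Counter

def entropy(labels):
    """Shannon entropy of a multiset of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information-gain splitting rule: reduction in label entropy."""
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

def random_rule(rows, labels, attr):
    """Random splitting rule: scores every candidate attribute by chance."""
    return random.random()

def grow_tree(rows, labels, attrs, rule):
    """Greedy recursive partitioning: split on the attribute the rule
    scores highest, then recurse on each branch."""
    if len(set(labels)) == 1 or not attrs:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority class
    best = max(attrs, key=lambda a: rule(rows, labels, a))
    branches = {}
    for value in set(row[best] for row in rows):
        idx = [i for i, row in enumerate(rows) if row[best] == value]
        branches[value] = grow_tree([rows[i] for i in idx],
                                    [labels[i] for i in idx],
                                    [a for a in attrs if a != best], rule)
    return (best, branches)
```

Passing `rule=info_gain` grows a conventional tree, while `rule=random_rule` reproduces the random-splitting condition that the experiments compare.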
| Original language | English |
| --- | --- |
| Pages (from-to) | 75-85 |
| Number of pages | 11 |
| Journal | Machine Learning |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Jan 1992 |
Keywords
- comparative studies
- decision trees
- induction
- noisy data