Better short than greedy: interpretable models through optimal rule boosting

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review


Rule ensembles are designed to provide a useful trade-off between predictive accuracy and model interpretability. However, the myopic and random search components of current rule ensemble methods can compromise this goal: they often require more rules than necessary to reach a given accuracy level, or can even fail outright to accurately model a distribution that can in fact be described well with a few rules. Here, we present a novel approach that fits rule ensembles of maximal predictive power for a given ensemble size (and thus model comprehensibility). In particular, we present an efficient branch-and-bound algorithm that optimally solves the per-rule objective function of the popular second-order gradient boosting framework. Our main insight is that the boosting objective can be tightly bounded in time linear in the number of covered data points. Along with an additional novel pruning technique related to rule redundancy, this yields a computationally feasible approach for boosting optimal rules that, as we demonstrate on a wide range of common benchmark problems, consistently outperforms the predictive performance of boosting greedy rules.
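To make the abstract's key claim concrete, the sketch below illustrates the standard second-order (XGBoost-style) per-rule objective, G²/(H + λ) for gradient sum G and Hessian sum H over covered points, together with one way such an objective can be bounded over all refinements of a rule by scanning prefixes of the covered points ordered by gᵢ/hᵢ. The function names and this particular prefix formulation are illustrative assumptions for exposition, not the paper's exact algorithm; it assumes all hᵢ > 0 (as for common losses such as squared or logistic loss).

```python
import numpy as np

def rule_objective(g, h, lam=1.0):
    # Standard second-order boosting gain of a rule covering points
    # with gradient sum G = sum(g) and Hessian sum H = sum(h):
    #   obj = G^2 / (H + lam)
    return g.sum() ** 2 / (h.sum() + lam)

def subset_bound(g, h, lam=1.0):
    # Illustrative linear-time (after sorting) bound: any refinement of
    # the current rule covers some subset S of its points, and the
    # subset maximising G_S^2 / (H_S + lam) is a prefix of the points
    # ordered by g_i / h_i (scanned from either end). Evaluating all
    # such prefixes therefore upper-bounds the objective of every
    # refinement. Assumes h > 0 elementwise.
    order = np.argsort(g / h)
    gs, hs = g[order], h[order]
    n = len(gs)
    best = 0.0
    for idx in (range(n), range(n - 1, -1, -1)):  # both scan directions
        G = H = 0.0
        for i in idx:
            G += gs[i]
            H += hs[i]
            best = max(best, G * G / (H + lam))
    return best
```

During branch-and-bound search, a branch whose `subset_bound` falls below the best objective found so far can be pruned, since no refinement of that rule can improve on the incumbent.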
Original language: English
Title of host publication: Proceedings of the 2021 SIAM International Conference on Data Mining (SDM)
Editors: Carlotta Demeniconi, Ian Davidson
Place of Publication: Philadelphia PA USA
Publisher: Society for Industrial & Applied Mathematics (SIAM)
Number of pages: 9
ISBN (Electronic): 9781611976700
Publication status: Published - 2021
Event: SIAM International Conference on Data Mining 2021 - Online, United States of America
Duration: 29 Apr 2021 – 1 May 2021


Conference: SIAM International Conference on Data Mining 2021
Abbreviated title: SDM21
Country/Territory: United States of America


  • rule learning
  • boosting
  • branch-and-bound
  • ensemble methods
