Efficient parameter learning of Bayesian network classifiers

Nayyar A Zaidi, Geoffrey I. Webb, Mark Carman, François Petitjean, Wray Buntine, Mike Hynes, Hans De Sterck

Research output: Contribution to journal › Article › Research › peer-review

24 Citations (Scopus)


Recent advances have demonstrated substantial benefits from learning with both generative and discriminative parameters. On the one hand, generative approaches address the estimation of the parameters of the joint distribution P(y, x), which for most network types is very computationally efficient (a notable exception being Markov networks). On the other hand, discriminative approaches address the estimation of the parameters of the posterior distribution P(y | x) and are more effective for classification, since they fit P(y | x) directly. However, discriminative approaches are less computationally efficient, as the normalization factor in the conditional log-likelihood precludes a closed-form estimation of parameters. This paper introduces a new discriminative parameter learning method for Bayesian network classifiers that elegantly combines parameters learned using both generative and discriminative methods. The proposed method is discriminative in nature, but uses estimates of generative probabilities to speed up the optimization process. A second contribution is a simple framework to characterize the parameter learning task for Bayesian network classifiers. We conduct an extensive set of experiments on 72 standard datasets and demonstrate that our proposed discriminative parameterization provides an efficient alternative to other state-of-the-art parameterizations.
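The idea of seeding discriminative optimization with generative estimates can be illustrated with a minimal sketch, assuming a naive Bayes classifier over binary features: closed-form (generative) probability estimates are computed first, and per-feature weights on their logs are then tuned by gradient ascent on the conditional log-likelihood. This is an illustrative toy example, not the paper's exact algorithm; all variable names and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary dataset: 200 samples, 4 binary features, 2 classes.
n, d, k = 200, 4, 2
y = rng.integers(0, k, size=n)
X = (rng.random((n, d)) < np.where(y[:, None] == 1, 0.7, 0.3)).astype(int)

# Step 1 (generative): closed-form estimates with Laplace smoothing.
prior = np.array([(y == c).mean() for c in range(k)])
theta = np.array([[(X[y == c, j].sum() + 1) / ((y == c).sum() + 2)
                   for j in range(d)] for c in range(k)])  # P(x_j = 1 | y = c)

def log_cond(X):
    # log P(x_j | y = c) for every sample/class/feature -> shape (n, k, d)
    return (X[:, None, :] * np.log(theta[None])
            + (1 - X[:, None, :]) * np.log(1 - theta[None]))

# Step 2 (discriminative): learn per-feature weights w by gradient ascent on
# the conditional log-likelihood, starting from the generative solution w = 1.
w = np.ones(d)
L = log_cond(X)
for _ in range(200):
    scores = np.log(prior)[None] + L @ w              # (n, k) class scores
    p = np.exp(scores - scores.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)                      # P(y = c | x)
    onehot = np.eye(k)[y]
    grad = np.einsum('nc,ncj->j', onehot - p, L) / n  # d CLL / d w
    w += 0.5 * grad

pred = (np.log(prior)[None] + L @ w).argmax(1)
acc = (pred == y).mean()
```

Because the optimization starts from the generative parameterization (w = 1) rather than a random point, far fewer gradient steps are typically needed, which is the intuition behind using generative estimates to accelerate discriminative training.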

Original language: English
Pages (from-to): 1289-1329
Number of pages: 41
Journal: Machine Learning
Issue number: 9-10
Publication status: Published - 2017
