A Generic Ensemble Approach to Estimate Multidimensional Likelihood in Bayesian Classifier Learning

Sunil Aryal, Kai Ming Ting

    Research output: Contribution to journal › Article › peer-review


    In Bayesian classifier learning, estimating the joint probability distribution p(x,y) or the likelihood p(x|y) directly from training data is considered difficult, especially in large multidimensional data sets. To circumvent this difficulty, existing Bayesian classifiers such as Naive Bayes, BayesNet, and AnDE estimate simplified surrogates of p(x,y) from different forms of one-dimensional likelihoods. Contrary to the perceived difficulty of multidimensional likelihood estimation, we present a simple generic ensemble approach to estimate the multidimensional likelihood directly from data. The idea is to aggregate estimates p_i(x|y), each computed from a random subsample of the data D_i (i = 1, 2, …, t). This article presents two ways to estimate multidimensional likelihoods using the proposed generic approach and introduces two new Bayesian classifiers, ENNBayes and MassBayes, which estimate p_i(x|y) using nearest-neighbor density estimation and probability estimation through feature-space partitioning, respectively. Unlike existing Bayesian classifiers, ENNBayes and MassBayes have constant training time and space complexities, so they scale better than existing Bayesian classifiers to very large data sets. Our empirical evaluation shows that ENNBayes and MassBayes yield better predictive accuracy than existing Bayesian classifiers on benchmark data sets.
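    The aggregation idea in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a one-dimensional feature, a crude k-nearest-neighbor density estimate per subsample, and simple averaging of the t per-subsample estimates; the function names and parameters (t, psi) are chosen here for illustration.

    ```python
    import random

    def nn_density(x, sample, k=1):
        """Crude 1-D k-NN density estimate: p(x) ~ k / (n * |ball|),
        where the "ball" is the interval reaching the k-th nearest
        neighbour of x in the subsample."""
        dists = sorted(abs(x - s) for s in sample)
        r = dists[k - 1] or 1e-9           # guard against zero radius
        return k / (len(sample) * 2 * r)   # a 1-D ball of radius r has length 2r

    def ensemble_likelihood(x, data, t=10, psi=8, seed=0):
        """Aggregate t likelihood estimates, each built on a random
        subsample of size psi, mirroring the averaging of p_i(x|y)
        over subsamples D_i, i = 1, ..., t."""
        rng = random.Random(seed)
        estimates = [nn_density(x, rng.sample(data, psi)) for _ in range(t)]
        return sum(estimates) / t
    ```

    In a classifier, this estimate would be computed separately for each class y (using only that class's training points) and combined with the class prior p(y); MassBayes would replace the nearest-neighbor estimator with one based on feature-space partitioning.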

    Original language: English
    Pages (from-to): 458-479
    Number of pages: 22
    Journal: Computational Intelligence
    Issue number: 3
    Publication status: Published - 1 Aug 2016


    Keywords: Bayesian classifiers, multidimensional likelihood estimation, ENNBayes, MassBayes
