Bayesian inference using synthetic likelihood: asymptotics and adjustments

David T. Frazier, David J. Nott, Christopher Drovandi, Robert Kohn

Research output: Contribution to journal › Article › Research › peer-review

5 Citations (Scopus)


Implementing Bayesian inference is often computationally challenging in complex models, especially when the likelihood is difficult to calculate. Synthetic likelihood is one approach to inference when the likelihood is intractable but simulating from the model is straightforward. The method constructs an approximate likelihood by treating a vector summary statistic as multivariate normal, with the unknown mean and covariance estimated by simulation. Previous research demonstrates that the Bayesian implementation of synthetic likelihood can be more computationally efficient than approximate Bayesian computation, a popular likelihood-free method, when the summary statistic is high-dimensional. Our article makes three contributions. The first shows that if the summary statistics are well behaved, then the synthetic likelihood posterior is asymptotically normal and yields credible sets with the correct level of coverage. The second compares the computational efficiency of Bayesian synthetic likelihood and approximate Bayesian computation, showing that Bayesian synthetic likelihood is more efficient. Based on the asymptotic results, the third proposes adjusted inference methods for when a possibly misspecified form, such as a diagonal matrix or a factor model, is assumed for the covariance matrix of the synthetic likelihood to speed up computation. Supplementary materials for this article are available online.
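The core construction described in the abstract, estimating the mean and covariance of a summary statistic by simulation and evaluating a Gaussian density at the observed summary, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the toy model, and the choice of `n_sim` are assumptions for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate_summaries, n_sim=200, rng=None):
    """Estimate the synthetic log-likelihood at parameter value theta.

    simulate_summaries(theta, rng) is a hypothetical user-supplied function
    that returns one vector of summary statistics from a model simulation.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Simulate n_sim replicate summary statistics at theta
    S = np.array([simulate_summaries(theta, rng) for _ in range(n_sim)])
    mu_hat = S.mean(axis=0)                # simulated mean
    Sigma_hat = np.cov(S, rowvar=False)    # simulated covariance
    # Multivariate normal approximation to the summary-statistic likelihood
    return multivariate_normal.logpdf(s_obs, mean=mu_hat, cov=Sigma_hat)

# Toy example: normal model, summaries = (sample mean, sample sd)
def sim_summ(theta, rng):
    y = rng.normal(theta[0], theta[1], size=100)
    return np.array([y.mean(), y.std(ddof=1)])
```

Replacing `Sigma_hat` with a restricted estimate (e.g. `np.diag(np.diag(Sigma_hat))` for the diagonal form mentioned in the abstract) reduces the number of estimated covariance terms, which is the setting where the article's adjusted inference methods apply.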

Original language: English
Pages (from-to): 2821-2832
Number of pages: 12
Journal: Journal of the American Statistical Association
Publication status: Published - 2023


Keywords:
  • Approximate Bayesian computation
  • Likelihood-free inference
  • Model misspecification
