Robust approximate Bayesian inference with synthetic likelihood

David T. Frazier, Christopher Drovandi

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Bayesian synthetic likelihood (BSL) is now an established method for conducting approximate Bayesian inference in models where, due to the intractability of the likelihood function, exact Bayesian approaches are either infeasible or computationally too demanding. Implicit in the application of BSL is the assumption that the data-generating process (DGP) can produce simulated summary statistics that capture the behaviour of the observed summary statistics. We demonstrate that if this compatibility between the actual and assumed DGP is not satisfied, that is, if the model is misspecified, BSL can yield unreliable parameter inference. To circumvent this issue, we propose a new BSL approach that can detect the presence of model misspecification, and simultaneously deliver useful inferences even under significant model misspecification. Two simulated and two real data examples demonstrate the performance of this new approach to BSL, and document its superior accuracy over standard BSL when the assumed model is misspecified. Supplementary materials for this article are available online.
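To make the idea concrete, here is a minimal sketch of standard BSL (not the robust variant proposed in the article): at each parameter value, the model is simulated repeatedly, the mean and covariance of the simulated summary statistics are estimated, and a Gaussian density at the observed summaries serves as the (synthetic) likelihood inside an MCMC loop. The toy data-generating process, summary choices, and tuning constants below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def simulate_summaries(theta, n_obs=100):
    # Hypothetical toy DGP: normal data with unknown mean theta;
    # summaries are the sample mean and log sample variance.
    y = rng.normal(loc=theta, scale=1.0, size=n_obs)
    return np.array([y.mean(), np.log(y.var(ddof=1))])

def synthetic_loglik(theta, s_obs, m=200):
    # Estimate the summary mean and covariance from m model simulations,
    # then evaluate a Gaussian log-density at the observed summaries.
    sims = np.array([simulate_summaries(theta) for _ in range(m)])
    mu_hat = sims.mean(axis=0)
    sigma_hat = np.cov(sims, rowvar=False)
    return multivariate_normal.logpdf(s_obs, mean=mu_hat, cov=sigma_hat)

# "Observed" summaries generated at a known theta = 1.0 for illustration.
s_obs = simulate_summaries(1.0)

# Random-walk Metropolis on theta with a flat prior, using the
# synthetic log-likelihood in place of the intractable one.
theta, ll = 0.0, synthetic_loglik(0.0, s_obs)
samples = []
for _ in range(500):
    prop = theta + rng.normal(scale=0.3)
    ll_prop = synthetic_loglik(prop, s_obs)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    samples.append(theta)

posterior_mean = np.mean(samples[250:])  # discard burn-in
print(posterior_mean)
```

Under the misspecification the article studies, no theta makes the estimated summary mean match the observed summaries, and this plain Gaussian approximation can become unreliable; the paper's robust approach adds adjustment components to detect and absorb that incompatibility.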

Original language: English
Number of pages: 19
Journal: Journal of Computational and Graphical Statistics
DOIs
Publication status: Accepted/In press - 2021

Keywords

  • Approximate Bayesian computation
  • Likelihood-free inference
  • Model misspecification
  • Robust Bayesian inference
  • Slice sampling
  • Synthetic likelihood
