Efficient selection of hyperparameters in large Bayesian VARs using automatic differentiation

Joshua C.C. Chan , Liana Jacobi, Dan Zhu

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)

Abstract

Large Bayesian vector autoregressions with the natural conjugate prior are now routinely used for forecasting and structural analysis. It has been shown that selecting the prior hyperparameters in a data-driven manner can often substantially improve forecast performance. We propose a computationally efficient method to obtain the optimal hyperparameters based on automatic differentiation, a technique for computing exact derivatives efficiently. Using a large US data set, we show that using the optimal hyperparameter values leads to substantially better forecast performance. Moreover, the proposed method is much faster than the conventional grid-search approach, and is applicable in high-dimensional optimization problems. The new method thus provides a practical and systematic way to develop better shrinkage priors for forecasting in a data-rich environment.
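To illustrate the general idea of gradient-based hyperparameter selection described in the abstract, the sketch below maximizes a closed-form log marginal likelihood with respect to a single shrinkage hyperparameter using automatic differentiation in JAX. This is not the authors' implementation and not a Bayesian VAR with the natural conjugate prior; it uses a simplified conjugate regression model, and the names (`kappa`, `sigma2`, `optimize_kappa`) are illustrative assumptions.

```python
# Minimal sketch (assumed, simplified setup): choose a shrinkage hyperparameter
# by maximizing a closed-form log marginal likelihood with AD gradients,
# instead of evaluating it over a grid.
# Model: y = X b + e, e ~ N(0, sigma2 I), b ~ N(0, kappa I), so that
# marginally y ~ N(0, sigma2 I + kappa X X').
import jax
import jax.numpy as jnp

def log_marginal_likelihood(log_kappa, X, y, sigma2=1.0):
    kappa = jnp.exp(log_kappa)          # optimize on the log scale to keep kappa > 0
    n = y.shape[0]
    S = sigma2 * jnp.eye(n) + kappa * X @ X.T
    _, logdet = jnp.linalg.slogdet(S)
    quad = y @ jnp.linalg.solve(S, y)
    return -0.5 * (n * jnp.log(2.0 * jnp.pi) + logdet + quad)

# Derivative of the objective with respect to the hyperparameter via AD.
grad_fn = jax.grad(log_marginal_likelihood)

def optimize_kappa(X, y, steps=200, lr=0.05):
    log_kappa = jnp.array(0.0)
    for _ in range(steps):
        log_kappa = log_kappa + lr * grad_fn(log_kappa, X, y)  # gradient ascent
    return jnp.exp(log_kappa)

if __name__ == "__main__":
    key_x, key_e = jax.random.PRNGKey(0), jax.random.PRNGKey(1)
    X = jax.random.normal(key_x, (100, 5))
    b_true = jnp.array([1.0, -0.5, 0.0, 0.3, 0.0])
    y = X @ b_true + jax.random.normal(key_e, (100,))
    print("selected shrinkage:", optimize_kappa(X, y))
```

In the paper's setting the objective would instead be the marginal likelihood of the large VAR under the natural conjugate prior, possibly with several hyperparameters optimized jointly; the point of the sketch is only that AD supplies the gradient, so a grid search over hyperparameter values is not needed.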

Original language: English
Pages (from-to): 934-943
Number of pages: 10
Journal: Journal of Forecasting
Volume: 39
Issue number: 6
DOIs
Publication status: Published - Sep 2020

Keywords

  • Automatic differentiation
  • Vector autoregression
  • Optimal hyperparameters
  • Forecasts
  • Marginal likelihood
