Abstract
Proper scoring rules are used to assess the out-of-sample accuracy of probabilistic forecasts, with different scoring rules rewarding distinct aspects of forecast performance. Herein, we re-investigate the practice of using proper scoring rules to produce probabilistic forecasts that are ‘optimal’ according to a given score, and assess when their out-of-sample accuracy is superior to that of alternative forecasts, according to that score. Particular attention is paid to relative predictive performance under misspecification of the predictive model. Using numerical illustrations, we document several novel findings within this paradigm that highlight the important interplay between the true data generating process, the assumed predictive model and the scoring rule. Notably, we show that this approach to forecasting reaps benefits only when the predictive model is sufficiently compatible with the true process to allow a particular score criterion to reward what it is designed to reward. Subject to this compatibility, however, the superiority of the optimal forecast increases with the degree of misspecification. We explore these issues under a range of different scenarios, using both artificially simulated and empirical data.
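The paradigm described in the abstract can be illustrated with a minimal sketch (this is not code from the paper): assume a Student-t true data generating process and a misspecified Gaussian predictive model, and choose the model's parameters by optimising either the average log score or the average CRPS over an estimation sample. The distributions, sample split and function names below are illustrative assumptions only.

```python
# Illustrative sketch of 'optimal' probabilistic forecasting under
# misspecification. True DGP: Student-t; predictive model: Gaussian
# (misspecified). Parameters are chosen by optimising a proper score
# in-sample, then both 'optimal' forecasts are scored out of sample.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
y = stats.t.rvs(df=3, size=2000, random_state=rng)  # assumed true DGP
y_in, y_out = y[:1000], y[1000:]                    # estimation / evaluation split

def log_score(params, data):
    """Negative average log score of a Gaussian predictive density."""
    mu, log_sigma = params  # log-sigma parameterisation keeps scale positive
    return -np.mean(stats.norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

def crps(params, data):
    """Average CRPS of a Gaussian predictive, via its closed form."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (data - mu) / sigma
    vals = sigma * (z * (2 * stats.norm.cdf(z) - 1)
                    + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))
    return np.mean(vals)

# 'Optimal' forecasts: optimise each score criterion in-sample.
theta_ls = optimize.minimize(log_score, x0=[0.0, 0.0], args=(y_in,)).x
theta_crps = optimize.minimize(crps, x0=[0.0, 0.0], args=(y_in,)).x

# Out-of-sample comparison of both forecasts under both scores.
for name, score in [("log score", log_score), ("CRPS", crps)]:
    print(f"{name}: LS-optimal = {score(theta_ls, y_out):.4f}, "
          f"CRPS-optimal = {score(theta_crps, y_out):.4f}")
```

In regular cases, each criterion's optimiser tends to perform best out of sample under its own score; the paper's findings concern when, and by how much, this holds as the predictive model becomes more misspecified.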
| Original language | English |
| --- | --- |
| Pages (from-to) | 384-406 |
| Number of pages | 23 |
| Journal | International Journal of Forecasting |
| Volume | 38 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2022 |
Keywords
- Linear predictive pools
- Optimal predictions
- Predictive distributions
- Proper scoring rules
- Stochastic volatility with jumps
- Testing equal predictive ability
Projects

-
Loss-based Bayesian Prediction
Maneesoonthorn, O. (Primary Chief Investigator (PCI)), Martin, G. (Chief Investigator (CI)), Frazier, D. (Chief Investigator (CI)) & Hyndman, R. (Chief Investigator (CI))
19/06/20 → 18/06/25
Project: Research
-
Consequences of Model Misspecification in Approximate Bayesian Computation
Frazier, D. (Primary Chief Investigator (PCI))
Australian Research Council (ARC)
1/02/20 → 30/06/25
Project: Research
-
The Validation of Approximate Bayesian Computation: Theory and Practice
Martin, G. (Primary Chief Investigator (PCI)), Frazier, D. (Chief Investigator (CI)), Renault, E. (Chief Investigator (CI)) & Robert, C. (Partner Investigator (PI))
Australian Research Council (ARC), Monash University, Brown University, Université Paris Dauphine (Paris Dauphine University)
1/02/17 → 31/12/21
Project: Research