TY - JOUR
T1 - Improving the accuracy of global forecasting models using time series data augmentation
AU - Bandara, Kasun
AU - Hewamalage, Hansika
AU - Liu, Yuan Hao
AU - Kang, Yanfei
AU - Bergmeir, Christoph
N1 - Funding Information:
This research was supported by the Australian Research Council under grant DE190100045, by a Facebook Statistics for Improving Insights and Decisions research award, by Monash Institute of Medical Engineering seed funding, by the MASSIVE - High performance computing facility, Australia, and by the National Natural Science Foundation of China (No. 11701022).
Publisher Copyright:
© 2021 Elsevier Ltd
Copyright:
Copyright 2021 Elsevier B.V., All rights reserved.
PY - 2021/12
Y1 - 2021/12
N2 - Forecasting models that are trained across sets of many time series, known as Global Forecasting Models (GFM), have recently shown promising results in forecasting competitions and real-world applications, outperforming many state-of-the-art univariate forecasting techniques. In most cases, GFMs are implemented using deep neural networks, in particular Recurrent Neural Networks (RNN), which require a sufficient number of time series to estimate their numerous model parameters. However, many time series databases contain only a limited number of time series. In this study, we propose a novel, data-augmentation-based forecasting framework that is capable of improving the baseline accuracy of GFM models in less data-abundant settings. We use three time series augmentation techniques, GRATIS, moving block bootstrap (MBB), and dynamic time warping barycentric averaging (DBA), to synthetically generate a collection of time series. The knowledge acquired from these augmented time series is then transferred to the original dataset using two different approaches: the pooled approach and the transfer learning approach. When building GFMs, in the pooled approach we train a model on the augmented time series alongside the original time series dataset, whereas in the transfer learning approach we adapt a pre-trained model to the new dataset. In our evaluation on competition and real-world time series datasets, our proposed variants significantly improve the baseline accuracy of GFM models and outperform state-of-the-art univariate forecasting methods.
AB - Forecasting models that are trained across sets of many time series, known as Global Forecasting Models (GFM), have recently shown promising results in forecasting competitions and real-world applications, outperforming many state-of-the-art univariate forecasting techniques. In most cases, GFMs are implemented using deep neural networks, in particular Recurrent Neural Networks (RNN), which require a sufficient number of time series to estimate their numerous model parameters. However, many time series databases contain only a limited number of time series. In this study, we propose a novel, data-augmentation-based forecasting framework that is capable of improving the baseline accuracy of GFM models in less data-abundant settings. We use three time series augmentation techniques, GRATIS, moving block bootstrap (MBB), and dynamic time warping barycentric averaging (DBA), to synthetically generate a collection of time series. The knowledge acquired from these augmented time series is then transferred to the original dataset using two different approaches: the pooled approach and the transfer learning approach. When building GFMs, in the pooled approach we train a model on the augmented time series alongside the original time series dataset, whereas in the transfer learning approach we adapt a pre-trained model to the new dataset. In our evaluation on competition and real-world time series datasets, our proposed variants significantly improve the baseline accuracy of GFM models and outperform state-of-the-art univariate forecasting methods.
KW - Data augmentation
KW - Global forecasting models
KW - RNN
KW - Time series forecasting
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85110297689&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2021.108148
DO - 10.1016/j.patcog.2021.108148
M3 - Article
AN - SCOPUS:85110297689
SN - 0031-3203
VL - 120
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 108148
ER -