Dual-branch cross-dimensional self-attention-based imputation model for multivariate time series

Le Fang, Wei Xiang, Yuan Zhou, Juan Fang, Lianhua Chi, Zongyuan Ge

Research output: Contribution to journal › Article › Research › peer-review

Abstract

In real-world scenarios, partial information loss in multivariate time series degrades time series analysis. Hence, time series imputation techniques have been adopted to compensate for missing values. Existing methods focus on investigating temporal correlations, cross-variable correlations, and bidirectional dynamics of time series, and most rely on recurrent neural networks (RNNs) to capture temporal dependencies. However, RNN-based models suffer from slow computation and high complexity when modeling long-term dependencies. While some self-attention-based models without recurrent structures can handle long-term dependencies with parallel computation, they do not fully learn and exploit correlations across the temporal and cross-variable dimensions. To address these limitations, we propose a novel dual-branch cross-dimensional self-attention-based imputation (DCSAI) model for multivariate time series, which performs global and auxiliary cross-dimensional analyses when imputing missing values. In particular, the model contains masked multi-head self-attention-based encoders aligned with auxiliary generators to obtain global and auxiliary correlations in two dimensions, and these correlations are then combined into one final representation through three weighted combinations. Extensive experiments show that our model outperforms state-of-the-art benchmark methods on three real-world public datasets under various missing rates. Furthermore, ablation study results demonstrate the efficacy of each component of the model.
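To make the cross-dimensional idea concrete, the following is a minimal, greatly simplified sketch of a dual-branch imputer in PyTorch: one self-attention branch attends across time steps and the other across variables, and the two reconstructions are fused by a learned weighted combination before filling only the missing positions. All class and parameter names here (e.g., CrossDimSelfAttentionImputer, d_model, n_heads) are hypothetical illustrations, not the paper's implementation; the paper's masked attention, auxiliary generators, and three-way combination are condensed into a single two-branch fusion.

```python
import torch
import torch.nn as nn

class CrossDimSelfAttentionImputer(nn.Module):
    """Hypothetical sketch (not the authors' code) of a dual-branch
    cross-dimensional self-attention imputer: one branch attends over
    the temporal dimension, the other over the variable dimension."""

    def __init__(self, n_vars: int, seq_len: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Temporal branch: tokens are time steps (length seq_len, width n_vars).
        self.temp_in = nn.Linear(n_vars, d_model)
        self.temp_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.temp_out = nn.Linear(d_model, n_vars)
        # Cross-variable branch: tokens are variables (length n_vars, width seq_len).
        self.var_in = nn.Linear(seq_len, d_model)
        self.var_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.var_out = nn.Linear(d_model, seq_len)
        # Learnable logits for weighting the two branch reconstructions.
        self.combine = nn.Parameter(torch.zeros(2))

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x, mask: (batch, seq_len, n_vars); mask is 1 where observed, 0 where missing.
        x_in = x * mask  # zero out missing entries before encoding

        # Branch 1: self-attention across time steps.
        h_t = self.temp_in(x_in)
        h_t, _ = self.temp_attn(h_t, h_t, h_t)
        rec_t = self.temp_out(h_t)                   # (batch, seq_len, n_vars)

        # Branch 2: self-attention across variables.
        h_v = self.var_in(x_in.transpose(1, 2))      # (batch, n_vars, d_model)
        h_v, _ = self.var_attn(h_v, h_v, h_v)
        rec_v = self.var_out(h_v).transpose(1, 2)    # back to (batch, seq_len, n_vars)

        # Weighted combination of the two reconstructions.
        w = torch.softmax(self.combine, dim=0)
        rec = w[0] * rec_t + w[1] * rec_v

        # Keep observed values; fill only the missing positions.
        return mask * x + (1 - mask) * rec

# Usage on synthetic data with roughly 20% of entries masked as missing:
model = CrossDimSelfAttentionImputer(n_vars=8, seq_len=48)
x = torch.randn(16, 48, 8)
mask = (torch.rand_like(x) > 0.2).float()
imputed = model(x, mask)  # (16, 48, 8), observed entries preserved
```

The final line mirrors the standard imputation convention also used in the paper's setting: observed values are passed through unchanged, and the model's reconstruction is used only where the mask indicates a missing entry.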

Original language: English
Article number: 110896
Number of pages: 10
Journal: Knowledge-Based Systems
Volume: 279
Publication status: Published - 4 Nov 2023

Keywords

  • Deep learning
  • Missing value imputation
  • Multivariate time series
  • Self-attention
