Learning traffic as a graph: A gated graph wavelet recurrent neural network for network-scale traffic prediction

Zhiyong Cui, Ruimin Ke, Ziyuan Pu, Xiaolei Ma, Yinhai Wang

Research output: Contribution to journal › Article › peer-review

87 Citations (Scopus)

Abstract

Network-wide traffic forecasting is a critical component of modern intelligent transportation systems for urban traffic management and control. With the rise of artificial intelligence, many recent studies have attempted to use deep neural networks to extract comprehensive features from traffic networks to enhance prediction performance, given that the volume and variety of traffic data have greatly increased. Because traffic status on a road segment is highly influenced by upstream/downstream segments and nearby bottlenecks in the traffic network, extracting well-localized features from these neighboring segments is essential for a traffic prediction model. Although convolutional neural networks and graph convolutional neural networks have been adopted to learn localized features from the complex geometric or topological structure of traffic networks, the lack of flexibility in the local-feature extraction process remains a major limitation. The classical wavelet transform can detect sudden changes and peaks in temporal signals. Analogously, when extended to the graph/spectral domain, graph wavelets can concentrate on key vertices in the graph and discriminatively extract localized features. In this study, to capture the complex spatial-temporal dependencies in network-wide traffic data, we model the traffic network as a graph and propose a graph wavelet gated recurrent (GWGR) neural network. The graph wavelet is incorporated as a key component for extracting spatial features in the proposed model, and a gated recurrent structure is employed to learn temporal dependencies in the sequence data. Compared to baseline models, the proposed model achieves state-of-the-art prediction performance and training efficiency on two real-world datasets. In addition, experiments show that the sparsity of the graph wavelet weight matrices greatly increases the interpretability of GWGR.
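The abstract describes two building blocks: a graph wavelet operator for extracting localized spatial features and a gated recurrent structure for temporal dependencies. The sketch below is a minimal, hypothetical PyTorch illustration of how such a combination could be wired together. It uses the generic spectral graph-wavelet formulation (psi · diag(theta) · psi^{-1}) with a heat-kernel basis followed by a standard GRU; all class names, layer sizes, and the scale parameter are illustrative assumptions, not the GWGR architecture reported in the paper.

```python
import torch
import torch.nn as nn


def graph_wavelet_basis(adj: torch.Tensor, scale: float = 1.0):
    """Heat-kernel graph wavelet basis psi = U exp(-s*Lambda) U^T and its inverse,
    built from the symmetric normalized Laplacian of an adjacency matrix."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.clamp(min=1e-8).pow(-0.5))
    lap = torch.eye(adj.size(0)) - d_inv_sqrt @ adj @ d_inv_sqrt
    lam, u = torch.linalg.eigh(lap)                       # spectral decomposition
    psi = u @ torch.diag(torch.exp(-scale * lam)) @ u.T   # wavelet basis
    psi_inv = u @ torch.diag(torch.exp(scale * lam)) @ u.T
    return psi, psi_inv


class GraphWaveletLayer(nn.Module):
    """Spatial filtering in the wavelet domain: psi * diag(theta) * psi^{-1} * X * W."""

    def __init__(self, psi, psi_inv, in_dim, out_dim):
        super().__init__()
        self.register_buffer("psi", psi)
        self.register_buffer("psi_inv", psi_inv)
        self.theta = nn.Parameter(torch.ones(psi.size(0)))  # diagonal spectral filter
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                  # x: (batch, nodes, in_dim)
        x = self.psi_inv @ x               # project node signals into the wavelet domain
        x = self.theta.unsqueeze(-1) * x   # per-vertex filtering
        x = self.psi @ x                   # project back to the vertex domain
        return self.lin(x)


class GWGRSketch(nn.Module):
    """Graph wavelet spatial feature extractor followed by a GRU over time (illustrative)."""

    def __init__(self, psi, psi_inv, n_nodes, hidden=64):
        super().__init__()
        self.spatial = GraphWaveletLayer(psi, psi_inv, 1, 8)
        self.gru = nn.GRU(n_nodes * 8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_nodes)  # one-step-ahead prediction per node

    def forward(self, x):                  # x: (batch, time, nodes)
        b, t, n = x.shape
        feats = self.spatial(x.reshape(b * t, n, 1)).reshape(b, t, -1)
        h, _ = self.gru(feats)
        return self.out(h[:, -1])          # predict the next step from the last hidden state


# Toy usage on a random 20-node network with 12 past time steps per sequence.
adj = torch.rand(20, 20)
adj = ((adj + adj.T) > 1.0).float()
psi, psi_inv = graph_wavelet_basis(adj, scale=1.0)
model = GWGRSketch(psi, psi_inv, n_nodes=20)
pred = model(torch.randn(4, 12, 20))       # pred: (4, 20)
```

In the paper, sparsity of the graph wavelet weight matrices is highlighted as a source of interpretability; in this sketch that would correspond to many entries of psi being near zero so that each filtered node signal depends mainly on a small neighborhood.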

Original language: English
Article number: 102620
Number of pages: 15
Journal: Transportation Research Part C: Emerging Technologies
Volume: 115
DOIs
Publication status: Published - Jun 2020
Externally published: Yes

Keywords

  • Deep learning
  • Graph wavelet
  • Interpretability
  • Recurrent neural network
  • Sparsity
  • Traffic forecasting
