A pairwise attentive adversarial spatiotemporal network for cross-domain few-shot action recognition

Zan Gao, Leming Guo, Weili Guan, An-An Liu, Tongwei Ren, Shengyong Chen

Research output: Contribution to journal › Article › Research › peer-review

49 Citations (Scopus)


Abstract - Action recognition is a popular research topic in the computer vision and machine learning domains. Although many action recognition methods have been proposed, few researchers have focused on cross-domain few-shot action recognition, which must often be performed in real security surveillance. Because the problems of action recognition, domain adaptation, and few-shot learning must be solved simultaneously, cross-domain few-shot action recognition is a challenging task. To address these issues, in this work, we develop a novel end-to-end pairwise attentive adversarial spatiotemporal network (PASTN) for cross-domain few-shot action recognition, in which spatiotemporal information acquisition, few-shot learning, and video domain adaptation are realised in a unified framework. Specifically, the ResNet-50 network is selected as the backbone of the PASTN, and a 3D convolution block is embedded in the top layer of the 2D CNN (ResNet-50) to capture spatiotemporal representations. Moreover, a novel attentive adversarial network architecture is designed to align the spatiotemporal dynamics of actions that exhibit higher domain discrepancies. In addition, a pairwise margin discrimination loss is designed for the pairwise network architecture to improve the discriminability of the learned domain-invariant spatiotemporal features. The results of extensive experiments performed on three public cross-domain action recognition benchmarks, namely SDAI Action I, SDAI Action II, and UCF50-OlympicSport, demonstrate that the proposed PASTN significantly outperforms state-of-the-art cross-domain action recognition methods in terms of both accuracy and computational time.
Even when only two labelled training samples per category are considered in the office1 scenario of the SDAI Action I dataset, the accuracy of the PASTN is improved by 6.1%, 10.9%, 16.8%, and 14% compared with those of the TA³N, TemporalPooling, I3D, and P3D methods, respectively.
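The abstract states that a 3D convolution block is embedded on top of the 2D ResNet-50 feature maps to capture spatiotemporal representations. As a rough illustration of that idea only (the paper's actual block design, channel counts, and kernel sizes are not given here), the following NumPy sketch applies a naive 3D convolution across per-frame 2D-CNN feature maps stacked along the temporal axis; all shapes are toy values chosen for the demo, not the paper's:

```python
import numpy as np

def conv3d(x, w, b):
    """Naive valid 3D convolution.

    x: input feature volume, shape (C_in, T, H, W)
    w: filter bank, shape (C_out, C_in, kt, kh, kw)
    b: per-output-channel bias, shape (C_out,)
    Returns shape (C_out, T-kt+1, H-kh+1, W-kw+1).
    """
    C_out, C_in, kt, kh, kw = w.shape
    _, T, H, W = x.shape
    out = np.zeros((C_out, T - kt + 1, H - kh + 1, W - kw + 1))
    for o in range(C_out):
        for t in range(T - kt + 1):
            for i in range(H - kh + 1):
                for j in range(W - kw + 1):
                    # correlate one (kt, kh, kw) spatiotemporal window
                    out[o, t, i, j] = np.sum(
                        x[:, t:t + kt, i:i + kh, j:j + kw] * w[o]
                    ) + b[o]
    return out

# Toy stand-in for the 2D backbone's top-layer maps: 4 channels,
# 8 frames, 7x7 spatial resolution (real ResNet-50 maps are 2048-channel).
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 7, 7))
w = rng.standard_normal((2, 4, 3, 3, 3)) * 0.1  # 2 output channels, 3x3x3 kernels
b = np.zeros(2)

st = conv3d(feats, w, b)
print(st.shape)  # (2, 6, 5, 5): time and space both shrink by kernel-1
```

The point of the sketch is only the data flow: the 2D backbone produces one feature map per frame, and a kernel spanning several frames then mixes information across time as well as space.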

Original language: English
Pages (from-to): 767-782
Number of pages: 16
Journal: IEEE Transactions on Image Processing
Publication status: Published - 24 Nov 2021


  • action recognition
  • attentive adversarial network
  • cross-domain learning
  • few-shot
  • pairwise margin discrimination loss
  • TR3D
