Gaze-assisted multi-stream deep neural network for action recognition

Yinan Liu, Qingbo Wu, Liangzhi Tang, Hengcan Shi

Research output: Contribution to journal › Article › Research › peer-review

17 Citations (Scopus)

Abstract

Human action recognition involves two key questions. The first is how to locate the region that best indicates what the subjects in a video are doing. The second is how to exploit the appearance and motion information in the video data. In this paper, we propose a gaze-assisted deep neural network that performs action recognition with the help of human visual attention. We first collect a large amount of human gaze data by recording the eye movements of human subjects while they watch the videos. We then train a fully convolutional network to predict human gaze. To use the gaze efficiently, inspired by the rank pooling concept, which encodes a video into a single image, we design a novel video representation named dynamic gaze. Dynamic gaze captures both appearance and motion information from the video, while the human gaze data better localizes the regions of interest. Based on the dynamic gaze, we build a dynamic gaze stream and combine it with the standard two-stream architecture to form our final multi-stream architecture. We have collected over 300,000 human gaze maps for the J-HMDB data set, and experiments show that the proposed multi-stream architecture achieves results comparable to the state of the art on action recognition, with both the collected and the predicted human gaze data.
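The rank pooling idea the abstract refers to can be approximated in closed form (as in the dynamic-image line of work): each frame is weighted by a coefficient that depends on its temporal position, and the weighted sum yields a single image summarizing the clip. The sketch below assumes NumPy arrays of shape (T, H, W[, C]); the function name and the weighting formula are illustrative of the general technique, not the authors' exact pipeline.

```python
import numpy as np

def approximate_rank_pooling(frames: np.ndarray) -> np.ndarray:
    """Collapse a video clip (T, H, W[, C]) into a single 'dynamic image'.

    Uses the closed-form approximate rank-pooling weights
    alpha_t = 2t - T - 1 (t = 1..T): later frames receive positive
    weight and earlier frames negative weight, so the weighted sum
    encodes the temporal evolution of appearance.
    """
    T = frames.shape[0]
    t = np.arange(1, T + 1, dtype=np.float64)
    alpha = 2.0 * t - T - 1.0
    # Broadcast the per-frame weights over spatial (and channel) axes.
    alpha = alpha.reshape((T,) + (1,) * (frames.ndim - 1))
    return (alpha * frames).sum(axis=0)

# A perfectly static clip carries no motion: its dynamic image is all
# zeros, because the weights alpha_t sum to zero over t = 1..T.
static_clip = np.ones((8, 32, 32, 3))
print(np.allclose(approximate_rank_pooling(static_clip), 0.0))  # True
```

In the paper's setting, the dynamic gaze representation would presumably apply this kind of temporal encoding to gaze-modulated frames rather than raw RGB, so that the single summary image emphasizes the regions humans actually attend to.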

Original language: English
Pages (from-to): 19432-19441
Number of pages: 10
Journal: IEEE Access
Volume: 5
DOIs
Publication status: Published - 18 Sept 2017
Externally published: Yes

Keywords

  • Action recognition
  • convolutional neural network
  • human gaze
