TY - JOUR
T1 - Leveraging structural context models and ranking score fusion for human interaction prediction
AU - Ke, Qiuhong
AU - Bennamoun, Mohammed
AU - An, Senjian
AU - Sohel, Ferdous
AU - Boussaid, Farid
N1 - Funding Information:
Manuscript received June 7, 2017; revised September 14, 2017 and November 8, 2017; accepted November 9, 2017. Date of publication November 29, 2017; date of current version June 15, 2018. This work was supported by Australian Research Council Grants DP150100294, DP150104251, and DE120102960. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Tao Mei. (Corresponding author: Qiuhong Ke.) Q. Ke, M. Bennamoun, and S. An are with the School of Computer Science and Software Engineering, The University of Western Australia, Crawley, WA 6009, Australia (e-mail: [email protected]; [email protected]; [email protected]).
Publisher Copyright:
© 1999-2012 IEEE.
PY - 2018/7
Y1 - 2018/7
N2 - Predicting an interaction before it is fully executed is very important in applications such as human-robot interaction and video surveillance. In a two-human interaction scenario, there are often contextual dependency structures between the global interaction context of the two humans and the local context of the different body parts of each human. In this paper, we propose to learn the structure of the interaction contexts and combine it with the spatial and temporal information of a video sequence to better predict the interaction class. The structural models, including the spatial and the temporal models, are learned with long short-term memory (LSTM) networks to capture the dependency of the global and local contexts of each RGB frame and each optical flow image, respectively. LSTM networks are also capable of detecting the key information from global and local interaction contexts. Moreover, to effectively combine the structural models with the spatial and temporal models for interaction prediction, a ranking score fusion method is introduced to automatically compute the optimal weight of each model for score fusion. Experimental results on the BIT-Interaction Dataset and the UT-Interaction Dataset clearly demonstrate the benefits of the proposed method.
AB - Predicting an interaction before it is fully executed is very important in applications such as human-robot interaction and video surveillance. In a two-human interaction scenario, there are often contextual dependency structures between the global interaction context of the two humans and the local context of the different body parts of each human. In this paper, we propose to learn the structure of the interaction contexts and combine it with the spatial and temporal information of a video sequence to better predict the interaction class. The structural models, including the spatial and the temporal models, are learned with long short-term memory (LSTM) networks to capture the dependency of the global and local contexts of each RGB frame and each optical flow image, respectively. LSTM networks are also capable of detecting the key information from global and local interaction contexts. Moreover, to effectively combine the structural models with the spatial and temporal models for interaction prediction, a ranking score fusion method is introduced to automatically compute the optimal weight of each model for score fusion. Experimental results on the BIT-Interaction Dataset and the UT-Interaction Dataset clearly demonstrate the benefits of the proposed method.
KW - Interaction prediction
KW - interaction structure
KW - LSTM
KW - ranking score fusion
UR - http://www.scopus.com/inward/record.url?scp=85036530091&partnerID=8YFLogxK
U2 - 10.1109/TMM.2017.2778559
DO - 10.1109/TMM.2017.2778559
M3 - Article
AN - SCOPUS:85036530091
SN - 1520-9210
VL - 20
SP - 1712
EP - 1723
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
IS - 7
ER -