Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification

Zongyuan Ge, Chris McCool, Conrad Sanderson, Peng Wang, Lingqiao Liu, Ian Reid, Peter Corke

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification. In this work we present the novel task of video-based fine-grained object classification, propose a corresponding new video dataset, and perform a systematic study of several recent deep convolutional neural network (DCNN) based approaches, which we specifically adapt to the task. We evaluate three-dimensional DCNNs, two-stream DCNNs, and bilinear DCNNs. Two forms of the two-stream approach are used, where spatial and temporal data from two independent DCNNs are fused either via early fusion (combination of the fully-connected layers) or late fusion (concatenation of the softmax outputs of the DCNNs). For bilinear DCNNs, information from the convolutional layers of the spatial and temporal DCNNs is combined via local co-occurrences. We then fuse the bilinear DCNN and early fusion of the two-stream approach to combine the spatial and temporal information at the local and global level (Spatio-Temporal Co-occurrence). Using the new and challenging video dataset of birds, classification performance is improved from 23.1% (using single images) to 41.1% when using the Spatio-Temporal Co-occurrence system. Incorporating automatically detected bounding box location further improves the classification accuracy to 53.6%.
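The fusion strategies named in the abstract can be illustrated with a minimal NumPy sketch. All feature dimensions below (4096-dimensional FC features, 10 classes, 7x7x512 convolutional maps) are hypothetical placeholders, not the paper's actual layer sizes; the sketch only shows the shape of each fusion operation, not the trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully-connected features from the two independent streams.
fc_spatial = rng.standard_normal(4096)
fc_temporal = rng.standard_normal(4096)

# Early fusion: combine the fully-connected layers of the two streams
# before the final classifier.
early_fused = np.concatenate([fc_spatial, fc_temporal])  # shape (8192,)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Late fusion: concatenate the softmax outputs of the two independent DCNNs.
logits_spatial = rng.standard_normal(10)   # 10 hypothetical classes
logits_temporal = rng.standard_normal(10)
late_fused = np.concatenate([softmax(logits_spatial),
                             softmax(logits_temporal)])  # shape (20,)

# Bilinear pooling: local co-occurrences between convolutional features of
# the two streams -- an outer product at each spatial location, summed over
# all locations (here 49 locations, 512 channels per stream).
conv_spatial = rng.standard_normal((49, 512))
conv_temporal = rng.standard_normal((49, 512))
bilinear = conv_spatial.T @ conv_temporal  # (512, 512) co-occurrence matrix
```

Each fused representation would then be passed to a classifier; the paper's Spatio-Temporal Co-occurrence system combines the bilinear (local) and early-fusion (global) representations.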

Original language: English
Title of host publication: 2016 International Conference on Digital Image Computing
Subtitle of host publication: Techniques and Applications (DICTA)
Editors: Alan Wee-Chung Liew, Brian Lovell, Clinton Fookes
Place of Publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 6
ISBN (Electronic): 9781509028955
ISBN (Print): 9781509028962
DOIs: https://doi.org/10.1109/DICTA.2016.7797039
Publication status: Published - 22 Dec 2016
Externally published: Yes
Event: Digital Image Computing Techniques and Applications 2016 - Mantra on View Hotel, Surfers Paradise, Gold Coast, Australia
Duration: 30 Nov 2016 – 2 Dec 2016
Conference number: 18
http://dicta2016.dictaconference.org/index.html

Conference

Conference: Digital Image Computing Techniques and Applications 2016
Abbreviated title: DICTA 2016
Country: Australia
City: Gold Coast
Period: 30/11/16 – 2/12/16
Other: The International Conference on Digital Image Computing: Techniques and Applications (DICTA) is the main Australian Conference on computer vision, image processing, pattern recognition, and related areas. DICTA was established in 1991 as the premier conference of the Australian Pattern Recognition Society (APRS).
Internet address: http://dicta2016.dictaconference.org/index.html

Cite this

Ge, Z., McCool, C., Sanderson, C., Wang, P., Liu, L., Reid, I., & Corke, P. (2016). Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification. In A. W-C. Liew, B. Lovell, & C. Fookes (Eds.), 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA) [7797039] Piscataway NJ USA: IEEE, Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/DICTA.2016.7797039
Ge, Zongyuan ; McCool, Chris ; Sanderson, Conrad ; Wang, Peng ; Liu, Lingqiao ; Reid, Ian ; Corke, Peter. / Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification. 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA). editor / Alan Wee-Chung Liew ; Brian Lovell ; Clinton Fookes. Piscataway NJ USA : IEEE, Institute of Electrical and Electronics Engineers, 2016.
@inproceedings{ee64eb16634c46a19f433d3e70e6eede,
title = "Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification",
abstract = "Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification. In this work we present the novel task of video-based fine-grained object classification, propose a corresponding new video dataset, and perform a systematic study of several recent deep convolutional neural network (DCNN) based approaches, which we specifically adapt to the task. We evaluate three-dimensional DCNNs, two-stream DCNNs, and bilinear DCNNs. Two forms of the two-stream approach are used, where spatial and temporal data from two independent DCNNs are fused either via early fusion (combination of the fully-connected layers) or late fusion (concatenation of the softmax outputs of the DCNNs). For bilinear DCNNs, information from the convolutional layers of the spatial and temporal DCNNs is combined via local co-occurrences. We then fuse the bilinear DCNN and early fusion of the two-stream approach to combine the spatial and temporal information at the local and global level (Spatio-Temporal Co-occurrence). Using the new and challenging video dataset of birds, classification performance is improved from 23.1{\%} (using single images) to 41.1{\%} when using the Spatio-Temporal Co-occurrence system. Incorporating automatically detected bounding box location further improves the classification accuracy to 53.6{\%}.",
author = "Zongyuan Ge and Chris McCool and Conrad Sanderson and Peng Wang and Lingqiao Liu and Ian Reid and Peter Corke",
year = "2016",
month = "12",
day = "22",
doi = "10.1109/DICTA.2016.7797039",
language = "English",
isbn = "9781509028962",
editor = "Liew, {Alan Wee-Chung} and Brian Lovell and Clinton Fookes",
booktitle = "2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)",
publisher = "IEEE, Institute of Electrical and Electronics Engineers",
address = "United States of America",

}

Ge, Z, McCool, C, Sanderson, C, Wang, P, Liu, L, Reid, I & Corke, P 2016, Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification. in AW-C Liew, B Lovell & C Fookes (eds), 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)., 7797039, IEEE, Institute of Electrical and Electronics Engineers, Piscataway NJ USA, Digital Image Computing Techniques and Applications 2016, Gold Coast, Australia, 30/11/16. https://doi.org/10.1109/DICTA.2016.7797039

Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification. / Ge, Zongyuan; McCool, Chris; Sanderson, Conrad; Wang, Peng; Liu, Lingqiao; Reid, Ian; Corke, Peter.

2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA). ed. / Alan Wee-Chung Liew; Brian Lovell; Clinton Fookes. Piscataway NJ USA : IEEE, Institute of Electrical and Electronics Engineers, 2016. 7797039.


TY - GEN

T1 - Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification

AU - Ge, Zongyuan

AU - McCool, Chris

AU - Sanderson, Conrad

AU - Wang, Peng

AU - Liu, Lingqiao

AU - Reid, Ian

AU - Corke, Peter

PY - 2016/12/22

Y1 - 2016/12/22

N2 - Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification. In this work we present the novel task of video-based fine-grained object classification, propose a corresponding new video dataset, and perform a systematic study of several recent deep convolutional neural network (DCNN) based approaches, which we specifically adapt to the task. We evaluate three-dimensional DCNNs, two-stream DCNNs, and bilinear DCNNs. Two forms of the two-stream approach are used, where spatial and temporal data from two independent DCNNs are fused either via early fusion (combination of the fully-connected layers) or late fusion (concatenation of the softmax outputs of the DCNNs). For bilinear DCNNs, information from the convolutional layers of the spatial and temporal DCNNs is combined via local co-occurrences. We then fuse the bilinear DCNN and early fusion of the two-stream approach to combine the spatial and temporal information at the local and global level (Spatio-Temporal Co-occurrence). Using the new and challenging video dataset of birds, classification performance is improved from 23.1% (using single images) to 41.1% when using the Spatio-Temporal Co-occurrence system. Incorporating automatically detected bounding box location further improves the classification accuracy to 53.6%.

AB - Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification. In this work we present the novel task of video-based fine-grained object classification, propose a corresponding new video dataset, and perform a systematic study of several recent deep convolutional neural network (DCNN) based approaches, which we specifically adapt to the task. We evaluate three-dimensional DCNNs, two-stream DCNNs, and bilinear DCNNs. Two forms of the two-stream approach are used, where spatial and temporal data from two independent DCNNs are fused either via early fusion (combination of the fully-connected layers) or late fusion (concatenation of the softmax outputs of the DCNNs). For bilinear DCNNs, information from the convolutional layers of the spatial and temporal DCNNs is combined via local co-occurrences. We then fuse the bilinear DCNN and early fusion of the two-stream approach to combine the spatial and temporal information at the local and global level (Spatio-Temporal Co-occurrence). Using the new and challenging video dataset of birds, classification performance is improved from 23.1% (using single images) to 41.1% when using the Spatio-Temporal Co-occurrence system. Incorporating automatically detected bounding box location further improves the classification accuracy to 53.6%.

UR - http://www.scopus.com/inward/record.url?scp=85011112588&partnerID=8YFLogxK

U2 - 10.1109/DICTA.2016.7797039

DO - 10.1109/DICTA.2016.7797039

M3 - Conference Paper

SN - 9781509028962

BT - 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)

A2 - Liew, Alan Wee-Chung

A2 - Lovell, Brian

A2 - Fookes, Clinton

PB - IEEE, Institute of Electrical and Electronics Engineers

CY - Piscataway NJ USA

ER -

Ge Z, McCool C, Sanderson C, Wang P, Liu L, Reid I et al. Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification. In Liew AW-C, Lovell B, Fookes C, editors, 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA). Piscataway NJ USA: IEEE, Institute of Electrical and Electronics Engineers. 2016. 7797039 https://doi.org/10.1109/DICTA.2016.7797039