Compact representation for image classification: to choose or to compress?

Yu Zhang, Jianxin Wu, Jianfei Cai

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-reviewed

41 Citations (Scopus)


In large-scale image classification, features such as the Fisher vector or VLAD have achieved state-of-the-art results. However, the combination of a large number of examples and high-dimensional vectors necessitates dimensionality reduction in order to keep storage and CPU costs within a reasonable range. In spite of the popularity of various feature compression methods, this paper argues that feature selection is a better choice than feature compression. We show that strong multicollinearity may not exist among feature dimensions, which undermines feature compression's effectiveness and makes feature selection a natural choice. We also show that many dimensions are noise, and discarding them helps classification. We propose a supervised mutual information (MI) based importance sorting algorithm to select features. Combined with 1-bit quantization, MI-based feature selection achieves both higher accuracy and lower computational cost than feature compression methods such as product quantization and BPBC.
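The pipeline the abstract describes (score each feature dimension by its mutual information with the class labels, keep the top-scoring dimensions, then binarize) can be sketched as below. This is a minimal illustration, not the paper's implementation: the discretization scheme, MI estimator, and the sign-threshold used for 1-bit quantization are assumptions for the sketch.

```python
import numpy as np

def mi_feature_scores(X, y, n_bins=16):
    """Score each feature dimension by its empirical mutual information
    with the labels, after discretizing the dimension into n_bins
    equal-width bins. (Sketch; the paper's exact estimator may differ.)

    X: (n_samples, n_dims) feature matrix; y: (n_samples,) integer labels.
    """
    n, d = X.shape
    classes = np.unique(y)
    scores = np.empty(d)
    for j in range(d):
        col = X[:, j]
        # Equal-width binning of this dimension.
        edges = np.linspace(col.min(), col.max() + 1e-12, n_bins + 1)
        b = np.clip(np.digitize(col, edges) - 1, 0, n_bins - 1)
        # Joint histogram over (bin, class), normalized to probabilities.
        joint = np.zeros((n_bins, len(classes)))
        for ci, c in enumerate(classes):
            joint[:, ci] = np.bincount(b[y == c], minlength=n_bins)
        joint /= n
        px = joint.sum(axis=1, keepdims=True)   # marginal over bins
        py = joint.sum(axis=0, keepdims=True)   # marginal over classes
        nz = joint > 0
        scores[j] = (joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum()
    return scores

def select_and_binarize(X, y, k, n_bins=16):
    """Keep the k highest-MI dimensions, then 1-bit quantize by sign."""
    keep = np.argsort(mi_feature_scores(X, y, n_bins))[::-1][:k]
    return (X[:, keep] > 0).astype(np.uint8), keep
```

After selection, each example needs only k bits of storage, which is where the memory and CPU savings over dense compressed codes come from.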

Original language: English
Title of host publication: Proceedings - 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014
Editors: Ronen Basri, Cornelia Fermuller, Aleix Martinez, René Vidal
Place of publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 8
ISBN (Electronic): 9781479951178
Publication status: Published - 2014
Externally published: Yes
Event: IEEE Conference on Computer Vision and Pattern Recognition 2014 - Columbus, United States of America
Duration: 23 Jun 2014 - 28 Jun 2014 (IEEE Conference Proceedings)


Conference: IEEE Conference on Computer Vision and Pattern Recognition 2014
Abbreviated title: CVPR 2014
Country/Territory: United States of America
