Content and context features for scene image representation

Chiranjibi Sitaula, Sunil Aryal, Yong Xiang, Anish Basnet, Xuequan Lu

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


Existing work on scene image classification has focused on content features (e.g., visual information), context features (e.g., annotations and other semantic information), or both. However, such approaches suffer from issues such as large feature sizes and low classification performance. In this paper, we propose a new feature extraction approach for scene image representation using two kinds of rich information: content features and context features. Specifically, the new content features are generated from multi-scale foreground and background information, while the new context features are generated by a novel compact supervised codebook. Our compact supervised codebook minimizes irrelevant and redundant information, which in turn yields a smaller contextual feature vector. Finally, we combine the content and context features to represent the scene image. Experiments on three widely used benchmark scene datasets with a Support Vector Machine (SVM) classifier show that the proposed context and content features outperform existing context and content features, respectively, and that the fusion of the two types of features significantly outperforms numerous state-of-the-art features.
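The abstract describes fusing content and context feature vectors and classifying with an SVM. A minimal sketch of that fusion-and-classification step, using scikit-learn, is shown below; the random arrays and all dimensions are placeholders standing in for the paper's actual feature extractors, which are not detailed here.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical sizes; the paper's real feature dimensions are not given in the abstract.
rng = np.random.default_rng(0)
n_images, d_content, d_context, n_classes = 60, 128, 32, 3

# Stand-ins for the extracted representations:
content = rng.normal(size=(n_images, d_content))  # multi-scale content features
context = rng.normal(size=(n_images, d_context))  # compact-codebook context features
labels = rng.integers(0, n_classes, size=n_images)

# Fuse the two representations by concatenating them per image.
fused = np.hstack([content, context])

# Train a linear SVM on the fused features and predict labels.
clf = SVC(kernel="linear").fit(fused, labels)
preds = clf.predict(fused)
```

The concatenation simply appends the context vector to the content vector, so the fused representation has dimensionality `d_content + d_context`; a smaller codebook directly shrinks the fused vector.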

Original language: English
Article number: 107470
Number of pages: 12
Journal: Knowledge-Based Systems
Publication status: Published - 28 Nov 2021
Externally published: Yes


Keywords

  • Content features
  • Context features
  • Feature extraction
  • Image classification
  • Image processing
  • Machine learning
