Multi-modal joint clustering with application for unsupervised attribute discovery

Liangchen Liu, Feiping Nie, Arnold Wiliem, Zhihui Li, Teng Zhang, Brian C. Lovell

Research output: Contribution to journal › Article › Research › peer-review

36 Citations (Scopus)


Utilizing multiple descriptions/views of an object is often useful in image clustering tasks. Although many methods have been proposed to cluster multi-view data effectively, unaddressed problems remain, such as the errors introduced by traditional spectral-based clustering methods due to their two disjoint stages: 1) eigendecomposition and 2) discretization of the new representations. In this paper, we propose a unified clustering framework that jointly learns the two stages while utilizing multiple descriptions of the data. More specifically, two learning methods are derived from this framework: 1) constructing a graph from the different views and 2) combining multiple graphs. Furthermore, benefiting from the separability and local-graph-preserving properties of the proposed methods, we derive a novel unsupervised automatic attribute discovery method. We validate the efficacy of our methods on five data sets, showing that the proposed joint learning clustering methods outperform recent state-of-the-art methods. We also show that it is possible to derive a novel method to address unsupervised automatic attribute discovery tasks.
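The two disjoint stages the abstract criticizes can be illustrated with a minimal sketch of conventional spectral clustering. Everything below (the toy two-blob data, the Gaussian affinity, the simple k-means discretizer) is an illustrative assumption and not the paper's proposed joint method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated 2-D blobs of 10 points each.
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(3.0, 0.1, (10, 2))])

# Stage 1: eigendecomposition of a graph Laplacian.
dists = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
W = np.exp(-dists ** 2)                      # Gaussian affinity graph
L = np.eye(len(X)) - np.diag(1.0 / W.sum(axis=1)) @ W  # random-walk Laplacian
eigvals, eigvecs = np.linalg.eig(L)
order = np.argsort(eigvals.real)
F = eigvecs[:, order[:2]].real               # continuous (relaxed) embedding

# Stage 2: discretization of the relaxed solution, here via a tiny k-means
# (initialized with one point from each blob, purely for the demo).
def kmeans(F, k=2, iters=20):
    centers = F[[0, -1]].copy()
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(F[:, None] - centers[None], axis=-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = F[labels == j].mean(axis=0)
    return labels

labels = kmeans(F)
```

Because the discretization step is solved separately from the eigendecomposition, errors from the continuous relaxation propagate into the final labels; the paper's framework instead learns both stages jointly.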

Original language: English
Pages (from-to): 4345-4356
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Issue number: 9
Publication status: Published - Sept 2018
Externally published: Yes


Keywords

  • attribute
  • image clustering
  • unsupervised
  • unsupervised automatic attribute discovery
