Dimensionality reduction on SPD manifolds: the emergence of geometry-aware methods

Mehrtash Harandi, Mathieu Salzmann, Richard Hartley

Research output: Contribution to journal › Article › Research › peer-review

63 Citations (Scopus)

Abstract

Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices, especially of high-dimensional ones, comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks evidences that our approach leads to a significant accuracy gain over state-of-the-art methods.
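To make the mapping in the abstract concrete, the following is a minimal NumPy sketch of projecting a high-dimensional SPD matrix to a lower-dimensional SPD matrix through a matrix W with orthonormal columns, i.e. the congruence X ↦ WᵀXW. The matrix sizes, function names, and the randomly drawn W are illustrative assumptions; the paper's actual contribution, learning W by optimizing a discriminative or variance criterion on a Grassmann manifold, is not reproduced here.

```python
import numpy as np

def random_spd(n, rng):
    """Random SPD matrix: A A^T plus a small ridge for strict positive definiteness."""
    A = rng.standard_normal((n, n))
    return A @ A.T + 1e-3 * np.eye(n)

def random_orthonormal(n, m, rng):
    """n x m matrix with orthonormal columns (W^T W = I_m), via QR of a Gaussian matrix."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
    return Q

def project_spd(X, W):
    """Map an n x n SPD matrix to an m x m matrix via the congruence W^T X W."""
    return W.T @ X @ W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 100, 10                 # high and low dimensions (hypothetical sizes)
    X = random_spd(n, rng)
    W = random_orthonormal(n, m, rng)   # illustrative stand-in for a learned projection
    Y = project_spd(X, W)

    # A congruence with a full-column-rank W preserves positive definiteness,
    # so Y lies on the lower-dimensional SPD manifold.
    print(Y.shape)                           # (10, 10)
    print(np.allclose(Y, Y.T))               # True: symmetric
    print(np.all(np.linalg.eigvalsh(Y) > 0)) # True: positive definite
```

In the paper, W itself is the learned quantity: because the objectives depend on W only through the subspace it spans, the search space is the Grassmann manifold rather than the set of all n × m matrices.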

Original language: English
Pages (from-to): 48-62
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 40
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2018
Externally published: Yes
