3D hand pose estimation using synthetic data and weakly labeled RGB images

Yujun Cai, Liuhao Ge, Jianfei Cai, Nadia Magnenat Thalmann, Junsong Yuan

Research output: Contribution to journal › Article › Research › peer-review

28 Citations (Scopus)

Abstract

Compared with depth-based 3D hand pose estimation, it is more challenging to infer 3D hand pose from monocular RGB images, due to the substantial depth ambiguity and the difficulty of obtaining fully-annotated training data. Different from existing learning-based monocular RGB-input approaches that require accurate 3D annotations for training, we propose to leverage depth images that can be easily obtained from commodity RGB-D cameras during training, while during testing we take only RGB inputs for 3D joint prediction. In this way, we alleviate the burden of costly 3D annotations in real-world datasets. In particular, we propose a weakly-supervised method that adapts from a fully-annotated synthetic dataset to a weakly-labeled real-world RGB dataset with the aid of a depth regularizer, which serves as weak supervision for 3D pose prediction. To further exploit the physical structure of 3D hand pose, we present a novel CVAE-based statistical framework to embed the pose-specific subspace from RGB images, which can then be used to infer the 3D hand joint locations. Extensive experiments on benchmark datasets validate that our proposed approach outperforms baselines and state-of-the-art methods, demonstrating the effectiveness of the proposed depth regularizer and the CVAE-based framework.
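The core idea of the depth regularizer is that a depth map derived from the predicted 3D joints can be compared against the depth image captured by an RGB-D camera, providing a training signal without explicit 3D joint labels. The following is a minimal, hypothetical NumPy sketch of that idea; the paper's actual regularizer is a learned rendering module inside a neural network, whereas here joints are simply splatted onto a coarse grid and compared with an L1 loss. All function names and the grid resolution are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def render_sparse_depth(joints_3d, image_size=32):
    """Splat 3D joints (x, y normalized to [0, 1], z = depth) onto a
    coarse depth grid. Crude stand-in for a learned depth renderer."""
    depth = np.zeros((image_size, image_size))
    for x, y, z in joints_3d:
        u = min(int(x * image_size), image_size - 1)
        v = min(int(y * image_size), image_size - 1)
        # keep the nearest (smallest) depth when joints project to one pixel
        depth[v, u] = z if depth[v, u] == 0 else min(depth[v, u], z)
    return depth

def depth_regularizer_loss(pred_joints, ref_depth, image_size=32):
    """L1 discrepancy between depth rendered from predicted joints and a
    reference depth map, evaluated only where the sparse render has support."""
    pred_depth = render_sparse_depth(pred_joints, image_size)
    mask = pred_depth > 0
    if not mask.any():
        return 0.0
    return float(np.abs(pred_depth - ref_depth)[mask].mean())
```

A perfect prediction yields zero loss, while a uniform depth offset of the predicted joints raises the loss by that offset, which is the kind of gradient signal the weak supervision provides.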

Original language: English
Pages (from-to): 3739-3753
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 43
Issue number: 11
DOIs
Publication status: Published - 1 Nov 2021

Keywords

  • 3D hand pose estimation
  • depth regularizer
  • pose-specific subspace
  • weakly-supervised methods
