Depth recovery with face priors

Chongyu Chen, Hai Xuan Pham, Vladimir Pavlovic, Jianfei Cai, Guangming Shi

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

3 Citations (Scopus)


Existing depth recovery methods for commodity RGB-D sensors primarily rely on low-level information to repair the measured depth estimates. However, as the distance of the scene from the camera increases, the recovered depth estimates become increasingly unreliable. In applications such as video conferencing, the human face is often a primary subject in the captured RGB-D data. In this paper we propose to incorporate face priors extracted from a general sparse 3D face model into the depth recovery process. In particular, we propose a joint optimization framework that consists of two main steps: deforming the face model for better alignment and applying face priors for improved depth recovery. The two steps are performed alternately and iteratively so that each benefits the other. Evaluations on benchmark datasets demonstrate that the proposed method with face priors significantly outperforms the baseline method that does not use face priors, with up to 15.1% improvement in depth recovery quality and up to 22.3% improvement in registration accuracy.
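The alternating scheme described in the abstract can be sketched as follows. This is a toy illustration only, not the paper's method: the sparse 3D face model is stood in for by a least-squares plane fit over the face region, and the prior-guided recovery step by a pointwise weighted fusion; all function names, the mask, and the weight `lam` are assumptions for illustration.

```python
import numpy as np

def fit_prior(depth, mask):
    # Stand-in for "deforming the face model for better alignment":
    # fit a plane z = a*x + b*y + c to the current depth over the face mask.
    ys, xs = np.nonzero(mask)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeff, *_ = np.linalg.lstsq(A, depth[mask], rcond=None)
    h, w = depth.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    return coeff[0] * gx + coeff[1] * gy + coeff[2]

def recover(depth_raw, mask, lam=0.5, iters=5):
    # Alternate the two steps: (1) re-align the prior to the current depth,
    # (2) re-solve depth with the aligned prior; each step helps the other.
    depth = depth_raw.copy()
    for _ in range(iters):
        prior = fit_prior(depth, mask)
        # Closed-form minimizer of (d - depth_raw)^2 + lam * (d - prior)^2,
        # applied inside the face region only.
        depth = np.where(mask, (depth_raw + lam * prior) / (1 + lam), depth_raw)
    return depth
```

On synthetic data (a planar ground-truth depth plus Gaussian noise) this toy loop reduces the mean absolute depth error relative to the raw measurement, mirroring the role the face prior plays in the paper's framework.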

Original language: English
Title of host publication: Computer Vision – ACCV 2014
Subtitle of host publication: 12th Asian Conference on Computer Vision, Singapore, Singapore, November 1–5, 2014, Revised Selected Papers, Part IV
Editors: Daniel Cremers, Ian Reid, Hideo Saito, Ming-Hsuan Yang
Place of Publication: Cham, Switzerland
Number of pages: 16
ISBN (Electronic): 9783319168173
ISBN (Print): 9783319168166
Publication status: Published - 2015
Externally published: Yes
Event: Asian Conference on Computer Vision 2014 - Singapore, Singapore
Duration: 1 Nov 2014 – 5 Nov 2014
Conference number: 12th

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: Asian Conference on Computer Vision 2014
Abbreviated title: ACCV 2014
