A generative model for depth-based robust 3D facial pose tracking

Lu Sheng, Jianfei Cai, Tat-Jen Cham, Vladimir Pavlovic, King Ngi Ngan

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

12 Citations (Scopus)


We consider the problem of depth-based robust 3D facial pose tracking in unconstrained scenarios with heavy occlusions and arbitrary facial expression variations. Unlike previous depth-based discriminative or data-driven methods that require sophisticated training or manual intervention, we propose a generative framework that unifies pose tracking and face model adaptation on the fly. In particular, we propose a statistical 3D face model with the flexibility to generate and predict the distribution and uncertainty underlying the face model. Moreover, unlike prior art that employs ICP-based facial pose estimation, we propose a ray visibility constraint that regularizes the pose based on the face model's visibility against the input point cloud, which improves robustness to occlusions. Experimental results on the Biwi and ICT-3DHP datasets show that the proposed framework is effective and outperforms state-of-the-art depth-based methods.
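
The ray visibility idea mentioned in the abstract can be illustrated, at a very high level, by a visibility-aware rigid alignment step. The sketch below is not the authors' formulation; it only shows the general notion of discarding face-model vertices that the observed depth map indicates are occluded before estimating the head pose. All names here (visible_mask, rigid_fit, depth_map, fx, fy, cx, cy, tau) are hypothetical placeholders introduced for this example.

```python
# Illustrative sketch only, NOT the method proposed in the paper.
import numpy as np

def visible_mask(model_pts, depth_map, fx, fy, cx, cy, tau=0.02):
    """Flag model vertices whose projected depth agrees with the observed
    depth map, i.e. vertices not hidden behind the observed surface."""
    h, w = depth_map.shape
    vis = np.zeros(len(model_pts), dtype=bool)
    X, Y, Z = model_pts.T
    front = Z > 0                                   # vertices in front of the camera
    u = np.round(fx * X[front] / Z[front] + cx).astype(int)
    v = np.round(fy * Y[front] / Z[front] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    obs = depth_map[v[inside], u[inside]]           # observed depth along each ray
    idx = np.flatnonzero(front)[inside]
    # A vertex counts as visible if it lies at or in front of the observed
    # surface (within tolerance tau); deeper vertices are treated as occluded.
    vis[idx] = (obs > 0) & (Z[idx] <= obs + tau)
    return vis

def rigid_fit(src, dst):
    """Closed-form (Kabsch) rigid transform R, t minimizing ||R @ src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In an ICP-style loop, only the vertices flagged visible would be matched to nearest points in the input cloud and passed to rigid_fit, so occluded regions (e.g. a hand in front of the face) do not drag the pose estimate.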

Original language: English
Title of host publication: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Editors: Yanxi Liu, James M. Rehg, Camillo J. Taylor, Ying Wu
Place of Publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 10
ISBN (Electronic): 9781538604571
ISBN (Print): 9781538604588
Publication status: Published - 2017
Externally published: Yes
Event: IEEE Conference on Computer Vision and Pattern Recognition 2017 - Honolulu, United States of America
Duration: 21 Jul 2017 – 26 Jul 2017
https://ieeexplore.ieee.org/xpl/conhome/8097368/proceeding (Proceedings)


Conference: IEEE Conference on Computer Vision and Pattern Recognition 2017
Abbreviated title: CVPR 2017
Country/Territory: United States of America
