Abstract
Transferring facial performance from one person's face to another's has long been of interest to the movie industry and the computer graphics community. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches to facial performance transfer have gained tremendous interest in the computer vision and graphics communities. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach that learns the mapping between the parameters of two completely independent AAMs and uses it to transfer facial performance more realistically than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a meaningful transfer of both the nonrigid shape and texture across faces, irrespective of the speakers' gender, the shape and size of their faces, and the illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that sparse linear regression performs best. Moreover, we demonstrate the utility of the proposed framework for cross-language facial performance transfer, an application of interest to the movie dubbing industry.
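To make the parametric-correspondence idea concrete, the sketch below models the mapping from source to target AAM parameters with an off-the-shelf sparse (L1-regularized) linear regression. This is an illustration, not the authors' implementation: the paired training data, the parameter dimensionalities, and the regularization weight `alpha` are all assumptions, and random arrays stand in for real tracked AAM parameter sequences.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical paired training data: row i holds the AAM (shape + texture)
# parameter vector for the same frame, tracked independently with the
# source actor's AAM and the target actor's AAM.
rng = np.random.default_rng(0)
P_src = rng.normal(size=(500, 30))  # source AAM parameters (frames x dims)
P_tgt = rng.normal(size=(500, 30))  # target AAM parameters (frames x dims)

# Sparse linear regression from source to target parameter space.
# alpha controls sparsity of the learned mapping and is an assumed value.
model = Lasso(alpha=0.01)
model.fit(P_src, P_tgt)

# Transfer: project a new source performance frame into the target AAM's
# parameter space; the target AAM would then render the transferred face.
p_new = rng.normal(size=(1, 30))
p_transferred = model.predict(p_new)
print(p_transferred.shape)  # (1, 30)
```

The appeal of a sparse linear map in this setting is that each target parameter depends on only a few source parameters, which keeps the per-frame mapping cheap enough for real-time transfer.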
| Original language | English |
| --- | --- |
| Article number | 6025350 |
| Pages (from-to) | 1511-1519 |
| Number of pages | 9 |
| Journal | IEEE Transactions on Visualization and Computer Graphics |
| Volume | 18 |
| Issue number | 9 |
| DOIs | |
| Publication status | Published - 4 Jun 2012 |
| Externally published | Yes |
Keywords
- Active appearance models
- face modeling and animation
- facial performance transfer