TR2005-048

Face Transfer with Multilinear Models


    •  Vlasic, D., Brand, M., Pfister, H., Popovic, J., "Face Transfer with Multilinear Models", ACM Transactions on Graphics (TOG), Vol. 24, No. 3, pp. 426-433, July 2005.
      @article{Vlasic2005jul,
        author = {Vlasic, D. and Brand, M. and Pfister, H. and Popovic, J.},
        title = {Face Transfer with Multilinear Models},
        journal = {ACM Transactions on Graphics (TOG)},
        year = 2005,
        volume = 24,
        number = 3,
        pages = {426--433},
        month = jul,
        issn = {0730-0301},
        url = {https://www.merl.com/publications/TR2005-048}
      }
Research Area: Computer Vision

Abstract:

Face Transfer is a method for mapping video-recorded performances of one individual to facial animations of another. It extracts visemes (speech-related mouth articulations), expressions, and three-dimensional (3D) pose from monocular video or film footage. These parameters are then used to generate and drive a detailed 3D textured face mesh for a target identity, which can be seamlessly rendered back into target footage. The underlying face model automatically adjusts for how the target performs facial expressions and visemes. The performance data can be easily edited to change the visemes, expressions, pose, or even the identity of the target; the attributes are separably controllable. This supports a wide variety of video rewrite and puppetry applications.
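As a rough illustration of the generation step, the sketch below (a minimal illustration, not the authors' implementation; all tensor shapes, names, and weights are assumptions) contracts a multilinear core tensor with one weight vector per attribute to produce 3D vertex positions:

    # Minimal sketch: synthesize a face mesh from a multilinear model by
    # contracting a hypothetical core tensor with per-attribute weights.
    import numpy as np

    # Assumed model dimensions: (3*vertices) x identity x expression x viseme.
    n_coords, n_id, n_expr, n_vis = 3 * 1000, 16, 5, 5
    core = np.random.randn(n_coords, n_id, n_expr, n_vis)  # stand-in model

    def synthesize(core, w_id, w_expr, w_vis):
        """Collapse each attribute mode of the core tensor with its weight
        vector, yielding an array of 3D vertex positions."""
        mesh = np.tensordot(core, w_vis, axes=([3], [0]))   # viseme mode
        mesh = np.tensordot(mesh, w_expr, axes=([2], [0]))  # expression mode
        mesh = np.tensordot(mesh, w_id, axes=([1], [0]))    # identity mode
        return mesh.reshape(-1, 3)

    verts = synthesize(core, np.random.randn(n_id),
                       np.random.randn(n_expr), np.random.randn(n_vis))
    print(verts.shape)  # (1000, 3)

Because each attribute enters through its own weight vector, editing one set of weights (say, the visemes) leaves the others untouched, which is what makes the attributes separably controllable.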

Face Transfer is based on a multilinear model of 3D face meshes that separably parameterizes the space of geometric variations due to different attributes (e.g., identity, expression, and viseme). Separability means that each of these attributes can be varied independently. A multilinear model can be estimated from a Cartesian product of examples (identities × expressions × visemes) with techniques from statistical analysis, but only after careful preprocessing of the geometric data set to secure one-to-one correspondence, to minimize cross-coupling artifacts, and to fill in any missing examples. Face Transfer offers new solutions to these problems and links the estimated model with a face-tracking algorithm to extract pose, expression, and viseme parameters.
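One standard way to estimate such a model from a Cartesian product of examples is a truncated N-mode SVD (higher-order SVD / Tucker decomposition). The sketch below assumes this technique with made-up tensor sizes and ranks, and leaves the vertex-coordinate mode unreduced, as is common for face models:

    # Illustrative N-mode SVD (HOSVD) sketch; data and ranks are assumptions.
    import numpy as np

    def mode_unfold(T, mode):
        """Flatten tensor T into a matrix whose rows index the given mode."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def mode_product(T, M, mode):
        """Mode-n product: multiply matrix M along the given mode of T."""
        return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0),
                                        axes=([1], [0])), 0, mode)

    def hosvd(data, ranks):
        """Truncated Tucker decomposition over the listed attribute modes;
        mode 0 (vertex coordinates) is left intact."""
        factors = {m: np.linalg.svd(mode_unfold(data, m),
                                    full_matrices=False)[0][:, :r]
                   for m, r in ranks.items()}
        core = data
        for m, U in factors.items():
            core = mode_product(core, U.T, m)  # project mode m onto its basis
        return core, factors

    # Stand-in data tensor: (3*vertices) x identities x expressions x visemes.
    data = np.random.randn(300, 10, 5, 5)
    core, factors = hosvd(data, ranks={1: 8, 2: 4, 3: 4})
    print(core.shape)  # (300, 8, 4, 4)

In such a decomposition, the rows of each factor matrix give per-example attribute weights; the abstract's tracking step then amounts to fitting pose plus expression and viseme weights of this model to new footage.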
