Example-Based Facial Animation of Virtual Reality Avatars Using Auto-Regressive Neural Networks

IEEE Comput Graph Appl. 2021 Jul-Aug;41(4):52-63. doi: 10.1109/MCG.2021.3068035. Epub 2021 Jul 15.

Abstract

This article presents a hybrid animation approach that combines example-based and neural animation methods to create a simple yet powerful animation regime for human faces. Example-based methods usually employ a database of prerecorded sequences that are concatenated or looped to synthesize novel animations. In contrast to this traditional example-based approach, we introduce a lightweight auto-regressive network that transforms our animation database into a parametric model. During training, the network learns the dynamics of facial expressions, which enables both the replay of annotated sequences from our animation database and their seamless concatenation in a new order. This representation is especially useful for the synthesis of visual speech, where coarticulation creates interdependencies between adjacent visemes that affect their appearance. Instead of creating an exhaustive database containing every viseme variant, we use our animation network to predict the correct appearance. This allows the realistic synthesis, in an example-based manner, not only of novel visual-speech sequences but also of general facial expressions.
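The auto-regressive rollout the abstract describes can be sketched as follows. This is a minimal illustrative assumption, not the authors' architecture: a tiny randomly initialized network predicts the next frame of facial-expression parameters (e.g., blendshape weights) from a short history of previous frames plus an annotation label (e.g., the current viseme), and feeding predictions back in synthesizes a sequence. All names, dimensions, and the network itself are hypothetical.

```python
import numpy as np

# Assumed dimensions, purely for illustration.
N_PARAMS = 8   # facial-expression parameters per frame
HISTORY = 3    # frames of context fed to the network
N_LABELS = 4   # number of annotation labels, e.g., visemes
HIDDEN = 16

# Stand-in for a trained lightweight network: one hidden layer,
# random weights (a real system would learn these from the database).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((HISTORY * N_PARAMS + N_LABELS, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, N_PARAMS)) * 0.1
b2 = np.zeros(N_PARAMS)

def predict_next(history, label_id):
    """One auto-regressive step: (HISTORY, N_PARAMS) history + label -> next frame."""
    onehot = np.eye(N_LABELS)[label_id]
    x = np.concatenate([history.ravel(), onehot])
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def rollout(seed_frames, labels):
    """Feed each prediction back in as input, so annotated segments
    can be replayed or concatenated in a new order."""
    frames = list(seed_frames)
    for lab in labels:
        hist = np.stack(frames[-HISTORY:])
        frames.append(predict_next(hist, lab))
    return np.stack(frames)

seed = np.zeros((HISTORY, N_PARAMS))         # neutral start pose
seq = rollout(seed, labels=[0, 1, 1, 2, 3])  # 3 seed + 5 predicted frames
print(seq.shape)  # (8, 8)
```

Because each predicted frame depends on its predecessors, transitions between concatenated segments stay smooth, which is how such a model can account for coarticulation effects between adjacent visemes.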

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Facial Expression
  • Humans
  • Neural Networks, Computer
  • Speech
  • User-Computer Interface*
  • Virtual Reality*