Learning to Generate 3D Stylized Character Expressions from Humans

2018 IEEE Winter Conference on Applications of Computer Vision (WACV)

Deepali Aneja¹, Bindita Chaudhuri¹, Alex Colburn², Gary Faigin³, Linda Shapiro¹ and Barbara Mones¹
¹Paul G. Allen School of Computer Science & Engineering, UW, Seattle, WA, USA
²Zillow Group, Seattle, WA, USA
³Gage Academy of Art, Seattle, WA, USA
{deepalia,bindita,alexco,shapiro,mones}@cs.washington.edu, [email protected]

Abstract

We present ExprGen, a system to automatically generate 3D stylized character expressions from humans in a perceptually valid and geometrically consistent manner. Our multi-stage deep learning system utilizes the latent variables of human and character expression recognition convolutional neural networks to control a 3D animated character rig. This end-to-end system takes images of human faces and generates the character rig parameters that best match the human's facial expression. ExprGen generalizes to multiple characters, and allows expression transfer between characters in a semi-supervised manner. Qualitative and quantitative evaluation of our method based on Mechanical Turk tests show the high perceptual accuracy of our expression transfer results.

Figure 1: Example of inaccurate expression transfer. (a) Expression transfer from human (top right) to a character [17]. (b) Mechanical Turk testers perceive the human expression as sadness, while the character expression is perceived as neutral and a mixture of other expressions. The character expression has neither expression clarity nor geometric consistency.

1. Introduction

Our work is motivated by the goal of enhancing animated storytelling by improving 3D stylized character facial expressions. The importance of believable and accurate animated character facial expressions is readily demonstrated by films and games such as Polar Express [47] and Mass Effect: Andromeda [2]. In these examples, it is difficult for the audience to connect to the characters and broader storyline, because the characters do not exhibit clearly recognizable facial expressions that are consistent with their emotional state in the storyline [39, 34]. Characters must have perceptually valid expressions that are clearly perceived by humans to be in the intended expression class. Fig. 1 shows a concrete example of a perceptually invalid expression, in which the human expression did not transfer correctly to the character when tested on Mechanical Turk (MT) for expression clarity with 30 test subjects.

Animator-created character expressions can be expressive and clear, but they require expertise and hours of work. In order to speed up the animation process, animators often use human actors to control and animate a 3D stylized character through a facial performance capture system. These systems often lack the expressive quality and perceptual validity of animator-created animations, mainly due to their assumption that geometric markers are sufficient for expression transfer. Geometry-based methods and retargeting [26] based on handcrafted descriptors may fail to take into account the perception of the intended expression when it is transferred onto a stylized character. We are unaware of any tools or methods that support animators by validating the perception of character expressions during creation. Despite recent advances in modeling capabilities, motion capture, and control parameterization, current methods do not address the fundamental problem of creating clear expressions that humans recognize as the intended expression.

Our goal is to learn 3D stylized character expressions from humans in a perceptually valid and geometrically consistent manner. To this end, we propose an end-to-end system, ExprGen, that takes a 2D image of a human and predicts the 3D rig parameters of a character. This is a challenging goal because there is no existing dataset mapping 2D images of human expressions to 3D character rig parameters. Further, it is prohibitively expensive to manually create a dataset with explicit human image to 3D rig parameter labeling. To address this challenge, we leverage publicly available human and character expression datasets with 2D images and expression labels [12, 24, 33, 25, 28, 3]. Each 2D character image in the dataset is rendered from its 3D facial rig, which has associated rig control parameters. In our work, the system learns from the six broad categories (anger, disgust, fear, joy, sadness, surprise) [15] and neutral, since there is agreement on their recognition within the facial expression research community, and these seven expressions occur in a wide range of intensities and can blend with each other to create additional expressions.
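To make this data setup concrete, the following minimal Python sketch shows one plausible way to represent samples from the two kinds of datasets described above. The HumanSample and CharacterSample types and their field names are illustrative assumptions, not part of the paper's released code.

```python
from dataclasses import dataclass
from typing import List

# The six broad expression categories plus neutral used throughout the paper.
EXPRESSION_CLASSES = [
    "anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"
]

@dataclass
class HumanSample:
    """A 2D human face image with a perceptual expression label."""
    image_path: str
    expression: str               # one of EXPRESSION_CLASSES

@dataclass
class CharacterSample:
    """A 2D render of a character rig, its expression label, and the rig
    controls that produced it (available because each character image is
    rendered from a 3D facial rig with known control parameters)."""
    image_path: str
    expression: str               # one of EXPRESSION_CLASSES
    rig_parameters: List[float]   # e.g. blendshape / controller values
```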
Our approach is to learn the correspondence between 2D images of human and character expressions and to use this correspondence to map human expression images to 3D character rig parameters. We start by recognizing facial expressions for both humans and characters to induce a perceptual metric and constraints for expression generation and matching. Our system then learns a joint embedding to map human expressions to character expressions. This mechanism also accounts for geometric consistency, so that any mapping is both perceptually valid and geometrically consistent. At this point, we can take any human expression image and find similar or dissimilar images in a database of character images. Using this similarity analysis, we train a regression network, 3D-CNN, from a human expression onto the parameters of a specific or primary character 3D rig. Finally, a lightweight mechanism, the Character Multi-Layer Perceptron (C-MLP), transfers character expressions to other characters. This enables re-use of a primary character rig trained in the previous steps to drive secondary characters.

Fig. 2 depicts an overview of our system at run time. Images of human facial expressions are initially processed to detect the face and 49 geometric landmarks, and then fed into the 3D-CNN to generate expression-specific parameters for the primary character rig. The C-MLP uses the generated expression parameters to produce the expression on other secondary 3D stylized characters.

Figure 2: Overview of our multi-stage expression transfer system ExprGen: (a) 2D images of human facial expressions are pre-processed (b, c, Sec. 3). A CNN (d, Sec. 4.2.1) generates rig parameters corresponding to the human expression for primary characters (e). A separate neural network (f, Sec. 4.2.2) performs primary-character-to-secondary-character expression transfer (g).

The main contributions of our work are: (1) a novel perceptually valid method to map 2D human face images to 3D stylized character rig controls; (2) the ability to utilize this mapping to generate 3D characters with clear, unambiguous facial expressions; and (3) a semi-supervised method to enable expression transfer between multiple characters.

ExprGen uses images of human facial expressions to generate perceptually valid and geometrically consistent 3D stylized character expressions using a deep learning framework. Both qualitative and quantitative results detailed in Sec. 5 illustrate the accurate, plausible, and perceptually valid expression transfer from humans to 3D stylized characters. Our approach is unique in that it uses geometry plus perceptual validity rather than purely geometry-based mathematical operations. Our method also introduces a simple, lightweight expression transfer mechanism between characters, which enables one character rig to drive any other rig and eliminates the need to individually train each rig with the full pipeline. We hope that our method will enable the creation of new perceptually driven tools that help animators create clear and accurate character expressions and ultimately successful animated stories.
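As a rough illustration of the run-time flow in Fig. 2, the PyTorch-style sketch below wires face and landmark pre-processing into a regression network for the primary rig and an MLP for secondary-character transfer. The layer sizes, rig dimensions, and the detect_face_and_landmarks helper are placeholders chosen for illustration; they are not the architectures or tooling actually used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

NUM_LANDMARKS = 49          # geometric landmarks detected per face (Sec. 3)
PRIMARY_RIG_DIM = 50        # assumed number of primary-rig controls (illustrative)
SECONDARY_RIG_DIM = 40      # assumed number of secondary-rig controls (illustrative)

def detect_face_and_landmarks(image: np.ndarray):
    """Placeholder for the pre-processing stage: face detection, cropping,
    and 49-point landmark localization. A real system would call an
    off-the-shelf detector here; dummy tensors stand in for its output."""
    face_crop = torch.rand(1, 1, 96, 96)             # normalized face crop
    landmarks = torch.rand(1, NUM_LANDMARKS * 2)     # (x, y) per landmark
    return face_crop, landmarks

class RigRegressionCNN(nn.Module):
    """Stand-in for the 3D-CNN: regresses primary rig parameters from a face crop
    and its landmarks."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(32 * 16 + NUM_LANDMARKS * 2, 256), nn.ReLU(),
            nn.Linear(256, PRIMARY_RIG_DIM))

    def forward(self, face, landmarks):
        features = torch.cat([self.conv(face), landmarks], dim=1)
        return self.head(features)

class CharacterMLP(nn.Module):
    """Stand-in for the C-MLP: maps primary-rig parameters to a secondary rig."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PRIMARY_RIG_DIM, 128), nn.ReLU(),
            nn.Linear(128, SECONDARY_RIG_DIM))

    def forward(self, primary_params):
        return self.net(primary_params)

# Run-time flow: human image -> primary rig parameters -> secondary rig parameters.
image = np.zeros((480, 640, 3), dtype=np.uint8)      # dummy input frame
face, landmarks = detect_face_and_landmarks(image)
primary_params = RigRegressionCNN()(face, landmarks)
secondary_params = CharacterMLP()(primary_params)
print(primary_params.shape, secondary_params.shape)
```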
2. Related Work

Facial Expression Recognition (FER). With recent advances in deep learning research, convolutional neural networks have shown great improvement on FER tasks [30, 6, 23, 21, 13, 22], and a number of fusion algorithms have been proposed to boost recognition performance [42, 46, 38, 44]. Many systems are tuned on a single facial expression database, which makes them sensitive to the lighting and particular poses present in that database. These methods focus on engineered features, which lack the generalizability to perform "in the wild" expression recognition. To overcome this limitation, we combine human databases from different sources, including a dataset collected in the wild, in our training step in order to improve the robustness of our trained model. In our work, we use CNNs to learn perceptual features pertaining to facial expressions and combine them with geometric features, since this approach has been shown to perform better in expression recognition and transfer to stylized characters than geometric features alone [3]. Note that Aneja et al. [3] proposed a retrieval method to identify the 2D character expression image in an existing database that is closest to a given human image, whereas we propose a method that generates a 3D stylized character expression for a given human image.
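The combination of learned perceptual features with geometric features mentioned above can be pictured as a two-branch network whose feature vectors are concatenated before classification. The sketch below is one schematic interpretation under assumed layer sizes, not the architecture used in the paper or in [3].

```python
import torch
import torch.nn as nn

NUM_CLASSES = 7          # six broad expressions plus neutral
NUM_LANDMARKS = 49

class HybridExpressionNet(nn.Module):
    """Two-branch expression classifier: a CNN branch learns perceptual features
    from the face image, an MLP branch encodes geometric landmark features, and
    the concatenated vector is classified into one of the seven expressions."""
    def __init__(self):
        super().__init__()
        self.perceptual = nn.Sequential(              # image branch
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())    # -> 64-dim perceptual feature
        self.geometric = nn.Sequential(               # landmark branch
            nn.Linear(NUM_LANDMARKS * 2, 64), nn.ReLU())
        self.classifier = nn.Linear(64 + 64, NUM_CLASSES)

    def forward(self, face, landmarks):
        fused = torch.cat([self.perceptual(face), self.geometric(landmarks)], dim=1)
        return self.classifier(fused)

# Example forward pass on dummy data.
logits = HybridExpressionNet()(torch.rand(1, 1, 96, 96), torch.rand(1, NUM_LANDMARKS * 2))
print(logits.shape)  # torch.Size([1, 7])
```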

3. Data Acquisition and Processing

Our framework uses two databases: (1) Human expression database (HED) and (2) Character
