Animating Lip-Sync Characters

Yu-Mei Chen∗  Fu-Chung Huang†  Shuen-Huei Guan∗‡  Bing-Yu Chen§  Shu-Yang Lin∗  Yu-Hsin Lin∗  Tse-Hsien Wang∗
∗§National Taiwan University  †University of California at Berkeley  ‡Digimax
∗{yumeiohya,drake,young,b95705040,starshine}@cmlab.csie.ntu.edu.tw  †[email protected]  §[email protected]

ABSTRACT

Speech animation is traditionally considered important but tedious work for most applications, especially when taking lip synchronization (lip-sync) into consideration, because the muscles on the face are complex and interact dynamically. Although several methods have been proposed to ease the burden on artists creating facial and speech animation, almost none are both fast and efficient. In this paper, we introduce a framework for synthesizing lip-sync character speech animation from a given speech sequence and its corresponding text. We first train the dominated animeme models for each kind of phoneme by learning the animation control signals of the character through an EM-style optimization approach, and further decompose the dominated animeme models into polynomial-fitted animeme models and corresponding dominance functions that take coarticulation into account. Then, given a novel speech sequence and its corresponding text, a lip-sync character speech animation can be synthesized in a very short time with the dominated animeme models. The synthesized lip-sync animation can even preserve exaggerated characteristics of the character's facial geometry. Moreover, since our method can synthesize an acceptable and robust lip-sync animation in almost real time, it can be used for many applications, such as lip-sync animation prototyping, multilingual animation reproduction, avatar speech, mass animation production, etc.

1. INTRODUCTION

With the popularity of 3D animation and video games, facial and speech animation is becoming more important than ever. Although many technologies allow artists to create high-quality character animation, facial and speech animation is still difficult to sculpt, because the correlation and interaction of the muscles on the face are very complicated. Some physically based simulation methods have been proposed to approximate the muscles on the face, but their computational cost is very high. A less flexible but affordable alternative is a performance-driven approach [40, 20, 26], where the motion of an actor is cross-mapped and transferred to a virtual character (see [32] for further discussion). This approach has gained much success, but the captured performance is difficult to re-use, and a new performance is required each time a new animation or speech sequence is created. Manual adjustment is still a popular approach besides the above two, where artists are asked to adjust the face model controls frame by frame and compare the results back and forth.

When creating facial animation, lip synchronization (lip-sync) speech animation for a character model is more challenging, since it requires much more labor and accuracy in timing for millisecond-precise key-framing. Given a spoken script, the artist first has to match the position of the lips at their supposed position. The transitions from word to word or phoneme to phoneme are even more important and need to be adjusted carefully. As opposed to simple articulated animation, which can be key-framed with linear techniques, the transitions between lip shapes are non-linear and difficult to model.

The transitions from phoneme to phoneme, or coarticulation, play a major role in facial and speech animation [30, 15]. Coarticulation is the phenomenon whereby a phoneme can influence the mouth shape of the previous and next phonemes. In other words, the mouth shape depends not only on the current phoneme itself but also on its context, including at least the previous and next phonemes. Frequently this happens when a vowel influences a preceding or succeeding consonant. Some previous methods have tried to model coarticulation using a strong mathematical framework, or to reduce it to a simpler version, but they are either too complicated or insufficient to produce a faithful model.
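As a rough illustration of the dominance-function idea referred to here (and discussed further in Section 2.2, following Cohen and Massaro [11]), each phoneme p contributes a target value T_p for a given control channel, weighted over time by a dominance function D_p(t) that peaks near the phoneme's center; the blended control value is their normalized sum. The notation and the exponential form below are our own sketch of the general formulation, not equations taken from this paper:

    c(t) = \frac{\sum_{p} D_p(t)\, T_p}{\sum_{p} D_p(t)},
    \qquad
    D_p(t) = \alpha_p \exp\!\bigl(-\theta_p\, |t - t_p|^{\,c_p}\bigr),

where t_p is the phoneme's center time and \alpha_p, \theta_p, c_p shape the magnitude and falloff of its influence. Overlapping dominance functions are what allow neighboring phonemes to pull the mouth shape toward their own targets.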
In this paper, a framework is proposed to synthesize a lip-sync character speech animation from a given novel speech sequence and its corresponding text by generating animation control signals from pre-trained dominated animeme models. These models are obtained by learning the speech-to-animation control signals (e.g., the character controls used in Maya or similar modeling tools) with sub-phoneme accuracy to capture coarticulation faithfully, and are further decomposed into polynomial-fitted animeme models and corresponding dominance functions according to the phonemes through an EM-style optimization approach. Rather than using absolute lip shapes for training as in some previous work, the speech-to-animation control signals are used for better training/synthesis results and animation pipeline integration. Moreover, when there is no well-adjusted speech-to-animation control signal, we also provide a method to cross-map captured lip motion to the character, which can be the lip-tracking result from a speech video or a 3D lip motion captured by a motion capture device.

In the synthesis phase, given a novel speech sequence and its corresponding text, the dominated animeme models are composed to generate the speech-to-animation control signals automatically and synthesize a lip-sync character speech animation. This process takes only a very short time and can preserve the character's exaggerated characteristics. Since the synthesized speech-to-animation control signals can be used in Maya or similar modeling tools directly, our framework can be integrated into existing animation production pipelines easily.
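The excerpt above describes the synthesis step only in prose, so the following minimal Python sketch is our own illustration of that kind of composition under stated assumptions: each phoneme's animeme is a low-order polynomial in normalized time within the phoneme, its dominance function is a bell-shaped weight centered on the phoneme, and one control channel is their normalized blend. All function names, parameter shapes, and numbers here are hypothetical, not from the paper.

import numpy as np

def dominance(t, center, width, magnitude=1.0):
    # Bell-shaped dominance weight centered on the phoneme (assumed form).
    return magnitude * np.exp(-((t - center) / width) ** 2)

def animeme_value(t, start, end, coeffs):
    # Polynomial-fitted animeme evaluated in normalized local time [0, 1].
    s = np.clip((t - start) / max(end - start, 1e-6), 0.0, 1.0)
    return np.polyval(coeffs, s)

def synthesize_control_signal(phonemes, times):
    """Blend per-phoneme animemes with their dominance functions.

    phonemes: list of dicts with 'start', 'end', 'coeffs' (poly, high -> low),
              and 'width'/'magnitude' for the dominance function.
    times:    1D array of frame times in seconds.
    Returns one speech-to-animation control channel sampled at `times`.
    """
    num = np.zeros_like(times)
    den = np.zeros_like(times)
    for p in phonemes:
        center = 0.5 * (p["start"] + p["end"])
        w = dominance(times, center, p["width"], p.get("magnitude", 1.0))
        num += w * animeme_value(times, p["start"], p["end"], p["coeffs"])
        den += w
    return num / np.maximum(den, 1e-8)

# Toy usage: two phonemes with hand-made (not trained) coefficients.
phonemes = [
    {"start": 0.00, "end": 0.15, "coeffs": [-0.8, 1.0, 0.1], "width": 0.08},
    {"start": 0.15, "end": 0.35, "coeffs": [0.5, -0.2, 0.6], "width": 0.10},
]
times = np.linspace(0.0, 0.35, 36)
jaw_open = synthesize_control_signal(phonemes, times)  # one Maya-style control

In the actual system, the polynomial coefficients and dominance parameters would come from the EM-style training on captured control signals rather than being set by hand, and one such channel would be synthesized per character control.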
Moreover, since our method can synthesize an acceptable and robust lip-sync animation in almost real time, it can be used in many applications for which prior techniques are too slow, such as lip-sync animation prototyping, multilingual animation reproduction, avatar speech, mass animation production, etc.

2. RELATED WORK

Face modeling and facial/speech animation generation are broad topics in computer graphics; [15, 30, 32] provide a good survey. In this section, we separate face modeling and the specific modeling of lips in the discussion.

2.1 Facial Animation and Modeling

Most facial animation and modeling methods can be categorized into parameterized/blend-shape, physically based, data-driven, and machine-learning approaches. For parameterized/blend-shape modeling, faces are parameterized into controls, and synthesis is done manually or automatically via control adjustment. Previous work on linear blend-shapes [17, 31, 4], face capturing/manipulation (FaceIK) [41], and face cloning/cross-mapping [29, 33, 6, 28, 36] provided a fundamental guideline for many extensions; however, the limitations of the underlying mathematical framework cause some problems, e.g., faces outside the span of the examples or parameters cannot be realistically synthesized, and the technique requires an excessive number of examples. There are also methods for reducing the interference between blend-shapes [24] or enhancing the capabilities of cross-mapping to animate the face models [13].

Previous methods [1, 10, 39, 37] employed various mathematical models and can generate new faces from the learned statistics while respecting the given sparse observations of the new data.

2.2 Lip-Sync Speech Animation

Many speech animation methods derive from facial animation and modeling techniques. The analysis of phonemes in the context of speech-to-face correspondence, a.k.a. the viseme, is the subject of much successful work. Many previous methods addressed this issue with spline generation, path-finding, or signal concatenation.

Parameterized/blend-shape techniques [3, 2, 9] for speech animation are the most popular methods because of their simplicity. Sifakis et al. [35] presented a physically based approach to simulate the speech controls based on their previous work [34] on muscle activation. This method can interact with objects while simulating, but the problem remains the simulation cost. Data-driven approaches [5, 14] form a graph for searching the given sentences. Like similar data-driven approaches, they use various techniques, such as dynamic programming, to optimize the search process. Nevertheless, they still suffer from missing data or duplicate occurrences. Machine-learning methods [18, 7, 16, 22, 38] learn the statistics of the phoneme-to-animation correspondence, which is called the animeme.

Löfqvist [25] and Cohen and Massaro [11] provided a key insight: decomposing the speech animation signal into target values and dominance functions to model coarticulation. The dominance functions are sometimes reduced to a diphone or triphone model [16] for simplicity. The original framework, however, shows examples such as a time-locked model or a look-ahead model that are difficult to explain with either the diphone or the triphone model. Their methods were later extended by Cosi et al. [12] with shape functions and resistance functions, which are the basic concept behind the animeme. Some recent methods [35, 22, 38] used the concept of the animeme, a shape function, to model sub-viseme signals and increase the accuracy of phoneme fitting.

Kim and Ko [22] extended [18] by modeling visemes within a smaller sub-phoneme range with a data-driven approach. However, coarticulation is modeled via a smooth function
