A Practical and Configurable Lip Sync Method for Games

Total pages: 16 · File type: PDF · Size: 1020 KB

A Practical and Configurable Lip Sync Method for Games

Yuyu Xu, Andrew W. Feng, Stacy Marsella, Ari Shapiro
USC Institute for Creative Technologies

Figure 1: Accurate lip synchronization results for multiple characters can be generated using the same set of phone bigram blend curves. Our method uses animator-driven data to produce high quality lip synchronization in multiple languages. Our method is well-suited for animation pipelines, since it uses static facial poses or blendshapes and can be directly edited by an animator, and modified as needed on a per-character basis.

ABSTRACT

We demonstrate a lip animation (lip sync) algorithm for real-time applications that can be used to generate synchronized facial movements with audio generated from natural speech or a text-to-speech engine. Our method requires an animator to construct animations using a canonical set of visemes for all pairwise combinations of a reduced phoneme set (phone bigrams). These animations are then stitched together to construct the final animation, adding velocity and lip-pose constraints. This method can be applied to any character that uses the same, small set of visemes. Our method can operate efficiently in multiple languages by reusing phone bigram animations that are shared among languages, and specific word sounds can be identified and changed on a per-character basis. Our method uses no machine learning, which offers two advantages over techniques that do: 1) data can be generated for non-human characters whose faces cannot easily be retargeted from a human speaker's face, and 2) the specific facial poses or shapes used for animation can be specified during the setup and rigging stage, before the lip animation stage, making the method suitable for game pipelines or circumstances where the speech target poses are predetermined, such as after acquisition from an online 3D marketplace.

Index Terms: I.3.2 [Computing Methodologies]: Computer Graphics—Methodology and Techniques; I.3.7 [Computing Methodologies]: Computer Graphics—Three-Dimensional Graphics and Realism; I.3.8 [Computing Methodologies]: Computer Graphics—Applications

1 MOTIVATION

Synchronizing lip and mouth movements naturally with animation is an important part of a convincing 3D character performance. In this paper, we present a simple, portable and editable lip-synchronization method that works for multiple languages, requires no machine learning, can be constructed by a skilled animator, is effective for real-time simulations such as games, and can be personalized for each character. Our method associates animation curves, designed by an animator on a fixed set of static facial poses, with sequential pairs of phonemes (phone bigrams), and then stitches these animations together to create a set of curves for the facial poses, along with constraints that ensure that key poses are properly played. Diphone- and triphone-based methods have been explored in various previous works, often requiring machine learning. However, our experiments have shown that animating phoneme pairs (such as phone bigrams or diphones), as opposed to phoneme triples or longer sequences of phonemes, is sufficient for many types of animated characters. Our experiments have also shown that skilled animators can generate the data needed for good quality results. Thus our algorithm does not need any explicit rules about coarticulation, such as dominance functions or language rules; such rules are implicit within the artist-produced data. In order to produce a tractable set of data, our method reduces the full set of 40 English phonemes to a smaller set of 21, which are then annotated by an animator.

Once the full animation set has been generated, it can be reused for multiple characters. Each additional character requires only a small set of static poses or blendshapes that match the original pose set. Lip sync data for a new language requires a new set of phone bigrams, although similar phonemes among languages can share the same phone bigram curves. We show how to reuse our English phone bigram animation set to adapt to a Mandarin set. Our method works with both natural speech and text-to-speech engines.

Our artist-driven approach is useful for lip syncing non-human characters whose lip or face configuration doesn't match a human's, which would make them difficult to retarget. Animators can generate a lip animation data set for any character or class of characters, regardless of the facial setup specified. We demonstrate such an example on an animated, talking crab that we acquired from an online 3D content source, and whose facial configuration does not match any existing set of lip sync poses. In addition, our method can be used with any set of facial poses or blend shapes as input, making it suitable for game pipelines where the characters must adhere to specific facial configurations. This differs from many machine learning algorithms, where the retargeting effort must be redone once the learned data is generated, or where specific facial poses are output from the machine learning process but differ according to the input data. By allowing the setup of the character to be determined in advance of generating the lip animation data, our system can be adjusted to be compatible with any existing lip animation setup, making it ideally suited to game pipelines. We demonstrate this capability by using the same facial poses as a commercial lip animation tool, FaceFX [19], whose results we can then directly compare against. In addition, our method can be controlled by reanimating specific phoneme pairs (such as L-Oh, as in the word 'low'). Thus the various parts of the algorithm are both intuitive and controllable.

It is difficult to compare the quality of results from one lip syncing method to another, since the models, data sets, and characteristics often differ between methods. With some exceptions [27], few research methods attempt to compare their own results to other established research methods or to commercial results. There are many reasons for this, including the difficulty of reproducing other methods exactly and the lack of available data sets. Consequently, comparisons are made to simpler methods [34]. By contrast, we present a direct comparison with a popular commercial lip syncing engine, FaceFX [19], using identical models and face shapes. We thus maintain that our method generally produces good results with very low requirements, while simultaneously being controllable and editable.

2 RELATED WORK

There are many facial animation techniques, including 2D image-based methods and 3D geometry-based methods. A background on many of these techniques can be found in the survey by Parke and Waters [31]. However, the focus of this paper is on lip syncing. […]

[…] model. Although this parameterization method is intuitive to use, it is difficult to generalize across different faces, since a set of parameters is bound to a particular facial topology; a lot of manual effort is therefore required for each new face. Our method parameterizes the face animation as a set of blend curves over a canonical set of viseme shapes, so the same set of parameter curves can be applied to different characters as long as they use the same canonical definition of the viseme shapes.

Many data-driven methods have been developed to learn co-articulation patterns from animation data. Ma et al. learned variable-length units based on a motion capture corpus [26]. Deng et al. proposed a method that learns diphone and triphone models, represented as weight curves, from motion capture data [16]. Dynamic programming is applied to calculate the best co-articulated viseme sequence based on the speech input. Although their method can produce natural speech animation with learned co-articulation, it requires large amounts of motion data for training, and the result is highly dependent on the training data. Cao et al. transferred captured data to a model to synthesize speech motions [6]. Recent work by Taylor et al. [34] also introduced a dynamic visemes method, which extracts and clusters visual gestures from video input of human subjects. To synthesize dynamic visemes from a phoneme sequence, it searches for the best mapping through a probability graph learned from the visual gesture data and stitches the viseme sequences together. Their method produces good results with co-articulation effects, but requires significant animator effort to set up around 150 viseme animations for each rigged character's face. By contrast, our method requires only a small set of static facial poses per character.

Blendshapes are a widely used technique due to their simplicity: 3D shapes are interpolated and blended together in customized ways. One of the drawbacks of blendshapes is the difficulty of defining a set of face shapes that are orthogonal to each other; the influence of one face shape can then degrade the others. [25, 15] provided methods to avoid such interference. Human performance also provides a way to produce high quality and realistic facial animations. Waters and coworkers [36] applied a radial laser scanner to capture facial geometry from a subject and used the scanned markers to drive an underlying muscle system. Chai et al. [10] used optical flow tracking to capture a subject's performance and drive facial animations.

2.2 Expressive Facial Animation

Expressive facial animations, including eye gaze and head movements, would greatly enhance the realism of a virtual character. Cassell et al. developed a rule-based automatic system [8]. Brand et al. [5] constructed a facial internal state machine driven by voice input using a Hidden Markov Model (HMM). […]
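The pipeline described in the excerpt (reduce the full phoneme set to a canonical one, form phone bigrams, look up animator-authored blend curves for each bigram, and scale them onto the audio timeline) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' code: the reduction table, the curve format, and every name here are invented, and a real implementation would cover the full bigram set and add the velocity and lip-pose constraints the paper mentions.

```python
# Hypothetical sketch of phone-bigram curve stitching. Names and data
# formats are assumptions for illustration only.

# Map the full phoneme set down to a reduced canonical set (tiny subset here).
PHONEME_REDUCTION = {"AA": "Ah", "AO": "Ah", "IY": "Ih", "IH": "Ih",
                     "L": "L", "OW": "Oh"}

# Animator-authored blend curves, keyed by phone bigram. Each curve is a list
# of (time_fraction, weight) keys for one viseme shape over the bigram's span.
BIGRAM_CURVES = {
    ("L", "Oh"): {"L": [(0.0, 0.0), (0.3, 1.0), (1.0, 0.2)],
                  "Oh": [(0.0, 0.0), (0.7, 1.0), (1.0, 0.6)]},
}

def stitch(phonemes, timings):
    """phonemes: raw phoneme labels; timings: (start, end) seconds per phoneme.
    Returns per-viseme keyframes on an absolute timeline."""
    reduced = [PHONEME_REDUCTION.get(p, p) for p in phonemes]
    tracks = {}
    for i in range(len(reduced) - 1):
        curves = BIGRAM_CURVES.get((reduced[i], reduced[i + 1]))
        if curves is None:
            continue  # a full data set would cover every bigram
        start, end = timings[i][0], timings[i + 1][1]
        span = end - start
        for viseme, keys in curves.items():
            track = tracks.setdefault(viseme, [])
            # Scale the normalized keys onto the bigram's absolute time span.
            track.extend((start + t * span, w) for t, w in keys)
    return tracks
```

As a usage example, `stitch(["L", "OW"], [(0.0, 0.2), (0.2, 0.5)])` scales the ("L", "Oh") curves onto the half-second span covered by the two phonemes, yielding keyframe tracks for the "L" and "Oh" viseme shapes.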
Recommended publications
  • Animation: Types
    Animation: Animation is a dynamic medium in which images or objects are manipulated to appear as moving images. In traditional animation, images are drawn or painted by hand on transparent celluloid sheets to be photographed and exhibited on film. Today most animations are made with computer-generated imagery (CGI). Commonly the effect of animation is achieved by a rapid succession of sequential images that minimally differ from each other. Apart from short films, feature films, animated GIFs and other media dedicated to the display of moving images, animation is also heavily used for video games, motion graphics and special effects. The history of animation started long before the development of cinematography. Humans have probably attempted to depict motion as far back as the Paleolithic period. Shadow play and the magic lantern offered popular shows with moving images as the result of manipulation by hand and/or some minor mechanics. Computer animation has become popular since Toy Story (1995), the first feature-length animated film completely made using this technique. Types: Traditional animation (also called cel animation or hand-drawn animation) was the process used for most animated films of the 20th century. The individual frames of a traditionally animated film are photographs of drawings, first drawn on paper. To create the illusion of movement, each drawing differs slightly from the one before it. The animators' drawings are traced or photocopied onto transparent acetate sheets called cels, which are filled in with paints in assigned colors or tones on the side opposite the line drawings. The completed character cels are photographed one by one against a painted background by a rostrum camera onto motion picture film.
  • Reallusion Premieres iClone 1.52 - New Technology for Independent Machinima
    For Immediate Release Reallusion Premieres iClone 1.52 - New Technology for Independent Machinima New York City - November 1st, 2006 – Reallusion, official Academy of Machinima Arts and Sciences’ Machinima Festival partner, hits the red carpet with the iClone-powered Best Visual Design nominee, “Reich and Roll”, an iClone film festival feature created by Scintilla Films, Italy. iClone™ is the 3D real-time filmmaking engine featuring complete tools for character animation, scene building, and movie directing. Leap from brainstorm to screen with advanced animation and visual effects for 3D characters, animated short-films, Machinima, or simulated training. The technology of storytelling evolves with iClone. Real-time visual effects, character interaction, and cameras all animate with vivid quality, providing storytellers instant visualization of their world. Stefano Di Carli, Director of ‘Reich and Roll’, says, “Each day I find new tricks in iClone. It's really an incredible new way of filmmaking. So Reallusion is not just hypin' a product. What you promise on the site is true. You can do movies!” Machinima is a rising genre in independent filmmaking, providing storytellers with technology to bring their films to screen without high production costs or additional studio staff. iClone launches independent filmmakers and studios alike into a production power pack of animation tools designed to simplify character creation, facial animation, and scene production for 3D films. “iClone independent filmmakers can commercially produce Machinima, guaranteeing that the movies you make are the movies you own and may distribute or sell,” said John Martin II, Director of Product Marketing of Reallusion.
“iClone’s freedom for filmmakers who desire to own and market their films supports the new real-time independent filmmaking spirit and the rise of directors who seek new technologies for the future of film.” iClone Studio 1.52 sets the stage delivering accessible directing tools to develop and debut films with speed.
  • Computerising 2D Animation and the Cleanup Power of Snakes
    Computerising 2D Animation and the Cleanup Power of Snakes. Fionnuala Johnson. Submitted for the degree of Master of Science, University of Glasgow, Department of Computing Science, January 1998. Abstract: Traditional 2D animation remains largely a hand-drawn process. Computer-assisted animation systems do exist; unfortunately, the overheads these systems incur have prevented them from being introduced into the traditional studio. One such problem area involves the transferral of the animator's line drawings into the computer system. The systems which are presently available require the images to be over-cleaned prior to scanning, and the resulting raster images are of unacceptable quality. Therefore the question this thesis examines is: given a sketchy raster image, is it possible to extract a cleaned-up vector image? Current solutions fail to extract the true line from the sketch because they possess no knowledge of the problem area.
  • Facial Animation: Overview and Some Recent Papers
    Facial animation: overview and some recent papers. Benjamin Schroeder. January 25, 2008. Outline: I'll start with an overview of facial animation and its history; along the way I'll discuss some common approaches. After that I'll talk about some notable recent papers and finally offer a few thoughts about the future. 1. Defining the problem; 2. Historical highlights; 3. Some recent papers; 4. Thoughts on the future. Defining the problem: We'll take facial animation to be the process of turning a character's speech and emotional state into facial poses and motion. (This might extend to motion of the whole head.) Included here is the problem of modeling the form and articulation of an expressive head. There are some related topics that we won't consider today. Autonomous characters require behavioral models to determine what they might feel or say. Hand and body gestures are often used alongside facial animation. Faithful animation of hair and rendering of skin can greatly enhance the animation of a face. This is a tightly defined problem, but solving it is difficult. We are intimately aware of how human faces should look, and sensitive to subtleties in the form and motion. Lip and mouth shapes don't correspond to individual sounds, but are context-dependent. These are further affected by the emotions of the speaker and by the language being spoken. Many different parts of the face and head work together to convey meaning. Facial anatomy is both structurally and physically complex: there are many layers of different kinds of material (skin, fat, muscle, bones). Here are some sub-questions to consider.
  • Teachers Guide
    Teachers Guide. © 2006 Cartoon Network. All rights reserved. Table of contents: How to Use This Guide; Exhibit Overview; Correlation to Educational Standards; Educational Standards Charts; Exhibit Educational Objectives; Background Information for Teachers; Frequently Asked Questions; Classroom Activities (Build Your Own Zoetrope; Plan of Action; Seeing Spots; Fooling the Brain); Active Learning Log (with and without answers); Glossary; Bibliography. This guide was developed at OMSI in conjunction with Animation, an OMSI exhibit. © 2006 Oregon Museum of Science and Industry. Animation was developed by the Oregon Museum of Science and Industry in collaboration with Cartoon Network and partially funded by The Paul G. Allen Family Foundation. HOW TO USE THIS TEACHER'S GUIDE: The Teacher's Guide to Animation has been written for teachers bringing students to see the Animation exhibit. These materials have been developed as a resource for the educator to use in the classroom before and after the museum visit, and to enhance the visit itself. There is background information, several classroom activities, and the Active Learning Log – an open-ended worksheet students can fill out while exploring the exhibit. Animation web site: The exhibit website, www.omsi.edu/visit/featured/animationsite/index.cfm, features the Animation Teacher's Guide, online activities, and additional resources. EXHIBIT OVERVIEW: Animation is a 6,000 square-foot, highly interactive traveling exhibition that brings together art, math, science and technology by exploring the exciting world of animation.
  • 2D Animation Software You'll Ever Need
    The 5 Types of Animation – A Beginner's Guide. What Is This Guide About? The purpose of this guide is to, well, guide you through the intricacies of becoming an animator. This guide is not about learning how to animate, but only to break down the five different types (or genres) of animation available to you, and what you'll need to start animating: best software, best schools, and more. Styles covered: 1. Traditional animation; 2. 2D vector-based animation; 3. 3D computer animation; 4. Motion graphics; 5. Stop motion. I hope that reading this will push you to take the first step in pursuing your dream of making animation. No more excuses. All you need to know is right here. Traditional Animator (2D, Cel, Hand Drawn): Traditional animation, sometimes referred to as cel animation, is one of the older forms of animation; in it the animator draws every frame to create the animation sequence. Just like they used to do in the old days of Disney. If you've ever had one of those flip-books when you were a kid, you'll know what I mean. Sequential drawings screened quickly one after another create the illusion of movement. "There's always room out there for the hand-drawn image. I personally like the imperfection of hand drawing as opposed to the slick look of computer animation." – Matt Groening. About Traditional Animation: In traditional animation, animators will draw images on a transparent piece of paper fitted on a peg using a colored pencil, one frame at a time. Animators will usually do test animations with very rough characters to see how many frames they would need to draw for the action to be properly perceived.
  • High-Performance Play: the Making of Machinima
    High-Performance Play: The Making of Machinima Henry Lowood Stanford University <DRAFT. Do not cite or distribute. To appear in: Videogames and Art: Intersections and Interactions, Andy Clarke and Grethe Mitchell (eds.), Intellect Books (UK), 2005. Please contact author, [email protected], for permission.> Abstract: Machinima is the making of animated movies in real time through the use of computer game technology. The projects that launched machinima embedded gameplay in practices of performance, spectatorship, subversion, modification, and community. This article is concerned primarily with the earliest machinima projects. In this phase, DOOM and especially Quake movie makers created practices of game performance and high-performance technology that yielded a new medium for linear storytelling and artistic expression. My aim is not to answer the question, “are games art?”, but to suggest that game-based performance practices will influence work in artistic and narrative media. Biography: Henry Lowood is Curator for History of Science & Technology Collections at Stanford University and co-Principal Investigator for the How They Got Game Project in the Stanford Humanities Laboratory. A historian of science and technology, he teaches Stanford’s annual course on the history of computer game design. With the collaboration of the Internet Archive and the Academy of Machinima Arts and Sciences, he is currently working on a project to develop The Machinima Archive, a permanent repository to document the history of Machinima moviemaking. A body of research on the social and cultural impacts of interactive entertainment is gradually replacing the dismissal of computer games and videogames as mindless amusement for young boys. There are many good reasons for taking computer games seriously.
  • First Words: the Birth of Sound Cinema
    First Words: The Birth of Sound Cinema, 1895 – 1929. Wednesday, September 23, 2010, Northwest Film Forum, Seattle, WA. Co-Presented by The Sprocket Society. www.sprocketsociety.org. Origins: “In the year 1887, the idea occurred to me that it was possible to devise an instrument which should do for the eye what the phonograph does for the ear, and that by a combination of the two all motion and sound could be recorded and reproduced simultaneously. …I believe that in coming years by my own work and that of…others who will doubtlessly enter the field that grand opera can be given at the Metropolitan Opera House at New York [and then shown] without any material change from the original, and with artists and musicians long since dead.” Thomas Edison, foreword to History of the Kinetograph, Kinetoscope and Kineto-Phonograph (1894) by W.K.L. Dickson and Antonia Dickson. “My intention is to have such a happy combination of electricity and photography that a man can sit in his own parlor and see reproduced on a screen the forms of the players in an opera produced on a distant stage, and, as he sees their movements, he will hear the sound of their voices as they talk or sing or laugh... [B]efore long it will be possible to apply this system to prize fights and boxing exhibitions. The whole scene with the comments of the spectators, the talk of the seconds, the noise of the blows, and so on will be faithfully transferred.” Thomas Edison, remarks at the private demonstration of the (silent) Kinetoscope prototype, The Federation of Women’s Clubs, May 20, 1891. This Evening’s Film Selections: All films in this program were originally shot on 35mm, but are shown tonight from 16mm duplicate prints.
  • Video Rewrite: Driving Visual Speech with Audio Christoph Bregler, Michele Covell, Malcolm Slaney Interval Research Corporation
    ACM SIGGRAPH 97. Video Rewrite: Driving Visual Speech with Audio. Christoph Bregler, Michele Covell, Malcolm Slaney. Interval Research Corporation. ABSTRACT: Video Rewrite uses existing footage to create automatically new video of a person mouthing words that she did not speak in the original footage. This technique is useful in movie dubbing, for example, where the movie sequence can be modified to sync the actors’ lip motions to the new soundtrack. Video Rewrite automatically labels the phonemes in the training data and in the new audio track. Video Rewrite reorders the mouth images in the training footage to match the phoneme sequence of the new audio track. When particular phonemes are unavailable in the training footage, Video Rewrite selects the closest approximations. The resulting sequence of mouth images is stitched into the background footage. Video Rewrite automatically pieces together from old footage a new video that shows an actor mouthing a new utterance. The results are similar to labor-intensive special effects in Forrest Gump. These effects are successful because they start from actual film footage and modify it to match the new speech. Modifying and reassembling such footage in a smart way and synchronizing it to the new sound track leads to final footage of realistic quality. Video Rewrite uses a similar approach but does not require labor-intensive interaction. Our approach allows Video Rewrite to learn from example footage how a person’s face changes during speech. We learn what a person’s mouth looks like from a video of that person speaking normally. We capture the dynamics and idiosyncrasies of her articulation…
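The selection step the abstract describes (reuse mouth images whose phonemes match the new audio track, falling back to the closest phoneme when one is missing from the training footage) can be sketched as follows. This is a toy illustration only: the similarity table and all names here are invented, not taken from the paper.

```python
# Toy sketch of phoneme-based mouth-image selection with a
# closest-approximation fallback. The similarity table is invented.
SIMILAR = {"B": ["P", "M"], "P": ["B", "M"], "AA": ["AO", "AH"]}

def select_mouth_images(new_phonemes, training_images):
    """training_images maps phoneme -> mouth image (any payload)."""
    sequence = []
    for p in new_phonemes:
        if p in training_images:
            sequence.append(training_images[p])
            continue
        # Phoneme unseen in training footage: take the closest approximation.
        for q in SIMILAR.get(p, []):
            if q in training_images:
                sequence.append(training_images[q])
                break
        else:
            sequence.append(None)  # nothing usable for this phoneme
    return sequence
```

For example, with training images only for "P" and "AA", a request for "B" would fall back to the "P" image, since the two visually similar bilabials are interchangeable in this toy table.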
  • How to Become an Animator
    How to Become an Animator Your guide for evaluating animation jobs and animation schools By Kris Larson About the Author: Kris Larson was a senior marketing executive and hiring manager in the entertainment industry for many years. She currently works with Animation Mentor to develop eBooks and articles about the animation industry. (Shameless Plug) Learn Character Animation from Professional Animators Who Are Working at Major Studios If you want to be a character animator, you’ll need to know more than just animation tips and tricks. Bobby Beck, Shawn Kelly, and Carlos Baena cofounded the online character animation school in 2005 because many aspiring animators weren’t learning the art behind animation. Animation Mentor teaches you the art behind animation under the guidance of professional animators who are currently working at leading studios. Our approach also prepares you with the skills and experience to succeed in a studio environment. At the end of the program, you graduate with a killer demo reel that’s your resume for a job and connections to jobs at film studios and video game companies. If you want to learn more about Animation Mentor, check us out. We can teach you everything you need to know to create a great demo reel and land a job in 18 months. We want to help you reach your dreams of becoming an animator. LETTER FROM ANIMATIONMENTOR.COM FOUNDERS Dear Aspiring Animator, If you’re reading this eBook, that means you’re interested in becoming an animator, the coolest job in the world! We’ve been animators for a while now, working at Pixar Animation Studios and Industrial Light & Magic (ILM).
  • The Cinemising Process: Film-Going in the Silent Era, by Elizabeth Ezra
    THE CINEMISING PROCESS: FILM-GOING IN THE SILENT ERA. Elizabeth Ezra. In the Surrealist text Nadja, published in 1928, André Breton reminisces about going to the cinema in the days ‘when, with Jacques Vaché we would settle down to dinner in the orchestra of [the cinema in] the former Théâtre des Folies-Dramatiques, opening cans, slicing bread, uncorking bottles, and talking in ordinary tones, as if around a table, to the great amazement of the spectators, who dared not say a word’ (Breton 1960 [1928]: 37). When Breton recalled these youthful antics, which had probably taken place in the late teens or early twenties, he characterized them as ‘a question of going beyond the bounds of what is “allowed,” which, in the cinema as nowhere else, prepared me to invite in the “forbidden” (Breton 1951; original emphasis). Breton’s remarks beg the following questions. How did such behaviour in a cinema come to be considered transgressive? How did the structures emerge that made it a transgressive act to speak or eat a meal in the cinema, to reject, in other words, the narrative absorption that had become a standard feature of the filmgoing experience? The conventions that gave meaning to such forms of transgression emerged over the course of the silent era in what may be called, to adapt Norbert Elias’s term, ‘the cinemizing process’ (1994). The earliest historians of cinema already relied heavily on a rhetoric of lost innocence (witness the multiple accounts of the first cinema-goers cowering under their seats in front of the Lumière brothers’ film of a train pulling into a station); the cinemizing process thus presupposes a ‘golden age’ of naive, uninitiated spectatorship followed by an evolution of audiences into worldly-wise creatures of habit.
  • Multimedia Artist and Animator
    Explore your future pathway: Multimedia Artist and Animator. Career Overview: Multimedia artists and animators create two- and three-dimensional models, animation, and visual effects for television, movies, video games, and other forms of media. They often work in a specific form or medium. Some create their work primarily by using computer software while others prefer to work by drawing and painting by hand. Each animator works on a portion of the project, and then the pieces are put together to create one cohesive animation. Animators typically do the following: use computer programs and illustrations to create graphics and animation; work with a team of animators and artists to create a movie, game, or visual effect; research upcoming projects to help create realistic designs or animations; develop storyboards that map out key scenes in animations; edit animations and effects on the basis of feedback from directors, other animators, game designers, or clients. Education/Training: Employers typically require a bachelor's degree, and they look for workers who have a portfolio of work and strong technical skills. Animators typically have a bachelor's degree in fine art, computer graphics, animation, or a related field. Programs in computer graphics often include courses in computer science in addition to art courses. Bachelor's degree programs in art include courses in painting, drawing, and sculpture. Degrees in animation often require classes in drawing, animation, and film. Many schools have specialized degrees in topics such as interactive media or game design. Important Qualities: Artistic talent: animators should have artistic ability and a good understanding of color, texture, and light. However, they may be able to compensate for artistic shortcomings with better technical skills. Communication skills: animators need to work as part of a complex team and respond well to criticism and feedback.