A Novel Lip Synchronization Approach for Games and Virtual Environments


Jan Krejsa (Masaryk University, Brno, Czech Republic, [email protected]) and Fotis Liarokapis (CYENS – Centre of Excellence, Nicosia, Cyprus, [email protected])

Abstract—This paper presents an algorithm for the offline generation of lip-sync animation. It redefines visemes as sets of constraints on the facial articulators, such as the lips, jaw, and tongue. The algorithm was comparatively evaluated with 30 healthy participants by presenting a set of phrases delivered verbally by a virtual character. Each phrase was presented in two versions: once with the traditional lip-sync method, and once with our method. Results confirm that the suggested solution produces more natural animation than standard keyframe interpolation techniques.

Index Terms—virtual reality, computer games, lip-sync animation, user studies

978-1-6654-3886-5/21/$31.00 ©2021 IEEE

I. INTRODUCTION

The purpose of lip synchronization (or lip-sync) is to reinforce the illusion that the voice of an animated character is coming out of the character's mouth. This can help create an emotional connection to the character and improve the immersion of the viewer in a virtual environment. There are currently two leading techniques in lip synchronization. One is capturing the performance of human actors via motion capture; the other is hiring professional animators to produce the facial animations manually. Both methods deliver high-quality results, but they are time-consuming and expensive. As such, they are not suitable for applications where a large quantity of speech animation is required (e.g. role-playing games). In such applications, it is preferable to generate the majority of speech animations automatically.

Lip animation can also be generated automatically from audio input or a phonetic transcript. However, although significant advances have been made recently [1], [2], the resulting quality is still not on par with the two aforementioned approaches. In practice, the automated approach tends to be used only to generate low-quality baseline animations. Animators then try to refine as many animations as possible, and for the most important scenes, they often use motion capture.

When synthesizing speech animation, the speech audio we want to synchronize to is usually segmented into atomic units of sound, called phonemes. For example, the "m" sound in the word "mother" is described by the phoneme /m/. A viseme, then, is a group of phonemes, all of which are visually similar to each other, but distinguishable from phonemes that belong to another viseme [3]. Some phonemes are hard to distinguish visually from one another, and multiple phonemes may map to the same viseme.

This paper proposes a novel solution to improve the quality of auto-generated lip-sync animation. Despite the lower quality of its results, the automated approach still has an irreplaceable role due to its time- and cost-saving properties. Traditionally, auto-generated lip-sync animation is achieved via keyframe interpolation. First, the speech audio is segmented into short units known as visemes. For each viseme, the traditional approach defines a viseme shape: the shape of the mouth that is required to produce the given sound. These poses are used as animation keyframes placed at the onset of each viseme, and the final animation is created by interpolating between these keyframes. The visemic context, or co-articulation (i.e. how the appearance of the given viseme is influenced by the visemes that come before and after it), is usually disregarded, although there are works that have suggested solutions. Notable methods involve dominance functions [4] or neural networks [5].

In this work, we suggest an alternative solution for modeling co-articulation (Fig. 1). We propose to replace each viseme shape with a set of constraints on certain parameters of the face (e.g. mouth openness). Instead of specifying a single facial pose for each viseme, the constraints define a wide range of allowed poses. Since each viseme in our model has constraints for only some of the facial parameters (e.g. mouth width, jaw openness, etc.), the parameters that are not constrained by the given viseme are free to be driven by neighboring visemes. Effectively, this models co-articulation and leads to more fluid and natural motion.

Fig. 1. An example user interface for viseme tagging.

Our method was evaluated in a study with 30 participants. Participants were presented with a set of phrases delivered verbally by a virtual character. Each phrase was presented in two versions: once with the traditional lip-sync method, and once with our method. Participants were asked to rate the lip motion for each phrase in several aspects, such as naturalness or perceived quality.

In all measured aspects, our method significantly outperformed the traditional method. On a scale from 0 to 100%, the naturalness of our method was rated at 70% on average, compared to 56% for the traditional method. In terms of quality, our method was rated at 75%, while the traditional method received a rating of 60%. Overall, the average score of our method was 73%, a significant improvement over the 60% rating of the traditional method.

II. RELATED WORK

There are three fundamental approaches to lip synchronization: animation by hand, facial performance capture, and automatic synthesis. Hand-made facial animation has the advantage of giving full control over the performance. A skilled artist can add more expressiveness, even beyond the limits of what an actor could do. However, animating by hand is time-consuming, and as such, it may not be suitable for large-scale projects that require many hours of facial animation (such as an expansive role-playing game), especially if their budget is limited. This section will focus on the other two approaches (facial performance capture and automatic synthesis).

A. Facial Performance Capture

Performance capture records motion data of a human actor and transfers it onto a virtual character [6]. By recording the performance of an actor, we can capture all the intricacies of their facial movements in great detail. This approach currently achieves the highest-quality results, but it is also the most expensive method. The current industry standard in film is to track markers on the face using a helmet-mounted camera (HMC) rig [7]. Tracking precision can be an issue, requiring an animator to clean up the recorded data manually, which can be quite laborious. The resulting animations are difficult to edit or refine [8]. The method also generally requires expensive equipment and professional actors.

Since motion capture equipment is so expensive, there have been multiple works that attempt to reduce the required cost. Cao et al. [9] presented a method that adds high-fidelity facial detail to low-resolution face tracking data. However, although the method produces plausible wrinkles, they are still a mere approximation limited by the training data, and it may deliver unexpected results when presented with a pose outside of its training data set. Ma and Deng [10] suggest a geometry-based method for capturing facial performance that includes wrinkle information while still retaining low-cost acquisition. Their method can reconstruct high-resolution facial geometry and appearance in real time and with superior accuracy.

Laine et al. [11] utilized deep learning to reduce the amount of manual clean-up and correction required to maintain high-quality output. Their pipeline involves manually cleaning up a small subset of the input video footage to train a deep neural network, which is then used to automatically process the remaining video footage at very high speed. While the training data must be recorded in stereo, the remaining footage may be monocular. In comparison with previous methods, their method delivers increased accuracy, especially in the mouth and eye regions.

B. Automatic Synthesis

Existing methods of automatic synthesis can be grouped into two categories: (1) data-driven methods and (2) procedural methods based on keyframe interpolation.

1) Data-driven Methods: Data-driven methods of lip-sync animation are generally based on either (a) concatenating short speech animation units sampled from a large data set of motion-captured speech, or (b) sampling from statistical models extracted from motion-captured data [12]–[15].

Taylor et al. [5] proposed to replace viseme shapes with animation units called dynamic visemes. These are short animation units extracted from a training data set, which can then be rearranged in a different order and blended into each other. This technique produces good results for phrases that are similar to the training data, but to consistently provide good results for any possible phrase, an exhaustive data set that covers all possible sequences of phonemes would be required.

Deng et al. [16] proposed a method that builds co-articulation models based on motion-capture data, using weight decomposition. Again, as with most other data-driven methods, their method would provide the best results with a complete training data set of all possible combinations of phonemes. Deena et al. [17], [18] presented an approach based on a switching state model. One disadvantage is that it requires the entire training data set to be manually phoneme-tagged.

Asadiabadi et al. [19] used deep learning to train a speaker-independent model that can reproduce the emotions of an actor. In another approach, Long Short-Term Memory (LSTM) networks were used to improve the mapping from phoneme sequences to their dynamic visemes [20], [21]. Karras et al. [2] presented a deep-learning-based method that generates lip-sync animation in real time, based on real-time audio input.

Fig. 2. The facial pose produced when pronouncing the phoneme /m/ in the syllable "moo".

… approach, Edwards et al. [1] introduced two parameters: jaw and lip. The values of these two parameters dynamically …
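The phoneme-to-viseme grouping described in the introduction can be illustrated with a small lookup table. This is a hypothetical sketch, not the paper's mapping: the viseme names and groupings below are illustrative, and real systems use larger, language-dependent tables.

```python
# Illustrative phoneme-to-viseme grouping: visually similar phonemes share a
# viseme. These groupings are a common simplification, not the paper's mapping.
PHONEME_TO_VISEME = {
    "m": "bilabial", "b": "bilabial", "p": "bilabial",  # lips pressed together
    "f": "labiodental", "v": "labiodental",             # lower lip to upper teeth
    "iy": "spread", "ih": "spread",                     # spread lips
    "aa": "open", "ah": "open",                         # open jaw
}

def to_viseme_track(phonemes):
    """Collapse a phoneme sequence into the viseme sequence an animator would key."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# "mother" begins with /m/, which maps to the closed-lips (bilabial) viseme;
# phonemes outside the toy table fall back to a neutral pose.
print(to_viseme_track(["m", "ah", "dh"]))
```

Because /m/, /b/, and /p/ all land on the same viseme, an audio segmenter only needs to recover the viseme class, not the exact phoneme, which is one reason the authors segment directly into visemes.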
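The constraint-based idea at the heart of the paper can be sketched in code. This is a minimal illustration under assumed parameter names and values, not the authors' implementation: each viseme keys only the facial parameters it constrains, and any parameter a viseme leaves unconstrained is interpolated through from the neighboring visemes that do constrain it, which is what lets neighbors "drive" it.

```python
# Hypothetical sketch of constraint-based visemes. Parameter names, onset times,
# and constraint values are illustrative, not taken from the paper.

def interpolate_track(times, keys, t):
    """Linear interpolation through a sparse, time-ordered (time, value) track."""
    if t <= times[0]:
        return keys[0]
    if t >= times[-1]:
        return keys[-1]
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            a = (t - times[i]) / (times[i + 1] - times[i])
            return keys[i] * (1 - a) + keys[i + 1] * a

# Traditional approach: every viseme keyframes a full facial pose.
# Constraint-based approach: a viseme keys only the parameters it constrains;
# the rest are free to be driven by the neighboring visemes.
visemes = [
    (0.00, {"lips_closed": 1.0}),                  # /m/-like: only lip closure constrained
    (0.15, {"jaw_open": 0.7, "mouth_wide": 0.2}),  # /a/-like: jaw and width constrained
    (0.30, {"mouth_wide": 0.9}),                   # /i/-like: only mouth width constrained
]

def evaluate(t, parameters):
    """Evaluate the facial pose at time t from the per-parameter sparse tracks."""
    pose = {}
    for p in parameters:
        # Collect only the keyframes of visemes that actually constrain p.
        times = [onset for onset, c in visemes if p in c]
        keys = [c[p] for _, c in visemes if p in c]
        pose[p] = interpolate_track(times, keys, t) if times else 0.0
    return pose

print(evaluate(0.225, ["lips_closed", "jaw_open", "mouth_wide"]))
```

Note how at t = 0.225 the mouth width is already halfway between the /a/-like and /i/-like constraints: because the /a/-like viseme does not pin the width to a full pose, the following viseme influences it early, which is the co-articulation effect the paper models.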