Realistic Crowd Animation: A Perceptual Approach


Realistic Crowd Animation: A Perceptual Approach

by Rachel McDonnell, B.A. (Mod)

Dissertation Presented to the University of Dublin, Trinity College in fulfillment of the requirements for the Degree of Doctor of Philosophy

University of Dublin, Trinity College
November 2006

Declaration

I, the undersigned, declare that this work has not previously been submitted as an exercise for a degree at this, or any other University, and that, unless otherwise stated, it is my own work.

Rachel McDonnell
November 14, 2006

Permission to Lend and/or Copy

I, the undersigned, agree that Trinity College Library may lend or copy this thesis upon request.

Rachel McDonnell
November 14, 2006

Acknowledgments

First and foremost, I would like to thank my supervisor, Carol O’Sullivan, for all her help, guidance, advice, and encouragement. I couldn’t imagine a better supervisor! Secondly, I would like to thank the past and present members of the ISG for making the last four years very enjoyable. In particular, Ann McNamara for her enthusiasm and for advising me to do a PhD. Also, to all my friends, and in particular my housemates, for taking my mind off my research in the evenings! A special thank you to Mam, Dad and Sarah for all the encouragement and interest in everything that I do. Finally, thanks to Simon for helping me through the four years, and in particular for all the support in the last few months leading up to submission.

Rachel McDonnell
University of Dublin, Trinity College
November 2006

Abstract

Realistic Crowd Animation: A Perceptual Approach
Publication No.
Rachel McDonnell, Ph.D.
University of Dublin, Trinity College, 2006
Supervisor: Carol O’Sullivan

Real-time applications such as games or urban simulations are often highly complex in nature. User expectations grow year by year, along with a concomitant desire for added realism.
Due to the performance limitations of computing and rendering hardware, the use of simplification techniques to trade accuracy for performance is of paramount importance. The goal of this thesis is to develop methods and metrics to balance visual fidelity with performance, primarily through the use of perceptual principles.

While large-scale crowds of virtual characters have become commonplace in movies in recent years (e.g., the Lord of the Rings trilogy), this ubiquity has not been paralleled in real-time systems such as video games, due to the limited resources that are available. The game industry is the primary market, but real-time crowd simulation is also important in other areas such as cultural heritage, environmental applications, architectural pre-visualisations and VR for rehabilitation.

The starting point for this research was a Virtual Dublin urban model, inhabited by crowds of pedestrians, developed by other researchers in the Interaction, Simulation and Graphics Lab in Trinity College Dublin. The crowds are rendered using a hybrid approach, where detailed geometric representations are presented at close proximity to the viewer, and impostors are presented in the background. This thesis builds on that system by incorporating perceptual principles to improve the visual fidelity, and by adding deformable clothing to the characters.

A number of somewhat ad hoc strategies for balancing realism with performance have been employed for the rendering of crowds. Decisions for adding extra levels of detail are typically based on frame-rate improvements alone, so it is difficult to get a true impression of what the end-user perceives. We wanted to explore these issues directly, thereby filling in the gaps in our understanding of the relationships between animation, level of detail, and perception. Another motivation was that real-time crowds still suffer from severely reduced realism compared with their movie counterparts and the real world.
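As an aside, the hybrid rendering approach mentioned above (detailed geometry near the viewer, impostors in the background) ultimately rests on a per-character distance test. The fragment below is a minimal illustrative sketch of such a test; all names, such as `select_representation` and `switch_distance`, are hypothetical, and this is not code from the system described in this thesis.

```python
from dataclasses import dataclass


@dataclass
class Character:
    """Position of one crowd member in world space (hypothetical type)."""
    x: float
    y: float
    z: float


def select_representation(char: Character, camera: tuple, switch_distance: float) -> str:
    """Choose the full geometric mesh up close, the impostor beyond the switch distance."""
    dx = char.x - camera[0]
    dy = char.y - camera[1]
    dz = char.z - camera[2]
    dist_sq = dx * dx + dy * dy + dz * dz
    # Compare squared distances to avoid a square root per character per frame.
    if dist_sq <= switch_distance * switch_distance:
        return "geometry"
    return "impostor"


crowd = [Character(0.0, 0.0, 2.0), Character(50.0, 0.0, 80.0)]
camera = (0.0, 0.0, 0.0)
print([select_representation(c, camera, 20.0) for c in crowd])  # ['geometry', 'impostor']
```

In practice such systems typically add hysteresis (separate near and far thresholds) so that characters hovering around the switch distance do not flicker between representations from frame to frame.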
We felt that one of the significant factors that accounted for this was the rigidity of the clothing on the characters. Real clothing exhibits folding and crumpling, and its motion can range anywhere from starchy to flowing. The main contribution of this thesis is improving the realism of real-time crowds, through psychophysical evaluation of the influencing factors, the development of perceptual metrics, and the addition of pattern variety and pre-simulated clothing.

Relevant Publications

1. Perceptually Adaptive Graphics: A Review. Carol O’Sullivan, Rachel McDonnell, Yann Morvan. Computer Graphics Forum (to appear in 2007).
2. Crowd Creation Pipeline for Games. Rachel McDonnell, Simon Dobbyn, Carol O’Sullivan. To appear in Proceedings of the 9th International Conference on Computer Games, CGames 2006.
3. Perceptual Evaluation of LOD Clothing for Virtual Humans. Rachel McDonnell, Simon Dobbyn, Steven Collins, Carol O’Sullivan. In Proceedings of the 2006 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 117-126, 2006.
4. Clothing the Masses: Real-Time Clothed Crowds with Variation. Simon Dobbyn, Rachel McDonnell, Steven Collins, Ladislav Kavan, Carol O’Sullivan. Eurographics Short Presentations, pp. 103-160, 2006.
5. LOD Human Representations: A Comparative Study. Rachel McDonnell, Simon Dobbyn, Carol O’Sullivan. In Proceedings of the First International Workshop on Crowd Simulation, pp. 101-115, 2005.
6. Perceptual Evaluation of Impostor Representations for Virtual Humans and Buildings. John Hamill, Rachel McDonnell, Simon Dobbyn. Computer Graphics Forum (Eurographics 2005), 24(3), pp. 623-633, 2005.
7. Perceptually Adaptive Graphics. Carol O’Sullivan, Sarah Howlett, Yann Morvan, Rachel McDonnell, Keith O’Conor. Eurographics State of the Art Reports, pp. 141-164, 2004.

Contents

Acknowledgments
Abstract
List of Tables
List of Figures

Chapter 1  Introduction
  1.1 Methodology
  1.2 Motivation
  1.3 Scope
  1.4 Contributions
  1.5 Summary of Chapters

Chapter 2  Background and Related Work
  2.1 Appearance
    2.1.1 Virtual Human Representation
    2.1.2 Virtual Crowds
    2.1.3 Crowd Visualisation
    2.1.4 Variety
  2.2 Generating Human Motion
    2.2.1 Keyframing
    2.2.2 Physically-based Animation
    2.2.3 Motion Capture
    2.2.4 Group and Crowd Animation
  2.3 Secondary Motion
  2.4 Discussion

Chapter 3  Perception of Human Motion
  3.1 Perception of Human Motion: Psychology Findings
    3.1.1 Biological Motion Perception
    3.1.2 Connection Between the Brain and Motion Perception
    3.1.3 Summary
  3.2 Perception of Human Motion for Computer Animation
    3.2.1 Specifying and Modifying Motion
    3.2.2 Perceptual Studies on Motion Transitions
    3.2.3 Perceptual Metrics for Animation
    3.2.4 Level of Detail
  3.3 Experimental Design
    3.3.1 Measures of Response
    3.3.2 Staircases
    3.3.3 Analysis
  3.4 Discussion

Chapter 4  Psychophysical Evaluation of LOD Humans
  4.1 Introduction
  4.2 Perception of Human Appearance
    4.2.1 Exp Human 1: Impostor Experiment
    4.2.2 Exp Human 2: Low Resolution Experiment
  4.3 Exp Human 3: Perception of Human Motion
    4.3.1 Model Types
    4.3.2 Creating the Motion Variation
    4.3.3 Creating the Stimuli
    4.3.4 Apparatus and Participants
    4.3.5 Visual Content and Procedure
    4.3.6 Joint Weighting
    4.3.7 Results
  4.4 Discussion

Chapter 5  Psychophysical Evaluation of LOD Clothing
  5.1 Overview
  5.2 Introduction
  5.3 Motivation
  5.4 Psychophysical Experiments
    5.4.1 Exp Cloth 1: Determining Perceptually Linear “Stiffness” Scale
    5.4.2 Exp Cloth 2: Stiffness Sorting Experiment
    5.4.3 Exp Cloth 3: Stiffness Forced Choice Experiment
    5.4.4 Exp Cloth 4: LOD Comparison Experiment
    5.4.5 Exp Cloth 5: Impostor Update Frequency Test
    5.4.6 Exp Cloth 6: System Experiment
  5.5 Discussion

Chapter 6  Perceptually Guided Geopostor System with Clothed Characters
  6.1 Stage 1: Model Prepping
    6.1.1 Preparing the Human
    6.1.2 Preparing the Deformable Clothing
  6.2 Stage 2: Exporting the Data
    6.2.1 Exporting the Geometry
    6.2.2 Exporting the Impostor
  6.3 Stage 3: Crowd Rendering
    6.3.1 Setup
    6.3.2 Rendering the Geometric Human and Cloth Models
    6.3.3 Rendering the Impostor
  6.4 Proof of Concept
  6.5 Discussion

Chapter 7  Conclusions and Future Work
  7.1 Summary of Contributions
  7.2 Reflections on Experimental Design
    7.2.1 2AFC vs Yes-No Design
    7.2.2 Staircase vs Method of Constant Stimuli
  7.3