Animation Database System and Its Tools for Production

Total Pages: 16

File Type: pdf, Size: 1020 KB

The Institute of Image Information and Television Engineers, MIP2002-19, AIT2002-19, NIM2002-19, VIS2002-19

ANIMATION DATABASE SYSTEM AND ITS TOOLS FOR PRODUCTION

Fumio Sumi and Masayuki Nakajima
Graduate School of Information Science and Engineering, Tokyo Institute of Technology
12-1 Ookayama 2-chome, Meguro-ku, Tokyo 152-8552, Japan
E-mail: {sumi, nakajima}@img.cs.titech.ac.jp

ABSTRACT

Creating an animation sequence requires a tremendous workload and talented animators to specify and control motions. Even so, it is very common to make a completely new animation without reusing existing animation sequences. One of the reasons is the difficulty of retargetting the motion to other models and of deforming the model shapes. If an animation database can be provided, a new animation sequence can be produced efficiently by using existing sequences. There are many research works on retargetting motion data captured with a motion capture system, but those are almost all for 3D computer animation, and there are none for 2D animation. We propose a standard 2D animation database system, which contains model data with motion in reusable form, together with utility tools for deforming model shapes and motions. In the tools we implement techniques for producing this reusable form: making model shapes by vectorization and extracting motions from input bitmap images.

1. INTRODUCTION

It has become easy to create computer graphics movies thanks to the progress of computer graphics technology. In particular, modelling and rendering techniques have advanced so much that CG images have become almost as real as photographs. However, it still requires a tremendous workload and much time to make computer animation with the realistic motion a creator desires. Moreover, the quality of animation work depends on the animator's skills, such as talent, experience and know-how. Technology that makes the production of animation efficient is therefore expected.

There are two approaches to solving the above-mentioned problems:

• Developing a technique for generating motion efficiently.
• Developing a new animation system and a technique for reusing existing animation sequences.

The technology for creating motion efficiently has been researched extensively; in particular, there are many works using kinematics and motion dynamics [1]-[7]. However, to use these methods animators must take care of the kinematic structure of the character, and this is a high hurdle for animators. Meanwhile, work on reusing existing materials is limited to reusing motion-captured data or facial animation sequences.

This paper provides a solution to the above-mentioned problems. The remainder of the paper is organized as follows: in the next section, we briefly describe related work. Section 3 describes the animation database system in detail. In Section 4, we describe our results. We conclude with a brief discussion of our method's advantages and limitations, and directions for future work.

2. RELATED WORK

In 3D computer graphics, it is common to reuse data such as modelling data, and many model data sets are sold commercially. There are many works on retargetting motion-captured data to other, different models [8]. In the facial animation area, there is work on copying motion to another model [9].
In 2D animation, on the other hand, there are many works on morphing [10]-[13], but there is little work other than morphing [14],[15], and there is no technology for reusing existing animation sequences. Traditional celluloid animation requires much more workload than 3D animation, and it is hoped that the production process will be computerized.

3. ANIMATION DATABASE SYSTEM

3.1 System Overview

We propose a technique for developing a standard animation database system for 2D animation. An existing animation sequence is stored in the database in a reusable form that holds the standard model together with its motion. The user can use the animation sequence as it is, or can apply it to the user's own character by deforming the standard model and its motion. The tools for creating the database and for using the stored animation sequences are also developed and provided.

3.1.1 Animation Database

The animation database consists of the following parts, classified by usage:

(1) Reusable form of standard animation sequences
This part holds the standard model's vectorized data and the motion detected from the input animation frames. It is used internally by the system for warping animation onto other models.

(2) Standard animation sequences
This part is used by users. All sequence data are stored as 2D bitmap images without color. Content examples are described in subsection 3.2.

(3) Animation sequence library
This part is used as it is. All animation sequences are 2D colored bitmap image data.

Figure 1 shows the functional diagram of the animation database system. (Figure 1: The Functional Diagram of the Database System. The tools for the database provide vectorization of images, 3D-to-2D conversion, setting feature points of lines, grouping, making keyframes and interpolation between keyframes; the animation database holds standard model data, standard animation sequences and the animation sequence library; users combine their own model data with the stored sequences to produce animation sequences.)

3.1.2 Tools for the Database System

(1) Database creation tool
This tool is used for registering data in the database system. Its main function is converting input data into the reusable form. A conversion function from 3D model data to 2D models is included in this tool.

(2) Retrieval tool
This tool is used to select the requested standard animation sequence. Preview and frame-editing functions are also provided.

(3) Application tool
This tool has the following functions: deforming the standard model to the user's model, and adjusting and applying the motion to the user's model.

(4) Database management tool
This is the database maintenance tool. The following functions are provided: data addition, deletion, retrieval and backup.
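To make the preceding description more concrete, the following sketch shows one possible way to organize the three parts of the database and the tool functions in code. It is a minimal illustration under our own assumptions: every class name, field and path string below is hypothetical and is not taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# A minimal, hypothetical sketch of the "reusable form" records described in
# Section 3.1.1. All names and fields are assumptions for illustration only.

Point = Tuple[float, float]          # 2D coordinate in frame space

@dataclass
class VectorizedModel:
    """Standard model shape: polylines traced from a binary frame."""
    polylines: List[List[Point]]     # each polyline approximates pixels with straight segments
    feature_points: List[Point]      # correspondent feature points used for deformation

@dataclass
class MotionVector:
    """Displacement of one feature point between two consecutive frames."""
    feature_index: int
    direction: float                 # radians
    distance: float                  # pixels

@dataclass
class StandardSequence:
    """One database entry: reusable form plus the user-facing bitmap frames."""
    name: str                        # e.g. "human_locomotion/walking" (hypothetical naming)
    category: str                    # e.g. "Human locomotion", "Birds", ...
    model: VectorizedModel           # internal part, used for animation warping
    motion: List[List[MotionVector]] # one list of motion vectors per frame transition
    bitmap_frames: List[str]         # paths to the 2D monochrome bitmap images

class AnimationDatabase:
    """Toy stand-in for the creation, retrieval and management tools."""

    def __init__(self) -> None:
        self._entries: Dict[str, StandardSequence] = {}

    def add(self, seq: StandardSequence) -> None:
        """Creation/management tool: register a sequence."""
        self._entries[seq.name] = seq

    def retrieve(self, category: str) -> List[StandardSequence]:
        """Retrieval tool: list all sequences in a category."""
        return [s for s in self._entries.values() if s.category == category]

    def delete(self, name: str) -> None:
        """Management tool: remove a sequence."""
        self._entries.pop(name, None)
```

Under this toy layout, the retrieval tool corresponds to retrieve(), the creation tool would be responsible for filling in model and motion from the input bitmap frames, and the application tool would consume those two fields to warp a sequence onto a user's character.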
3.2 Provided Database Content

(1) Standard Animation Sequences
The following sequences are included in this part:

• Human locomotion: Running, Walking, Skipping, etc.
• Human facial expressions: Talking, Smiling, Surprised, Crying, Angry, etc.
• Quadruped animals: Dog, Cat, Sheep, Horse, etc.
• Birds: Pigeon, Seagull, Crow, Swan, Eagle, etc.
• Natural phenomena: Raining, Snowing, Wind, Surface of a river, Stream of a river, Fire flame, etc.

(2) Animation Library
A sample animation entitled "The Wind and Water Guardians" is in the library. This title lasts about five minutes. Users can edit this sample in their own manner to create new sequences. Some of the standard animation sequences, such as the natural phenomena, are also provided in usable form with color.

3.3 Making the Reusable Form

The standard animation sequence is given as a series of 2D binary image frames. When we create the standard animation database, we must convert them into the reusable form. The standard model shape is extracted from these binary images by the following steps.

(1) Vectorization
The 2D binary image is converted to a vector drawing using a vector tracing algorithm [18]. Pixels are approximated by straight line segments.

(2) Making correspondent feature points between frames
The correspondent feature points of lines are extracted from the vectorized images. Pattern matching algorithms are used to search for correspondences automatically. These feature points are used for deforming the standard model to the user's models.

(3) Extracting motion vectors from the series of frames
Pattern matching algorithms are also used in the process of extracting motions from a series of frames. Motion vectors are recorded as a set of direction and distance displacements of the feature points. We tried several algorithms for the pattern matching, such as template matching, line matching, relocation matching, structured matching [17] and radial basis functions (RBF) [16]. RBF is well known for its powerful interpolation and matching capability. We used Nagao's algorithm [18],[19] for detecting motion and correspondent feature points.

3.4 Animation Warping

When we apply the standard animation sequences to our own character (the target model), we must deform the shape of the standard model and modify its motion to fit the target model. This is the animation warping. The animation warping is performed in the following two steps.

(1) Deforming the shape of the standard model
The target model is vectorized in the same way as the standard model was converted. Next, the correspondences are searched and set from the reusable-format data of the standard animation sequence in the database.

(2) Motion warping
When we warp the motion to the target model, we must consider the displacement of the standard model caused by the deformation. The standard motion vector is adjusted to the target model in size and direction, and is fitted to the target model. Then the animation warping to the target model is complete. Figure 2 shows an overview of the animation warping. (Figure 2: Animation Warping Overview. The standard model and standard animation with their motion vectors are deformed, scaled and adjusted to produce the target model animation.)

4. RESULT

This project is still under way. The standard animation sequences are mostly made and ready to use. The animation warping succeeds between similar types of models.
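The motion-warping step of Section 3.4 adjusts each standard motion vector in size and direction so that it fits the deformed target model. The sketch below is a minimal interpretation of that idea, not the authors' implementation: it assumes the shape deformation can be summarized by two per-axis scale factors (scale_x, scale_y), whereas the paper derives the adjustment from feature-point correspondences (for example via RBF matching), and all function names are hypothetical.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def warp_motion_vector(direction: float, distance: float,
                       scale_x: float, scale_y: float) -> Tuple[float, float]:
    """Adjust one standard motion vector (direction, distance) to the target model.

    Hypothetical reading of Section 3.4: the displacement is expressed as a 2D
    vector, scaled by the deformation of the model, and converted back to
    (direction, distance) form.
    """
    dx = distance * math.cos(direction) * scale_x
    dy = distance * math.sin(direction) * scale_y
    return math.atan2(dy, dx), math.hypot(dx, dy)

def warp_frame(points: List[Point],
               vectors: List[Tuple[float, float]],
               scale_x: float, scale_y: float) -> List[Point]:
    """Apply the adjusted motion vectors to the target model's feature points."""
    out = []
    for (x, y), (direction, distance) in zip(points, vectors):
        d, r = warp_motion_vector(direction, distance, scale_x, scale_y)
        out.append((x + r * math.cos(d), y + r * math.sin(d)))
    return out

# Example: a standard model warped onto a target model that is 80% as tall
# (scale_y = 0.8) while keeping the horizontal proportions (scale_x = 1.0).
if __name__ == "__main__":
    target_points = [(10.0, 20.0), (15.0, 40.0)]
    standard_vectors = [(math.pi / 2, 5.0), (0.0, 3.0)]   # (direction, distance) pairs
    print(warp_frame(target_points, standard_vectors, 1.0, 0.8))
```

A closer analogue of the paper's approach would replace the global scale factors with the local deformation measured around each correspondent feature point.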
Recommended publications
  • The Uses of Animation
    The Uses of Animation

ANIMATION

Animation is the process of making the illusion of motion and change by means of the rapid display of a sequence of static images that minimally differ from each other. The illusion, as in motion pictures in general, is thought to rely on the phi phenomenon. Animators are artists who specialize in the creation of animation. Animation can be recorded with either analogue media (a flip book, motion picture film, video tape) or digital media, including formats such as animated GIF, Flash animation and digital video. To display animation, a digital camera, computer, or projector is used, along with new technologies as they are produced. Animation creation methods include the traditional animation creation method and those involving stop motion animation of two- and three-dimensional objects, paper cutouts, puppets and clay figures. Images are displayed in rapid succession, usually at 24, 25, 30, or 60 frames per second.

THE MOST COMMON USES OF ANIMATION

Cartoons: The most common use of animation, and perhaps the origin of it, is cartoons. Cartoons appear all the time on television and in the cinema and can be used for entertainment, advertising, presentations and many more applications that are only limited by the imagination of the designer. The most important factor about making cartoons on a computer is reusability and flexibility. The system that will actually do the animation needs to be such that all the actions that are going to be performed can be repeated easily, without much fuss from the side of the animator.
  • Multimodal Behavior Realization for Embodied Conversational Agents
    Multimed Tools Appl, DOI 10.1007/s11042-010-0530-2. Multimodal behavior realization for embodied conversational agents. Aleksandra Čereković & Igor S. Pandžić, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia (e-mail: [email protected]). © Springer Science+Business Media, LLC 2010.

Abstract: Applications with intelligent conversational virtual humans, called Embodied Conversational Agents (ECAs), seek to bring human-like abilities into machines and establish natural human-computer interaction. In this paper we discuss realization of ECA multimodal behaviors which include speech and nonverbal behaviors. We devise RealActor, an open-source, multi-platform animation system for real-time multimodal behavior realization for ECAs. The system employs a novel solution for synchronizing gestures and speech using neural networks. It also employs an adaptive face animation model based on the Facial Action Coding System (FACS) to synthesize face expressions. Our aim is to provide a generic animation system which can help researchers create believable and expressive ECAs. Keywords: Multimodal behavior realization; Virtual characters; Character animation system.

1 Introduction

The means by which humans can interact with computers is rapidly improving. From simple graphical interfaces, Human-Computer Interaction (HCI) has expanded to include different technical devices, multimodal interaction, social computing and accessibility for impaired people. Among the solutions which aim to establish natural human-computer interaction, Embodied Conversational Agents (ECAs) are the subject of considerable research. Embodied Conversational Agents are graphically embodied virtual characters that can engage in meaningful conversation with human users [5]. Their positive impacts in HCI have been proven in various studies [16] and thus they have become an essential element of...
  • Data Driven Auto-Completion for Keyframe Animation
    Data Driven Auto-completion for Keyframe Animation, by Xinyi Zhang, B.Sc., Massachusetts Institute of Technology, 2014. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the Faculty of Graduate and Postdoctoral Studies (Computer Science), The University of British Columbia (Vancouver), August 2018. © Xinyi Zhang, 2018. The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the thesis entitled: Data Driven Auto-completion for Keyframe Animation, submitted by Xinyi Zhang in partial fulfillment of the requirements for the degree of Master of Science in Computer Science. Examining Committee: Michiel van de Panne, Computer Science (Supervisor); Leonid Sigal, Computer Science (Second Reader).

Abstract: Keyframing is the main method used by animators to choreograph appealing motions, but the process is tedious and labor-intensive. In this thesis, we present a data-driven autocompletion method for synthesizing animated motions from input keyframes. Our model uses an autoregressive two-layer recurrent neural network that is conditioned on target keyframes. Given a set of desired keys, the trained model is capable of generating an interpolating motion sequence that follows the style of the examples observed in the training corpus. We apply our approach to the task of animating a hopping lamp character and produce a rich and varied set of novel hopping motions using a diverse set of hops from a physics-based model as training data. We discuss the strengths and weaknesses of this type of approach in some detail.

Lay Summary: Computer animators today use a tedious process called keyframing to make animations.
  • Forsíða Ritgerða (Thesis Cover Page)
    Faculty of Humanities (Hugvísindasvið), University of Iceland (Háskóli Íslands), Japanese Language and Culture. The Thematic and Stylistic Differences Found in Walt Disney and Osamu Tezuka: A Comparative Study of Snow White and Astro Boy. BA thesis in Japanese Language and Culture. Jón Rafn Oddsson, Kt.: 041089-2619. Advisor: Gunnella Þorgeirsdóttir. September 2013.

Abstract: This thesis will be a comparative study on animators Walt Disney and Osamu Tezuka with a focus on Snow White and the Seven Dwarfs (1937) and Astro Boy (1963) respectively. The focus will be on the difference in story themes and style as well as analyzing the different ideologies those two held in regards to animation. This will be achieved through examining and analyzing not only the history of both men but also the early history of animation in Japan and the USA respectively. Finally, a comparison of those two works will be done in order to show how the cultural differences affected the creation of those products.

Chapter Outline: 1. Introduction; 1.1 Introduction; 1.2 Thesis Outline; 2. Definitions and perception; 3. History of Anime in Japan; 3.1 Early Years
  • Multi-Modal Authoring Tool for Populating a Database of Emotional Reactive Animations
    Multi-modal Authoring Tool for Populating a Database of Emotional Reactive Animations. Alejandra García-Rojas, Mario Gutiérrez, Daniel Thalmann and Frédéric Vexo. Virtual Reality Laboratory (VRlab), École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland. {alejandra.garciarojas,mario.gutierrez,daniel.thalmann,frederic.vexo}@epfl.ch

Abstract. We aim to create a model of emotional reactive virtual humans. A large set of pre-recorded animations will be used to obtain such model. We have defined a knowledge-based system to store animations of reflex movements taking into account personality and emotional state. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that provides a solution to this problem. Our multimodal tool makes use of motion capture equipment, a handheld device and a large projection screen.

1 Introduction

Our goal is to create a model to drive the behavior of autonomous Virtual Humans (VH) taking into account their personality and emotional state. We are focused on the generation of reflex movements triggered by events in a Virtual Environment and modulated by inner variables of the VH (personality, emotions). We intend to build our animation model on the basis of a large set of animation sequences described in terms of personality and emotions. In order to store, organize and exploit animation data, we need to create a knowledge-based system (animations database). This paper focuses on the authoring tool that we designed for populating such animation database. We observe that the process of animating is inherently multi-modal because it involves many inputs such as motion capture (mocap) sensors and user control on an animation software.
  • Realistic Crowd Animation: a Perceptual Approach
    Realistic Crowd Animation: A Perceptual Approach, by Rachel McDonnell, B.A. (Mod). Dissertation presented to the University of Dublin, Trinity College, in fulfillment of the requirements for the Degree of Doctor of Philosophy. University of Dublin, Trinity College, November 2006.

Declaration: I, the undersigned, declare that this work has not previously been submitted as an exercise for a degree at this, or any other University, and that unless otherwise stated, is my own work. Rachel McDonnell, November 14, 2006.

Permission to Lend and/or Copy: I, the undersigned, agree that Trinity College Library may lend or copy this thesis upon request. Rachel McDonnell, November 14, 2006.

Acknowledgments: First and foremost, I would like to thank my supervisor Carol O'Sullivan, for all her help, guidance, advice, and encouragement. I couldn't imagine a better supervisor! Secondly, I would like to thank the past and present members of the ISG, for making the last four years very enjoyable. In particular, Ann McNamara for her enthusiasm and for advising me to do a PhD. Also, to all my friends and in particular my housemates for taking my mind off my research in the evenings! A special thank you to Mam, Dad and Sarah for all the encouragement and interest in everything that I do. Finally, thanks to Simon for helping me through the four years, and in particular for all the support in the last few months leading up to submitting. Rachel McDonnell, University of Dublin, Trinity College, November 2006.

Realistic Crowd Animation: A Perceptual Approach. Publication No. Rachel McDonnell, Ph.D.
  • An Online Database of Animal Motions
    THE CHIMAERA PROJECT: AN ONLINE DATABASE OF ANIMAL MOTIONS A Thesis by JULIE KATHERINE GELÉ Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE December 2007 Major Subject: Visualization Sciences THE CHIMAERA PROJECT: AN ONLINE DATABASE OF ANIMAL MOTIONS A Thesis by JULIE KATHERINE GELÉ Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE Approved by: Chair of Committee, Carol LaFayette Committee Members, Ergun Akleman B. Stephen Carpenter, II Head of Department, Mark J. Clayton December 2007 Major Subject: Visualization Sciences iii ABSTRACT The Chimaera Project: An Online Database of Animal Motions. (December 2007) Julie Katherine Gelé, B.S., Texas A&M University Chair of Advisory Committee: Prof. Carol LaFayette Digital animators will save vast amounts of project time by starting with a completed skeleton and some base animations. This result can be accomplished with Web 2.0 technologies by creating a repository of skeletons and animations that any animator may use for free. While free Maya™ skeletons currently exist on the Internet, the websites housing them have only brief features and functions for browsing and interacting with these files. None of these websites contain downloadable animations for the provided skeletons. The Chimaera Project improves the field of Web 2.0 sites offering free rigs by offering many new features and freedoms to the animation community. Users may upload and download Maya™ skeletons, share comments and tips with each other, upload animations associated with the skeletons, and search or browse the skeletons in a variety of ways.
  • Near Optimal Synthesis of Digital Animation from Multiple Motion Capture Clips
    Università degli Studi di Milano, Facoltà di Scienze Matematiche, Fisiche e Naturali. Near Optimal Synthesis of Digital Animation from Multiple Motion Capture Clips. Master's thesis in Information and Communication Technologies (Corso di Laurea Magistrale in Tecnologie dell'Informazione e della Comunicazione). Marco Alamia, student no. 736779. Advisor: Prof. Ing. Alberto N. Borghese. Co-advisor: Dott. Iuri Frosio. Academic year 2008/2009.

ABSTRACT: The present work discusses the theory and practice of a powerful animation method designed to generate walk animations for digital characters. Our system permits interaction with the animation, allowing the user to change several animation parameters at run time, such as the motion direction or the gait style, influencing the final animation. Our work starts by presenting the skeleton animation system and the motion capture system; then we explain how, thanks to these two techniques, we can generate a database of walk animation clips. The animations so obtained are provided to the animation system, which links them together in a sequence. Linking is obtained by generating a smooth transition from one clip to the next through the use of the animation blending technique. Since we want to interact with the animation at run time, the clip sequence is not given a priori; instead it is generated in real time according to the user's desires. To this aim we create a controller in charge of choosing the next clip with respect to the task it is given, which can be simulating a walk along a line, or simulating a walk where the motion direction and the character's orientation are required by the user. The controller leans on a selection policy in order to choose the next clip; in our work we propose two possible policies, a greedy one and a near-optimal one.
  • Spatial Motion Doodles: Sketching Animation in VR Using Hand Gestures and Laban Motion Analysis Maxime Garcia, Rémi Ronfard, Marie-Paule Cani
    Spatial Motion Doodles: Sketching Animation in VR Using Hand Gestures and Laban Motion Analysis. Maxime Garcia, Rémi Ronfard, Marie-Paule Cani. To cite this version: Maxime Garcia, Rémi Ronfard, Marie-Paule Cani. Spatial Motion Doodles: Sketching Animation in VR Using Hand Gestures and Laban Motion Analysis. MIG 2019 - ACM SIGGRAPH Conference on Motion, Interaction and Games, Oct 2019, Newcastle upon Tyne, United Kingdom. 10.1145/3359566.3360061. HAL Id: hal-02303803, https://hal.archives-ouvertes.fr/hal-02303803, submitted on 9 Jan 2020. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Authors: Maxime Garcia and Rémi Ronfard, Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LJK, 38000 Grenoble, France; Marie-Paule Cani, Ecole Polytechnique, CNRS, LIX, 91128 Palaiseau, France.

Figure 1: Left: A user drawing a spatial motion doodle (SMD), which is the six-dimensional trajectory of a moving frame (position and orientation), here attached to the HTC Vive controller. Right: The SMD is parsed into a string of motion tokens, allowing the system to recognize actions and extract the associated motion qualities.
  • Automatic Hand-Over Animation for Free-Hand Motions from Low Resolution Input
    Automatic hand-over animation for free-hand motions from low resolution input. Chris Kang (1), Nkenge Wheatland (1), Michael Neff (2), and Victor Zordan (1). (1) University of California, Riverside; (2) University of California, Davis.

Abstract. Hand-over animation is the process by which hand animation is added to existing full-body motion. This paper proposes a technique for automatically synthesizing full-resolution, high quality free-hand motion based on the capture of a specific, select small number of markers. Starting from a large full-resolution hand motion corpus, our technique extracts a finite pose database and selects the marker sets offline, based on user-defined inputs. For synthesis, capture sequences that include this marker set drive a reconstruction process that results in a full resolution of the hand through the aid of the pose database. This effort addresses two distinct issues: first, how to objectively select which is the best marker set based on a fixed number of desired markers and, second, how to perform reconstruction from this data set automatically. Findings on both of these fronts are reported in this paper. Keywords: Character animation; motion capture; hand animation.

1 Introduction

Hand-over is a term used in the animation industry to refer to the process of adding hand animation to pre-existing full-body motion. Hand motion is a critical part of many animations in which a full-body character is present. However, animation of the hand can be difficult, especially where realism is important. While high-quality, full-body motion capture is a popular means for animating realistic characters, hand animation is most often not recorded at the same time as the motion of the full body, for a number of reasons.
  • Digital Sparks 03
    digital sparks 03: Competition for Student Media Projects (media art / media design / media informatics / media staging and mediation). >digital sparks< is a competition for students of media art, media design and media informatics, as well as for students in the field of media staging and mediation. The aim of the competition is to support up-and-coming talent in media culture and, at the same time, to give an insight into research and teaching at German-speaking universities. >digital sparks< is a module of the Internet platform netzspannung.org, the knowledge space for digital art and culture. The MARS research group (Media Arts & Research Studies) at the Fraunhofer Institute for Media Communication, headed by Monika Fleischmann and Wolfgang Strauss, is responsible for the conception and realization of the competition. The German Federal Ministry of Education and Research funds the development of netzspannung.org. http://netzspannung.org/digital-sparks

MARS – Media Arts & Research Studies | Monika Fleischmann | Schloss Birlinghoven | D-53754 Sankt Augustin | [email protected] | http://www.imk.fraunhofer.de/mars | http://netzspannung.org

Contents: Greeting (Christoph Ehrenberg); >digital sparks< 2003: What happened? What comes next? Taking stock and looking ahead; The competition >digital sparks<: concept and realization; Jury / pre-jury; The prize winners: statement of the jury; how-to-bow.com; Loser Raum; machines will eat itself; The nominated...
  • Data-Driven Approach to Synthesizing Facial Animation Using Motion Capture
    Animation of Natural Virtual Characters: Data-Driven Approach to Synthesizing Facial Animation Using Motion Capture. Kerstin Ruhland, Mukta Prasad, and Rachel McDonnell, Trinity College Dublin.

In the entertainment industry, cartoon animators have a long tradition of creating expressive characters that are highly appealing to audiences. It is particularly important to get right the movement of the face, eyes, and head because they are the main communicators of the character's internal state and emotions and an indication of its level of engagement with others. However, creating these highly appealing animations is a labor-intensive task that is not supported by automatic tools. Reducing the animator's effort and speeding up this process, while allowing for artistic style and creativity, would be invaluable to production companies that create...

[Summary] Producing cartoon animations is a laborious task, and there is a distinct lack of automatic tools to help animators, particularly with creating facial animation. To speed up and ease this process, the proposed method uses real-time video- [...] of the refining stage, with easily editable curves for final polish by an animator.

The proposed approach replaces the acting and blocking phases by recording the artists' movements in front of a real-time video-based motion tracking and retargeting system (see Figure 1). We use a commercially available and affordable real-time facial-retargeting system for this stage, which creates the initial motion curves of the character animation. These motion curves contain realistic human movements that are typically dense in keyframes, making them difficult for an animator to edit. Our approach thus focuses on the next stage of the pipeline. The initial motion curves of the character animation provide the input to our algorithm.