Glift: Generic Data Structures for Graphics Hardware


Glift: Generic Data Structures for Graphics Hardware

By
AARON ELIOT LEFOHN
B.A. Chemistry (Whitman College) 1997
M.S. Physical Chemistry (University of Utah) 2001
M.S. Computer Science (University of Utah) 2004

DISSERTATION
Submitted in partial satisfaction of the requirements for the degree of
DOCTOR OF PHILOSOPHY
in Computer Science
in the
OFFICE OF GRADUATE STUDIES
of the
UNIVERSITY OF CALIFORNIA, DAVIS

Approved: Committee in Charge
2006

Copyright 2006 by Aaron Eliot Lefohn

To Karen and Kaia.

Acknowledgments

I am humbled by the large number of people who have directly and indirectly contributed to this thesis. Throughout my time at UC Davis, I have had the great fortune to work with an outstanding collection of colleagues at UC Davis, Pixar, the University of Utah, Stanford, NVIDIA, the University of North Carolina, the University of Virginia, and elsewhere.

I begin by thanking my advisor, John Owens. I consider the most important traits in a graduate advisor to be trust and a passionate interest in my work. John has provided everything I needed to succeed and repeatedly put my best interests before his own. His passion for and interest in my work propelled me through the inevitable low points of research and encouraged me to take the risks required for true innovation. I cannot thank John enough for promoting my work and giving me the freedom to explore, even when it meant that I published four papers that are not part of this thesis.

I also want to thank my committee members, Ken Joy, Bernd Hamann, Oliver Staadt, and Nina Amenta, for their feedback and comments. I additionally want to thank Ken Joy for his vision and continued determination to build the UC Davis Graphics Lab (officially known as the Institute for Data Analysis and Visualization, or IDAV) into a top-tier graphics and visualization research institute.

Next, I would like to thank my coauthors on the work that is presented in this thesis: Joe Kniss, Shubhabrata "Shubho" Sengupta, Robert Strzodka, and Michael Kass. Joe Kniss entered my life as my GPU programming mentor at the University of Utah in 2002. Since then, Joe and I have published thirteen papers, sketches, technical reports, and conference tutorials together. Joe has been an integral part of the inspiration, design, and implementation of Glift, introduced me to generic programming, and implemented most of the octree 3D paint application. I cannot thank Joe enough for his friendship, time, and brilliance.

Shubho Sengupta has morphed from a beginning OpenGL programmer into an invaluable contributor of core algorithms and implementations over the course of the last two years. Shubho implemented large portions of several versions of the adaptive shadow map, resolution-matched shadow map, and hybrid parallel scan algorithms. Shubho also helped significantly with the theory section for the depth-of-field application. I thank him very much for his hard work, dedication, and insights.

I want to thank Robert Strzodka at Stanford University for his rigor and for being an integral part of the adaptive data structure development, especially with respect to correct filtering and the node-centered representation.
Robert has been involved with the adaptive grid work from the beginning, has contributed substantially to the Glift code base, and continues to push the usage and applications of the Glift work with his research on GPU-based adaptive partial differential equation solvers.

Fabio Pellacini, an assistant professor at Dartmouth College and formerly at Pixar Animation Studios, has had a large impact on the octree 3D paint and shadow applications in this thesis. It was Fabio who first pointed out that we might be able to implement adaptive shadow maps with an adaptation of our octree structure. Fabio has been a tremendous role model over the last three years, and I want to thank him for his mentorship, career advice, and research brainstorming sessions. We've mulled over many great and not-so-great ideas together, and I greatly value our time together.

I am honored to have had the opportunity to work with Michael Kass at Pixar Animation Studios on the depth-of-field project. Michael is credited with the heat diffusion model for depth-of-field. I want to thank him very much for his mentorship during our collaboration. I am indebted to the other contributors to the depth-of-field project, including the following people from Pixar Animation Studios: Mark Adams built the flags and fence model, John Warren provided shots from Cars, Mark Meyer and Kiril Vidimce integrated the algorithm into Pixar's production tools, and Rob Cook and Tony DeRose supported the project from conception through completion, including the collaboration with UC Davis. In addition, I want to thank Mike Kirby at the University of Utah for valuable discussions on parallel tridiagonal solver techniques.

In addition to the people mentioned so far, my colleagues at Pixar have had an enormous impact on my career and thinking. I want to thank Kiril Vidimce and Mark Leone for their support, friendship, and belief in my work during my time at Pixar. I also want to thank Dana Batali, Wayne Wooten, David Laur, Jonathan Shade, and the other members of the RenderMan team for including me in the group, supporting my work, and engaging me in wonderful arguments. I've learned a tremendous amount from working with the team and thank you very, very much for the opportunity.

The Glift work simply would not have been possible without the unprecedented level of support we received from NVIDIA. Nick Triantos tirelessly answered questions about GPU architecture and provided pre-release display drivers; Craig Kolb and Cass Everitt both added features to the Cg runtime for Glift and spent many extra hours providing support to the project; and David Kirk generously provided GPU hardware and was a strong advocate for my work. I want to give a special thanks to Randima Fernando at NVIDIA, the inventor of the adaptive shadow map algorithm, for being willing to answer many questions about his work and being supportive of our work. Having Randy as a resource proved absolutely invaluable as we first adapted, then revised and improved upon, his groundbreaking work.

In addition, a number of other people have contributed to the Glift data structure work. Ross Whitaker and Milan Ikits at the University of Utah provided early feedback and contributions to the adaptive data structure idea. James Ahrens and Patrick McCormick at the Advanced Computing Laboratory (ACL) at Los Alamos National Lab have contributed significantly to my appreciation and understanding of data-parallel algorithmic primitives.
Similarly, I want to thank Mark Harris at NVIDIA for many stimulating discussions about data-parallel algorithms, data structures, and machine models. Daniel Horn at Stanford provided detailed support during our implementation of his stream compaction algorithm as well as early feedback and encouragement on our improved algorithm. I also thank Kayvon Fatahalian and Mike Houston at Stanford for their constructive criticism and encouragement on Glift and the adaptive/resolution-matched shadow work. I also want to thank Naga Govindaraju at the University of North Carolina for providing source code for a custom version of GPUSort amidst the SIGGRAPH crunch. I also thank the following additional people for providing feedback on the work throughout its development: David Blythe at Microsoft, Ian Buck at NVIDIA, Dominik Göddeke at Dortmund University, Matt Papakipos at PeakStream, Matt Pharr at Neoptica, Mark Segal at ATI, Peter-Pike Sloan at Microsoft, Dan Wexler at NVIDIA, Yao Zhang at the Beijing Institute of Technology, and the anonymous reviewers.

I owe thanks to a number of my lab-mates in the Institute for Data Analysis and Visualization (IDAV) at UC Davis for their friendship and help throughout this work. Adam Moerschell at UC Davis selflessly agreed to make all of the movies for papers, sketches, and talks related to Glift. Yong Kil, Chris Co, Shubho Sengupta, Taylor Holliday, and Louis Feng provided various 3D models for the shadow and depth-of-field projects. I also want to thank, in no particular order, Chris Co, Yong Kil, Brian Budge, Serban Porumbescu, and Ben Gregorski for their friendship and support during my time in IDAV.

A National Science Foundation (NSF) Graduate Fellowship funded all of the work presented in this dissertation. The NSF fellowship has given me pure and complete intellectual freedom throughout my Ph.D. Most of this work simply would not have been possible without the fellowship. I have truly reveled in the freedom gained by having my own research funding. Additional funding for travel, equipment, and conference costs was provided by grants from the Department of Energy, Chevron, and Los Alamos National Laboratory.

Lastly, I want to thank my family. My parents, Allen and Phyllis Lefohn, continue to be my role models for living life with passion, creativity, and hard work. More than anyone else, however, I want to thank my wife, Karen, and daughter, Kaia, for their patience and support throughout this very long journey. They have sacrificed many nights without me, yet all of this would be for nothing without them. They provide me with indescribable wisdom, perspective, and love.

Contents

List of Figures
List of Tables
Abstract

I Introduction

1 Introduction
  1.1 Graphics Processor Background
  1.2 Glift
    1.2.1 Example
  1.3 GPU Data Structures
  1.4 Applications
    1.4.1 Octree 3D Paint
    1.4.2 Adaptive Shadow Maps
    1.4.3 Resolution-Matched Shadow Maps
    1.4.4 Depth of Field
  1.5 Publications