Sony Pictures Imageworks Arnold
Recommended publications
The Uses of Animation 1
ANIMATION: Animation is the process of making the illusion of motion and change by means of the rapid display of a sequence of static images that minimally differ from each other. The illusion, as in motion pictures in general, is thought to rely on the phi phenomenon. Animators are artists who specialize in the creation of animation. Animation can be recorded with either analogue media, such as a flip book, motion picture film, or video tape, or digital media, including formats such as animated GIF, Flash animation, and digital video. To display animation, a digital camera, computer, or projector is used, along with newer technologies as they emerge. Animation creation methods include the traditional animation method and those involving stop motion animation of two- and three-dimensional objects, paper cutouts, puppets, and clay figures. Images are displayed in rapid succession, usually at 24, 25, 30, or 60 frames per second.
THE MOST COMMON USES OF ANIMATION: Cartoons. The most common use of animation, and perhaps the origin of it, is cartoons. Cartoons appear all the time on television and in the cinema and can be used for entertainment, advertising, presentations, and many more applications that are only limited by the imagination of the designer. The most important factor about making cartoons on a computer is reusability and flexibility. The system that will actually do the animation needs to be such that all the actions that are going to be performed can be repeated easily, without much fuss from the side of the animator.
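As a quick worked example of the frame rates quoted above (not taken from the source text, just arithmetic on its numbers), the snippet below computes how long each frame stays on screen and how many images a short clip needs at each common rate; the 90-second clip length is an arbitrary assumption.

```python
# Frame display time and image count for a short clip at the common frame rates.
clip_seconds = 90  # arbitrary example length
for fps in (24, 25, 30, 60):
    frame_ms = 1000.0 / fps            # how long one frame stays on screen
    total_frames = fps * clip_seconds  # images needed for the whole clip
    print(f"{fps} fps: ~{frame_ms:.1f} ms per frame, "
          f"{total_frames} images for {clip_seconds} s")
```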
L'animation Stylisée
Université Paris 8, Master Création Numérique, parcours : Arts et Technologies de l'Image Virtuelle. L'animation stylisée. Clea Gonay. Mémoire de Master 2, 2019-2020.
Abstract: Animation is a perfect medium to abstract reality and to simplify forms and movements. Thanks to its own visual language and aesthetic values, stylised animation conveys meaning as well as amplifies the spectator's emotions. Why and how are stylised animated movies made? Using available literature, my own experiences and a series of interviews made with people involved in the industry, this thesis takes a philosophical, anthropological and technical approach to the principles and codes of stylised animation. It highlights the reasons why a director would choose this route and explores the possibilities of the available tools while presenting related 2D, 3D and hybrid works. Finally, to illustrate my point, I present the practical work I produced while studying the subject.
Surface Recovery: Fusion of Image and Point Cloud
Surface Recovery: Fusion of Image and Point Cloud. Siavash Hosseinyalamdary, Alper Yilmaz. The Ohio State University, 2070 Neil Avenue, Columbus, Ohio, USA 43210. pcvlab.engineering.osu.edu
Abstract: The point cloud of the laser scanner is a rich source of information for high level tasks in computer vision such as traffic understanding. However, cost-effective laser scanners provide noisy and low resolution point clouds and they are prone to systematic errors. In this paper, we propose two surface recovery approaches based on geometry and brightness of the surface. The proposed approaches are tested in realistic outdoor scenarios and the results show that both approaches have superior performance over the state-of-the-art methods.
1. Introduction: Point cloud is a valuable source of information for scene … construct surfaces. Implicit (or volumetric) representation of a surface divides the three dimensional Euclidean space into voxels, and the value of each voxel is defined based on an indicator function which describes the distance of the voxel to the surface. The value of every voxel inside the surface has negative sign, the value of the voxels outside the surface is positive, and the surface is represented as the zero crossing values of the indicator function. Unfortunately, this representation is not applicable to open surfaces and some modifications should be applied to reconstruct open surfaces. The least squares and partial differential equations (PDE) based approaches have also been developed to implicitly reconstruct surfaces. The moving least squares (MLS) [22, 1] and Poisson surface reconstruction [16], which have been used in this paper for comparison, are particularly popular. Lim and Haron review different surface reconstruction techniques in more detail [20].
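To make the implicit (volumetric) representation described above concrete, here is a small illustrative sketch, not code from the paper: it voxelizes a cube of space, fills each voxel with a signed indicator value for a sphere (negative inside, positive outside), and flags the zero-crossing voxels that represent the surface. The sphere, the grid resolution, and the NumPy implementation are assumptions made purely for the example.

```python
import numpy as np

# Voxelize a cube of space: 64^3 samples in [-1, 1]^3.
n = 64
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

# Indicator function for a sphere of radius 0.5: signed distance to the
# surface, negative inside the surface and positive outside it.
radius = 0.5
indicator = np.sqrt(x**2 + y**2 + z**2) - radius

def zero_crossing(f):
    """Flag voxels whose indicator sign differs from a neighbour along any axis."""
    crossings = np.zeros(f.shape, dtype=bool)
    for ax in range(3):
        a = np.swapaxes(f, 0, ax)
        sign_change = np.signbit(a[:-1]) != np.signbit(a[1:])
        c = np.swapaxes(crossings, 0, ax)  # view into crossings
        c[:-1] |= sign_change
    return crossings

surface_voxels = zero_crossing(indicator)
print("voxels on the zero crossing:", int(surface_voxels.sum()))
```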
Experimental Validation of Autodesk 3ds Max Design 2009 and Daysim
Experimental Validation of Autodesk® 3ds Max® Design 2009 and Daysim 3.0. NRC Project # B3241. Submitted to: Autodesk Canada Co., Media & Entertainment. Submitted by: Christoph Reinhart (1,2). 1) National Research Council Canada - Institute for Research in Construction (NRC-IRC), Ottawa, ON K1A 0R6, Canada (2001-2008). 2) Harvard University, Graduate School of Design, Cambridge, MA 02138, USA (2008- ). February 12, 2009.
Table of Contents: Abstract; 1 Introduction; 2 Methodology; 2.1 Daylighting Test Cases; 2.2 Daysim Simulations; 2.3 Autodesk 3ds Max Design Simulations; 3 Results; 3.1 Façade Illuminances; 3.2 Base Case (TC1) and Lightshelf (TC2)
Open Source Software for Daylighting Analysis of Architectural 3D Models
19th International Congress on Modelling and Simulation, Perth, Australia, 12–16 December 2011. http://mssanz.org.au/modsim2011
Open Source Software for Daylighting Analysis of Architectural 3D Models. Terrance Mc Minn (a). (a) Curtin University of Technology School of Built Environment, Perth, Australia. Email: [email protected]
Abstract: This paper examines the viability of using open source software for the architectural analysis of solar access and overshading of building projects. For this paper, open source software also includes freely available closed source software. The Computer Aided Design software Google SketchUp (Free), while not open source, is included as it is freely available, though with restricted import and export abilities. A range of software tools is used to provide an effective procedure to aid the architect in understanding the scope of sun penetration and overshadowing on a site and within a project. The technique can also be used for lighting analysis of both external (to the building) and internal spaces. An architectural model built in the SketchUp (free) CAD software is exported in two different forms for the Radiance Lighting Simulation Suite to provide the lighting analysis. The different export formats allow the 3D CAD model to be accessed directly via Radiance for full lighting analysis, or via the Blender animation program for a graphical user interface with a limited set of analysis options. The Blender Modelling Environment for Architecture (BlendME) add-on exports the model and runs Radiance in the background.
Keywords: Lighting Simulation, Open Source Software
INTRODUCTION: The use of daylight in buildings has the potential to reduce energy demands and increase the thermal comfort and well being of the buildings' occupants (Cutler, Sheng, Martin, Glaser, et al., 2008), (Webb, 2006), (Mc Minn & Karol, 2010), (Yancey, n.d.) and others.
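The workflow described above (export the SketchUp model to a Radiance scene description, then run Radiance for the lighting analysis) can be scripted. The sketch below is a minimal, hypothetical illustration of such a pipeline driven from Python: it assumes the geometry and materials have already been exported as model.rad and materials.rad and that the Radiance command-line tools are installed; the file names, site coordinates, and rtrace options shown are placeholder assumptions rather than values from the paper.

```python
import subprocess

def run(cmd, stdin=None, stdout=None):
    """Run one Radiance command-line tool and stop on any error."""
    subprocess.run(cmd, stdin=stdin, stdout=stdout, check=True)

# 1. Generate a sunny CIE sky for a given month/day/hour and site
#    (a complete sky file also needs the usual sky/ground glow sources appended).
with open("sky.rad", "w") as sky:
    run(["gensky", "12", "21", "12", "+s",
         "-a", "-32.0", "-o", "-115.9", "-m", "-120"], stdout=sky)

# 2. Compile sky + materials + exported SketchUp geometry into an octree.
with open("scene.oct", "wb") as octree:
    run(["oconv", "sky.rad", "materials.rad", "model.rad"], stdout=octree)

# 3. Compute irradiance at sensor points ("x y z dx dy dz" per line in points.txt).
with open("points.txt") as pts, open("irradiance.dat", "w") as out:
    run(["rtrace", "-h", "-I", "-ab", "2", "scene.oct"], stdin=pts, stdout=out)

# 4. Convert RGB irradiance to illuminance (lux) with the standard
#    Radiance luminous-efficacy weighting.
for line in open("irradiance.dat"):
    r, g, b = map(float, line.split()[:3])
    print(f"{179.0 * (0.265 * r + 0.670 * g + 0.065 * b):.1f} lux")
```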
An Advanced Path Tracing Architecture for Movie Rendering
RenderMan: An Advanced Path Tracing Architecture for Movie Rendering. PER CHRISTENSEN, JULIAN FONG, JONATHAN SHADE, WAYNE WOOTEN, BRENDEN SCHUBERT, ANDREW KENSLER, STEPHEN FRIEDMAN, CHARLIE KILPATRICK, CLIFF RAMSHAW, MARC BANNISTER, BRENTON RAYNER, JONATHAN BROUILLAT, and MAX LIANI, Pixar Animation Studios
Fig. 1. Path-traced images rendered with RenderMan: Dory and Hank from Finding Dory (© 2016 Disney•Pixar). McQueen's crash in Cars 3 (© 2017 Disney•Pixar). Shere Khan from Disney's The Jungle Book (© 2016 Disney). A destroyer and the Death Star from Lucasfilm's Rogue One: A Star Wars Story (© & ™ 2016 Lucasfilm Ltd. All rights reserved. Used under authorization.)
Pixar's RenderMan renderer is used to render all of Pixar's films, and by many film studios to render visual effects for live-action movies. RenderMan started as a scanline renderer based on the Reyes algorithm, and was extended over the years with ray tracing and several global illumination algorithms. This paper describes the modern version of RenderMan, a new architecture for an extensible and programmable path tracer with many features that are essential to handle the fiercely complex scenes in movie production. Users can write their own materials using a bxdf interface, and their own light transport algorithms using an integrator interface – or they can use the materials and light transport algorithms provided with RenderMan.
1 INTRODUCTION: Pixar's movies and short films are all rendered with RenderMan. The first computer-generated (CG) animated feature film, Toy Story, was rendered with an early version of RenderMan in 1995. The most recent Pixar movies – Finding Dory, Cars 3, and Coco – were rendered using RenderMan's modern path tracing architecture. The two left images in Figure 1 show high-quality rendering of two challenging CG movie scenes with many bounces of specular reflections and …
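The abstract's key architectural idea is the separation between a bxdf (material) interface and an integrator (light transport) interface. The sketch below illustrates that separation with a deliberately toy setup: a single diffuse surface inside a uniform environment, and an integrator that random-walks a path until it escapes. The class names, method signatures, and scene model are invented for this illustration; they are not RenderMan's actual plug-in API.

```python
import random

class Bxdf:
    """Material interface: how a surface scatters a path at a bounce."""
    def sample_weight(self):
        """Return the throughput multiplier for one sampled bounce."""
        raise NotImplementedError

class Lambert(Bxdf):
    def __init__(self, albedo):
        self.albedo = albedo
    def sample_weight(self):
        # With cosine-weighted direction sampling, bxdf * cos / pdf collapses
        # to the albedo, so the per-bounce weight is just the albedo.
        return self.albedo

class Integrator:
    """Light transport interface: builds pixel estimates out of bxdf samples."""
    def estimate(self, bxdf, rng):
        raise NotImplementedError

class ToyPathTracer(Integrator):
    """Random walk: at each bounce the path either escapes to a uniform
    environment of radiance 1 (probability p_escape) or hits the diffuse
    surface again and is attenuated by the bxdf weight."""
    def __init__(self, p_escape=0.5, max_bounces=16):
        self.p_escape = p_escape
        self.max_bounces = max_bounces
    def estimate(self, bxdf, rng):
        throughput = 1.0
        for _ in range(self.max_bounces):
            throughput *= bxdf.sample_weight()
            if rng.random() < self.p_escape:
                return throughput        # path reached the environment light
        return 0.0                       # path cut off before escaping

rng = random.Random(0)
integrator = ToyPathTracer()
material = Lambert(albedo=0.8)
estimate = sum(integrator.estimate(material, rng) for _ in range(100_000)) / 100_000
print("Monte Carlo:", round(estimate, 4))
# Escape exactly at bounce k contributes albedo^k, so the exact value is
# p*a / (1 - (1-p)*a).
print("analytic   :", round(0.5 * 0.8 / (1.0 - 0.5 * 0.8), 4))
```

The point of the split is that the integrator never needs to know which Bxdf subclass it is driving, which is what lets users plug in their own materials or their own light transport independently.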
Seattle Location Management and Scouting
Mark A. Freid – Seattle Location Management and Scouting. Field Producing. (310) 770-3930. [email protected]. www.northernlocs.com
QUALIFICATIONS: Teamster Local 399 Location Manager. CSATF Safety Passport.
LOCATION SCOUTING AND MANAGEMENT 2001-2017, FEATURE FILM/TELEVISION:
War for Planet of the Apes. 20th Century Fox. VFX Plate Unit Location Manager. Ryan Stafford, VFX Producer.
50 Shades Freed, 50 Shades Darker. Focus Features. 2nd Unit Location Manager. Scott Ateah, 2nd Unit Dir. Barbra Kelly, UPM. Marcus Viscidi, Producer.
Grey's Anatomy. ABC/Disney. Location Scout. Thomas Barg, Production Supervisor.
Vacation. Warner Brothers/New Line. 2nd Unit Location Manager. Peter Novak, 2nd Unit Production Supervisor.
The Librarians. Electric Entertainment. Location Scout. Bobby Warberg, Location Manager. David Drummond, Location Scout.
50 Shades of Grey. Focus Features. 2nd Unit Location Manager. David Wasco, Production Designer. Barbra Kelly, UPM. Marcus Viscidi, Producer.
Transformers 4. Paramount Pictures. Location Manager. JJ Hook, Location Manager. Daren Hicks, Production Supervisor.
They Die By Dawn. Location Manager. Jaymes Seymor, Director. Peter Novak, Producer.
America's Most Wanted. STF Productions. Location Manager. Miles Perman, Producer.
Paranormal Activity The Marked One. Paramount Pictures. Location Scout. Christopher Landon, Director. Stephenson Crossley, Location Manager.
Hayden Lake. Location Manager. Ryan Page, Christopher Pomerenke, Directors. Linette Shorr, Production Designer. Lacey Leavitt, Producer.
Rampart. Lightstream Pictures. Location Manager. Oren Moverman, Director. David Wasco, Production Designer. Karen Getchell, Production Supervisor. Ben Foster, Lawrence Inglee, Clark Peterson, Ken Kao, Producers. Michael DiFranco and Lila Yacoub, Executive Producers.
Late Autumn. Location Manager. Kim Tae-Yong, Director. Dave Drummond, Co-Location Manager. Mischa Jakupcak, Producer.
The Details. Key Assistant Location Manager. Doug duMas, Location Manager.
Getting Started (PDF)
GETTING STARTED: Photo Realistic Renders of Your 3D Models. Available for Ver. 1.01. © Kerkythea 2008 Echo. Date: April 24th, 2008. Written by: The KT Team.
Preface: Kerkythea is a standalone render engine, using physically accurate materials and lights, aiming for the best quality rendering in the most efficient timeframe. The target of Kerkythea is to simplify the task of quality rendering by providing the necessary tools to automate scene setup, such as staging using the GL real-time viewer, material editor, general/render settings, editors, etc., under a common interface. Now reaching its 4th year of development and gaining popularity, I want to believe that KT can now be considered among the top freeware/open source render engines and can be used for both academic and commercial purposes. At the beginning of 2008, we have a strong and rapidly growing community and a website that is more "alive" than ever! KT 2008 Echo is a very powerful release with a lot of improvements. Kerkythea has grown constantly over the last year, becoming a standard rendering application among architectural studios and one extensively used within educational institutes. Of course there are a lot of things that can be added and improved, but we are really proud of reaching a high quality and stable application that is more than usable for commercial purposes at an amazing zero cost! Ioannis Pantazopoulos, January 2008.
As the heading says, this is a Getting Started step-by-step guide, designed to get you started using Kerkythea 2008 Echo.
Production Global Illumination
CIS 497 Senior Capstone Design Project: Project Proposal Specification. Instructors: Norman I. Badler and Aline Normoyle. Production Global Illumination. Yining Karl Li. Advisor: Norman Badler and Aline Normoyle. University of Pennsylvania.
ABSTRACT: For my project, I propose building a production quality global illumination renderer supporting a variety of complex features. Such features may include some or all of the following: texturing, subsurface scattering, displacement mapping, deformational motion blur, memory instancing, diffraction, atmospheric scattering, sun & sky, and more. In order to build this renderer, I propose beginning with my existing massively parallel CUDA-based pathtracing core and then exploring methods to accelerate pathtracing, such as GPU-based stackless KD-tree construction and multiple importance sampling. I also propose investigating and possibly implementing alternate GI algorithms that can build on top of pathtracing, such as progressive photon mapping and multiresolution radiosity caching. The final goal of this project is to finish with a renderer capable of rendering out highly complex scenes and animations from programs such as Maya, with high quality global illumination, in a timespan measuring in minutes rather than hours. Project Blog: http://yiningkarlli.blogspot.com
While competing with powerhouse commercial renderers like Arnold, Renderman, and Vray is beyond the scope of this project, hopefully the end result will still be feature-rich and fast enough for use as a production academic renderer suitable for use cases similar to those of Cornell's Mitsuba renderer, or PBRT.
1. INTRODUCTION: Global illumination, or GI, is one of the oldest rendering problems in computer graphics. In the past two decades, a variety of both biased and unbiased solutions to the GI …
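One of the acceleration techniques named above, multiple importance sampling (MIS), combines samples drawn from different strategies for the same integral by weighting each sample, most commonly with the balance heuristic. The snippet below is a generic, self-contained illustration of that weighting on a 1D toy integral; it is not code from the proposed renderer, and the integrand and the two sampling strategies are arbitrary choices for the demo.

```python
import math
import random

def balance_heuristic(pdf_chosen, pdf_other):
    """MIS weight for a sample drawn from the strategy whose pdf is pdf_chosen."""
    return pdf_chosen / (pdf_chosen + pdf_other)

def f(x):
    # Integrand on [0, 1); its exact integral is 2.0, which the estimate should match.
    return 3.0 * x * x + 1.0

def pdf_a(x):
    return 1.0        # strategy A: uniform sampling

def pdf_b(x):
    return 2.0 * x    # strategy B: linear pdf, sampled by inversion as x = sqrt(u)

rng = random.Random(1)
n, total = 50_000, 0.0
for _ in range(n):
    xa = rng.random()                 # sample from strategy A
    xb = math.sqrt(rng.random())      # sample from strategy B
    total += balance_heuristic(pdf_a(xa), pdf_b(xa)) * f(xa) / pdf_a(xa)
    total += balance_heuristic(pdf_b(xb), pdf_a(xb)) * f(xb) / pdf_b(xb)
print("MIS estimate:", total / n, "(exact: 2.0)")
```

Because the two weights sum to one at every point, the combined estimator stays unbiased while letting whichever strategy fits the integrand locally dominate the estimate.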
SIGGRAPH 97 Visual Proceedings
Anaconda: Sony Pictures Imageworks digitally created two photo-realistic giant Anacondas which would believably attack, coil, eat, and regurgitate their prey. A digital actor is also featured in the waterfall sequence. This vivid CG imagery interacts with live action elements and actors on a level never seen before.
The Animation of M.C. Escher's "Belvedere": M.C. Escher's lithograph entitled "Belvedere" is famous for its impossible objects. Generally, it isn't possible to look at these kinds of objects from different perspectives. Because these impossible objects don't lend themselves to visual illusion, this piece develops a method of rotation and drawing that simulates the expected visual representation of objects.
Credits: Head of Systems: Alberto Velez. Systems Coordinator: Katya Culberg. SA (Resources 3rd Party): Ted Alexandre. SA (Hardware): Dean Miya. Producer: Sony Pictures Imageworks. FX Supervisor: John Nelson. FX Producer: Robin Griffin. FX Coordinator: Jacquie Barnbrook. Production Assistant: Darcy Fray. Producer: Sachiko Tsuruno. CG Supervisor: John Mclaughlin. Animation Director: Eric Armstrong. Animator: Alex Sokoloff. Lead TA: Rob Groome. Lead Digital Artist: Colin Campbell. Animator: David Vallone. AL Match Mover: Michael Harbour. Digital Artist: Gimo Chanphianamvong. Lead Compositor: Jason Dowdeswell. Technical Director: John Decker. Lead Technical Director: Jim Berney. Painter: Jonn Shourt. Lead Animator: Kelvin Lee. Lead Modeller/Animator: Kevin Hudson. Animator: Manny Wong. Art Director: Marty Kline.
"Boarding: Stories and Snow" Workshop
BOARDING: STORIES & SNOW
Come listen and learn from experienced story artists from Hollywood's hottest studios at the Banff New Media Institute Accelerator's "Boarding: Stories and Snow" workshop. Storyboarding is an essential skill employed by filmmakers, designers, and animators to create compelling stories through various mediums, both digital and traditional. Geared towards media producers and artists, the workshop will offer beginner and advanced storyboarding techniques, and will highlight future trends and technologies for storyboarding.
Through a series of hands-on activities, case studies, demos, and formal presentations, workshop speakers and instructors will focus on topics such as:
• The feature story process
• Story structure
• Visual pacing and storytelling
• The story artist: what are studios looking for?
• Your place in line: what reality do you face out there in the market?
This four-day workshop will incorporate business with pleasure by offering participants the opportunity to enhance their storyboarding and snowboarding skills. The optional ski/snowboard days will take place at Banff's world-class mountain resorts on Saturday, March 26, 2005 and Monday, March 28, 2005. A snowboarding clinic will also be offered by …
Featured artists:
• Ronnie del Carmen, Story Supervisor and Story Artist at Pixar Animation Studios
• Kris Pearn, Story Artist and Storyboard Artist at Sony Pictures Imageworks
• Chris Tougas, Character Designer, Screenwriter, and Story Artist
• Woody Woodman, Co-Founder and Head of Story at Ravenwood Entertainment
• Jorden Oliwa, Storyboard Artist, Animator
• Peter Hansen, Independent Producer, Program Chair, Digital & Interactive Media Design (NAIT)
Semantic Segmentation of Surface from Lidar Point Cloud
Semantic Segmentation of Surface from Lidar Point Cloud. Aritra Mukherjee (1) · Sourya Dipta Das (2) · Jasorsi Ghosh (2) · Ananda S. Chowdhury (2) · Sanjoy Kumar Saha (1)
Abstract: In the field of SLAM (Simultaneous Localization And Mapping) for robot navigation, mapping the environment is an important task. In this regard the Lidar sensor can produce a near accurate 3D map of the environment in the format of a point cloud, in real time. Though the data is adequate for extracting information related to SLAM, processing millions of points in the point cloud is computationally quite expensive. The methodology presented proposes a fast algorithm that can be used to extract semantically labelled surface segments from the cloud, in real time, for direct navigational use or higher level contextual scene reconstruction. First, a single scan from a spinning Lidar is used to generate a mesh of subsampled cloud points online. The generated mesh is further used for surface normal computation of those points, on the basis of which surface segments are estimated. A novel descriptor to represent the surface segments is proposed and utilized to determine the surface class of the segments (semantic label) with the help of a classifier. These semantic surface segments can be further utilized for geometric reconstruction of objects in the scene, or can be used for optimized trajectory planning by a robot. The proposed methodology is compared with a number of point cloud segmentation methods and state of the art semantic segmentation methods to emphasize its efficacy in terms of speed and accuracy.
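The surface normal computation step mentioned in the abstract is the basis for the surface segments. A common, generic way to estimate per-point normals, shown below, is a PCA of each point's local neighbourhood, taking the eigenvector of the smallest eigenvalue of the neighbourhood covariance as the normal. This is a sketch of that standard technique for illustration only; the paper's own method works on the mesh generated from a single spinning-Lidar scan rather than on brute-force nearest neighbours.

```python
import numpy as np

def estimate_normals(points, k=16):
    """Estimate a unit normal per point as the eigenvector belonging to the
    smallest eigenvalue of the covariance of its k nearest neighbours."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        # Brute-force k-nearest neighbours; fine for a small demo cloud.
        dists = np.linalg.norm(points - p, axis=1)
        neighbours = points[np.argsort(dists)[:k]]
        centered = neighbours - neighbours.mean(axis=0)
        cov = centered.T @ centered / k
        eigenvalues, eigenvectors = np.linalg.eigh(cov)  # ascending eigenvalues
        normals[i] = eigenvectors[:, 0]
    return normals

# Demo: points on the plane z = 0 should all get normals close to (0, 0, +/-1).
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         np.zeros(200)])
print(estimate_normals(cloud)[:3])
```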