Sony Pictures Imageworks Arnold


CHRISTOPHER KULLA, Sony Pictures Imageworks
ALEJANDRO CONTY, Sony Pictures Imageworks
CLIFFORD STEIN, Sony Pictures Imageworks
LARRY GRITZ, Sony Pictures Imageworks

Fig. 1. Sony Imageworks has been using path tracing in production for over a decade: (a) Monster House (©2006 Columbia Pictures Industries, Inc. All rights reserved); (b) Men in Black III (©2012 Columbia Pictures Industries, Inc. All Rights Reserved.); (c) Smurfs: The Lost Village (©2017 Columbia Pictures Industries, Inc. and Sony Pictures Animation Inc. All rights reserved.)

Sony Imageworks' implementation of the Arnold renderer is a fork of the commercial product of the same name, which has evolved independently since around 2009. This paper focuses on the design choices that are unique to this version and have tailored the renderer to the specific requirements of film rendering at our studio. We detail our approach to subdivision surface tessellation, hair rendering, sampling and variance reduction techniques, as well as a description of our open source texturing and shading language components. We also discuss some ideas we once implemented but have since discarded to highlight the evolution of the software over the years.

CCS Concepts: • Computing methodologies → Ray tracing;

General Terms: Graphics, Systems, Rendering

Additional Key Words and Phrases: Ray-Tracing, Rendering, Monte-Carlo, Path-Tracing

ACM Reference format:
Christopher Kulla, Alejandro Conty, Clifford Stein, and Larry Gritz. 2017. Sony Pictures Imageworks Arnold. ACM Trans. Graph. 9, 4, Article 39 (March 2017), 18 pages. DOI: 0000001.0000001
© 2017 Copyright held by the owner/author(s).

1 INTRODUCTION

1.1 History

Sony Imageworks first experimented with path tracing during the production of the film Monster House. The creative vision for the film called for mimicking the look of stop motion miniatures, a look which had already been successfully explored in early shorts rendered by the Arnold renderer [Jensen et al. 2001]. Despite some limitations that were worked around for the film, the simplicity and robustness of path tracing indicated to the studio there was potential to revisit the basic architecture of a production renderer, which had not evolved much since the seminal Reyes paper [Cook et al. 1987].

After an initial period of co-development with Solid Angle, we decided to pursue the evolution of the Arnold renderer independently from the commercially available product. The motivation is twofold. The first is simply pragmatic: software development in service of film production must be responsive to tight deadlines (less the film release date than internal deadlines determined by the production schedule). As an example, when we implemented heterogeneous volume rendering we did so knowing that it would be used in a particular sequence of a film (see Figure 1), and had to refine the implementation as artists were making use of our early prototype. Such rapid iteration is particularly problematic when these decisions impact API and ABI compatibility, which commercial software must be beholden to on much longer time frames. The second motivation to fork from the commercial project is more strategic: by tailoring the renderer solely towards a single user base, we can make assumptions that may not be valid for all domains or all workflows. By contrast, a commercial product must retain some amount of flexibility to different ways of working and cannot optimize with only a single use case in mind.

While our renderer and its commercial sibling share a common ancestry, the simple fact that development priorities changed the timeline around various code refactors has rendered the codebases structurally incompatible despite many shared concepts.

This paper describes the current state of our system, particularly focusing on areas where we have diverged from the system described in the companion paper [Georgiev et al. 2018]. We will also catalog some of the lessons learned by giving examples of features we removed from our system.

1.2 Design Principles

The most important guiding principle in the design of our system has been simplicity. For the artist, this means removing as many unnecessary choices as possible, to let them focus on the creative choices they must make. For us as software developers, this has meant intentionally minimizing our feature set to the bare necessities, aggressively removing unused experimental code when it falls out of favor, and minimizing code duplication.

Perhaps as a direct consequence of this goal, we have always focused on ray tracing as the only building block in our system. Because the natural expression of geometric optics leads one to think in terms of rays of light bouncing around the scene, we felt all approximate representations of such effects (shadow maps, reflection maps, etc.) were simply removing the artist from what they really wanted to express.

1.3 System Overview

We divide the architecture of our system into three main areas to highlight how we meet the challenge of modern feature film production rendering: geometry, shading and integration (see Figure 2).

Geometry. The first and most important design choice in our renderer has been to focus on in-core ray tracing of massive scenes. The natural expression of path tracing leads to incoherent ray queries that can visit any part of the scene in random order. Observing that the amount of memory available on a modern workstation can comfortably store several hundred million polygons in RAM, we explicitly craft our data structures to represent as much unique detail as possible in as few bytes as possible. We also heavily leverage instancing, as its implementation is well suited to ray tracing but also closely matches how we author large environments by re-using existing models. This design choice also has the benefit of keeping the geometric subsystem orthogonal to the rest of the system.

Shading. The role of the shading system is to efficiently manage large quantities of texture data (terabytes of texture data per frame is not uncommon) and efficiently execute shading networks that describe the basic material properties to the renderer. Unlike older rendering architectures, shaders are only able to specify the BSDF per shading point rather than be responsible for its evaluation.

Integration. Turning shaded points into pixel colors is the role of the integrator, which implements

2.1 Subdivision Surfaces

Displaced subdivision surfaces are probably the most common geometric primitive at Sony Imageworks. Therefore, we needed a fast and highly parallel subdivision engine to minimize time-to-first pixel, as well as a compact storage representation of highly subdivided surfaces to reduce memory consumption. We opted to focus on tessellation as opposed to native ray tracing of patches for the simple reason that our geometric models are usually quite dense, which makes storing higher-order patches more expensive than simply storing triangles. Moreover, methods based on higher-order patches typically fail in the presence of displacements, requiring extra geometric representations and going against our principle of code simplicity and orthogonality. Our system also focuses on up-front tessellation, which greatly simplifies the implementation compared to deferred or on-demand systems that try to interleave tessellation and rendering. We further motivate this choice in Section 7.4.

A number of criteria dictated the approach we take to subdivision. Firstly, the subdivision engine must be able to handle arbitrary n-gons. Even though quads are the primary modeling primitive at the facility, other polygons do creep in to the system for a variety of reasons. Secondly, we chose to ignore support for weight driven creases to maximize interoperability between software packages. We anticipate that as the rules of the OpenSubdiv library become more widely supported, we may need to revisit this decision, but ignoring this requirement significantly simplifies both the implementation and the data transport across the production pipeline. While creases can make models slightly more lightweight when used optimally, our modelers are quite adept at explicitly adding tight bevels to mimic the influence of creases without needing special rules in the subdivision code itself. Thirdly, we wanted a patch-based subdivision engine where patches could be processed in parallel independent of their neighbors. With these criteria in mind, we chose to evaluate all top-level regular quads with bicubic b-spline patches, and all "irregular" top-level n-gons with n Gregory patches (one patch per sub-quad) with Loop's method [Loop et al. 2009]. Patches are processed in parallel independently of each other, and we control the writing of shared vertices by prioritizing based on patch ID number.

Subdivision is parallelized first by treating multiple objects in parallel, and then within objects by processing patches in parallel. This is further discussed in Section 6.5.
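The shared-vertex write rule described in Section 2.1 (patches processed in parallel, with writes to shared vertices prioritized by patch ID) can be sketched as follows. This is a minimal illustration, not the renderer's actual implementation; all names (`Patch`, `compute_owners`, `tessellate_patches`) are invented for this sketch, and the "limit evaluation" is a stand-in computation:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <thread>
#include <vector>

// A patch touches a set of vertex indices; its ID doubles as write priority.
struct Patch {
    uint32_t id;
    std::vector<uint32_t> verts;
};

// owner[v] = smallest patch ID among all patches sharing vertex v.
// Deciding ownership up front is what makes the parallel phase lock-free.
static std::vector<uint32_t> compute_owners(const std::vector<Patch>& patches,
                                            size_t num_verts) {
    std::vector<uint32_t> owner(num_verts, UINT32_MAX);
    for (const Patch& p : patches)
        for (uint32_t v : p.verts)
            owner[v] = std::min(owner[v], p.id);
    return owner;
}

// Each patch is processed on its own thread; a patch writes only the
// vertices it owns, so no two threads ever write the same location.
static void tessellate_patches(const std::vector<Patch>& patches,
                               const std::vector<uint32_t>& owner,
                               std::vector<float>& out) {
    std::vector<std::thread> pool;
    for (const Patch& p : patches)
        pool.emplace_back([&owner, &out, &p] {
            for (uint32_t v : p.verts)
                if (owner[v] == p.id)
                    out[v] = float(v) * 2.0f;  // stand-in for limit evaluation
        });
    for (auto& t : pool) t.join();
}
```

With two patches sharing a vertex, only the lower-ID patch writes the shared position, which resolves the race without any synchronization on the vertex buffer itself.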
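Section 1.3 notes that, unlike older architectures, shaders only specify the BSDF per shading point rather than being responsible for its evaluation. A minimal sketch of that division of labor follows; all class and function names here are hypothetical, and the single-float "radiance" stands in for a full spectral computation:

```cpp
#include <cassert>
#include <memory>

// The shader's output: a description of scattering at one point.
// Evaluation and sampling belong to the integrator, not the shader.
struct Bsdf {
    virtual ~Bsdf() = default;
    virtual float eval(float cos_theta) const = 0;
};

struct Lambert : Bsdf {
    float albedo;
    explicit Lambert(float a) : albedo(a) {}
    // Lambertian BRDF (albedo / pi) times the cosine term.
    float eval(float cos_theta) const override {
        return albedo * (1.0f / 3.14159265f) * cos_theta;
    }
};

// The "shader": maps shading inputs (e.g. a texture lookup) to a BSDF.
// It never traces rays or sums lighting itself.
std::unique_ptr<Bsdf> shade(float texture_value) {
    return std::make_unique<Lambert>(texture_value);
}

// The integrator owns light transport: it decides how the BSDF is used.
float integrator_direct_light(const Bsdf& bsdf, float cos_theta,
                              float light_intensity) {
    return light_intensity * bsdf.eval(cos_theta);
}
```

Keeping shaders purely descriptive lets the integrator apply importance sampling and other transport strategies uniformly, without per-shader cooperation.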