The Pixar RenderMan Walking Teapot

THE PIXAR RENDERMAN WALKING TEAPOT
A DESIGN STUDY BY Craig Harkness

HISTORY

The Pixar RenderMan Walking Teapot is a limited edition toy created by Pixar to promote their industry standard computer graphics rendering software, RenderMan. Each year since 2003, when Pixar employee Dylan Sisson designed and pitched the first walking teapot, it has been a recurring theme at the ACM SIGGRAPH conference, sporting a new finish and tin box every time, each one coming with a numbered certificate proving its authenticity.

observations

Attending ACM SIGGRAPH in 2010 I was amazed by the buzz for "teapots". I wasn't quite sure what they meant; I certainly didn't think Americans were that into tea! Even after someone explained it to me I couldn't quite grasp the big appeal. That was until the first day that the exhibition hall opened. I saw people starting to queue up an hour before Pixar started to give them away at their booth. Not wanting to miss out on what was seemingly the hot item of the conference, and after some cajoling from more experienced friends, I too joined the line, which went on to wind its way through the exhibition hall, in some cases completely blocking off other companies' booths. As the Pixar staff started to hand out the Walking Teapots there was a palpable excitement throughout the hall as people waited to see this year's design, and every year I have attended since it has been the same. There's now even a Facebook fan club set up for the teapots. More than just being a novelty toy, it's become part of the SIGGRAPH culture, a badge of honour, a way of saying "I was there".

Words & Definitions

Words selected

As part of the initial stage of this project I have identified and selected a number of words that can be associated with the Pixar RenderMan Walking Teapots and supplied definitions to aid understanding:

Walking Teapot: a clockwork mechanism toy in the form of the Newell teapot that, when wound, will walk across a flat surface

Toy: an item designed for play

Collectible: an item that holds value for an individual or group; this may be intrinsic, monetary, aesthetic, etc.

Tin: metallic cube container used to store the teapot, with a painted design to complement the teapot itself

Design: the "look" applied to the teapot, influenced by the most recent Pixar release or the latest developments in RenderMan

Pixar: animation and software development company founded in 1986, which has created such films as Toy Story, Up, and Brave

RenderMan: Pixar's suite of software tools that has become the industry standard for rendering 3D computer graphics and visual effects

more words

Other words that were considered included limited edition, wind-up, novelty, Utah, Newell teapot, fun, teapot, feet, winder, John Lasseter, Ed Catmull, Steve Jobs, play, and ACM SIGGRAPH. These words fall broadly into two categories: those that further describe the teapot (limited edition, wind-up, novelty, fun, teapot, feet, winder) and those that give historical context (ACM SIGGRAPH, Newell teapot, John Lasseter, Ed Catmull, Steve Jobs, Utah).

Word Matrix

The matrix compares the selected words (Walking Teapot, Toy, Collectible, Tin, Design, Pixar, RenderMan); each cell describes how the row word relates to the column word:

Toy × Walking Teapot: wind-up mechanical toy made of plastic, formed in the shape of a teapot

Collectible × Walking Teapot: people collect the different teapot designs from across the years
Collectible × Toy: teapot comes with a certificate showing its number in a limited run

Tin × Walking Teapot: container in which the teapot is stored, styled to match the teapot
Tin × Toy: the design of the tin is related to the toy design
Tin × Collectible: identifies itself as a "novelty item"

Design × Walking Teapot: based on the CG mesh teapot created in 1974 by Martin Newell, used in testing CG rendering techniques
Design × Toy: based on the latest Pixar film or innovation in RenderMan
Design × Collectible: has a number of variants on a theme (e.g. red, green, blue hats)
Design × Tin: styled to reflect the annual teapot design, promotes RenderMan software

Pixar × Walking Teapot: started giving the teapot away at the ACM SIGGRAPH conference in 2003
Pixar × Toy: original toy was made by Pixar employee and toy maker Dylan Sisson
Pixar × Collectible: fan base both inside and outside the industry gives a broad base of collectors
Pixar × Tin: tin carries the Pixar website on the base
Pixar × Design: expression of Pixar's playfulness and creativity in the application of their technology

RenderMan × Walking Teapot: used the walking teapot as a RenderMan promotional item
RenderMan × Toy: toy created to market RenderMan
RenderMan × Collectible: attendees of the ACM SIGGRAPH RenderMan user group receive an exclusive design
RenderMan × Tin: large RenderMan branding on the sides and top
RenderMan × Design: rendering engine capable of stylised outputs based on the art direction of projects
RenderMan × Pixar: developers of the industry standard rendering tools for CG and VFX

Interaction continuum

Design (years and teapots):
2003: silver
2004: red, green, blue, alpha
2005: bronze, glow in the dark
2006: flaming, thermal
2007: chef (white, black)
2008: 20th anniv. (blue, gold)
2009: glasses (stereo, mirrored)
2010: potato head (r, g, b, a)
2011: roadster (red, black), Canada
2012: cloudy, clear

Category (parts): winder, mechanism, feet, spout, lid, handle, glasses, helmet, eyes, mouth

Users: Pixar fans, RenderMan users, CG industry pros, VFX industry pros, game developers, students, ACM SIGGRAPH

CONTENT MAP

Content maps are a sense-making tool that connect large numbers of ideas, giving us a picture of our understanding of a given topic by showing the relationships between concepts. The process I undertook in creating this map was to first create a list of words connected to my topic, then selectively edit that list down, define the terms, and identify the terms based on perceived importance. I then used these terms to identify what I believed to be the core concept: "Pixar makes a toy shaped like a teapot, with new designs every year, to promote RenderMan while at ACM SIGGRAPH". Using this core concept I then framed my remaining words around it, indicating the relationships between the different elements.
[Content map diagram: the core concept "Pixar makes a toy shaped like a teapot, with new designs every year, to promote RenderMan while at ACM SIGGRAPH" sits at the centre. Around it are Martin Newell, who graduated from the University of Utah and created the Utah Teapot, the common 3D test model the toy pays homage to, alongside the Cornell Box, Stanford Bunny, Stanford Dragon, and Happy Buddha; Pixar co-founders Steve Jobs, Alvy Ray Smith, and Ed Catmull; Dylan Sisson, who works for Pixar and designs the teapots; ACM SIGGRAPH, the Association for Computing Machinery Special Interest Group on Computer Graphics and Interactive Techniques and its international conference and exhibition; Pixar's films from Toy Story through Brave, which inspire the annual teapot designs from the 2003 silver edition to the 2012 cloudy and clear editions; and the RenderMan product family, the industry standard 3D rendering engine: RenderMan Studio, RenderMan Pro Server, RenderMan on Demand (cloud based rendering), Tractor (renderfarm tool), Slim (shader tool), and "it" (image tool).]

PERSONAS

Personas are a descriptive model by which we can humanise our designs, allowing us to consolidate common attributes of users into profiles that are easy to empathise with, giving us context of use and helping us to communicate how users behave. Ideally personas are created from user research; however, the three personas here are an amalgam of people whom I have encountered at ACM SIGGRAPH and reflective of those I feel would be most likely to be interested in the RenderMan Walking Teapots.

Samuel Geoffries
"Party hard, siggraph harder!"

Location: Vancouver, Canada
Originally from: London, UK
Age: 30
Education: BSc (Hons) Computer Animation
Employer: EA Sports
Occupation: Technical Motion Capture Animator
Salary: $48,000
Skill set: Business, Artistic, Technical (shown as rating scales in the original layout)
Left brain (logic) vs. right brain (creativity): shown as a scale
Use of RenderMan: shown as a scale from Low to High

Sam is a Technical Motion Capture Animator working for EA Sports, based at their Vancouver studio, where he spends his time processing motion capture data for games like the FIFA and Tiger Woods games, as well as developing new tools and ideas to increase the studio's flexibility and data output quality.

While working in games, Sam's real ambition is to transfer into the VFX industry as a Technical Director; one of his most prized possessions is a book on the VFX of the Lord of the Rings films signed by Andy Serkis.

While studying for his degree Sam was responsible for leading the university motion capture studio team and developing the pipeline and toolset of the studio. After graduation Sam became a Researcher with the Intelligent Systems & Biomedical Robotics Group, a research group within the Creative Technologies department, as well as an Associate Instructor for a number of classes. Sam also spent four years as a SIGGRAPH student volunteer.

For Sam the RenderMan teapots are not just a promotional item for one of the many softwares he has come into contact with; they are also a reminder of all the hard work he has put in to go to SIGGRAPH, the talks, the contacts, the parties, and all the experiences he has had there.

After work Sam likes to head to the bars with colleagues and friends, go for runs, and work on one of numerous side projects.

Andrew Moffat
"I saw Star Wars in its first run at Grauman's Chinese Theater."