System and Method for Generating and Playback of Three-Dimensional Movies

Total Pages: 16

File Type: pdf, Size: 1020 KB

(19) Europäisches Patentamt / European Patent Office / Office européen des brevets
(11) EP 1 037 167 A2
(12) EUROPEAN PATENT APPLICATION
(43) Date of publication: 20.09.2000, Bulletin 2000/38
(51) Int. Cl.7: G06T 15/20
(21) Application number: 00301953.6
(22) Date of filing: 09.03.2000
(30) Priority: 15.03.1999 US 268397
(84) Designated Contracting States: AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE; Designated Extension States: AL LT LV MK RO SI
(71) Applicant: SUN MICROSYSTEMS, INC., Palo Alto, California 94303 (US)
(72) Inventor: Deering, Michael F., Los Altos, California 94024 (US)
(74) Representative: Harris, Ian Richard, D. Young & Co., 21 New Fetter Lane, London EC4A 1DA (GB)
(54) System and method for generating and playback of three-dimensional movies

(57) A system and method for generating and playing back three-dimensional (3D) movies are disclosed. The system is capable of partially rendering frames without relying upon exact viewpoint information. The partially rendered frames may be rendered to the extent possible without performing viewpoint-dependent processes, and then compressed and stored to a carrier medium. To reduce the amount of data to be stored, the viewer's possible viewpoints may be restricted (e.g., by defining a viewpoint-limiting volume or region). The resulting partially rendered geometry data may be compressed using geometry compression. During playback, the compressed frames are read as a stream and decompressed. Any final viewpoint-dependent rendering operations may then be performed (e.g., some lighting calculations and atmospheric effects, some fogging, specular highlighting, and reflections). A sensor such as a head-tracker may provide real-time viewpoint information that may be used by the playback system. After rendering, the frames are rasterized and then displayed in stereo.
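To make the division of labor described in the abstract concrete, here is a minimal sketch of the two phases: an authoring pass that performs only viewpoint-independent rendering and compresses the result, and a real-time playback pass that decompresses each frame, applies the remaining viewpoint-dependent effects using a head-tracked viewpoint, and rasterizes it for stereo display. Every type and function name below is an illustrative assumption, not an interface defined in the patent.

#include <cstdint>
#include <vector>

// Illustrative placeholder types; the patent does not define these formats.
struct Viewpoint       { float position[3]; float orientation[4]; };
struct PartialFrame    { std::vector<float> micropolygons; };   // viewpoint-independent result
struct CompressedFrame { std::vector<uint8_t> bytes; };
struct StereoImage     { std::vector<uint32_t> left, right; };

// ---- Authoring pass (offline): the exact viewpoint is not yet known. ----
CompressedFrame authorFrame() {
    PartialFrame partial;
    // 1. Tessellate the scene into micropolygons and perform only the
    //    viewpoint-independent work (diffuse lighting, viewpoint-independent
    //    shadows, displacement, ...). Details are omitted in this sketch.
    // 2. Compress the partially rendered geometry for storage/streaming.
    CompressedFrame compressed;
    compressed.bytes.resize(partial.micropolygons.size() * sizeof(float));
    return compressed;
}

// ---- Playback pass (real time): a head tracker supplies the viewpoint. ----
StereoImage playFrame(const CompressedFrame& frame, const Viewpoint& tracked) {
    PartialFrame partial;
    partial.micropolygons.resize(frame.bytes.size() / sizeof(float));  // "decompress"
    // 3. Apply the remaining viewpoint-dependent effects (specular highlights,
    //    reflections, fog/atmospherics) using the tracked viewpoint.
    // 4. Rasterize once per eye for stereo display.
    (void)tracked;
    return StereoImage{};
}

int main() {
    Viewpoint viewer{{0, 0, 0}, {0, 0, 0, 1}};
    StereoImage image = playFrame(authorFrame(), viewer);
    (void)image;
    return 0;
}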
Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] This invention relates generally to the field of computer graphics and, more particularly, to three-dimensional (3D) movies.

2. Description of the Related Art

[0002] Traditional motion pictures provide the audience with what may be called a "two-dimensional" experience. Most motion pictures are projected on flat screens with the audience observing the movie from afar. Aside from improvements in sound playback and larger film/screen formats, the movie-going experience has not changed significantly for a number of years.

[0003] Some films have been produced in visual stereo, e.g., with the audience wearing red-blue or polarized glasses. Similarly, head-mounted displays have also allowed viewers to see visual stereo images and films. But visual stereo has not seen widespread success relative to standard motion pictures. This may be because the stereo effect is typically limited to a few special effects where a few objects appear to have dimension or "leap out" at the audience. It is likely that the effect that simple visual stereo provides does not outweigh the added cost of production and the hassle of wearing glasses or head-mounted displays to view the image.

[0004] However, recent advances in computer graphics and display technologies have hinted at the possibility of a more realistic experience for movie goers. New sensing technologies such as head-tracking may increase realism from simple visual stereo to a truly three-dimensional (3D) viewing experience.

[0005] As used herein, the term "visual stereo" refers to the process of generating two images (i.e., one for the viewer's left eye and another for the viewer's right eye). These images may be referred to herein as stereo component images. In contrast to simple visual stereo, 3D images and movies go one step further by dynamically changing the viewpoint (ideally on a frame-by-frame basis) for each of the two stereo component images. In this way, an object viewed in 3D will not only appear to leap out of the screen, but objects behind it will become obscured or un-obscured as the viewer changes their viewpoint to different positions around the object. In other words, a traditional stereo image of a globe will appear to have depth, but a three-dimensional image of the globe will allow the viewer to move around the globe to see more than one side of it.

[0006] Thus, head tracking sensors allow graphics systems to recalculate the component images according to where the viewer is in relation to the display device. Head tracking may allow a coffee table display to display a 3D football game that viewers can view from almost any angle they desire (e.g., from a bird's eye view directly overhead or from the side of the fifty yard line). To change their viewpoint, viewers simply move around the coffee table. The head-tracking sensors will determine where the viewer is and display different component images accordingly. When used in connection with a head-mounted display, head tracking allows panoramic scenes that "wrap around" viewers without the need for large projection screens that physically surround the viewer. Such head-tracked stereo systems produce a look-around holographic feel qualitatively different from older fixed-image displays.

[0007] However, these new systems have yet to see widespread acceptance as movie playback devices. One reason is that these systems typically rely upon re-rendering each scene every time the viewer changes their viewpoint. For example, as the viewer's head changes position (or orientation, in some systems), the scene being displayed is re-rendered from scratch using the new viewpoint. To produce a realistic computer-generated image, a tremendous number of calculations must be performed. Even with current advances in microprocessors, most systems still fall significantly short of being able to render detailed realistic scenes in real-time. Given the high frame rates needed for smooth movie-like animation, it may be many years before processing power approaches the levels necessary to completely render complex images (e.g., like those in the movie Toy Story) at frame rates high enough to rival current motion pictures.

[0008] Thus, an efficient system and method for generating and playing back realistic 3D movies with viewer-changeable viewpoints in real-time is desired. In particular, a system and method capable of reducing the number of calculations to be performed for real-time playback of 3D movies is desired.

SUMMARY OF THE INVENTION

[0009] Particular and preferred aspects of the invention are set out in the accompanying independent and dependent claims. Features of the dependent claims may be combined with those of the independent claims as appropriate and in combinations other than those explicitly set out in the claims.

[0010] The problems outlined above may at least in part be solved by a system capable of partially rendering frames without relying upon exact viewpoint information. The partially rendered frames may be rendered to the extent possible without performing viewpoint-dependent processes, and then compressed and stored to a carrier medium. To reduce the amount of data to be stored, the viewer's possible viewpoints may be restricted (e.g., by defining a viewpoint-limiting volume or region).

[0011] During playback, the compressed frames are read as a stream, decompressed, and then rasterized for display. As part of the rasterization process, the viewer's viewpoint may be determined and used to perform viewpoint-dependent effects such as some lighting and atmospheric effects (e.g., fogging, specular highlighting, and reflections).
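The viewpoint-limiting volume or region of paragraph [0010] can be thought of as a constraint applied to the tracked viewpoint during playback: content only has to look correct from viewpoints inside the permitted region. A minimal sketch, assuming the simplest case of an axis-aligned box (the patent does not mandate this shape or these names):

#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Assumed representation of a viewpoint-limiting volume: an axis-aligned box.
struct ViewpointLimitBox { Vec3 min, max; };

// Clamp the head-tracked viewpoint so it never leaves the permitted region.
Vec3 clampViewpoint(Vec3 tracked, const ViewpointLimitBox& box) {
    return { std::clamp(tracked.x, box.min.x, box.max.x),
             std::clamp(tracked.y, box.min.y, box.max.y),
             std::clamp(tracked.z, box.min.z, box.max.z) };
}

int main() {
    ViewpointLimitBox box{{-1.0f, 0.0f, -1.0f}, {1.0f, 2.0f, 1.0f}};
    Vec3 head{3.0f, 1.5f, 0.0f};              // tracker reports a point outside the box
    Vec3 v = clampViewpoint(head, box);
    std::printf("clamped viewpoint: %.1f %.1f %.1f\n", v.x, v.y, v.z);  // 1.0 1.5 0.0
    return 0;
}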
[0012] As used herein, the term "real-time" refers to performing an action, task, or process rapidly enough so that the user or viewer will not be substantially distracted by the amount of time taken to perform the task. As used herein, the term "including" means "including, but not limited to."

[0013] Movies may be thought of as linear story telling. Thus, in 3D movies the viewer may be like Scrooge (i.e., dragged along by ghosts of Christmas past, present, and future). The viewer can see everything in partial or full three dimensions, but generally may not interact with anything (in some embodiments interaction may be allowed). Furthermore, the viewer can only go to the places where the ghost goes. Note that this covers not only pre-scripted linear story telling, but also most forms of remote viewing (e.g., live sporting events).

[0014] Looked at another way, current general pur- [...]

[...] viewpoint-independent) calculations, such as some lighting calculations (e.g., viewpoint-independent reflections, viewpoint-independent shadows, viewpoint-independent depth-of-field effects, viewpoint-independent bump mapping, and viewpoint-independent displacement mapping) and tessellating the three-dimensional graphics data into micropolygons to meet one or more predetermined criteria. These predetermined criteria may include smoothness and micropolygon size in world space. In other embodiments they may also include estimated worst-case micropolygon size in screen space. This estimated worst-case micropolygon size in screen space may be estimated using the distance from a closest permitted viewpoint to each micropolygon and/or the worst-case angle from all permitted viewpoints to the micropolygon.

[0018] In addition to geometry compression, in some embodiments the micropolygons may be culled by deleting micropolygons that are not visible from any valid viewpoint (i.e., from any viewpoint that satisfies the specified viewpoint limitations). The compressed frames are configured to be rasterized in real-time by applying viewpoint-dependent lighting effects.
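The worst-case screen-space micropolygon size mentioned above lends itself to a simple conservative bound: since every permitted viewpoint lies inside a known region, the largest possible projection of a micropolygon can be estimated from the closest permitted viewpoint. A hedged sketch under a pinhole-camera assumption; the distance-based estimate follows the text, while the box-shaped region, camera model, and names are assumptions:

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct ViewpointLimitBox { Vec3 min, max; };   // assumed region of permitted viewpoints

// Smallest possible distance from any permitted viewpoint to a point p.
float minDistanceToRegion(Vec3 p, const ViewpointLimitBox& box) {
    float dx = std::max({box.min.x - p.x, 0.0f, p.x - box.max.x});
    float dy = std::max({box.min.y - p.y, 0.0f, p.y - box.max.y});
    float dz = std::max({box.min.z - p.z, 0.0f, p.z - box.max.z});
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Conservative bound on projected size in pixels: a micropolygon of world-space
// size s, seen from distance d with focal length f (in pixels), covers roughly
// s * f / d pixels; the worst case uses the closest permitted viewpoint.
float worstCaseScreenSize(Vec3 center, float worldSize,
                          const ViewpointLimitBox& box, float focalLengthPixels) {
    float dMin = minDistanceToRegion(center, box);
    if (dMin <= 0.0f) return 1e30f;            // a permitted viewpoint may touch it
    return worldSize * focalLengthPixels / dMin;
}

int main() {
    ViewpointLimitBox box{{-1, 0, -1}, {1, 2, 1}};
    Vec3 micropolygon{0.0f, 1.0f, 5.0f};       // 4 world units beyond the box face z = 1
    float px = worstCaseScreenSize(micropolygon, 0.01f, box, 1000.0f);
    std::printf("worst-case size: %.2f pixels\n", px);   // 0.01 * 1000 / 4 = 2.50
    // Micropolygons that remain tiny from every permitted viewpoint, or that are
    // occluded from all of them, are candidates for culling before storage.
    return 0;
}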
Recommended publications
  • 3D Graphics Fundamentals
    Chapter 11, 3D Graphics Fundamentals. This chapter covers the basics of 3D graphics. You will learn the basic concepts so that you are at least aware of the key points in 3D programming. However, this chapter will not go into great detail on 3D mathematics or graphics theory, which are far too advanced for this book. What you will learn instead is the practical implementation of 3D in order to write simple 3D games. You will get just exactly what you need to write a simple 3D game without getting bogged down in theory. If you have questions about how matrix math works and about how 3D rendering is done, you might want to use this chapter as a starting point and then go on and read a book such as Beginning Direct3D Game Programming, by Wolfgang Engel (Course PTR). The goal of this chapter is to provide you with a set of reusable functions that can be used to develop 3D games. Here is what you will learn in this chapter: how to create and use vertices; how to manipulate polygons; how to create a textured polygon; and how to create a cube and rotate it (a small sketch follows this item). Introduction to 3D Programming: It's a foregone conclusion today that everyone has a 3D-accelerated video card. Even the low-end budget video cards are equipped with a 3D graphics processing unit (GPU) that would be impressive were it not for all the competition in this market pushing out more and more polygons and new features every year.
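    As a small illustration of the last learning goal (creating a cube and rotating it), independent of the book's own Direct3D code, the eight cube corners can be rotated about the Y axis with a standard rotation matrix; the structure and names below are generic assumptions:

    #include <cmath>
    #include <cstdio>

    struct Vertex { float x, y, z; };

    // Rotate a vertex about the Y axis by 'angle' radians (standard rotation matrix).
    Vertex rotateY(Vertex v, float angle) {
        float c = std::cos(angle), s = std::sin(angle);
        return { v.x * c + v.z * s, v.y, -v.x * s + v.z * c };
    }

    int main() {
        // The eight corners of a unit cube centered at the origin.
        Vertex cube[8];
        int i = 0;
        for (int x = -1; x <= 1; x += 2)
            for (int y = -1; y <= 1; y += 2)
                for (int z = -1; z <= 1; z += 2)
                    cube[i++] = {0.5f * x, 0.5f * y, 0.5f * z};

        // Spin the cube by 45 degrees and print the transformed corners.
        for (Vertex& v : cube) {
            v = rotateY(v, 3.14159265f / 4.0f);
            std::printf("(% .3f, % .3f, % .3f)\n", v.x, v.y, v.z);
        }
        return 0;
    }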
  • Critical Review of Open Source Tools for 3D Animation
    IJRECE Vol. 6 Issue 2, Apr.-June 2018. ISSN: 2393-9028 (print) | ISSN: 2348-2281 (online). Critical Review of Open Source Tools for 3D Animation. Shilpa Sharma (PG Department of Computer Science and IT, Lyallpur Khalsa College, Jalandhar, India) and Navjot Singh Kohli (Senior Video Editor, Punjab Kesari, Jalandhar, India). Abstract: Blender is a 3D computer graphics software program for developing animated movies, visual effects, 3D games, and software. It is very easy and simple to use, and also easy to download. Blender is an open source program that anybody can use, and it is among the most popular and powerful open source 3D and animation tools; it is not just free software but a professional tool used in animated shorts, TV ads, shows, and movies, as well as in production for films like Spiderman (Beginning Blender covers the latest Blender 2.5 release in depth). It offers many features including 3D modeling, texturing, rigging, skinning, smoke simulation, animation, and rendering. The authors also suggest improvements and possible additions to better the process. Animation is an effective way of augmenting the learning of lab experiments, and 3D animation continues to have the advantages offered by 2D, like interactivity; still, a camera video is more suitable for students in some cases (e.g., litmus paper changing color would be more convincing as a video than as an animated clip), while camera video is not adequate for certain work, e.g., separating hydrogen from [...]
  • 3D Modeling, Animation, and Special Effects
    3D Modeling, Animation, and Special Effects, ITP 215 (2 Units). Catalogue Description: Developing a 3D animation from modeling to rendering: basics of surfacing, lighting, animation, and modeling techniques. Advanced topics: compositing, particle systems, and character animation. Objective: Fundamentals of 3D modeling, animation, surfacing, and special effects: understanding the processes involved in the creation of 3D animation and the interaction of vision, budget, and time constraints; developing an understanding of diverse methods for achieving similar results and the decision-making processes involved at various stages of project development; gaining insight into the differences among the various animation tools; understanding the opportunities and tracks in the field of 3D animation. Prerequisites: Knowledge of any 2D paint, drawing, or CAD program. Instructor: Lance S. Winkel, E-mail: [email protected], Tel: 213/740.9959, Office: OHE 530 H, Office Hours: Tue/Thur 8am-10am. Lab Assistant: Qingzhou Tang ([email protected]). Hours: 4 hours. Course Structure: The Final Exam will be conducted at the time dictated in the Schedule of Classes. Details and instructions for all projects will be available on Blackboard. For grading criteria of each assignment, project, and exam, see the Grading section below. Textbook(s): Blackboard; Autodesk Maya Documentation; resources online and at Lynda.com and knowledge.autodesk.com; Adobe online resources where necessary for Photoshop and After Effects. Grading: Planets = 10 points; Cityscape 1 of 7 = 10 points; Cityscape 2 of ...
  • 3D Modeling and the Role of 3D Modeling in Our Life
    ISSN 2413-1032, Computer Science. 3D Modeling and the Role of 3D Modeling in Our Life. Beknazarova Saida Safibullaevna (Senior Teacher), Maxammadjonov Maxammadjon Alisher o'g'li (student), and Ibodullayev Sardor Nasriddin o'g'li (student), Tashkent University of Informational Technologies, Tashkent, Uzbekistan. Abstract: In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any three-dimensional surface of an object (either inanimate or living) via specialized software. The product is called a 3D model. It can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices. Models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D modeling software is a class of 3D computer graphics software used to produce 3D models; individual programs of this class are called modeling applications or modelers. Key words: 3D, modeling, programming, Unity, 3D programs. Nowadays 3D modeling impacts every sphere: computer programming, architecture, and so on. Firstly, we present basic information about 3D modeling. 3D models represent a physical body using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3D models can be created by hand, algorithmically (procedural modeling), or scanned. 3D models are widely used anywhere in 3D graphics.
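    A minimal sketch of the idea stated in this abstract, that a 3D model is a collection of points in 3D space connected by geometric entities such as triangles; the indexed-triangle-mesh layout and type names are illustrative assumptions, not taken from the paper:

    #include <cstdio>
    #include <vector>

    struct Point3 { float x, y, z; };

    // An indexed triangle mesh: shared vertex positions plus triples of indices.
    struct TriangleMesh {
        std::vector<Point3> vertices;
        std::vector<unsigned> indices;   // every 3 consecutive entries form one triangle
    };

    int main() {
        // A single square (two triangles sharing an edge) in the z = 0 plane.
        TriangleMesh quad;
        quad.vertices = { {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0} };
        quad.indices  = { 0, 1, 2,   0, 2, 3 };

        std::printf("%zu vertices, %zu triangles\n",
                    quad.vertices.size(), quad.indices.size() / 3);
        return 0;
    }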
  • Custom Shader and 3D Rendering for Computationally Efficient Sonar
    Custom Shader and 3D Rendering for Computationally Efficient Sonar Simulation. Rômulo Cerqueira, Tiago Trocoli, Gustavo Neves, Luciano Oliveira, Sylvain Joyeux, and Jan Albiez. Affiliations: Brazilian Institute of Robotics, SENAI CIMATEC, Salvador, Bahia, Brazil (email: romulo.cerqueira@fieb.org.br); Intelligent Vision Research Lab, Federal University of Bahia, Salvador, Bahia, Brazil; Robotics Innovation Center, DFKI GmbH, Bremen, Germany. [Fig. 1: a graphical representation of the individual steps to get from the OpenSceneGraph scene to a sonar beam data structure: sonar parameters (opening angle, direction, range), OSG viewport and world, shader rendering of a three-channel picture (beam angle in camera, surface angle to camera, distance from camera), beam and bin selection, distance histogram, and normalized return-intensity bin values.] Abstract: This paper introduces a novel method for simulating underwater sonar sensors by vertex and fragment processing. The virtual scenario used is composed of the integration between the Gazebo simulator and the Robot Construction Kit (ROCK) framework. A 3-channel matrix with depth and intensity buffers and angular distortion values is extracted from OpenSceneGraph 3D scene frames by shader rendering, and subsequently fused and processed to generate the synthetic sonar data. [...] is that the simulation has to be good enough to test the decision-making algorithms in the control system. When dealing with autonomous underwater vehicles (AUVs), a real-time simulation plays a key role. Since an AUV can only scarcely communicate back via mostly unreliable acoustic communication, the robot has to be able to make [...]
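    A hedged sketch of the binning step suggested by the figure caption and abstract, i.e. accumulating per-pixel distance and intensity values of one beam into range bins between the near and far limits; the buffer layout, bin count, and peak normalization are assumptions, not the paper's exact pipeline:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Accumulate per-pixel returns of one sonar beam into range bins.
    std::vector<float> binBeam(const std::vector<float>& distance,
                               const std::vector<float>& intensity,
                               float nearRange, float farRange, int numBins) {
        std::vector<float> bins(numBins, 0.0f);
        for (size_t i = 0; i < distance.size(); ++i) {
            if (distance[i] < nearRange || distance[i] > farRange) continue;
            float t = (distance[i] - nearRange) / (farRange - nearRange);   // 0..1
            int bin = std::min(numBins - 1, static_cast<int>(t * numBins));
            bins[bin] += intensity[i];
        }
        // Normalize so the strongest bin has value 1 (assumed normalization).
        float peak = *std::max_element(bins.begin(), bins.end());
        if (peak > 0.0f)
            for (float& b : bins) b /= peak;
        return bins;
    }

    int main() {
        // Tiny synthetic beam: three pixels at different ranges.
        std::vector<float> dist  = {2.0f, 2.1f, 8.0f};
        std::vector<float> inten = {0.5f, 0.5f, 0.8f};
        std::vector<float> bins  = binBeam(dist, inten, 1.0f, 10.0f, 20);
        for (int i = 0; i < 20; ++i) std::printf("%d:%.2f ", i, bins[i]);
        std::printf("\n");
        return 0;
    }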
  • 3D Computer Graphics Compiled By: H
  • An Evaluation Study of Preferences Between Combinations of 2D-Look Shading and Limited Animation in 3D Computer Animation
    Received May 1, 2015; Accepted October 5, 2015. An Evaluation Study of Preferences between Combinations of 2D-Look Shading and Limited Animation in 3D Computer Animation. Matsuda, Tomoyo; Kim, Daewoong; Ishii, Tatsuro (Kyushu University, Graduate School of Design / Faculty of Design). [email protected], [email protected], [email protected]. Abstract: 3D computer animation has become popular all over the world, and different styles have emerged. However, 3D animation styles vary within Japan because of its 2D animation culture. There has been a trend to flatten 3D animation into 2D animation by using 2D-look shading and limited animation techniques to create 2D-looking 3D computer animation that attracts the Japanese audience. However, the effect of these flattening trends on the audience's satisfaction is still unclear, and no official research has been done. Therefore, this research aims to evaluate how the combinations of the flattening techniques affect the audience's preference and the sense of depth. Consequently, we categorized the shadings and animation styles used to create 2D-look 3D animation, created sample movies, and finally evaluated each combination with Thurstone's method of paired comparisons. We categorized shadings into three types: 3D rendering with realistic shadow; 2D rendering with flat shadow and outline; and 2.5D rendering, which is between 3D rendering and 2D rendering and has semi-realistic shadow and outline. We also prepared two different animations that have the same key frames, 24 fps full animation and 12 fps limited animation, and tested combinations of each of them for the evaluation experiment.
  • Computer Graphics Master in Computer Science Master In
    Computer Graphics (Master in Computer Science / Master in Electrical Engineering). Instructors: Eric Béchet (engineering studies in Nancy, France; Ph.D. in Montréal, Canada; academic career in Nantes and Metz, France, then Liège) and Christophe Leblanc (assistant at ULg). Web site: http://www.cgeo.ulg.ac.be/infographie. Emails: { eric.bechet, christophe.leblanc }@uliege.be. Course schedule: 6-7 theory lessons (~4 hours each), which may be split into 2x2 hours and mixed with labs, in building B52 (room +2/441) or the instructor's office; 7 practice lessons on computer (~4 hours each), which may be split into 2x2 hours and mixed with lessons, in room +0/413 (B52, floor 0) or +2/441 (own PC). Evaluation: practical evaluation (labs, on computer), a written exam about theory, and a project. Availability time: Monday PM (or on appointment). Project: implementation of realistic rendering techniques in a small in-house ray-tracing software, or your own topics. The lectures begin on February 6, then Feb 27, March 13, etc. (up-to-date agenda on the website); the labs begin on February 20th, alternating with the theoretical courses. Introduction: computer graphics is the study of the creation, manipulation and use of images in computers. Some bibliography: Computer Graphics: Principles and Practice in C (James Foley et al.; old, but a French translation exists); Computer Graphics: Theory into Practice (Jeffrey McConnell); Fundamentals of Computer Graphics (Peter Shirley et al.); Algorithmes pour la synthèse d'images (in French, R. ...).
  • Production Rendering Ian Stephenson (Ed.) Production Rendering
    Production Rendering: Design and Implementation. Ian Stephenson (Ed.), DPhil, National Centre for Computer Animation, Bournemouth, UK. British Library Cataloguing in Publication Data: a catalogue record for this book is available from the British Library. Library of Congress Cataloging-in-Publication Data: a catalog record for this book is available from the Library of Congress. Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers. ISBN 1-85233-821-0, Springer-Verlag London Berlin Heidelberg (Springer Science+Business Media, springeronline.com). © Springer London Limited 2005. The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made. Typeset by Gray Publishing, Tunbridge Wells, UK. Printed and bound in the United States of America. Printed on acid-free paper. We dedicate this book to: Amanda Rochfort, Cheryl M. ...
  • The Convergence of Graphics and Vision
    The Convergence of Graphics and Vision (Cover Feature). Approaching similar problems from opposite directions, graphics and vision researchers are reaching a fertile middle ground. The goal is to find the best possible tools for the imagination. This overview describes cutting-edge work, some of which will debut at Siggraph 98. Jed Lengyel, Microsoft Research. At Microsoft Research, the computer vision and graphics groups used to be on opposite sides of the building. Now we have offices along the same hallway, and we see each other every day. This reflects the larger trend in our field as graphics and vision close in on similar problems. Computer graphics and computer vision are inverse problems. Traditional computer graphics starts with input geometric models and produces image sequences. Traditional computer vision starts with input image sequences and produces geometric models. Lately, there has been a meeting in the middle, and the center (the prize) is to create stunning images in real time. Vision researchers now work from images backward, just as far backward as necessary to create models that capture a scene without going to full geometric models. [...] sampled representations. This use of captured scenes (enhanced by vision research) yields richer rendering and modeling methods (for graphics) than methods that synthesize everything from scratch. • Exploiting temporal and spatial coherence (similarities in images) via the use of layers and other techniques is boosting runtime performance. • The explosion in PC graphics performance is making powerful computational techniques more practical. VISION AND GRAPHICS CROSSOVER: IMAGE-BASED RENDERING AND MODELING. What are vision and graphics learning from each other? Both deal with the image streams that result when a real or virtual camera is exposed to the phys[ical world ...]
  • Basics of 3D Rendering
    Basics of 3D Rendering. CS 148: Introduction to Computer Graphics and Imaging (Summer 2016), Zahid Hossain. http://www.pling.org.uk/cs/cgv.html. What we have so far: 3D geometry and a 2D pipeline. Handy fact: matrices preserve flat geometry. So what? 3D triangles look like 2D triangles under camera transformations, so the 2D pipeline can be used for 3D rendering (side note: this is only true for flat shapes). Frame buffering; double and triple buffering. Tearing: data from multiple frames appear on the screen at the same time; this happens when the GPU rendering rate and the monitor refresh rate are not synced. Double buffering with V-Sync: a front buffer and a back buffer in video memory; SwapBuffer either copies the back buffer to the front buffer or swaps pointers (usually in fullscreen mode). A sketch of the pointer-swap approach follows.
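    As a minimal illustration of the pointer-swap variant of double buffering described in these slides (not code from the course), the sketch below keeps a front and a back buffer and swaps pointers once per frame; the Framebuffer type, resolution, and the commented-out v-sync call are assumptions:

    #include <algorithm>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // Assumed stand-in for a framebuffer: one 32-bit RGBA value per pixel.
    using Framebuffer = std::vector<uint32_t>;

    int main() {
        const int width = 640, height = 480;
        Framebuffer a(width * height), b(width * height);
        Framebuffer* front = &a;   // currently scanned out by the display
        Framebuffer* back  = &b;   // currently being rendered into

        for (int frame = 0; frame < 3; ++frame) {
            // Render the new frame into the back buffer only; the front buffer
            // stays untouched, so the display never shows a half-drawn frame.
            std::fill(back->begin(), back->end(), 0xFF000000u + frame);

            // waitForVSync();  // in a real program, block until the vertical blank

            // "SwapBuffer" via pointer swap: no pixels are copied.
            std::swap(front, back);
        }
        return 0;
    }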
  • 3D Rendering
    3D Rendering. Thomas Funkhouser, Princeton University, COS 426, Fall 2000. Course syllabus: I. Image processing; II. Rendering; III. Modeling; IV. Animation (slide credits: Michael Bostock, CS426, Fall 99; Rusty Coleman, CS426, Fall 99; Dennis Zorin, CalTech; Angel, Plate 1). Rendering: generate an image from geometric primitives (geometric primitives in, raster image out). What issues must be addressed by a 3D rendering system? Overview: 3D scene representation, 3D viewer representation, visible surface determination, and lighting simulation. How is the 3D scene described in a computer? 3D scene representation: the scene is usually approximated by 3D primitives, such as points, line segments, polygons, polyhedra, curved surfaces, solid objects, etc. A 3D point specifies a location; it is represented by three coordinates and is infinitely small. A 3D vector specifies a direction and a magnitude ||V|| = sqrt(dx*dx + dy*dy + dz*dz); it has no location. The slides define them as:

        typedef struct {
            Coordinate x;
            Coordinate y;
            Coordinate z;
        } Point;

        typedef struct {
            Coordinate dx;
            Coordinate dy;
            Coordinate dz;
        } Vector;
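    To make the slide's two structs self-contained, here is a small sketch that fills in the magnitude formula quoted above; treating Coordinate as float is an assumption, since the slides leave it undefined:

    #include <cmath>
    #include <cstdio>

    typedef float Coordinate;   // assumption: the slides do not define Coordinate

    typedef struct { Coordinate x, y, z; } Point;      // a location
    typedef struct { Coordinate dx, dy, dz; } Vector;  // a direction and magnitude

    // ||V|| = sqrt(dx*dx + dy*dy + dz*dz), as given on the slide.
    static float magnitude(Vector v) {
        return std::sqrt(v.dx * v.dx + v.dy * v.dy + v.dz * v.dz);
    }

    int main() {
        Vector v = {1.0f, 2.0f, 2.0f};
        std::printf("||V|| = %f\n", magnitude(v));   // prints 3.000000
        return 0;
    }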