View Frustum Clipping


Don Fussell, Computer Science Department, The University of Texas at Austin (CS354 Computer Graphics)

A Simplified Graphics Pipeline

Application → Vertex batching & assembly → Triangle assembly → Triangle clipping → NDC to window space → Triangle rasterization → Fragment shading → Depth testing (against the depth buffer) → Color update → Framebuffer.

A Few More Steps Expanded

Application → Vertex batching & assembly → Vertex transformation → Lighting → Texture coordinate generation → Triangle assembly → User-defined clipping → View frustum clipping → Perspective divide → NDC to window space → Back-face culling → Triangle rasterization → Fragment shading → Depth testing (against the depth buffer) → Color update → Framebuffer.

Conceptual Vertex Transformation

glVertex* API commands supply object-space coordinates (x_o, y_o, z_o, w_o). The Modelview matrix transforms these to eye-space coordinates (x_e, y_e, z_e, w_e), which are clipped against any user-defined clip planes. The Projection matrix then produces clip-space coordinates (x_c, y_c, z_c, w_c), which are clipped against the view-frustum clip planes. Perspective division yields normalized device coordinates (x_n, y_n, z_n, 1/w_c), and the viewport and depth-range transformation produces window-space coordinates (x_w, y_w, z_w, 1/w_c), which go on to primitive rasterization.

OpenGL Perspective

glFrustum(left, right, bottom, top, near, far)

Simple Perspective

Consider a simple perspective with the center of projection (COP) at the origin, the near clipping plane at z = -n, and a 90-degree field of view determined by the planes x = ±z, y = ±z. (Figure: the view volume between the near plane, with corners (-1, -1, -n) and (1, 1, -n), and the far plane z = -f.)

Generalization

$$ N = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \alpha & \beta \\ 0 & 0 & -1 & 0 \end{bmatrix} $$

After perspective division, the point (x, y, z, 1) goes to

x' = -x/z,  y' = -y/z,  z' = -(α + β/z),

which projects orthogonally to the desired point regardless of α and β.

Picking α and β

If we pick

α = -(f + n)/(f - n),  β = -2nf/(f - n),

so that

$$ N = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2nf}{f-n} \\ 0 & 0 & -1 & 0 \end{bmatrix}, $$

then the near plane is mapped to z = -1, the far plane is mapped to z = +1, and the sides are mapped to x = ±1, y = ±1. Hence the new clipping volume is the default clipping volume.
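The mapping chosen above is easy to verify numerically. Below is a minimal C sketch (not from the original slides; the helper name project and the sample near/far values are illustrative assumptions) that applies N with the chosen α and β to points on the near and far planes and then performs the perspective division; the near plane should land at z' = -1 and the far plane at z' = +1.

    #include <stdio.h>

    /* Illustrative sketch: apply the matrix N (with the alpha and beta chosen
     * above) to a point (x, y, z, 1) and perform the perspective division.
     * The helper name and the sample near/far values are assumptions. */
    static void project(double x, double y, double z,
                        double n, double f, double out[3])
    {
        double alpha = -(f + n) / (f - n);
        double beta  = -2.0 * n * f / (f - n);

        /* N * (x, y, z, 1): only the z and w rows are non-trivial. */
        double zc = alpha * z + beta;
        double wc = -z;

        out[0] = x / wc;    /* x' = -x/z */
        out[1] = y / wc;    /* y' = -y/z */
        out[2] = zc / wc;   /* z' = -(alpha + beta/z) */
    }

    int main(void)
    {
        double n = 1.0, f = 10.0, p[3];

        project(0.5, -0.5, -n, n, f, p);            /* a point on the near plane */
        printf("near plane -> z' = %+.3f\n", p[2]); /* prints -1.000 */

        project(5.0, 5.0, -f, n, f, p);             /* a point on the far plane */
        printf("far plane  -> z' = %+.3f\n", p[2]); /* prints +1.000 */
        return 0;
    }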
General Perspective Frustum

Step 1: Shear to center the frustum on the -z axis:

x' = x + ((l + r)/(2n)) z,  y' = y + ((t + b)/(2n)) z,  z' = z

$$ H = \begin{bmatrix} 1 & 0 & \frac{l+r}{2n} & 0 \\ 0 & 1 & \frac{t+b}{2n} & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

Step 2: Scale so the boundary slopes are ±1:

x' = (2n/(r - l)) x,  y' = (2n/(t - b)) y,  z' = z

$$ S = \begin{bmatrix} \frac{2n}{r-l} & 0 & 0 & 0 \\ 0 & \frac{2n}{t-b} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

Normalization Transformation

(Figure: the original object in the original clipping volume is distorted into the new clipping volume, and the distorted object projects correctly.)

OpenGL Perspective Matrix

The normalization in glFrustum requires an initial shear to form a right viewing pyramid, followed by a scaling to get the normalized perspective volume. Finally, the perspective matrix leaves only a final orthogonal projection to be done:

P = N S H,

where H and S are the shear and scale defined above and N is the perspective matrix.

Normalization

Rather than derive a different projection matrix for each type of projection, we can convert all projections to orthogonal projections with the default view volume. This strategy allows us to use standard transformations in the pipeline and makes for efficient clipping.

Oblique Projections

The OpenGL projection functions cannot produce general parallel (oblique) projections. However, if we look at the example of the cube in the figure, it appears that the cube has been sheared:

Oblique Projection = Shear + Orthogonal Projection

General Shear

(Figure: top and side views of the shear angles θ and φ.)

Shear Matrix

xy shear (z values unchanged):

$$ H(\theta,\varphi) = \begin{bmatrix} 1 & 0 & -\cot\theta & 0 \\ 0 & 1 & -\cot\varphi & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

Projection matrix: P = M_orth H(θ,φ).  General case: P = M_orth S T H(θ,φ).

Equivalency

(Figure: the oblique projection of the original object and the orthogonal projection of the sheared object produce the same image.)

Effect on Clipping

The projection matrix P = STH transforms the original clipping volume to the default clipping volume. (Figure, top view: the direction of projection (DOP), near and far planes, and the original clipping volume map to the cube x = ±1, z = ±1; the distorted object projects correctly.)

Using Field of View

With glFrustum it is often difficult to get the desired view. gluPerspective(fovy, aspect, near, far) often provides a better interface, where aspect = w/h of the front (near) plane.

OpenGL Perspective

glFrustum allows for an asymmetric viewing frustum (gluPerspective does not).

Frustum Transform

Prototype:

glFrustum(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top, GLdouble near, GLdouble far)

Post-concatenates the frustum matrix

$$ \begin{bmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & \frac{-(f+n)}{f-n} & \frac{-2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{bmatrix} $$

glFrustum Matrix

Projection specification:

    glLoadIdentity();
    glFrustum(-4, +4, -3, +3, 5, 80);

so left = -4, right = 4, bottom = -3, top = 3, near = 5, far = 80 (the frustum extends from 5 to 80 along the -z axis). The frustum is symmetric left/right and top/bottom, so the (r+l)/(r-l) and (t+b)/(t-b) terms are zero:

$$ \begin{bmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & \frac{-(f+n)}{f-n} & \frac{-2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{bmatrix} = \begin{bmatrix} \frac{5}{4} & 0 & 0 & 0 \\ 0 & \frac{5}{3} & 0 & 0 \\ 0 & 0 & -\frac{85}{75} & -\frac{800}{75} \\ 0 & 0 & -1 & 0 \end{bmatrix} $$

glFrustum Example

Consider:

    glLoadIdentity();
    glFrustum(-30, 30, -20, 20, 1, 1000);

so left = -30, right = 30, bottom = -20, top = 20, near = 1, far = 1000. Again the frustum is symmetric left/right and top/bottom, so those terms are zero:

$$ \begin{bmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & \frac{-(f+n)}{f-n} & \frac{-2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{30} & 0 & 0 & 0 \\ 0 & \frac{1}{20} & 0 & 0 \\ 0 & 0 & -\frac{1001}{999} & -\frac{2000}{999} \\ 0 & 0 & -1 & 0 \end{bmatrix} $$
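As a concrete check of the matrix above, here is a short C sketch (not OpenGL's implementation; the helper names frustum and perspective are illustrative assumptions). It fills a row-major 4x4 array with the frustum matrix, reproduces the first example, and expresses a gluPerspective-style interface in terms of the same matrix by setting top = near·tan(fovy/2) and right = top·aspect.

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch: fill a row-major 4x4 matrix with the glFrustum
     * matrix shown above.  (OpenGL stores matrices column-major internally;
     * row-major is used here only so the printout matches the slide.) */
    static void frustum(double l, double r, double b, double t,
                        double n, double f, double m[4][4])
    {
        memset(m, 0, 16 * sizeof(double));
        m[0][0] = 2.0 * n / (r - l);   m[0][2] = (r + l) / (r - l);
        m[1][1] = 2.0 * n / (t - b);   m[1][2] = (t + b) / (t - b);
        m[2][2] = -(f + n) / (f - n);  m[2][3] = -2.0 * f * n / (f - n);
        m[3][2] = -1.0;
    }

    /* A gluPerspective-style interface reduces to a symmetric frustum. */
    static void perspective(double fovy_deg, double aspect,
                            double n, double f, double m[4][4])
    {
        double top   = n * tan(fovy_deg * 3.14159265358979323846 / 360.0);
        double right = top * aspect;
        frustum(-right, right, -top, top, n, f, m);
    }

    int main(void)
    {
        double m[4][4];
        frustum(-4, 4, -3, 3, 5, 80, m);    /* the first example above */
        for (int i = 0; i < 4; i++)
            printf("%9.4f %9.4f %9.4f %9.4f\n",
                   m[i][0], m[i][1], m[i][2], m[i][3]);
        /* Expect 1.25 and 1.6667 on the diagonal, -85/75 and -800/75 in row 3. */

        perspective(60.0, 4.0 / 3.0, 1.0, 1000.0, m);
        printf("fovy 60, aspect 4:3 -> m[1][1] = %.4f (cot 30 deg)\n", m[1][1]);
        return 0;
    }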
glOrtho and glFrustum

These OpenGL commands provide a parameterized transform mapping eye space into the "clip cube". glOrtho specifies an orthographic projection; glFrustum specifies a single-point perspective projection.

Handedness of Coordinate Systems

When
  - the object coordinate system is right-handed,
  - the modelview transform is generated from one or more of glTranslate, glRotate, and glScale (with positive scaling values),
  - the projection transform is loaded with glLoadIdentity followed by exactly one of glOrtho or glFrustum, and
  - the near value specified for glDepthRange is less than the far value,
then the eye coordinate system is right-handed, while the clip, NDC, and window coordinate systems are left-handed.

Conventional OpenGL Handedness

Right-handed: object space, eye space (in eye space, the eye is "looking down" the negative z axis).
Left-handed: clip space, normalized device coordinate (NDC) space, window space (positive depth is further from the viewer).

Affine Frustum Clip Equations

The idea is a [-1,+1]^3 view frustum cube: regions outside this cube get clipped, and regions inside the cube get rasterized.

Equations:
-1 ≤ x_c ≤ +1
-1 ≤ y_c ≤ +1
-1 ≤ z_c ≤ +1

Projective Frustum Clip Equations

This generalizes the clip cube as a projective space, using (x_c, y_c, z_c, w_c) clip-space coordinates.

Equations:
-w_c ≤ x_c ≤ +w_c
-w_c ≤ y_c ≤ +w_c
-w_c ≤ z_c ≤ +w_c

Notice that it is impossible for a point with w_c < 0 to survive clipping. Interpretation: w_c is the distance in front of the eye, so negative w_c values are "behind your head".

NDC Space Clip Cube

After the perspective divide, the region surviving clipping lies within the [-1,+1]^3 cube with corners (±1, ±1, ±1).

Clip Space Clip Cube

Before the perspective divide, the region surviving clipping satisfies -w ≤ x ≤ w, -w ≤ y ≤ w, -w ≤ z ≤ w, with w > 0. Its corners are (x_min/w, y_min/w, z_min/w) through (x_max/w, y_max/w, z_max/w), where x_min = y_min = z_min = -w and x_max = y_max = z_max = w.

Window Space Clip Cube

Assuming glViewport(x, y, w, h) and glDepthRange(zNear, zFar), the surviving region in window space is the box with corners (x, y, zNear) and (x+w, y+h, zFar), subject to the constraints w > 0, h > 0, 0 ≤ zNear ≤ 1, and 0 ≤ zFar ≤ 1.
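To tie the three clip cubes together, here is a minimal C sketch (the type and function names are illustrative assumptions, and a real implementation clips primitives against the six planes rather than merely classifying points). It tests a clip-space vertex against -w ≤ x, y, z ≤ w, performs the perspective divide into NDC, and applies the viewport and depth-range transform into window space.

    #include <stdio.h>

    typedef struct { double x, y, z, w; } Vec4;   /* illustrative clip-space point */

    /* Projective frustum test: the point survives only if -w <= x,y,z <= w
     * (which, as noted above, also rules out w <= 0). */
    static int inside_frustum(Vec4 c)
    {
        return c.w > 0.0 &&
               -c.w <= c.x && c.x <= c.w &&
               -c.w <= c.y && c.y <= c.w &&
               -c.w <= c.z && c.z <= c.w;
    }

    /* Perspective division: clip space -> the [-1,+1]^3 NDC cube. */
    static void to_ndc(Vec4 c, double ndc[3])
    {
        ndc[0] = c.x / c.w;
        ndc[1] = c.y / c.w;
        ndc[2] = c.z / c.w;
    }

    /* Viewport + depth range: NDC -> window space, assuming
     * glViewport(x, y, w, h) and glDepthRange(zNear, zFar). */
    static void to_window(const double ndc[3],
                          double x, double y, double w, double h,
                          double zNear, double zFar, double win[3])
    {
        win[0] = x + (ndc[0] + 1.0) * 0.5 * w;
        win[1] = y + (ndc[1] + 1.0) * 0.5 * h;
        win[2] = zNear + (ndc[2] + 1.0) * 0.5 * (zFar - zNear);
    }

    int main(void)
    {
        Vec4 c = { 1.0, -2.0, 3.0, 4.0 };   /* an arbitrary clip-space vertex */
        if (inside_frustum(c)) {
            double ndc[3], win[3];
            to_ndc(c, ndc);
            to_window(ndc, 0, 0, 640, 480, 0.0, 1.0, win);
            printf("NDC (%.2f, %.2f, %.2f) -> window (%.1f, %.1f, %.3f)\n",
                   ndc[0], ndc[1], ndc[2], win[0], win[1], win[2]);
        }
        return 0;
    }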