Texture Mapping

computer graphics • texture mapping • © 2006 Fabio Pellacini

why texture mapping?
• objects have spatially varying details
  – representing them as geometry is correct, but very expensive
• instead, use simple geometry, store the varying properties in images, and map the images onto the objects [Wolfe / SG97 Slide set]
• produces compelling results [Jeremy Birn]
• makes it easy to change object appearance [Praun et al., 2001]

mapping function
• surfaces are 2d domains
• determine a function that maps them to images
(figure: mapping function from surface to image)

mapping functions – projections
• map 3d surface points to 2d image coordinates: $f : \mathbb{R}^3 \to [0,1]^2$
• different types of projections
  – often corresponding to simple shapes
  – useful for simple objects [Wolfe / SG97 Slide set]
(figures: planar, cubical, cylindrical, and spherical projections)

projections
• planar projection along the xy plane of size (w, h)
  – use an affine transform to orient the plane differently
  – $f(p) = (p_x / w,\ p_y / h)$
• spherical projection of the unit sphere
  – consider the point in spherical coordinates: $f(p) = (\phi, \theta)$
• cylindrical projection of a unit cylinder of height h
  – consider the point in cylindrical coordinates, treating the caps separately
  – $f(p) = (\phi,\ p_y / h)$
• (a code sketch of these three mapping functions follows)
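The three projection formulas above translate almost directly into code. The following is a minimal sketch, not code from the slides: the Vec3 type and the function names are assumptions, and both angles are rescaled to [0,1] so the results can index an image directly.

```cpp
#include <cmath>

constexpr float kPi = 3.14159265358979f;

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };

// planar projection along the xy plane of size (w, h): f(p) = (px/w, py/h)
UV planarUV(const Vec3& p, float w, float h) {
    return { p.x / w, p.y / h };
}

// spherical projection of the unit sphere: f(p) = (phi, theta),
// with azimuth phi and polar angle theta rescaled to [0,1]
UV sphericalUV(const Vec3& p) {
    float phi   = std::atan2(p.z, p.x);   // [-pi, pi]
    float theta = std::acos(p.y);         // [0, pi], y is "up"
    return { (phi + kPi) / (2.0f * kPi), theta / kPi };
}

// cylindrical projection of a unit cylinder of height h: f(p) = (phi, py/h);
// the caps would be handled separately, e.g. with a planar projection
UV cylindricalUV(const Vec3& p, float h) {
    float phi = std::atan2(p.z, p.x);
    return { (phi + kPi) / (2.0f * kPi), p.y / h };
}
```

For the tiled lookup described next, the same coordinates would be wrapped with `u - std::floor(u)` instead of being clamped to [0,1].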
looking up texture values
• normal: do not repeat the texture
  – clamp the image coordinates to [0,1], then look up
• tiled: repeat the texture multiple times
  – take the mod of the image coordinates, then look up
(figure: normal vs. tiled lookup)

texture mapping artifacts
• tiling textures might introduce seams
  – discontinuities in the mapping function
  – change the texture to be “tileable” when possible
• mapping textures will introduce distortions
  – unavoidable artifacts: local differences in scale and rotation
(figure: distorted vs. undistorted mapping)

mapping function – explicit coordinates
• store texture coordinates on control points
• interpolate them as any other parameter
  – following the interpolation rule defined by the surface type
• parametric surfaces: can use the parameters directly
• known as uv mapping
(figures: uv mapping on subdivision surfaces at levels 0, 1, 2; uv mapping vs. projection; uv mapping of parametric surfaces [Wolfe / SG97 Slide set])

uv mapping polygon meshes
• break the model up into a single texture [Piponi et al., 2000] [© Discreet]

interpolating uv coordinates on meshes
• pay attention when rasterizing triangles
  – screen-space linear interpolation is incorrect; perspective-correct interpolation is needed
  – for raytracing, just use barycentric coordinates
(figure: texture, linear interpolation, perspective interpolation [MIT OpenCourseware]; the same applies to interpolated colors)

painting textures on models
• if painting is required, paint directly on surfaces
  – the system determines the inverse mapping to update the image
  – seams and distortions are still present, but the user does not notice them

texture magnification
• linearly interpolate the closest pixels in the texture [MIT OpenCourseware]

texture minification
• compute the average of the texture pixels projected onto each view pixel [MIT OpenCourseware]
• remember: point sampling introduces artifacts
  – need the average of the texture below a pixel

mip-mapping
• approximate algorithm for computing such filters
• store the texture at multiple resolutions
• look up the appropriate image based on the texture's projected size [MIT OpenCourseware]

3d solid texturing
• define a 3d field of values, indexed by the position P
  – as an in-memory array: too much memory
  – procedurally: hard to define
• often used to add noise-like details to 2d images [Wolfe / SG97 Slide set]

types of mapping

texture mapping material parameters
• diffuse coefficient
• specular coefficient

displacement mapping
• varies the surface positions, and thus the normals
  – requires fine tessellation of the object geometry
• update each position by displacing the point along its normal: $P_d = P + hN$
• recompute the normals by evaluating derivatives
  – no closed-form solution: do it numerically
  – $N_d \propto \dfrac{\partial P_d}{\partial u} \times \dfrac{\partial P_d}{\partial v} \approx \dfrac{\Delta P_d}{\Delta u} \times \dfrac{\Delta P_d}{\Delta v}$

bump mapping
• varies the surface normals only
  – apply the normal perturbation without updating the positions
• simple example: bump mapping the xy plane
  – $P_d(u,v) = P(u,v) + h(u,v)\,N(u,v) = x\,u + y\,v + h(u,v)\,z$
  – $N_d \propto \dfrac{\partial P_d}{\partial u} \times \dfrac{\partial P_d}{\partial v} = \left(x + \dfrac{\partial h}{\partial u}\,z\right) \times \left(y + \dfrac{\partial h}{\partial v}\,z\right) = z - \dfrac{\partial h}{\partial u}\,x - \dfrac{\partial h}{\partial v}\,y$
• (a code sketch of this perturbed normal follows)
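To make the derivation above concrete, here is a minimal sketch of the perturbed normal for the xy-plane example, using the same numerical differencing suggested for displacement mapping. The HeightMap alias and the bumpedNormal name are illustrative stand-ins for a height-map texture lookup, not code from the slides.

```cpp
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// h(u, v): any scalar height field; a std::function stands in for the
// height-map texture lookup
using HeightMap = std::function<float(float, float)>;

// perturbed normal of the bump-mapped xy plane:
// N_d = z - (dh/du) x - (dh/dv) y, normalized
Vec3 bumpedNormal(const HeightMap& h, float u, float v,
                  float du = 1e-3f, float dv = 1e-3f) {
    // central differences approximate dh/du and dh/dv
    float hu = (h(u + du, v) - h(u - du, v)) / (2.0f * du);
    float hv = (h(u, v + dv) - h(u, v - dv)) / (2.0f * dv);
    Vec3 n = { -hu, -hv, 1.0f };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

For example, `bumpedNormal([](float u, float v) { return 0.05f * std::sin(40.0f * u); }, 0.3f, 0.5f)` tilts the normal against the slope of a sine-ripple height field; displacement mapping would additionally move the point itself by $hN$.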
bump vs. displacement mapping
(figures: bump-mapped vs. displacement-mapped object)

combining map types
• combine multiple maps to achieve realistic effects

lighting effects using texture mapping

shadow mapping
• the graphics pipeline does not allow shadow queries
• we can use texturing and a multipass algorithm [NVIDIA / Everitt et al.]
(figure: projecting a color texture vs. “projecting” a depth texture)

shadow mapping algorithm
• pass 1: render the scene from the light view
• pass 1: copy the depth buffer into a new texture
• pass 2: render the scene from the camera view
• pass 2: transform each pixel to light space
• pass 2: compare its depth to the stored buffer depth
• pass 2: if the current depth is greater than the buffer depth, the pixel is in shadow
• (a minimal sketch of this pass-2 depth test appears at the end of these notes)
(figures: camera view, light view, shadow buffer; camera view, light distance, projected shadow buffer [NVIDIA / Everitt et al.])

shadow mapping limitations
• not enough resolution: blocky shadows
  – pixels in the shadow buffer are too large when projected [Fernando et al., 2002]
• biasing: surfaces shadow themselves
  – remember the epsilon in raytracing
  – made much worse by the resolution limitation

environment mapping
• the graphics pipeline does not allow reflections
• we can use texturing and a multipass algorithm [Wolfe / SG97 Slide set]

environment mapping algorithm
• pass 1: render the scene 6 times from the object center
• pass 1: store the images onto a cube
• pass 2: render the scene from the camera view
• pass 2: use the cube projection to look up values
• a variation of this also works for refraction

environment map limitations
• incorrect reflections
  – objects appear in incorrect positions: better for distant objects
  – “rays” go through objects
• inefficient: need one map for each object

lighting effects take-home message
• the pipeline is not well suited to lighting computations
  – the algorithms are complex to implement and not robust
• lots of tricks and special cases
  – but fast
• interactive graphics: use pipeline algorithms
• high-quality graphics: use the pipeline for the view, raytracing for lighting

texturing demos
• OpenGL tutor: texture.exe
• NVidia samples: bumpy_shiny_patch.exe, hw_shadowmap_simple.exe, simple_soft_shadows.exe
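Returning to the shadow-mapping algorithm above, the following is a minimal CPU-side sketch of the pass-2 depth test. Mat4, transform, and the DepthBuffer lookup are illustrative stand-ins for the light's view-projection transform and depth texture; a real implementation performs this per fragment on the GPU. The bias parameter is the raytracing-style epsilon mentioned under the limitations.

```cpp
#include <array>
#include <functional>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// minimal row-major 4x4 matrix, assumed to hold the light's view-projection
struct Mat4 { std::array<std::array<float, 4>, 4> m; };

// transform a point in homogeneous coordinates (w = 1)
Vec4 transform(const Mat4& M, const Vec3& p) {
    auto row = [&](int r) {
        return M.m[r][0] * p.x + M.m[r][1] * p.y + M.m[r][2] * p.z + M.m[r][3];
    };
    return { row(0), row(1), row(2), row(3) };
}

// shadowDepth(u, v): depth stored in the shadow buffer at texture coords (u, v)
using DepthBuffer = std::function<float(float, float)>;

bool inShadow(const Vec3& worldPos, const Mat4& lightViewProj,
              const DepthBuffer& shadowDepth, float bias = 1e-3f) {
    Vec4 q = transform(lightViewProj, worldPos);  // pass 2: to light space
    float u = 0.5f * (q.x / q.w + 1.0f);          // light NDC -> [0,1] coords
    float v = 0.5f * (q.y / q.w + 1.0f);
    float current = q.z / q.w;                    // depth as seen by the light
    // in shadow if the point lies farther from the light than the surface
    // recorded in the shadow buffer; the bias avoids self-shadowing
    return current > shadowDepth(u, v) + bias;
}
```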