Texture Mapping: Modeling an Orange


Modeling an Orange

- Problem: model an orange (the fruit).
- Start with an orange-colored sphere: too smooth.
- Replace the sphere with a more complex shape: it takes too many polygons to model all the dimples.
- Solution: retain the simple model, but add the detail as part of the rendering process.
- Surface texture: we want to model the surface features of the object, not just its overall shape. So take an image of a real orange's surface, scan it, and "paste" it onto a simple geometric model; the texture image is used to alter the color of each point on the surface.
- This creates the ILLUSION of a more complex shape, and it is efficient since it does not increase the geometric complexity of the model.

Recall: Graphics Rendering Pipeline

- Application (CPU) -> Geometry -> Rasterizer (GPU): a 3D scene comes in, a 2D image goes out.
- The rasterizer stage does per-pixel operations: it turns geometry into visible pixels on screen, adds textures, and resolves visibility (Z-buffer).

Types of Texture Mapping

- Texture mapping: uses images to fill the inside of polygons.
- Bump mapping: alters surface normals.
- Environment mapping: uses the environment as a texture.

Texture Mapping: Fundamentals

- A texture is simply an image with coordinates (s, t), running from (0, 0) to (1, 1).
- Each part of the surface maps to some part of the texture, and the texture value may be used to set or modify the pixel value.
- Example: in a 256x256 texture, the interpolated coordinates (0.78, 0.67) address the texel (199.68, 171.52).

2D Texture Mapping

- How do we map surface coordinates to texture coordinates?
  1. Set the coordinates manually for each vertex.
  2. Compute the coordinates automatically, with an algorithm that assigns them.
- Alternate texture coordinates can be chosen for the same vertices, producing different appearances.
- What about the surface interior? Linear interpolation?

Hands-On Session: the OpenGL Tutor Programs

- Go to the class website, click on the Links section, download and compile the OpenGL tutors, and run the texture tutor.
- Play with the parameters of glTexCoord: enlarge and shrink the texture map, and observe the changes in the texture of the screen object.
- Play with the parameters of glVertex: change the shape of the screen object, and observe the changes in its appearance.
- By default, the texture is projected orthogonally onto the xy plane, on a 1x1 square, and repeats.

Naive Texture Mapping

- In the rasterizer, for each pixel:
  - interpolate s and t along the triangle edges and across spans;
  - look up the nearest texel in the texture map;
  - color the pixel according to the texel color (possibly modulated by lighting calculations).

Interpolating Coordinates

For a triangle with screen vertices (x_1, y_1), (x_2, y_2), (x_3, y_3) carrying texture coordinates (s_1, t_1), (s_2, t_2), (s_3, t_3), first interpolate along the two edges that bound the current scanline y:

$$ s_R = \left(1 - \frac{y - y_1}{y_3 - y_1}\right) s_1 + \frac{y - y_1}{y_3 - y_1}\, s_3, \qquad s_L = \left(1 - \frac{y - y_2}{y_3 - y_2}\right) s_2 + \frac{y - y_2}{y_3 - y_2}\, s_3 $$

then interpolate across the span between x_L and x_R:

$$ s = \left(1 - \frac{x - x_L}{x_R - x_L}\right) s_L + \frac{x - x_L}{x_R - x_L}\, s_R $$

with t handled identically.

Interpolation: What Goes Wrong?

- This texture mapping is done in screen space, as the polygon is rasterized; compare the texture source, what we get, and what we want.
- Check out http://graphics.lcs.mit.edu/classes/6.837/F98/Lecture21/Slide06.html. What artifacts do you see?
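To make the two-step formula concrete, here is a minimal, self-contained C sketch of the naive screen-space interpolation (the function names and the test triangle are my own illustration, not course code; it assumes non-degenerate edges that the scanline actually crosses):

```c
#include <stdio.h>

/* Linear blend: (1 - f)*a + f*b */
static float lerp(float a, float b, float f) { return a + f * (b - a); }

/* Screen-space (affine) interpolation of a texture coordinate s inside a
 * triangle with vertices (x1,y1,s1), (x2,y2,s2), (x3,y3,s3): first
 * interpolate along the two edges bounding the scanline y, then across
 * the span at x, exactly as in the equations above. */
float interp_s_affine(float x, float y,
                      float x1, float y1, float s1,
                      float x2, float y2, float s2,
                      float x3, float y3, float s3)
{
    float fR = (y - y1) / (y3 - y1);   /* parameter along edge 1 -> 3 */
    float fL = (y - y2) / (y3 - y2);   /* parameter along edge 2 -> 3 */
    float sR = lerp(s1, s3, fR);
    float sL = lerp(s2, s3, fL);
    float xR = lerp(x1, x3, fR);
    float xL = lerp(x2, x3, fL);
    return lerp(sL, sR, (x - xL) / (xR - xL));  /* across the span */
}

int main(void)
{
    /* Sample an interior point of a small triangle with s in [0, 1]. */
    printf("%f\n", interp_s_affine(0.5f, 0.5f,
                                   0.0f, 0.0f, 0.0f,    /* v1 */
                                   1.0f, 0.0f, 1.0f,    /* v2 */
                                   0.5f, 1.0f, 0.5f));  /* v3 */
    return 0;
}
```

This prints 0.5 for the chosen interior point, and it is exactly this screen-space scheme that produces the perspective artifacts examined next.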
Recall: Perspective Projection

A world point (x_w, y_w, z_w) projects, for a camera at the origin with focal length f = 1, to the image point

$$ I = \left(-\frac{x_w}{z_w},\; -\frac{y_w}{z_w},\; 1\right) $$

Visualizing the Problem

- Uniform steps on the image plane do not correspond to uniform steps along the edge in world space.
- Perspective-correct interpolation: translate the step size in the image plane correctly into a step size on the object.

Linear Interpolation in Screen Space

Interpolating with parameter alpha in screen space:

$$ p(\alpha) = p_1 + \alpha\,(p_2 - p_1) = \frac{x_1}{z_1} + \alpha\left(\frac{x_2}{z_2} - \frac{x_1}{z_1}\right) $$

Interpolating with parameter beta in world 3-space:

$$ \begin{pmatrix} x \\ z \end{pmatrix} = \begin{pmatrix} x_1 \\ z_1 \end{pmatrix} + \beta\left(\begin{pmatrix} x_2 \\ z_2 \end{pmatrix} - \begin{pmatrix} x_1 \\ z_1 \end{pmatrix}\right) \quad\Longrightarrow\quad p(\beta) = \frac{x_1 + \beta\,(x_2 - x_1)}{z_1 + \beta\,(z_2 - z_1)} $$

Perspective-Correct Interpolation

We need a mapping from alpha values to beta values. Equating the two expressions for p:

$$ \frac{x_1}{z_1} + \alpha\left(\frac{x_2}{z_2} - \frac{x_1}{z_1}\right) = \frac{x_1 + \beta\,(x_2 - x_1)}{z_1 + \beta\,(z_2 - z_1)} $$

and solving for beta in terms of alpha gives

$$ \beta = \frac{\alpha\, z_1}{z_2 + \alpha\,(z_1 - z_2)} $$

At this point in the pipeline (after projection) we no longer have z, but we do have w_1 = 1/z_1 and w_2 = 1/z_2, so:

$$ \beta = \frac{\alpha\,\frac{1}{w_1}}{\frac{1}{w_2} + \alpha\left(\frac{1}{w_1} - \frac{1}{w_2}\right)} = \frac{\alpha\, w_2}{w_1 + \alpha\,(w_2 - w_1)} $$

We can now use this expression for beta to interpolate the texture indices (s, t). Substituting into the parameter interpolation:

$$ s = s_1 + \beta\,(s_2 - s_1) = s_1 + \frac{\alpha\, w_2}{w_1 + \alpha\,(w_2 - w_1)}\,(s_2 - s_1) = \frac{s_1 w_1 + \alpha\,(s_2 w_2 - s_1 w_1)}{w_1 + \alpha\,(w_2 - w_1)} $$

In other words: interpolate s·w and w linearly in screen space, then divide. A demo of perspective-correct texturing is at http://groups.csail.mit.edu/graphics/classes/6.837/F98/Lecture21/Slide14.html.

Texture and Aliasing

- Textures are subject to aliasing: a polygon point maps into the texture image, essentially sampling the texture at a point.
- If a textured object is far away, one screen pixel may map to many texture pixels; the problem is how to combine them.
- Aliasing occurs whenever the screen resolution differs from the texture resolution:
  - magnification aliasing: screen resolution finer than texture resolution (multiple pixels per texel);
  - minification aliasing: screen resolution coarser than texture resolution (multiple texels per pixel).

Nearest Filtering

- Choose the texel nearest the pixel's center, for both magnification and minification.
- Under minification it is expensive to compute which texels each pixel covers.

Bilinear Filtering

- Take a weighted average of the 2x2 texels surrounding the pixel's sample point, again for both magnification and minification.
- (Figures: texel RGB values blended into pixel colors; bilinear vs. nearest comparison at the bottom of the screen.)
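As a checkpoint on the derivation above before moving on to MIP-maps, here is a small C sketch of perspective-correct interpolation (the names are mine, not from the slides; w is 1/z at each endpoint, as in the derivation):

```c
#include <stdio.h>

/* Map a screen-space parameter alpha to the object-space parameter beta:
 *   beta = alpha*w2 / (w1 + alpha*(w2 - w1))                             */
static float beta_from_alpha(float alpha, float w1, float w2)
{
    return alpha * w2 / (w1 + alpha * (w2 - w1));
}

/* Perspective-correct interpolation of a texture coordinate s:
 * equivalently, interpolate s*w and w linearly in screen space, divide. */
float interp_s_perspective(float alpha, float s1, float w1,
                                        float s2, float w2)
{
    float num   = s1 * w1 + alpha * (s2 * w2 - s1 * w1); /* lerp of s*w */
    float denom = w1      + alpha * (w2      - w1);      /* lerp of w   */
    return num / denom;
}

int main(void)
{
    /* Endpoint A at z = 1 (w = 1), endpoint B at z = 4 (w = 0.25). */
    float w1 = 1.0f, w2 = 0.25f;
    for (float a = 0.0f; a <= 1.001f; a += 0.25f) {
        printf("alpha=%.2f  beta=%.3f  s=%.3f\n",
               a, beta_from_alpha(a, w1, w2),
               interp_s_perspective(a, 0.0f, w1, 1.0f, w2));
    }
    return 0;
}
```

For endpoints at z = 1 and z = 4, the screen-space midpoint (alpha = 0.5) maps to beta = 0.2, only a fifth of the way along the edge in world space: exactly the non-uniform stepping the slides visualize.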
What Is Going On?

- Texture filtering so far: for a given texture coordinate, pick the nearest texel or average the nearest 4 texels.
- Problem: for distant areas of the surface, the texture-space area covered by a pixel is larger than 4 texels across.
- Solution: use scaled-down versions of the texture, also known as MIP-maps or MIP levels.

MIP-Mapping (Pre-Filtering)

- MIP = multum in parvo ("many things in a small place").
- A MIP-map is a chain of progressively lower-resolution versions of a texture: for a texture of 2^n x 2^n pixels, compute n smaller textures, down to 1x1.
- Use the full resolution on near objects, intermediate levels on mid-distance objects, and the smallest levels on far objects.

Generating MIP-maps

- Generating a MIP-map from a texture is easy: for each texel in level i, average the values of the 2x2 corresponding texels in level i-1.
- If a texture requires n bytes of storage, how much storage will its MIP-map require? Answer: 4n/3, since n + n/4 + n/16 + ... -> 4n/3.

Visualizing MIP-Maps

- A pyramid of progressively lower-resolution images (256x256, 128x128, 64x64, 32x32, ...), each filtered down from the level below by averaging 2x2 texels.
- Each MIP-map level is a pre-blurred version of multiple texels: a texel at level d covers 2^d x 2^d original texels.
- Which level to use? Index into the MIP-map with 3 coordinates: s, t, d (depth).

Using MIP-Maps

- Choose the level based on distance from the camera: use the resolution that keeps the pixel-to-texel ratio close to 1.
- (Figure: the red box is the pixel area, the green box the nearest texels; nearest filtering at each MIP level yields a white pixel, while linear filtering on a lower-resolution level yields a shade of grey.)
- When rendering:
  - figure out how large (or small) a projected pixel is, in texels of the original map;
  - find the MIP-map level whose texels average approximately that many original texels;
  - interpolate the values of the four nearest texels (bilinear interpolation).

Trilinear Filtering

- Even better: the pixel's coverage will likely fall between the coverage of texels in two adjacent MIP-map levels.
- Find the pixel's value in each of the two levels using two bilinear interpolations, then use a third interpolation to blend these two values, based on where the pixel's coverage falls between the two levels.
- This is called trilinear filtering. (Figure: texel RGB values at the screen's resolution and at the next lower-resolution level.) The sketch below puts all the pieces together.
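Here is a self-contained C sketch tying the pieces together: MIP-level generation by 2x2 averaging, bilinear lookup, and trilinear filtering. The array layout, names, and single-channel texture are my own simplifications, not course code; a real implementation stores each level at its own resolution and handles RGB channels:

```c
#include <stdio.h>
#include <math.h>

#define N 3            /* base level is 2^N x 2^N = 8x8 */
#define SIZE (1 << N)

/* levels[0] is the original texture; levels[d] uses only SIZE>>d texels
 * per side (over-allocated at full size for simplicity). */
static float levels[N + 1][SIZE][SIZE];

/* Build level d from level d-1 by averaging each 2x2 block of texels. */
static void build_level(int d)
{
    int n = SIZE >> d;
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)
            levels[d][y][x] = 0.25f *
                (levels[d-1][2*y  ][2*x] + levels[d-1][2*y  ][2*x+1] +
                 levels[d-1][2*y+1][2*x] + levels[d-1][2*y+1][2*x+1]);
}

static float lerp(float a, float b, float f) { return a + f * (b - a); }

/* Bilinear fetch at (s,t) in [0,1] from level d; texel centers sit at
 * (i + 0.5)/n, and lookups are clamped to the texture edge. */
static float bilinear(int d, float s, float t)
{
    int n = SIZE >> d;
    float fx = s * n - 0.5f, fy = t * n - 0.5f;
    int x = (int)floorf(fx), y = (int)floorf(fy);
    float ax = fx - x, ay = fy - y;
    int x0 = x < 0 ? 0 : (x >= n ? n - 1 : x);
    int x1 = x + 1 < 0 ? 0 : (x + 1 >= n ? n - 1 : x + 1);
    int y0 = y < 0 ? 0 : (y >= n ? n - 1 : y);
    int y1 = y + 1 < 0 ? 0 : (y + 1 >= n ? n - 1 : y + 1);
    return lerp(lerp(levels[d][y0][x0], levels[d][y0][x1], ax),
                lerp(levels[d][y1][x0], levels[d][y1][x1], ax), ay);
}

/* Trilinear sample: two bilinear lookups in adjacent MIP levels, blended
 * by the fractional part of the desired level d (s, t, d indexing). */
float trilinear(float s, float t, float d)
{
    if (d <= 0.0f) return bilinear(0, s, t);
    if (d >= (float)N) return bilinear(N, s, t);
    int d0 = (int)d;
    return lerp(bilinear(d0, s, t), bilinear(d0 + 1, s, t), d - d0);
}

int main(void)
{
    /* Checkerboard base texture, then the MIP chain. */
    for (int y = 0; y < SIZE; y++)
        for (int x = 0; x < SIZE; x++)
            levels[0][y][x] = (float)((x ^ y) & 1);
    for (int d = 1; d <= N; d++) build_level(d);

    /* Far away (large d) the checkerboard pre-filters toward grey. */
    printf("d=0: %.3f  d=1.5: %.3f  d=3: %.3f\n",
           trilinear(0.3f, 0.7f, 0.0f),
           trilinear(0.3f, 0.7f, 1.5f),
           trilinear(0.3f, 0.7f, 3.0f));
    return 0;
}
```

With the checkerboard base texture, the sampled value settles at 0.5 (grey) as d grows, matching the pre-filtering intuition: distant texture detail is averaged away ahead of time instead of aliasing.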