Bump and Normal Mapping; Procedural Textures


EECS 487: Interactive Computer Graphics
Lecture 26:
• Bump mapping
• Solid and procedural textures

Bump Mapping

A 2D texture map looks unrealistically smooth across different materials, especially at low viewing angles. Bump mapping fools the human viewer:
• perception of shape is determined by shading, which is determined by the surface normal
• use a texture map to perturb the surface normal per fragment
• the geometry of the surface is not actually altered
• shade each fragment using the perturbed normal as if the surface were a different shape

[Figures: sphere w/ diffuse texture; swirly bump map; sphere w/ diffuse texture & bump map]

Computing the Perturbed Normal [Blinn]

Treat the texture as a single-valued height function b(u, v) (a height map):
• a grayscale image stores height: black, high area; white, low (or vice versa)
• the difference in heights determines how much to perturb n in the (u, v) directions of a parametric surface

For a smooth surface p(u, v) with tangents p_u = ∂p/∂u and p_v = ∂p/∂v, the normal is n = p_u × p_v. The wrinkled surface is

    p'(u, v) = p(u, v) + b(u, v) n    ⇐ perturbed surface

Estimate the height-map derivatives by central differences over texels (s, t):
• ∂b/∂u = b_u ≈ (h[s+1, t] − h[s−1, t]) / ds
• ∂b/∂v = b_v ≈ (h[s, t+1] − h[s, t−1]) / dt

Compute a new, perturbed normal from (b_u, b_v):

    p'_u = ∂(p + b n)/∂u = p_u + b_u n + b n_u ≈ p_u + b_u n
    p'_v = ∂(p + b n)/∂v = p_v + b_v n + b n_v ≈ p_v + b_v n

(the b n_u and b n_v terms are dropped, assuming the height b is negligibly small), and then

    n' = p'_u × p'_v

Bump Map vs. Normal Map

Computing n' requires the height samples from 4 neighbors:
• each sample by itself doesn't perturb the normal
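The height-map-to-perturbed-normal computation above can be sketched numerically. This is a minimal NumPy sketch; the function name and array conventions are illustrative, not from any particular API:

```python
import numpy as np

def perturbed_normal(h, s, t, p_u, p_v, ds=1.0, dt=1.0):
    """Blinn bump mapping sketch: perturb the surface normal using
    central differences of a grayscale height map h (2D array)."""
    n = np.cross(p_u, p_v)                    # unperturbed normal n = p_u x p_v
    b_u = (h[s + 1, t] - h[s - 1, t]) / ds    # db/du by central difference
    b_v = (h[s, t + 1] - h[s, t - 1]) / dt    # db/dv by central difference
    # n' = n + b_u (n x p_v) - b_v (n x p_u), then renormalize
    n_p = n + b_u * np.cross(n, p_v) - b_v * np.cross(n, p_u)
    return n_p / np.linalg.norm(n_p)
```

Note that a flat (constant) height map leaves the normal unperturbed, and a height ramp tilts the normal away from the uphill direction, exactly as the all-white-equals-all-black observation predicts.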
Expanding the normal of the perturbed surface:

    n' = p'_u × p'_v
       = (p_u + b_u n) × (p_v + b_v n)
       = p_u × p_v + b_u (n × p_v) + b_v (p_u × n) + b_u b_v (n × n)
       = n + b_u (n × p_v) − b_v (n × p_u)

Recall: a × (k b + c) = k (a × b) + (a × c); a × b = −b × a; and n × n = 0.

• an all-white height map renders exactly the same as an all-black height map (only height differences matter)

Normal Map

Instead of encoding only the height of a point, a normal map encodes the normal of the desired surface, in the tangent space of the surface at the point. Interpret the RGB values per texel as the perturbed normal, not a height value:

    (n_x, n_y, n_z) = (r, g, b)

A normal map can be obtained from:
• a high-resolution 3D model
• photos (http://zarria.net/nrmphoto/nrmphoto.html)
• a height map (with more complex offline computation of the perturbed normals)
• a filtered color texture (Photoshop, Blender, Gimp, etc., with a plugin) [Hastings-Trew]

Normal Map Creation in Gimp [Hanrahan09]
• on Mac OS X, run Gimp-2.6.11 (not 2.8.4): load an RGB file, then select Filters→Map→Normalmap
• for Windows, see http://code.google.com/p/gimp-normalmap/

Normal Mapping: Complications

1. Normalized normals range over [−1, 1], but RGB values range over [0, 1]; convert normals for storage by n' = (n + 1)/2. Values in the normal map must be converted back before use: n = n'·2 − 1.
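The [−1, 1] ↔ [0, 1] range conversion in complication 1 can be sketched as a pair of one-liners (hypothetical helper names; shaders typically do this inline):

```python
import numpy as np

def encode_normal(n):
    """Pack a unit normal with components in [-1, 1] into RGB in [0, 1]."""
    return (n + 1.0) / 2.0

def decode_normal(rgb):
    """Recover the tangent-space normal from the stored RGB values."""
    return rgb * 2.0 - 1.0
```

For example, the "straight up" tangent-space normal (0, 0, 1) encodes to (0.5, 0.5, 1.0), which is why flat regions of normal maps look uniformly light blue.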
2. Normals are in object space, so the normals must be transformed whenever the object is transformed. Instead, most implementations store normals in tangent space, but then the light and view vectors must be transformed to tangent space. [Premecz]

Tangent Space

Tangent space is a coordinate system attached to the local surface, with basis vectors comprising the normal vector (N), perpendicular to the surface, and two vectors tangent to the surface: the tangent (T) and the bitangent (B). (The bitangent is sometimes called the binormal, i.e., a second normal; that term is applicable to curves but not to surfaces — see the lecture on the Frenet frame.)

We want T and B to span our texture. For a triangle with vertices p_0, p_1, p_2 and texture coordinates [s_0, t_0], [s_1, t_1], [s_2, t_2], a point is

    p(s, t) = p_0 + (s − s_0) T + (t − t_0) B

so

    p_1 − p_0 = (s_1 − s_0) T + (t_1 − t_0) B
    p_2 − p_0 = (s_2 − s_0) T + (t_2 − t_0) B

Let Δs_i = s_i − s_0 and Δt_i = t_i − t_0; then

    [ p_1 − p_0 ]   [ Δs_1  Δt_1 ] [ T ]
    [ p_2 − p_0 ] = [ Δs_2  Δt_2 ] [ B ]

and, inverting,

    [ T ]             1             [  Δt_2  −Δt_1 ] [ p_1 − p_0 ]
    [ B ]  =  ―――――――――――――――――――  [ −Δs_2   Δs_1 ] [ p_2 − p_0 ]
              Δs_1 Δt_2 − Δs_2 Δt_1

In texture space T, B, N are orthonormal, but not necessarily so in object space; use Gram-Schmidt orthogonalization: B' = N × T, T' = B' × N. See also http://www.terathon.com/code/tangent.html [Premecz, Lengyel]

Tangent to Object Space

Given orthonormal 3D vectors T', B', N, the matrix [T' B' N] transforms from tangent space to object space — the TBN matrix ([T' B' N] matrix in Direct3D). [We assume TBN orthonormal in the figure and in subsequent slides, and we drop the "prime" sign.] [Lengyel, Dreijer Madsen]

Tangent-Space Lighting

What we have:
• per-texel n in tangent space, stored in the normal map
• T, N in object space
• l, v in eye space

Lighting Computation

In the app:
• load the normal map into its own texture unit
• compute T per triangle and assign it to all three vertices
• average out the T of shared vertices for a curved surface
• pass the per-triangle N and T to the vertex shader

In the vertex shader:
• transform N and T from object to eye space (how?)
• compute B and the orthonormal eye-to-tangent matrix (how?)
• transform l and v to tangent space, normalize them, and pass them, interpolated, to the fragment shader
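One way to answer the "(how?)"s for T and B: a per-triangle NumPy sketch of the Δs/Δt solve and the Gram-Schmidt step described above (function names are illustrative):

```python
import numpy as np

def tangent_bitangent(p0, p1, p2, uv0, uv1, uv2):
    """Solve [p1-p0; p2-p0] = [[ds1, dt1], [ds2, dt2]] [T; B] for T, B."""
    e1, e2 = p1 - p0, p2 - p0
    ds1, dt1 = uv1 - uv0
    ds2, dt2 = uv2 - uv0
    inv_det = 1.0 / (ds1 * dt2 - ds2 * dt1)
    T = inv_det * (dt2 * e1 - dt1 * e2)
    B = inv_det * (-ds2 * e1 + ds1 * e2)
    return T, B

def orthonormal_tbn(T, N):
    """Gram-Schmidt as on the slide: B' = N x T, then T' = B' x N."""
    N = N / np.linalg.norm(N)
    B = np.cross(N, T)
    B = B / np.linalg.norm(B)
    T = np.cross(B, N)
    return np.column_stack((T, B, N))   # tangent-to-object TBN matrix
```

For an axis-aligned triangle whose UVs match its positions, this recovers T = x-axis and B = y-axis, and the resulting TBN matrix is the identity, as expected.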
In the fragment shader:
• sample the normal map per texel (n)
• compute lighting in tangent space using the normalized n, l, and v [Lengyel, Dreijer Madsen]

In summary, to compute lighting with the normal map in tangent space:
1. transform l and v to tangent space
2. sample the per-texel n from the normal map
3. compute lighting in tangent space

Bump/Normal Mapping Limitations
• smooth silhouettes
• surfaces still look smooth when viewed at low viewing angles
• no self-shadowing or self-occlusion

Parallax/Relief Mapping

Parallax mapping and relief mapping* use the height field and the view vector to compute which "bump" is visible. How do we avoid computing the exact intersection of the view vector and the height field?
• parallax mapping: an offset approximation, bad at grazing angles; offset limiting
• relief sampling: a linear search to the first point inside the surface, then a binary search between this point and the last point outside the surface

* a.k.a. parallax occlusion mapping or steep parallax mapping [O'Brien08, RTR3]

Displacement Mapping

Interpret the texel as an offset vector and actually displace the fragments:

    p' = p + h(p) n

• correct silhouettes and shadows
• must be done before visibility determination
• complicates collision detection, e.g., if done in the vertex shader

Vertex Texture Fetch

Traditionally, the only texture-related computation during vertex shading is computing texture coordinates per vertex. With Shader Model 3.0, the vertex shader can use texture maps to process vertices, e.g., for displacement mapping, fluid simulation, and particle systems. [Marschner08]

Solid Textures

Solid textures [Wolfe]:
• create a 3D parameterization (s, t, r) for the texture
• map this onto the object
• the easiest parameterization is to use the model-space coordinates to index into a 3D texture: (s, t, r) = (x, y, z)
• like "carving" the object out of the material
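Looking back at relief sampling: the two-phase search (linear search to the first point inside the surface, then binary refinement) can be sketched in texture space as follows. The `height_at` callback, the depth parameterization, and the step counts are illustrative assumptions, not from any particular engine:

```python
import numpy as np

def relief_intersect(height_at, tc0, d_xy, n_lin=32, n_bin=6):
    """Relief sampling sketch. The ray enters at depth 1 (top of the
    height volume) and reaches depth 0 at t = 1; height_at(x, y)
    returns surface heights in [0, 1]."""
    dt = 1.0 / n_lin
    prev_t, t = 0.0, 0.0
    # linear search: march until the ray depth drops below the surface height
    while t <= 1.0 and 1.0 - t > height_at(*(tc0 + t * d_xy)):
        prev_t, t = t, t + dt
    # binary search between the last point outside and the first point inside
    lo, hi = prev_t, t
    for _ in range(n_bin):
        mid = 0.5 * (lo + hi)
        if 1.0 - mid > height_at(*(tc0 + mid * d_xy)):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a constant height field of 0.5, the ray must hit at t = 0.5; the linear pass brackets that point and the binary pass tightens the bracket, which is exactly why relief sampling avoids the grazing-angle artifacts of one-shot parallax offsets.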
Textures are not just for shading parameters any more! Alternative definition: a general technique for storing and evaluating functions. [Marschner, Perlin]

Solid procedural textures:
• more generally, instead of using the texture coordinates as an index, use them to compute a function that defines the texture

Procedural Textures

Instead of an image, use a function. A solid procedural texture example:

    // vertex shader
    varying vec3 pos;
    ...
    pos = gl_Position.xyz;
    ...

    // fragment shader
    varying vec3 pos;
    ...
    color = sin(pos.x)*sin(pos.y);
    ...

Advantages over image textures:
• infinite resolution and size
• more compact than texture maps
• f(x, y, z) may be a subroutine in the fragment shader
• no need to parameterize the surface
• no worries about distortion and deformation
• objects appear sculpted out of a solid substance
• textures can be animated

Disadvantages: [Peachey]
• difficult to match an existing texture
• not always predictable
• more difficult to code and debug
• perhaps slower
• aliasing can be a problem

Simple Procedural Textures [Wolfe]

• Stripes: color each point one or the other color depending on whether floor(z) (or floor(x) or floor(y)) is even or odd
• Rings: color each point one or the other color depending on whether floor(distance from the object center along two coordinates) is even or odd
• Ramp functions:
  • ramp(x, y, z, a) = mod(x, a)/a, e.g., mod(3.75, 2.0)/2.0 = 1.75/2.0 = 0.875
  • ramp(x, y, z) = (sin(x) + 1)/2
• Combination: procedural color-table lookup — f(x, y, z) computes an index into a color table, e.g., using the ramp function

Wood Texture

Classify texture space into cylindrical shells:

    f(s, t, r) = s^2 + t^2

with a wood-colored color table woodmap:
• woodmap(0) = brown "earlywood"
• woodmap(1) = tan "latewood"
• wood(p) = woodmap(f(p) mod 1), where p = (s, t, r)

Adding Noise

Add noise to the cylinders to warp the wood:

    wood(x^2 + y^2 + A·noise(f_x·x + φ_x, f_y·y + φ_y, f_z·z + φ_z))

The outer rings end up closer together, which simulates the growth rate of real trees. [connectedpixel.com/blog/texture/wood; Hart08]

Controls:
• frequency (f): coarse vs. fine detail — the number and thickness of the noise peaks
• phase (φ): the location of the noise peaks
• amplitude (A): the amount of distortion due to the noise effect
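The stripe, ramp, and wood-shell constructions above can be sketched as plain functions (the color values and helper names here are illustrative choices, not from the lecture):

```python
import math

def stripe(x):
    # stripes: pick one of two colors based on the parity of floor(x)
    return 0 if math.floor(x) % 2 == 0 else 1

def ramp(x, a):
    # periodic ramp in [0, 1): mod(x, a) / a
    return math.fmod(x, a) / a

def woodmap(u):
    # hypothetical color table: 0 -> brown "earlywood", 1 -> tan "latewood"
    brown, latewood = (0.35, 0.20, 0.05), (0.82, 0.70, 0.45)
    return tuple(b + u * (t - b) for b, t in zip(brown, latewood))

def wood(s, t):
    # classify texture space into cylindrical shells: f(s, t, r) = s^2 + t^2
    return woodmap(math.fmod(s * s + t * t, 1.0))
```

The slide's worked example holds here: ramp(3.75, 2.0) = 1.75/2.0 = 0.875, and wood(0, 0) lands on shell 0, the brown earlywood color.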
Perlin Noise

noise(p): a pseudo-random number generator with the following characteristics:
• memoryless
• repeatable
• isotropic
• band limited (coherent): the difference in values is a function of distance
• no obvious periodicity
• translation and rotation invariant (but not scale invariant)
• known range [−1, 1]:
  • scale to [0, 1] using 0.5·(noise() + 1)
  • abs(noise()) creates dark veins at the zero crossings [Hodgins]

Turbulence

A fractal: sum multiple calls to noise,

    turbulence(p) = Σ_{i=1}^{octaves} noise(2^i f · p) / (2^i f)

• each additional term adds finer detail, with diminishing returns
• typically 1-8 octaves of turbulence are used [Hodgins07]

Marble Texture

Use a sine function to create the stripes:

    marble = sin(f·x + A·turbulence(x, y, z))

• the frequency (f) of the sine function controls the number and thickness of the veins
• the amplitude (A) of the turbulence controls the distortion of the veins [legakis.net/justin/MarbleApplet/]
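A minimal sketch of turbulence and marble, assuming a cheap deterministic stand-in for real Perlin noise (the sine hash below is NOT band-limited and is for illustration only; a real implementation would use gradient noise):

```python
import math

def noise(x, y, z):
    # deterministic stand-in with range [-1, 1]; illustrative only
    return math.sin(12.9898 * x + 78.233 * y + 37.719 * z)

def turbulence(x, y, z, octaves=4, f=1.0):
    # fractal sum: each octave doubles the frequency and halves the amplitude
    return sum(noise(2**i * f * x, 2**i * f * y, 2**i * f * z) / (2**i * f)
               for i in range(1, octaves + 1))

def marble(x, y, z, f=2.0, A=1.5):
    # sine stripes warped by turbulence: sin(f*x + A*turbulence(p))
    return math.sin(f * x + A * turbulence(x, y, z))
```

Note the diminishing returns directly: with f = 1 and four octaves, |turbulence| is bounded by 1/2 + 1/4 + 1/8 + 1/16 = 0.9375, so later octaves contribute ever finer, ever smaller wiggles.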