
EECS 487: Interactive Computer Graphics
Lecture 26:
• Bump mapping
• Solid and procedural texture

2D texture maps look unrealistically smooth across different materials, especially at low viewing angles. Fool the human viewer:
• perception of shape is determined by shading, which is determined by the surface normal
• use a texture map to perturb the surface normal per fragment
• does not actually alter the geometry of the surface
• shade each fragment using the perturbed normal as if the surface were a different shape

[Figures: sphere w/ diffuse texture; swirly bump map; sphere w/ diffuse texture & bump map]

Bump Mapping [Blinn]
Treat the texture as a single-valued height function (height map):
• grayscale image stores height: black, high area; white, low (or vice versa)
• difference in heights determines how much to perturb n in the (u, v) directions:
  ∂b/∂u = b_u = (h[s+1, t] − h[s−1, t])/ds
  ∂b/∂v = b_v = (h[s, t+1] − h[s, t−1])/dt
• compute a new, perturbed normal from (b_u, b_v)
[Figures: smooth surface p(u, v) vs. wrinkled surface p′(u, v), with wrinkle function b]

Computing Perturbed Normal
For a smooth parametric surface p(u, v), the tangents and normal are
  p_u = ∂p(u, v)/∂u,  p_v = ∂p(u, v)/∂v,  n = p_u × p_v
The perturbed surface is
  p′ = p(u, v) + b(u, v) n
with tangents
  p′_u = ∂(p + b n)/∂u = p_u + b_u n + b(u, v) n_u ≈ p_u + b_u n
  p′_v = ∂(p + b n)/∂v = p_v + b_v n + b(u, v) n_v ≈ p_v + b_v n
(the b(u, v) n_u and b(u, v) n_v terms are dropped, assuming the bump height b is small)

Perturbed Normal
  n′ = p′_u × p′_v ⇐ normal of perturbed surface
Recall: a × (kb + c) = k(a × b) + (a × c), and a × b = −b × a. Then
  n′ = (p_u + b_u n) × (p_v + b_v n)
     = p_u × p_v + b_v (p_u × n) + b_u (n × p_v) + b_u b_v (n × n)
     = n + b_u (n × p_v) − b_v (n × p_u)
since n × n = 0.

Bump Map vs. Normal Map
Computing n′ requires the height samples from 4 neighbors:
• each sample by itself doesn't perturb the normal
• an all-white height map renders exactly the same as an all-black height map
Instead of encoding only the height of a point, a normal map encodes the normal of the desired surface, in the tangent space of the surface at the point. It can be obtained from:
• a high-resolution 3D model
• photos (http://zarria.net/nrmphoto/nrmphoto.html)
• a height map (with more complex offline computation of perturbed normals)
• a filtered color texture (Photoshop, Gimp, etc. with plugin) [Hastings-Trew]
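The perturbed-normal formula n′ = n + b_u (n × p_v) − b_v (n × p_u) can be sketched on the CPU. This is a minimal illustration, not shader code; the tiny height map `H` and the unit step sizes are hypothetical.

```python
# Hypothetical height map: H[t][s] is a grayscale height in [0, 1].
H = [[0.0, 0.2, 0.4],
     [0.0, 0.5, 1.0],
     [0.0, 0.2, 0.4]]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def perturbed_normal(s, t, p_u, p_v, ds=1.0, dt=1.0):
    """Blinn bump mapping: n' = n + b_u (n x p_v) - b_v (n x p_u),
    with b_u, b_v from central differences of the height map."""
    n = cross(p_u, p_v)                      # unperturbed normal
    b_u = (H[t][s+1] - H[t][s-1]) / ds       # height slope along u
    b_v = (H[t+1][s] - H[t-1][s]) / dt       # height slope along v
    c1 = cross(n, p_v)
    c2 = cross(n, p_u)
    return tuple(n[i] + b_u*c1[i] - b_v*c2[i] for i in range(3))
```

For a flat patch with p_u = (1,0,0), p_v = (0,1,0), the unperturbed normal is (0,0,1); a height gradient along u tilts the normal toward −x, as the formula predicts.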

Normal Map
Interpret the RGB values per texel as the perturbed normal, not a height value:
  (n_x, n_y, n_z) = (r, g, b)
[Figures: height map vs. normal map] [Hastings-Trew]

Normal Map Creation in Gimp
On Mac OS X, run Gimp-2.6.11 (not 2.8.4):
• load the RGB file, then select Filters → Map → Normalmap
For Windows, see http://code.google.com/p/gimp-normalmap/

Complications [Hanrahan09]
1. Normalized normals range over [−1, 1], but RGB values range over [0, 1]; convert normals for storage by n′ = (n + 1)/2. Values in the normal map must be converted back before use: n = n′*2 − 1.
2. Normals are in object space, so normals must be transformed whenever the object is transformed. Instead, most implementations store normals in tangent space, but then the light and view vectors must be transformed to tangent space.

Tangent Space
A coordinate system attached to the local surface, with basis vectors comprising the normal vector (N), perpendicular to the surface, and two vectors tangent to the surface: the tangent (T) and bitangent (B). We want T and B to span our texture.
[Figures: triangle p0, p1, p2 with texture coordinates [s0, t0], [s1, t1], [s2, t2], shown in texture/tangent space and in object space] [Premecz]
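The [−1, 1] ↔ [0, 1] conversion from complication 1 is two one-liners; a small sketch makes the round trip explicit:

```python
def encode_normal(n):
    """Store a unit normal (components in [-1, 1]) as RGB in [0, 1]:
    n' = (n + 1) / 2."""
    return tuple((c + 1.0) / 2.0 for c in n)

def decode_normal(rgb):
    """Recover the normal before lighting: n = n' * 2 - 1."""
    return tuple(c * 2.0 - 1.0 for c in rgb)
```

The straight-up normal (0, 0, 1) encodes to (0.5, 0.5, 1.0), which is why tangent-space normal maps look predominantly blue.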

Tangent Space
A point p(s, t) = p0 + (s − s0)T + (t − t0)B, so
  p1 − p0 = (s1 − s0)T + (t1 − t0)B
  p2 − p0 = (s2 − s0)T + (t2 − t0)B
Let Δsi = si − s0 and Δti = ti − t0. Then
  ⎡ p1 − p0 ⎤   ⎡ Δs1  Δt1 ⎤ ⎡ T ⎤
  ⎣ p2 − p0 ⎦ = ⎣ Δs2  Δt2 ⎦ ⎣ B ⎦
and inverting,
  ⎡ T ⎤           1          ⎡  Δt2  −Δt1 ⎤ ⎡ p1 − p0 ⎤
  ⎣ B ⎦ = ───────────────── ⎣ −Δs2   Δs1 ⎦ ⎣ p2 − p0 ⎦
           Δs1Δt2 − Δs2Δt1
In texture space, T, B, N are orthonormal, but not necessarily so in object space; use Gram-Schmidt orthogonalization: B′ = N × T; T′ = B′ × N.
See also: http://www.terathon.com/code/tangent.html [Premecz, Lengyel]

Tangent to Object Space
Given T′, B′, N orthonormal 3D vectors, the [T′ B′ N] matrix transforms from tangent to object space (the TBN matrix; the [T′B′N] matrix in Direct3D). [We assume TBN orthonormal in the figure and in subsequent slides, and we drop the "prime" sign.]
The bitangent is sometimes called the binormal, or second normal, a term applicable to curves but not to surfaces (see lecture on Frenet frame).

Tangent-Space Lighting
What we have:
• per-texel n in tangent space, stored in the normal map
• T, N in object space
• l, v in eye space
To compute lighting with a normal map in tangent space:
1. transform l and v to tangent space (how?)
2. sample the per-texel n from the normal map
3. compute lighting in tangent space

Lighting Computation
In app:
• load the normal map into its own texture unit
• compute T per triangle and assign it to all three vertices
• average out T of shared vertices for a curved surface
• pass per-triangle N and T to the vertex shader
In vertex shader:
• transform N and T from object to eye space (how?)
• compute B and the orthonormal eye-to-tangent matrix (how?)
• transform l and v to tangent space, normalize, and pass them, interpolated, to the fragment shader
In fragment shader:
• sample the normal map per texel (n)
• compute lighting in tangent space using normalized n, l, and v

[Lengyel, Dreijer Madsen]
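The per-triangle T computation above (solve the 2×2 Δs/Δt system, then Gram-Schmidt with B′ = N × T, T′ = B′ × N) can be sketched as plain Python. This is an illustrative CPU-side helper, assuming a non-degenerate UV mapping (nonzero determinant).

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def tangent_basis(p0, p1, p2, uv0, uv1, uv2, n):
    """Solve [e1; e2] = [[ds1, dt1], [ds2, dt2]] [T; B] for T,
    then orthonormalize against the normal n (Gram-Schmidt)."""
    ds1, dt1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    ds2, dt2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    e1, e2 = sub(p1, p0), sub(p2, p0)
    r = 1.0 / (ds1 * dt2 - ds2 * dt1)        # assumes non-degenerate UVs
    t = tuple(r * (dt2 * e1[i] - dt1 * e2[i]) for i in range(3))
    b = normalize(cross(n, t))               # B' = N x T
    t = normalize(cross(b, n))               # T' = B' x N
    return t, b
```

For a unit right triangle with identity UVs, T comes out as (1, 0, 0) and B as (0, 1, 0); stacking [T B N] column-wise then gives the tangent-to-object matrix from the slide.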

Bump/Normal Mapping Limitations
• smooth silhouette
• smooth when viewed at a low viewing angle
• no self-shadowing/self-occlusion

Parallax and Relief Mapping*
• use the height field and view vector to compute which "bump" is visible
• how to avoid computing the view vector and height field intersection?
• ideal parallax map
• offset approximation: bad for grazing angles
• offset limiting
• relief sampling: linear search to the first point inside the surface, then binary search between this point and the last point outside the surface
* a.k.a. Parallax Occlusion Mapping or Steep Parallax Mapping [O'Brien08, RTR3]

Displacement Mapping
Interpret the texel as an offset vector to actually displace fragments:
  p′ = p + h(p)n
• correct silhouettes and shadows
• must be done before visibility determination
• complicates collision detection, e.g., if done in the vertex shader

Vertex Texture Fetch
Traditionally, during vertex shading, the only texture-related computation is computing texture coordinates per vertex. With Shader Model 3.0, the vertex shader can use a texture map to process vertices, e.g., for displacement mapping, fluid simulation, particle systems.
[Figure: texCoords per vertex, per fragment]
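The relief-sampling strategy (linear search to the first point inside the surface, then binary search between it and the last point outside) can be sketched in 2D. This is a CPU illustration under simplifying assumptions: `height(x)` is an arbitrary height-field function, and the ray lives in a plane (x along the surface, y above it).

```python
def relief_intersect(height, ray_start, ray_dir, steps=32, refine=8):
    """March the view ray in uniform steps until the first sample falls
    below the height field, then binary-search the crossing interval.
    Returns the ray parameter t of the approximate intersection."""
    t_prev, t = 0.0, 0.0
    dt = 1.0 / steps
    for i in range(1, steps + 1):            # linear search
        t = i * dt
        x = ray_start[0] + t * ray_dir[0]
        y = ray_start[1] + t * ray_dir[1]
        if y <= height(x):                   # first point inside surface
            break
        t_prev = t                           # last point outside surface
    lo, hi = t_prev, t
    for _ in range(refine):                  # binary search
        mid = 0.5 * (lo + hi)
        x = ray_start[0] + mid * ray_dir[0]
        y = ray_start[1] + mid * ray_dir[1]
        if y <= height(x):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A GPU version does the same marching in the fragment shader with texture fetches in place of `height(x)`; the linear phase bounds the binary search so that thin features are not skipped.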

[TP3, Marschner08]

Solid Textures [Wolfe]
• create a 3D parameterization (s, t, r) for the texture
• map this onto the object
• the easiest parameterization is to use the model-space coordinates to index into a 3D texture: (s, t, r) = (x, y, z)
• like "carving" the object from the material
[Figures: 2D mapping vs. 3D "carving"]
Alternative definition: a general technique for storing and evaluating functions. Textures are not just for shading parameters any more!
Solid procedural textures:
• more generally, instead of using the texture coordinates as an index, use them to compute a function that defines the texture

Solid Procedural Texture Example [Marschner, Perlin]
Instead of an image, use a function:

  // vertex shader
  varying vec3 pos;
  ...
  pos = gl_Position.xyz;
  ...

  // fragment shader
  varying vec3 pos;
  ...
  color = sin(pos.x)*sin(pos.y);
  ...

Procedural Textures
Advantages over image texture:
• infinite resolution and size
• more compact than texture maps
• f(x, y, z) may be a subroutine in the fragment shader
• no need to parameterize the surface
• no worries about distortion and deformation
• objects appear sculpted out of a solid substance
• can animate textures
Disadvantages [Peachey]:
• difficult to match an existing texture
• not always predictable
• more difficult to code and debug
• perhaps slower
• aliasing can be a problem
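The shader pair above can be mimicked on the CPU to see the idea without a GPU: the "texture" is nothing but a function of position, evaluated wherever a sample is needed.

```python
import math

def procedural_color(x, y):
    """CPU analog of the slide's fragment shader:
    intensity computed from position, color = sin(x) * sin(y)."""
    return math.sin(x) * math.sin(y)

# Sample a small "image" purely from the function: no stored texels,
# so any resolution and any extent come for free.
image = [[procedural_color(i * 0.5, j * 0.5) for i in range(4)]
         for j in range(4)]
```

Because the function is evaluated per sample, zooming in never reveals texel boundaries, which is the "infinite resolution" advantage listed above.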

Simple Procedural Textures
Stripes: color each point one or the other color depending on whether floor(z) (or floor(x) or floor(y)) is even or odd. [Wolfe]
Rings: color each point one or the other color depending on whether floor(distance from the object center along two coordinates) is even or odd. [Wolfe]
Ramp functions:
• ramp(x, y, z, a) = ((float) mod(x, a))/a, e.g., mod(3.75, 2.0)/2.0 = 1.75/2.0 = 0.875
• ramp(x, y, z) = (sin(x) + 1)/2
Combination: procedural color table lookup: f(x, y, z) computes an index into a color table, e.g., using the ramp function to compute the index.

Wood Texture
Classify texture space into cylindrical shells:
  f(s, t, r) = s² + t²
Wood-colored color table:
• woodmap(0) = brown "earlywood"
• woodmap(1) = tan "latewood"
  wood(p) = woodmap(f(p) mod 1),  p = (s, t, r)
Outer rings are closer together, which simulates the growth rate of real trees.

Adding Noise
Add noise to the cylinders to warp the wood:
  wood(x² + y² + A*noise(f_x*x + φ_x, f_y*y + φ_y, f_z*z + φ_z))
Controls:
• frequency (f): coarse vs. fine detail, number and thickness of noise peaks
• phase (φ): location of noise peaks
• amplitude (A): controls distortion due to the noise effect
[connectedpixel.com/blog/texture/wood] [Hart08]
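The wood recipe above can be sketched directly. The two colors in `woodmap` are hypothetical RGB values standing in for the slide's brown "earlywood" and tan "latewood", and the optional `noise` argument is a placeholder for a real noise function.

```python
def woodmap(u):
    """Hypothetical color table: lerp from brown 'earlywood' (u = 0)
    to tan 'latewood' (u = 1)."""
    brown, tan = (0.4, 0.25, 0.1), (0.8, 0.6, 0.4)
    return tuple(b + u * (t - b) for b, t in zip(brown, tan))

def wood(s, t, r, A=0.0, noise=lambda x, y, z: 0.0):
    """wood(p) = woodmap(f(p) mod 1), where f = s^2 + t^2 classifies
    texture space into cylindrical shells; the optional noise term
    warps the rings as on the 'Adding Noise' slide."""
    f = s * s + t * t + A * noise(s, t, r)
    return woodmap(f % 1.0)
```

With A = 0 the rings are perfect circles; a nonzero amplitude with a real noise function perturbs the shell radius and gives the grain its irregularity.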

Perlin Noise
noise(p): a pseudo-random number generator with the following characteristics:
• memoryless
• repeatable
• isotropic
• band limited (coherent): the difference in values is a function of distance
• no obvious periodicity
• translation and rotation invariant (but not scale invariant)
• known range [−1, 1]
  • scale to [0, 1] using 0.5*(noise() + 1)
  • abs(noise()) creates dark veins at zero crossings
[Figures: white noise vs. Perlin noise] [Hodgins]

Turbulence
Fractal: sum multiple calls to noise:
  turbulence(p) = Σ_{i=1}^{octaves} (1/2^i) noise(2^i f ⋅ p)
• each additional term adds finer detail, with diminishing return
[Figure: 1–8 octaves of turbulence] [Hodgins07]

Marble Texture
Use a sine function to create the stripes:
  marble = sin(f*x + A*turbulence(x, y, z))
• the frequency (f) of the sine function controls the number and thickness of the veins
• the amplitude (A) of the turbulence controls the distortion of the veins
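The turbulence sum and the marble stripe function can be sketched together. The `fake_noise` hash below is only a stand-in for real Perlin noise (it is repeatable and bounded in [−1, 1], but not band limited or isotropic); the turbulence and marble functions follow the formulas above.

```python
import math

def fake_noise(x, y, z):
    """Stand-in for Perlin noise, range [-1, 1]. A real implementation
    interpolates pseudo-random gradients on an integer lattice."""
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719)

def turbulence(x, y, z, f=1.0, octaves=4):
    """turbulence(p) = sum_{i=1}^{octaves} (1/2^i) * noise(2^i * f * p):
    each octave doubles the frequency and halves the amplitude."""
    total = 0.0
    for i in range(1, octaves + 1):
        k = 2.0 ** i
        total += fake_noise(k * f * x, k * f * y, k * f * z) / k
    return total

def marble(x, y, z, f=5.0, A=1.0):
    """marble = sin(f*x + A*turbulence(x, y, z)): f sets the number and
    thickness of the veins, A the distortion due to turbulence."""
    return math.sin(f * x + A * turbulence(x, y, z))
```

Since the octave amplitudes sum to less than 1, the turbulence stays in (−1, 1), and the marble value stays in [−1, 1], ready to index a vein color table.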

legakis.net/justin/MarbleApplet/