Week 9 - Wednesday  What Did We Talk About Last Time?  Textures

What did we talk about last time? Textures:
. Volume textures
. Cube maps
. Texture caching and compression
. Procedural texturing
. Texture animation
. Material mapping
. Alpha mapping

Bump mapping refers to a wide range of techniques designed to increase small-scale detail. Most bump mapping is implemented per-pixel in the pixel shader. The 3D effect of bump mapping is stronger than textures alone, but weaker than full geometry.

Macro-geometry is made up of vertices and triangles
. Limbs and the head of a body
Micro-geometry consists of characteristics shaded in the pixel shader, often with texture maps
. Smoothness (specular color and the m parameter) based on the microscopic smoothness of a material
Meso-geometry is the stuff in between, too complex for macro-geometry but large enough to change over several pixels
. Wrinkles
. Folds
. Seams
Bump mapping techniques are primarily concerned with mesoscale effects.

James Blinn proposed the offset vector bump map, or offset map
. Stores b_u and b_v values at each texel, giving the amount that the normal should be changed at that point
Another method is a heightfield, a grayscale image that gives the varying heights of a surface
. Normal changes can be computed from the heightfield
The results are the same, but these kinds of deformations are usually stored in normal maps
. Normal maps give the full 3-component normal change

Normal maps can be in world space (uncommon)
. Only usable if the object never moves
Or object space
. Requires the object only to undergo rigid-body transforms
Or tangent space
. Relative to the surface; can assume positive z
Lighting and the surface have to be in the same space to do shading. Filtering normal maps is tricky.

Bump mapping doesn't change what can be seen, just the normal, yet high enough bumps should block each other. Parallax mapping approximates the part of the image you should see by moving from the height back along the view vector and taking the value at that point. The final point used is:

  p_adj = p + (h · v_xy) / v_z

At shallow viewing angles, this approximation can look bad
. A small change results in a big texture change
To improve the situation, the offset is limited (by not dividing by the z component of the view vector). This flattens the bumpiness at shallow angles, but it doesn't look crazy. The new equation is:

  p'_adj = p + h · v_xy

The weakness of parallax mapping is that it can't tell where the view ray first intersects the heightfield. More accurate techniques instead take samples along the view vector into the heightfield. Three different research groups proposed the idea at the same time, all with slightly different techniques for doing the sampling, and there is still active research here.
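To make the two parallax formulas concrete, here is a minimal HLSL sketch of the texture-coordinate adjustment. It is an illustration under assumptions, not code from the slides: HeightSampler, HeightScale, and the tangent-space view vector viewTS are names introduced here.

// Parallax texture-coordinate adjustment -- a sketch, not the course's code.
// Assumes viewTS is the normalized tangent-space vector from the surface
// point toward the eye, and HeightSampler holds a grayscale heightfield.
sampler HeightSampler;
float HeightScale = 0.05;

float2 ParallaxTexCoord(float2 texCoord, float3 viewTS)
{
    // Height at the original lookup point, scaled into texture-space units
    float h = tex2D(HeightSampler, texCoord).r * HeightScale;

    // Plain parallax mapping: p_adj = p + (h · v_xy) / v_z
    // float2 offset = h * viewTS.xy / viewTS.z;

    // With offset limiting (drop the divide by v_z): p'_adj = p + h · v_xy
    float2 offset = h * viewTS.xy;

    return texCoord + offset;
}

The adjusted coordinates would then feed the usual color and normal-map lookups in the pixel shader.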
Polygon boundaries are still flat in most models. Yet another possibility is to change vertex positions based on texture values
. Called displacement mapping
With the geometry shader, new vertices can be created on the fly. Occlusion, self-shadowing, and realistic outlines are possible and fast. Unfortunately, collision detection becomes more difficult.

Radiometry is the measurement of electromagnetic radiation (for us, specifically light). Light is the flow of photons
. We'll generally think of photons as particles rather than waves
Photon characteristics:
. Frequency ν = c/λ (hertz)
. Wavelength λ = c/ν (meters)
. Energy Q = hν (joules) [h is Planck's constant]

Radiometry just deals with physics. Photometry takes everything from radiometry and weights it by the sensitivity of the human eye; it is simply trying to account for the eye's differing sensitivity to different wavelengths. Colorimetry is the science of quantifying human color perception. The CIE defined a system of three non-monochromatic colors X, Y, and Z for describing the human-perceivable color space. RGB is a transform from these values into monochromatic red, green, and blue colors
. RGB can only express colors inside the triangle
As you know, there are others (HSV, HSL, etc.)

Real light behaves consistently (but in a complex way). For rendering purposes, we often divide light into categories that are easy to model:
. Directional lights (like the sun)
. Omni lights (located at a point, but illuminate evenly in all directions)
. Spotlights (located at a point, with intensity that varies with direction)
. Textured lights (give light projections variety in shape or color)
▪ Similar to gobos, if you know anything about stage lighting

With a programmable pipeline, you can express lighting models of limitless complexity. The old DirectX fixed-function pipeline provided a few stock lighting models:
. Ambient lights
. Omni lights
. Spotlights
. Directional lights
. All lights have diffuse, specular, and ambient color
Let's see how to implement these lighting models with shaders.

Ambient lights are very simple to implement in shaders, and we've already seen the code. The vertex shader must simply transform the vertex into clip space (world × view × projection). The pixel shader colors each fragment a constant color
. We could modulate this by a texture if we were using one

float4x4 World;
float4x4 View;
float4x4 Projection;

float4 AmbientColor = float4(1, 0, 0, 1);
float AmbientIntensity = 0.5;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return AmbientColor * AmbientIntensity;
}

technique Ambient
{
    pass Pass1
    {
        VertexShader = compile VS_SHADERMODEL VertexShaderFunction();
        PixelShader = compile PS_SHADERMODEL PixelShaderFunction();
    }
}

Directional lights model light from a very long distance with parallel rays, like the sun. A directional light has only color (specular and diffuse) and a direction, and such lights are virtually free from a computational perspective. Directional lights are also the standard model for BasicEffect
. You don't have to use a shader to do them
Let's look at a diffuse shader first.
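Written out as an equation (notation introduced here, not taken from the slides), the per-vertex diffuse (Lambertian) color the next shader computes is:

\[
\mathbf{c}_{\text{vertex}} = \operatorname{saturate}\big(\mathbf{c}_d \, I_d \, (\mathbf{n} \cdot \hat{\mathbf{l}})\big)
\]

where c_d and I_d are the diffuse color and intensity, n is the world-space normal, and l̂ is the normalized light direction. The ambient contribution is added later, in the pixel shader.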
We add values for the diffuse light intensity and direction, we add a WorldInverseTranspose to transform the normals, and we also add normals to our input and color to our output.

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float4 DiffuseLightDirection = float4(1, 2, 0, 0);
float4 DiffuseColor = float4(1, .5, 0, 1);
float DiffuseIntensity = 1.0;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
};

The color depends on the surface normal dotted with the light vector:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float4 normal = mul(input.Normal, WorldInverseTranspose);
    float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    return output;
}

There are no real differences in the pixel shader: the diffuse and ambient colors are added together, and the technique is exactly the same.

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return saturate(input.Color + AmbientColor * AmbientIntensity);
}

Adding a specular component to the diffuse shader requires incorporating the view vector. It will be included in the shader file and set as a parameter in the C# code. The camera location is added to the declarations, as are specular colors and a shininess parameter.

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 Camera;

static const float PI = 3.14159265f;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float3 DiffuseLightDirection;
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;

float Shininess = 20;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 0.5;

The output adds a normal so that the half vector can be computed in the pixel shader, and a world position lets us compute the view vector to the camera.

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
    float3 Normal : NORMAL0;
    float4 WorldPosition : POSITIONT;
};

The vertex shader does the same computations as the diffuse shader, but we also store the normal and the transformed world position in the output.

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    output.WorldPosition = worldPosition;
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float3 normal = normalize(mul(input.Normal, (float3x3)WorldInverseTranspose));
    float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    output.Normal = normal;
    return output;
}

Here we finally have a real computation in the pixel shader, because we need to use the pixel normal (which is interpolated from the vertices) in combination with the view vector. The technique is the same.

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 light = normalize(DiffuseLightDirection);
    float3 normal = normalize(input.Normal);
    // The extracted text ends here; a standard Blinn-Phong completion follows.
    float3 view = normalize(Camera - input.WorldPosition.xyz);
    float3 halfVector = normalize(light + view);
    float specular = pow(saturate(dot(normal, halfVector)), Shininess);
    return saturate(input.Color + AmbientColor * AmbientIntensity
        + SpecularColor * SpecularIntensity * specular);
}
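Putting the pieces together, the model this last shader evaluates is the classic Blinn-Phong sum. The slides present it only as code; in summary notation (mine, and relying on the reconstructed pixel shader above):

\[
\mathbf{c} = \operatorname{saturate}\big(
\mathbf{c}_d I_d \,(\mathbf{n}\cdot\hat{\mathbf{l}})
+ \mathbf{c}_a I_a
+ \mathbf{c}_s I_s \,\operatorname{saturate}(\mathbf{n}\cdot\mathbf{h})^{m}
\big),
\qquad
\mathbf{h} = \frac{\hat{\mathbf{l}} + \hat{\mathbf{v}}}{\lVert \hat{\mathbf{l}} + \hat{\mathbf{v}} \rVert}
\]

where v̂ is the direction from the surface point to the camera and m is the Shininess exponent.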