Computer Graphics Shading


The Physics

Local vs. Global Illumination Models

A local model captures only the direct, local interaction of each object with the light. A global model also captures the interactions and exchange of light energy between different objects. Example: ambient, diffuse, and specular components are combined into the final image.

Light Sources

Point source (A): All light originates at a point.
- Rays hit a planar surface at different incidence angles.

Parallel source (B): All light rays are parallel.
- Rays hit a planar surface at identical incidence angles.
- May be modeled as a point source at infinity; also called a directional source.

Area source (C): Light originates at a finite area in space.
- In between the point and parallel sources; also called a distributed source.

Ambient Light

Assume non-directional light in the environment, so the object is illuminated with the same light everywhere and looks like a silhouette. The illumination equation is

    I = Ia * ka

where Ia is the ambient light intensity and ka is the fraction of ambient light reflected from the surface; ka also defines the object color.

Diffuse Light

Dull surfaces such as solid matte plastic reflect incoming light uniformly in all directions; this is called diffuse or Lambertian reflection. Understand intensity as the number of photons per unit area: if a flow of m photons per second passes through a unit window orthogonal to the flow, how many photons hit a unit area of the surface? Let θ be the angle between the direction of the incoming light and the normal to the surface, and let L and N be the corresponding unit vectors. A unit window projects onto a surface segment of length 1/cos θ, so the amount of incident light (and thus reflected light) per unit surface area is proportional to cos θ = N·L. The illumination equation becomes

    I = Ia * ka + Ip * kd * (N·L)

where Ip is the intensity of the light source and kd is the diffuse reflection coefficient. This model explains the smooth falloff of intensity on a diffusely lit ball, from the point facing the light toward the silhouette. The moon paradox: the full moon looks like a uniformly bright flat disk rather than a shaded Lambertian sphere, so real surfaces are not always ideally diffuse.
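The ambient and diffuse terms above can be sketched in a few lines of code. This is an illustrative sketch, not code from the slides; the helper functions and parameter values are our own:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    # Dot product of two 3-vectors.
    return sum(x * y for x, y in zip(a, b))

def ambient_diffuse(Ia, ka, Ip, kd, N, L):
    # I = Ia*ka + Ip*kd*max(0, N.L). The max() clamps faces turned
    # away from the light (theta > 90 degrees) to the ambient term alone.
    N, L = normalize(N), normalize(L)
    return Ia * ka + Ip * kd * max(0.0, dot(N, L))

# Light arriving along the normal (theta = 0) gives the full diffuse term:
print(ambient_diffuse(0.2, 1.0, 1.0, 0.8, (0, 0, 1), (0, 0, 1)))  # 1.0
```

At grazing incidence (θ = 90°) the diffuse term vanishes and only the ambient contribution Ia*ka remains, which is why purely diffuse objects go dark at their silhouettes.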
Specular Reflection

Shiny objects (e.g. metallic) reflect light in a preferred direction R determined by the surface normal N: L, N, and R lie in one plane, with R making the same angle θ with N as the incoming direction L. A perfect mirror reflects light toward the viewer direction V only if V = R, i.e., only when the angle α between R and V is zero. Most objects are not ideal mirrors: they also reflect in the immediate vicinity of R. The Phong model approximates this attenuation by a factor of cos^n α; it has no real physical basis, but it looks good.

Specular Reflection (Phong Model)

The specular term of the illumination equation is

    I = Ip * ks * cos^n α = Ip * ks * (R·V)^n

where ks is the specular reflection coefficient and n is the specularity exponent. The exponent n controls the decay of the attenuation function: the larger n is, the faster cos^n α falls off and the tighter the highlight. A retroreflector, by contrast, sends light back where it came from regardless of the angle of incidence.

More on the Illumination Equation

For multiple light sources, the contributions Ip of all light sources are added together in the shading model. Precautions should be taken against overflow: the summed intensity must be clamped to the displayable range.

For distance/atmospheric attenuation, each source is further scaled by an attenuation factor that depends on dp, the distance between the surface and the light source, and/or the distance between the surface and the viewer (a heuristic atmospheric attenuation).

Flat Shading

Flat shading is applied to piecewise-linear polygonal models: simple surface lighting is approximated over whole polygons. The illumination value depends only on the polygon normal, so each polygon is colored with a uniform intensity. The result looks non-smooth, which is worsened by the Mach band effect.

Gouraud Shading

Store a normal per vertex. Compute the illumination intensity at the vertices using those normals, then interpolate the intensity over the polygon interior.

Phong Shading

Interpolate the normal vectors given at the vertices (in image space) instead of interpolating the illumination intensities.
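The Phong specular term can be sketched as well. This is our illustration, not the slides' code; the reflected direction is computed from the (unit) vectors as R = 2(N·L)N − L, a standard identity:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    # Dot product of two 3-vectors.
    return sum(x * y for x, y in zip(a, b))

def reflect(L, N):
    # Mirror the unit light direction L about the unit normal N:
    # R = 2 * (N.L) * N - L.
    d = dot(N, L)
    return tuple(2.0 * d * Nc - Lc for Nc, Lc in zip(N, L))

def phong_specular(Ip, ks, n, N, L, V):
    # Ip * ks * cos(alpha)^n with cos(alpha) = R.V, clamped at zero so
    # viewers far outside the highlight lobe receive no specular light.
    N, L, V = normalize(N), normalize(L), normalize(V)
    R = reflect(L, N)
    return Ip * ks * max(0.0, dot(R, V)) ** n

# Viewing exactly along R (alpha = 0) yields the peak value Ip*ks,
# independent of the exponent n; off-axis views decay as cos^n(alpha).
peak = phong_specular(1.0, 0.5, 50, (0, 0, 1), (1, 0, 1), (-1, 0, 1))
```

Raising n from, say, 5 to 50 narrows the highlight; in the limit n → ∞ only V = R receives specular light, which recovers the perfect mirror.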
Apply the illumination equation at each interior pixel with its own (interpolated) normal.

Comments on Shading

Phong shading is more expensive than Gouraud shading (why?) but well worth the effort: it can achieve good-looking specular highlight effects. Both the Gouraud and Phong shading schemes are performed in the image plane and fit well into a polygonal scan-conversion fill scheme. Both are view dependent, and both can cause artifacts during animation because they are transformation dependent.

Copyright Gotsman, Elber, Barequet, Karni, Sheffer. Computer Science, Technion.
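The contrast between the two interpolation schemes can be sketched per pixel with barycentric weights. The helper names and the toy diffuse shader below are our own assumptions, not the slides' code:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    # Dot product of two 3-vectors.
    return sum(x * y for x, y in zip(a, b))

def bary(vals, w):
    # Barycentric combination w0*v0 + w1*v1 + w2*v2 of three 3-vectors.
    return tuple(sum(wi * v[i] for wi, v in zip(w, vals)) for i in range(3))

def gouraud_pixel(vertex_intensities, w):
    # Gouraud: shade the three vertices first, then interpolate intensity.
    return sum(wi * Ii for wi, Ii in zip(w, vertex_intensities))

def phong_pixel(vertex_normals, w, shade):
    # Phong: interpolate (and renormalize) the vertex normals, then
    # evaluate the illumination equation at this pixel.
    return shade(normalize(bary(vertex_normals, w)))

# Toy diffuse shader with the light along +z (an assumption for the demo):
shade = lambda N: max(0.0, dot(N, (0.0, 0.0, 1.0)))

normals = [(1, 0, 1), (-1, 0, 1), (0, 1, 1)]        # vertex normals
vertex_I = [shade(normalize(Nv)) for Nv in normals]  # per-vertex shading
w = (1 / 3, 1 / 3, 1 / 3)                            # pixel at the centroid

g = gouraud_pixel(vertex_I, w)      # ~0.707: averages the vertex values
p = phong_pixel(normals, w, shade)  # ~0.949: sees the interpolated normal
```

At the centroid, Phong shading recovers the brighter value that Gouraud's intensity averaging misses; this is why a specular highlight in a polygon's interior survives Phong interpolation but can vanish under Gouraud.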
Recommended publications
  • Photorealistic Texturing for Dummies
    PHOTOREALISTIC TEXTURING FOR DUMMIES, by Leigh Van Der Byl (http://leigh.cgcommunity.com), compiled by Carlos Eduardo de Paula, Brazil.
    Index. Part 1 – An Introduction To Texturing: Introduction; Why Procedural Textures just don't work; Observing the Aspects of Surfaces In Real Life; The Different Aspects Of Real World Surfaces; Colour; Diffuse; Luminosity; Specularity; ...
  • BRDF, Reflection Integral
    CMSC740 Advanced Computer Graphics, Spring 2020, Matthias Zwicker.
    Today: simulating light transport – the BRDF and the reflection integral; BRDF examples.
    Surface appearance: how is light reflected by a mirror, a white sheet of paper, a blue sheet of paper, or glossy metal?
    The BRDF (bidirectional reflectance distribution function, http://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function) describes quantitatively how light is reflected off a surface: "for every pair of light and viewing directions, the BRDF gives the fraction of transmitted light". It captures the appearance of a surface, e.g. diffuse reflection versus glossy reflection.
    Relation of the BRDF to physics: the BRDF is not a model that explains light scattering on surfaces based on first principles; instead, it is just a quantitative description of the overall result of light scattering for a given type of material.
    Types of reflection: diffuse (matte paint); glossy (plastic, high-gloss paint); perfect specular (mirror); retro-reflective (the surface of the moon, http://en.wikipedia.org/wiki/Retroreflector). Natural surfaces are often combinations of these.
    Mathematical formulation. Preliminaries: the differential irradiance on a surface due to incident radiance Li within a small cone around direction ωi. The BRDF is the fraction of reflected radiance in the outgoing direction ωo over this differential irradiance. Reflection equation: outgoing radiance due to ...
  • Ray Tracing Notes CS445 Computer Graphics, Fall 2012 (Last Modified
    Ray Tracing Notes, CS445 Computer Graphics, Fall 2012 (last modified 10/7/12), Jenny Orr, Willamette University.
    1 The Ray Trace Algorithm
    Figure 1: a camera shoots a computed ray through a screen pixel into the scene; at the hit point, reflected, refracted, and light rays are traced toward Light1, Light2, and Light3, with points in shadow detected along the light rays.
     1 For each pixel in image {
     2     compute ray
     3     pixel_color = Trace(ray)
     4 }
     5 color Trace(ray) {
     6     For each object
     7         find intersection (if any)
     8         check if intersection is closest
     9     If no intersection exists
    10         color = background color
    11     else for the closest intersection
    12         for each light               // local color
    13             color += ambient
    14             if (!inShadow(shadowRay)) color += diffuse + specular
    15         if reflective
    16             color += k_r Trace(reflected ray)
    17         if refractive
    18             color += k_t Trace(transmitted ray)
    19     return color
    20 }
    21 boolean inShadow(shadowRay) {
    22     for each object
    23         if object intersects shadowRay return true
    24     return false
    25 }
    1.1 Complexity
    Let w = width of image, h = height of image, n = number of objects, l = number of lights, and d = levels of recursion for reflection/refraction. Assuming no recursion or shadows, the cost is O(w * h * (n + l * n)). How does this change if shadows and recursion are added? What about anti-aliasing?
    2 Computing the Ray
    In general, points P on a ray can be expressed parametrically as P = P0 + t dir, where P0 is the starting point, dir is a unit vector pointing in the ray's direction, and t ≥ 0 is the parameter. When t = 0, P corresponds to P0, and as t increases, P moves along the ray. In line 2 of the code, we calculate a ray which starts at the camera P0 and points from P0 to the given pixel located at P1 on a virtual screen (the view plane), as shown in Figure 2.
  • Shadow and Specularity Priors for Intrinsic Light Field Decomposition
    Shadow and Specularity Priors for Intrinsic Light Field Decomposition. Anna Alperovich, Ole Johannsen, Michael Strecke, and Bastian Goldluecke, University of Konstanz. [email protected]
    Abstract. In this work, we focus on the problem of intrinsic scene decomposition in light fields. Our main contribution is a novel prior to cope with cast shadows and inter-reflections. In contrast to other approaches which model inter-reflection based only on geometry, we model indirect shading by combining geometric and color information. We compute a shadow confidence measure for the light field and use it in the regularization constraints. Another contribution is an improved specularity estimation by using color information from sub-aperture views. The new priors are embedded in a recent framework to decompose the input light field into albedo, shading, and specularity. We arrive at a variational model where we regularize albedo and the two shading components on epipolar plane images, encouraging them to be consistent across all sub-aperture views. Our method is evaluated on ground truth synthetic datasets and real world light fields. We outperform both state-of-the-art approaches for RGB+D images and recent methods proposed for light fields.
    1 Introduction
    Intrinsic image decomposition is one of the fundamental problems in computer vision, and has been studied extensively [23,3]. For Lambertian objects, where an input image is decomposed into albedo and shading components, numerous solutions have been presented in the literature. Depending on the input data, the approaches can be divided into those dealing with a single image [36,18,14], multiple images [39,24], and image + depth methods [9,20].
  • 3D Computer Graphics Compiled By: H
    An alphabetical index of computer graphics terms, running from animation, charge-coupled device, chromatic aberration, and Cinema 4D through clipping, collision detection, compositing, computational geometry, computer animation, computer-aided design, cone tracing, and constructive solid geometry.
  • CSE 167: Introduction to Computer Graphics Lecture #6: Illumination Model
    CSE 167: Introduction to Computer Graphics. Lecture #6: Illumination Model. Jürgen P. Schulze, Ph.D., University of California, San Diego, Spring Quarter 2015.
    Announcements: Project 3 is due this Friday at 1pm; grading starts at 12:15 in CSE labs 260+270. Next Thursday: midterm; midterm discussion on Monday at 4pm.
    Lecture overview: depth testing; illumination model.
    Visibility: at each pixel, we need to determine which triangle is visible.
    Painter's algorithm: paint from back to front, so every new pixel paints over the previous pixel in the frame buffer. The geometry must be sorted according to depth, and triangles may need to be split if they intersect. An outdated algorithm, created when memory was expensive.
    Z-buffering: store a z-value for each pixel. Depth test: during rasterization, compare the stored value to the new value and update the pixel only if the new value is smaller:
        setpixel(int x, int y, color c, float z)
            if (z < zbuffer(x,y)) then
                zbuffer(x,y) = z
                color(x,y) = c
    The z-buffer is dedicated memory reserved for the GPU (graphics memory), and the depth test is performed by the GPU.
    Z-buffering in OpenGL: in your application, ask for a depth buffer when you create your window; place a call to glEnable(GL_DEPTH_TEST) in your program's initialization routine; ensure that your zNear and zFar clipping planes are set correctly (in glOrtho, glFrustum or gluPerspective) and in a way that provides adequate depth buffer precision; and pass GL_DEPTH_BUFFER_BIT as a parameter to glClear.
    Problem: translucent geometry. Either store multiple depth and color values per pixel (not practical in real-time graphics), or render translucent geometry back to front after rendering the opaque geometry. This does not always work correctly: the programmer has to weigh rendering correctness against computational effort.
    Shading: compute the interaction of light with surfaces; this requires simulation of physics. "Global illumination" traces multiple bounces of light; it is computationally expensive (minutes per image) and is used in movies, architectural design, etc.
  • LEAN Mapping
    LEAN Mapping. Marc Olano and Dan Baker, Firaxis Games.
    Figure 1: In-game views of a two-layer LEAN map ocean with the sun just off screen to the right, and artist-selected shininess equivalent to a Blinn-Phong specular exponent of 13,777: (a) near, (b) mid, and (c) far. Note the lack of aliasing, even with an extremely high power.
    Abstract. We introduce Linear Efficient Antialiased Normal (LEAN) Mapping, a method for real-time filtering of specular highlights in bump and normal maps. The method evaluates bumps as part of a shading computation in the tangent space of the polygonal surface rather than in the tangent space of the individual bumps. By operating in a common tangent space, we are able to store information on the distribution of bump normals in a linearly-filterable form compatible with standard MIP and anisotropic filtering hardware. The necessary textures can be computed in a preprocess or generated in real-time on the GPU for time-varying normal maps.
    1 Introduction. For over thirty years, bump mapping has been an effective method for adding apparent detail to a surface [Blinn 1978]. We use the term bump mapping to refer to both the original height texture that defines surface normal perturbation for shading, and the more common and general normal mapping, where the texture holds the actual surface normal. These methods are extremely common in video games, where the additional surface detail allows a rich visual experience without complex high-polygon models. Unfortunately, bump mapping has serious drawbacks with filtering
  • The Syntax of MDL
    NVIDIA Material Definition Language 1.7 Language Specification. Document version 1.7.1, May 12, 2021. DRAFT — Work in Progress.
    Copyright Information: © 2021 NVIDIA Corporation. All rights reserved. Document build number 345213.
    LICENSE AGREEMENT. License Agreement for NVIDIA MDL Specification. IMPORTANT NOTICE – READ CAREFULLY: This License Agreement ("License") for the NVIDIA MDL Specification ("the Specification") is the LICENSE which governs use of the Specification of NVIDIA Corporation and its subsidiaries ("NVIDIA") as set out below. By copying, or otherwise using the Specification, You (as defined below) agree to be bound by the terms of this LICENSE. If You do not agree to the terms of this LICENSE, do not copy or use the Specification.
    RECITALS. This license permits you to use the Specification, without modification, for the purposes of reading, writing and processing of content written in the language as described in the Specification; such content may include, without limitation, applications that author MDL content, edit MDL content including material parameter editors, and applications using MDL content for rendering.
    1. DEFINITIONS. Licensee. "Licensee," "You," or "Your" shall mean the entity or individual that uses the Specification.
    2. LICENSE GRANT. 2.1. NVIDIA hereby grants you the right, without charge, on a perpetual, non-exclusive and worldwide basis, to utilize the Specification for the purpose of developing, making, having made, using, marketing, importing, offering to sell or license, and selling or licensing, and to otherwise distribute, products complying with the Specification, in all cases subject to the conditions set forth in this Agreement and any relevant patent (save as set out below) and other intellectual property rights of third parties (which may include NVIDIA).
  • Object Shape and Reflectance Modeling from Observation
    Object Shape and Reflectance Modeling from Observation. Yoichi Sato (Institute of Industrial Science, University of Tokyo), Mark D. Wheeler (Apple Computer Inc.), and Katsushi Ikeuchi (Institute of Industrial Science, University of Tokyo).
    ABSTRACT. An object model for computer graphics applications should contain two aspects of information: shape and reflectance properties of the object. A number of techniques have been developed for modeling object shapes by observing real objects. In contrast, attempts to model reflectance properties of real objects have been rather limited. In most cases, modeled reflectance properties are too simple or too complicated to be used for synthesizing realistic images of the object. In this paper, we propose a new method for modeling object reflectance properties, as well as object shapes, by observing real objects. First, an object surface shape is reconstructed by merging multiple range images of the object.
    There has been an increase in the demand for 3D computer graphics. For instance, a new format for 3D computer graphics on the internet, called VRML, is becoming an industrial standard format, and the number of applications using the format is increasing quickly. However, it is often the case that 3D object models are created manually by users. That input process is normally time-consuming and can be a bottleneck for realistic image synthesis. Therefore, techniques to obtain object model data automatically by observing real objects could have great significance in practical applications.
  • IMAGE-BASED MODELING TECHNIQUES for ARTISTIC RENDERING Bynum Murray Iii Clemson University, [email protected]
    Clemson University, TigerPrints, All Theses, 5-2010.
    IMAGE-BASED MODELING TECHNIQUES FOR ARTISTIC RENDERING. Bynum Murray III, Clemson University, [email protected]
    Recommended Citation: Murray III, Bynum, "IMAGE-BASED MODELING TECHNIQUES FOR ARTISTIC RENDERING" (2010). All Theses. 777. https://tigerprints.clemson.edu/all_theses/777
    A Thesis Presented to the Graduate School of Clemson University in Partial Fulfillment of the Requirements for the Degree Master of Arts, Digital Production Arts, by Bynum Edward Murray III, May 2010. Accepted by: Timothy Davis, Ph.D., Committee Chair; David Donar, M.F.A.; Tony Penna, M.F.A.
    ABSTRACT. This thesis presents various techniques for recreating and enhancing two-dimensional paintings and images in three-dimensional ways. The techniques include camera projection modeling, digital relief sculpture, and digital impasto. We also explore current problems of replicating and enhancing natural media and describe various solutions, along with their relative strengths and weaknesses. The importance of artistic skill in the implementation of these techniques is covered, along with implementation within the current industry applications Autodesk Maya, Adobe Photoshop, Corel Painter, and Pixologic Zbrush. The result is a set of methods for the digital artist to create effects that would not otherwise be possible.
  • Material Category Determined by Specular Reflection Structure Mediates the Processing of Image Features for Perceived Gloss
    bioRxiv preprint doi: https://doi.org/10.1101/2019.12.31.892083; this version posted January 2, 2020. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY-NC-ND 4.0 International license.
    Material category determined by specular reflection structure mediates the processing of image features for perceived gloss. Authors: Alexandra C. Schmid (Justus Liebig University Giessen, Germany), Pascal Barla (INRIA, University of Bordeaux, France), and Katja Doerschner (Justus Liebig University Giessen, Germany, and Bilkent University, Turkey). Correspondence: [email protected] (A.C.S.); [email protected] (P.B.); [email protected] (K.D.). 31.12.2019.
    ABSTRACT. There is a growing body of work investigating the visual perception of material properties like gloss, yet practically nothing is known about how the brain recognises different material classes like plastic, pearl, satin, and steel, nor the precise relationship between material properties like gloss and perceived material class. We report a series of experiments that show that parametrically changing reflectance parameters leads to qualitative changes in material appearance beyond those expected by the reflectance function used.