The Graphics Pipeline and OpenGL II: Lighting and Shading, Fragment Processing


Gordon Wetzstein
Stanford University
EE 267 Virtual Reality, Lecture 3
stanford.edu/class/ee267/

Announcements
• Most likely, everyone on the wait list will get in. We may even have space for a few more students, so email us right away if you're auditing and would like to take the class!
• Questions about HW1? Post on Piazza and come to the Zoom office hours!
• WIM workshop 1: this Friday, 2-3 pm, on Zoom. If you are a WIM student, you must attend!
• WIM HW1 goes out this Friday.

Lecture Overview
• rasterization
• the rendering equation, BRDFs
• lighting: computing the interaction between vertices/fragments and light sources
• Phong lighting
• shading: how to assign a color (e.g., based on lighting) to each fragment
• flat, Gouraud, and Phong shading
• vertex and fragment shaders
• texture mapping

Review of Vertex/Normal Transforms
(see https://www.ntu.edu.sg/home/ehchua/programming/opengl/CG_BasicsTheory.html)

Rasterization
Purpose:
1. determine which fragments are inside the triangles
2. interpolate vertex attributes (e.g. color) to all fragments

Rasterization / Scanline Interpolation
[Figure: a grid of 6x6 fragments containing a triangle; the 2D vertex positions after all transformations are (x1, y1), (x2, y2), (x3, y3), and vertices + edges = triangle]
• each vertex has 1 or more attributes A, such as R/G/B color, depth, …
• the user can assign arbitrary attributes, e.g. surface normals
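Step 1 of rasterization, deciding which fragments are inside a triangle, can be sketched with edge functions — a half-space test that is a common alternative to walking scanlines; the vertex coordinates below are made up to match the figure's 6x6 grid:

```python
def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of triangle (a, b, p); positive when p lies
    # to the left of the directed edge a -> b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def inside_triangle(v1, v2, v3, px, py):
    # A fragment center (px, py) is inside a counter-clockwise triangle
    # when it lies on the inner side of all three edges.
    return (edge(*v1, *v2, px, py) >= 0 and
            edge(*v2, *v3, px, py) >= 0 and
            edge(*v3, *v1, px, py) >= 0)

# Walk the 6x6 fragment grid and collect the covered fragments.
covered = [(x, y) for y in range(6) for x in range(6)
           if inside_triangle((1, 4), (1, 1), (4, 1), x, y)]
```

Note this sketch counts fragments exactly on an edge as covered; real rasterizers apply a tie-breaking fill rule so shared edges are drawn exactly once.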
• a scanline moves from the top of the triangle to the bottom
• for each scanline, determine which fragments are inside the triangle
• interpolate the attribute along the edges in y, giving a left value A^(l) and a right value A^(r):

  A^(l) = ((y − y2) / (y1 − y2)) A1 + ((y1 − y) / (y1 − y2)) A2
  A^(r) = ((y − y3) / (y1 − y3)) A1 + ((y1 − y) / (y1 − y3)) A3

• then interpolate along x between the edge crossings x^(l) and x^(r):

  A = ((x^(r) − x) / (x^(r) − x^(l))) A^(l) + ((x − x^(l)) / (x^(r) − x^(l))) A^(r)

• repeat for every scanline
• output: the set of fragments inside the triangle(s), with interpolated attributes for each of these fragments

Lighting & Shading
(how to determine color and what attributes to interpolate)

The Rendering Equation
• direct (local) illumination: light source → surface → eye
• indirect (global) illumination: light source → surface → … → surface → eye

  L_o(x, ω_o, λ, t) = L_e(x, ω_o, λ, t) + ∫_Ω f_r(x, ω_i, ω_o, λ, t) L_i(x, ω_i, λ, t) (ω_i · n) dω_i

  where x is the 3D location, ω_o the direction towards the viewer, λ the wavelength, and t time; L_o is the radiance towards the viewer, L_e the emitted radiance, f_r the BRDF, and L_i the incident radiance from some direction ω_i.

Kajiya, "The Rendering Equation", SIGGRAPH 1986
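The two interpolation steps above translate directly into code (a minimal sketch; the function and parameter names are made up, with "top"/"bot" following the convention that vertex 1 is the topmost vertex):

```python
def interp_edge(y, y_top, y_bot, a_top, a_bot):
    # A = ((y - y_bot) / (y_top - y_bot)) * A_top
    #   + ((y_top - y) / (y_top - y_bot)) * A_bot
    t = (y - y_bot) / (y_top - y_bot)
    return t * a_top + (1.0 - t) * a_bot

def interp_span(x, x_l, x_r, a_l, a_r):
    # A = ((x_r - x) / (x_r - x_l)) * A_l + ((x - x_l) / (x_r - x_l)) * A_r
    t = (x_r - x) / (x_r - x_l)
    return t * a_l + (1.0 - t) * a_r

# Attribute halfway down an edge running from A1 = 1.0 (y = 4) to A2 = 0.0 (y = 0):
a_left = interp_edge(y=2.0, y_top=4.0, y_bot=0.0, a_top=1.0, a_bot=0.0)  # 0.5
```

The same two helpers cover every attribute a vertex carries — color channels, depth, normals — one call per scalar component.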
The Rendering Equation, Simplified
• drop time, wavelength (use RGB instead), emission, and global illumination to make it simple
• only direct (local) illumination remains: light source → surface → eye

  L_o(x, ω_o) = Σ_{k=1}^{num_lights} f_r(x, ω_k, ω_o) L_i(x, ω_k) (ω_k · n)

Kajiya, "The Rendering Equation", SIGGRAPH 1986
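The simplified sum can be evaluated literally, with a constant (Lambertian-style) value standing in for the BRDF f_r; the function and argument names are illustrative only:

```python
def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def radiance_out(n, lights, f_r):
    # L_o = sum over lights k of f_r * L_i(w_k) * (w_k . n);
    # the cosine is clamped at zero so lights behind the surface
    # do not subtract light.
    return sum(f_r * L_i * max(dot(w_k, n), 0.0) for w_k, L_i in lights)

# One light directly overhead, one behind the surface (clamped away):
L_o = radiance_out(n=(0.0, 0.0, 1.0),
                   lights=[((0.0, 0.0, 1.0), 1.0),
                           ((0.0, 0.0, -1.0), 2.0)],
                   f_r=0.5)  # 0.5
```

Swapping the constant f_r for an ambient + diffuse + specular combination turns this sum into exactly the Phong lighting model described next.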
Bidirectional Reflectance Distribution Function (BRDF)
• many different BRDF models exist: analytic or data-driven (i.e., captured) [Ngan et al. 2004]
• a BRDF can be approximated with a few simple components: a diffuse component plus a specular component, defined relative to the surface normal and the incident light direction
(figure: http://escience.anu.edu.au/lecture/cg/GlobalIllumination/BRDF.en.html)

Phong Lighting
• a simple model for direct lighting with ambient, diffuse, and specular parts
• an emissive part can be added if desired
• calculated separately for each color channel: R, G, B
• requires:
  • material colors m_{R,G,B} (one each for ambient, diffuse, specular)
  • light colors l_{R,G,B} (one each for ambient, diffuse, specular)
  • L: normalized vector pointing towards the light source
  • N: normalized surface normal
  • V: normalized vector pointing towards the viewer
  • R = 2(N · L)N − L: the normalized reflection of L about the surface normal

Phong Lighting: Ambient
• independent of light/surface position, viewer, and normal; basically adds some background color

  m^ambient_{R,G,B} · l^ambient_{R,G,B}

Phong Lighting: Diffuse
• needs the normal and the light source direction
• adds an intensity cosine falloff with the incident angle, via the dot product:

  m^diffuse_{R,G,B} · l^diffuse_{R,G,B} · max(L · N, 0)

Phong Lighting: Specular
• needs the normal, the light direction, and the viewer direction
• models reflections, i.e., specular highlights
• shininess is an exponent; it is larger for smaller highlights (more mirror-like surfaces)
  m^specular_{R,G,B} · l^specular_{R,G,B} · max(R · V, 0)^shininess

Phong Lighting: Attenuation
• models the intensity falloff of light with distance d: the greater the distance, the lower the intensity

  attenuation = 1 / (k_c + k_l d + k_q d²)

  with constant (k_c), linear (k_l), and quadratic (k_q) attenuation coefficients

Phong Lighting: Putting it all Together
• a simple but efficient lighting model; it has been used by OpenGL for ~25 years
• absolutely NOT sufficient to generate photo-realistic renderings (take a computer graphics course for that)

  color_{R,G,B} = m^ambient_{R,G,B} · l^ambient_{R,G,B}
    + Σ_{i=1}^{num_lights} (1 / (k_c + k_l d_i + k_q d_i²)) · ( m^diffuse_{R,G,B} · l^diffuse_{i,{R,G,B}} · max(L_i · N, 0) + m^specular_{R,G,B} · l^specular_{i,{R,G,B}} · max(R_i · V, 0)^shininess )

Lighting Calculations
• all lighting calculations happen in camera/view space!
• transform vertices and normals into camera/view space, then calculate lighting per color channel (i.e., given the material properties, light source color & position, vertex position, normal direction, and viewer position)

Lighting vs. Shading
• lighting: the interaction between light and a surface (e.g., using the Phong lighting model; think of this as "what formula is used to calculate intensity/color")
• shading: how to compute the color of each fragment (e.g., which attributes to interpolate and where the lighting calculation is done)
  1. flat shading
  2. Gouraud shading (per-vertex lighting)
  3. Phong shading (per-fragment lighting; different from Phong lighting)
(images courtesy Intergraph Computer Systems)

Flat Shading
• compute the color only once per triangle (i.e. with Phong lighting)
• pro: usually fast to compute; con: creates a flat, unrealistic appearance
• we won't use it

Gouraud or Per-Vertex Shading
• compute the color once per vertex (i.e. with Phong lighting), then interpolate the resulting vertex colors across each triangle during rasterization
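The complete per-channel model above can be sketched as follows; this is a minimal, unoptimized illustration whose dictionary layout and names are made up for the example, not an OpenGL API:

```python
def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def reflect(n, l):
    # R = 2 (N . L) N - L
    d = 2.0 * dot(n, l)
    return tuple(d * nc - lc for nc, lc in zip(n, l))

def phong_color(mat, ambient_light, lights, N, V):
    # mat: per-channel "ambient"/"diffuse"/"specular" colors plus a scalar
    # "shininess". Each light: per-channel "diffuse"/"specular" colors, a
    # unit direction "L" towards the light, a distance "d", and attenuation
    # coefficients "kc", "kl", "kq". All vectors must already be in the
    # same (camera/view) space and normalized.
    color = [mat["ambient"][c] * ambient_light[c] for c in range(3)]
    for lt in lights:
        att = 1.0 / (lt["kc"] + lt["kl"] * lt["d"] + lt["kq"] * lt["d"] ** 2)
        n_dot_l = max(dot(lt["L"], N), 0.0)
        r_dot_v = max(dot(reflect(N, lt["L"]), V), 0.0) ** mat["shininess"]
        for c in range(3):
            color[c] += att * (mat["diffuse"][c] * lt["diffuse"][c] * n_dot_l
                               + mat["specular"][c] * lt["specular"][c] * r_dot_v)
    return tuple(color)
```

With a single head-on light (N = V = L) and no distance falloff (kc = 1, kl = kq = 0), the ambient, diffuse, and specular contributions simply add, which makes the formula easy to sanity-check by hand.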