A Theoretical Framework for Physically Based Rendering


A Theoretical Framework for Physically Based Rendering
Volume 13 (1994), number 2, pp. 97-107

Eric P. Lafortune and Yves D. Willems
Department of Computer Science, Katholieke Universiteit Leuven
Celestijnenlaan 200A, 3001 Heverlee, Belgium
[email protected]

Abstract

In this paper we introduce the concept of the global reflection distribution function, which allows a concise formulation of the global illumination problem. Unlike previous formulations it is not geared towards any specific algorithm. As an example of its versatility we derive a Monte Carlo rendering algorithm that seamlessly integrates the ideas of shooting and gathering power to create photorealistic images.

Keywords: Rendering and visualisation; Global illumination; Photorealistic rendering

1. Introduction

The global illumination problem consists of determining the lighting functions for a given scene by simulating the transport of light throughout the scene. The starting point for finding a physically correct solution consists in identifying the quantities and properties that are relevant to the problem. Using these notions a mathematical model can be constructed which formally expresses the physical laws of illumination, reflection, refraction and everything else encompassing the transport of light. The complexity of the problem most often forces simplifying assumptions. The final step is to derive practical algorithms to solve the mathematically formulated problems. A concise mathematical formulation allows standard analytical and numerical techniques to be applied to solve the problem. In computer graphics the solution of the global illumination problem is mostly used to create a realistic rendering of the scene. In this case the solution can be tuned for one or more specific views.

In the following paragraphs we will introduce a new formulation of the global illumination problem. The medium is assumed to be non-participating; only surfaces interact with light, by reflection, refraction and absorption. We will then present a new rendering algorithm and show how it can be derived from the mathematical formulation.

2. Related Work

An important milestone in the development of the global illumination theory for computer graphics was the introduction of the radiosity method by Goral et al. [1]. Originally developed within the field of heat transfer, it is based on the energy equilibrium between diffuse emitters and reflectors of radiative energy such as heat or light. Although restricted to the simulation of diffuse lighting effects, it presents a sound physical model. Discretisation of the scene into patches or elements results in a mathematical model consisting of a set of linear energy equations. The coefficients are determined by the geometry and the reflective properties of the scene, the right-hand side coefficients are the self-emitted radiosities of the elements, and the unknowns are the radiosities of the elements. Various algorithms have been presented to solve this set of equations. The radiosity solution is view-independent. Smits et al. [2] introduced the notion of importance and adapted the radiosity algorithm to tune the solution with respect to the final rendering parameters.

The rendering equation as proposed by Kajiya [3] provides a more general mathematical formulation of the problem that is no longer restricted to diffusely emitting surfaces. It can be expressed as a Fredholm equation of the second kind where the kernel is determined by the geometry and the reflective properties of the scene.

© The Eurographics Association 1994. Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK and 238 Main Street, Cambridge, MA 02142, USA.

Figure 1: Explanation of the symbols used in the rendering equation and the potential equation respectively.
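The discrete radiosity system mentioned above — a set of linear energy equations whose coefficients come from the geometry and reflectances, with the self-emitted radiosities on the right-hand side — can be sketched numerically. The following is a minimal illustration, not the paper's method; the form factors, reflectances, and emission values are made up for the example:

```python
# Minimal sketch of the discrete radiosity system B = E + rho * F B,
# solved by Jacobi (gathering) iteration. All numbers are illustrative.

def solve_radiosity(E, rho, F, iterations=200):
    """Iterate B_i = E_i + rho_i * sum_j F_ij * B_j until (approximate) convergence."""
    n = len(E)
    B = list(E)  # start from the self-emitted radiosities
    for _ in range(iterations):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# A toy enclosure of three equal-area patches; patch 0 is a light source.
E   = [10.0, 0.0, 0.0]    # self-emitted radiosities (right-hand side)
rho = [0.0, 0.8, 0.5]     # diffuse reflectances
F   = [[0.0, 0.5, 0.5],   # F[i][j]: form factor, the fraction of energy
       [0.5, 0.0, 0.5],   # leaving patch i that arrives at patch j
       [0.5, 0.5, 0.0]]   # (symmetric here because the areas are equal)

B = solve_radiosity(E, rho, F)
# B[1] and B[2] are nonzero purely through interreflection.
```

The iteration converges because each patch reflects less energy than it receives (the spectral radius of rho * F is below one), which is guaranteed for physically valid reflectances.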
The unknown function is the radiance across the surfaces of the scene. Path tracing is presented as a Monte Carlo algorithm that solves the rendering equation. The idea is to sample the flux through the pixels, gathering light by following all light paths back to the light sources. As such it is entirely view-dependent. Various other algorithms are based on the same principle [4-9].

Pattanaik [10-12] recently presented the potential equation, which is adjoint to the rendering equation. The notions of radiance and potential capability, and the set of equations, form a mathematical framework that explains all common global illumination algorithms. He proposed an accompanying Monte Carlo particle tracing algorithm that shoots light particles from the light sources [13]. The scene is discretised into elements at which received and diffusely emitted power is accumulated during the simulation. In a second view-dependent pass, path tracing or distribution ray tracing is used to render the scene, adding specular effects to the already computed diffuse components.

Most other algorithms that solve the global illumination problem are similarly based on two-pass methods [14-17]. They compute diffuse lighting components in a first extended radiosity pass and specular lighting components in a second view-dependent pass. Some algorithms try to represent the complete radiance function instead of only its diffuse component in order to find a view-independent solution. The main problem with this approach is the huge amount of storage that is required to represent the lighting function. Aupperle and Hanrahan [18] and Christensen et al. [19] tried to meet this problem using the notion of directional importance, explained in a mathematical framework similar to Pattanaik's. Their deterministic algorithms concentrate on important contributions to the solution and thereby strongly reduce the storage requirements and the computational times.

3. The Rendering Equation and the Potential Equation

We briefly recall the rendering equation and the potential equation, which were presented by Pattanaik [10-12] as a set of adjoint equations, both describing the global illumination problem from a different point of view. The rendering equation expresses the emitted radiance at a given point on a surface and in a given direction as the sum of the self-emitted radiance and of the reflected radiance resulting from incoming light from other surfaces (Figure 1):

$$ L(x \to \Theta) = L_e(x \to \Theta) + \int_{\Omega_x} f_r(x, \Theta' \to \Theta) \, L(y \to -\Theta') \, |\cos(\Theta', N_x)| \, d\omega_{\Theta'} $$

where:

L(x \to \Theta) = the emitted radiance at point x along direction \Theta,
L_e(x \to \Theta) = the self-emitted radiance at point x along direction \Theta,
f_r(x, \Theta' \to \Theta) = the bi-directional reflection distribution function (brdf) at point x for light coming in from direction \Theta' and going out along direction \Theta,
y = the point that 'sees' point x along direction \Theta',
|\cos(\Theta', N_x)| = the absolute value of the cosine of the angle between the direction \Theta' and the normal N_x to the surface at point x,
\Omega_x = the set of incoming directions around point x (a sphere or a hemisphere),
d\omega_{\Theta'} = a differential solid angle around \Theta' [sr].

The flux of light leaving a given set S, consisting of pairs of points on a surface and directions around these points in which one is interested, can then be expressed by:

$$ \Phi(S) = \int_A \int_{\Omega_x} \chi_S(x, \Theta) \, L(x \to \Theta) \, |\cos(\Theta, N_x)| \, d\omega_\Theta \, dA_x $$

where:

\chi_S(x, \Theta) = 1 if (x, \Theta) belongs to the set S, and 0 otherwise,
A = the set of all surface points of the given scene,
dA_x = a differential surface area element around point x on surface A.

Sets S in which one is commonly interested are the points and directions associated with the patches in a scene for radiosity methods, and the points and directions seen through the pixels for ray tracing techniques.
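The reflected-radiance integral of the rendering equation lends itself directly to Monte Carlo estimation, which is the basis of path tracing. The sketch below treats the simplest possible case — a diffuse brdf f_r = rho/pi receiving a constant radiance L_in from the whole hemisphere, for which the integral has the closed form rho * L_in. The surface parameters are assumptions for illustration, not values from the paper:

```python
import math
import random

# Monte Carlo estimate of the reflected-radiance integral
#   integral over the hemisphere of f_r * L_in * cos(theta) d_omega
# for a diffuse brdf f_r = rho / pi and constant incoming radiance L_in.
# The exact value is rho * L_in, which lets us check the estimator.

def reflected_radiance(rho, L_in, samples=100_000, seed=1):
    rng = random.Random(seed)
    pdf = 1.0 / (2.0 * math.pi)   # uniform sampling of the hemisphere
    f_r = rho / math.pi           # diffuse brdf
    total = 0.0
    for _ in range(samples):
        # For directions uniform over the hemisphere, cos(theta) is uniform
        # on [0, 1); the integrand here does not depend on the azimuth.
        cos_theta = rng.random()
        total += f_r * L_in * cos_theta / pdf
    return total / samples

estimate = reflected_radiance(rho=0.7, L_in=2.0)
# estimate converges to rho * L_in = 1.4 as the sample count grows
```

In a real path tracer the constant L_in is replaced by a recursive evaluation of L(y -> -Theta') at the point y seen along each sampled direction, which is exactly the gathering interpretation described above.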
The potential capability W(x \to \Theta) is the dual of the radiance function L(x \to \Theta). It is defined as the differential flux through a given set S of points and directions, resulting from a single unit of emitted radiance from point x along direction \Theta. Although not presented as such earlier, this definition can be expressed formally as:

$$ W(x \to \Theta) = \frac{d\Phi(S)}{L_e(x \to \Theta) \, |\cos(\Theta, N_x)| \, d\omega_\Theta \, dA_x} $$

The potential equation that governs the potential capability expresses that the potential of a point and a direction to light the given set results from a direct contribution if the pair belongs to the set, and from an indirect contribution through reflection on the surface to which it points (Figure 1). Formally:

$$ W(x \to \Theta) = \chi_S(x, \Theta) + \int_{\Omega_y} f_r(y, \Theta \to \Theta') \, W(y \to \Theta') \, |\cos(\Theta', N_y)| \, d\omega_{\Theta'} $$

where:

y = the point seen by point x along direction \Theta,
\Omega_y = the set of outgoing directions around point y.

The flux can now be expressed in terms of the potential capability:

$$ \Phi(S) = \int_A \int_{\Omega_x} W(x \to \Theta) \, L_e(x \to \Theta) \, |\cos(\Theta, N_x)| \, d\omega_\Theta \, dA_x $$

Figure 2: The global reflection distribution function gives a measure of the fraction of the radiance emitted at point x along direction \Theta that is eventually output at point y along direction \Theta' through all possible reflections throughout the scene.

4. The Global Reflection Distribution Function

From the previous paragraph it is clear that the radiance function L(x \to \Theta) is defined with respect to a fixed self-emitted radiance function L_e(x \to \Theta), and that the potential function W(x \to \Theta) is defined with respect to a fixed function \chi_S(x, \Theta) corresponding to the set S. Both are implicitly defined by the expressions, making it difficult to combine them into some model or algorithm. We therefore propose the notion of the global reflectance distribution function (grdf), which is defined with respect to a given scene but which does not depend on any particular emission function such as L_e nor on a fixed set such as S.

4.1. Definition

The definition bears some similarity to the approach often taken in system theory, where a system is studied on the basis of its response to some input signal. The grdf expresses the differential amount of radiance leaving point y along direction \Theta' as a result of a single unit of emitted radiance from point x on a surface along direction \Theta (Figure 2). Formally:

$$ f_{r,g}(x, \Theta \to y, \Theta') = \frac{dL(y \to \Theta')}{L_e(x \to \Theta) \, |\cos(\Theta, N_x)| \, d\omega_\Theta \, dA_x} $$

The grdf may be considered as a generalisation of the brdf. While the latter only specifies the local behaviour of light reflecting at a single surface, the former defines the global illumination effects of light reflecting through a complete scene. Note that the function is defined in terms of radiances leaving the surfaces, unlike the brdf, which is commonly expressed in terms of both incoming and outgoing radiances. It is theoretically defined over the entire space of points and directions, but in the
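The adjoint relationship between the rendering equation and the potential equation has a simple discrete analogue: if L = Le + T L (gathering) and W = chi_S + T^t W (shooting) for some transport matrix T, then the measured flux satisfies <chi_S, L> = <W, Le>. The sketch below demonstrates this numerically; the transport matrix and emission values are arbitrary illustrative numbers, not derived from any scene:

```python
# Discrete analogue of the adjoint equation pair: solving the gathering
# system L = Le + T L and the shooting system W = chi_S + T^t W gives the
# same measured flux, <chi_S, L> == <W, Le>. All values are illustrative.

def solve(T, b, iterations=500):
    """Fixed-point (Neumann series) solution of x = b + T x."""
    n = len(b)
    x = list(b)
    for _ in range(iterations):
        x = [b[i] + sum(T[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

def transpose(T):
    return [list(row) for row in zip(*T)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

T     = [[0.0, 0.3, 0.2],   # toy transport matrix: entry (i, j) is the
         [0.1, 0.0, 0.4],   # fraction of element j's radiance reflected
         [0.2, 0.2, 0.0]]   # towards element i
Le    = [5.0, 0.0, 0.0]     # emission concentrated at element 0
chi_S = [0.0, 0.0, 1.0]     # the set S: we measure what leaves element 2

L = solve(T, Le)                  # gathering: radiance everywhere
W = solve(transpose(T), chi_S)    # shooting: potential everywhere

flux_gather = dot(chi_S, L)
flux_shoot  = dot(W, Le)
# flux_gather and flux_shoot agree up to iteration and rounding error
```

This is the identity that the grdf makes explicit in the continuous setting: the same quantity can be computed by gathering towards the set of interest or by shooting from the emitters.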