Ray Tracing and Global Illumination

Jason Rupard
University of North Florida

Faculty Sponsor: Dr. Yap Siong Chua, Professor of Computer Science

Abstract

In order to represent real-world images with a computer, a program has to relate three-dimensional images on a two-dimensional monitor screen. Several ways of doing this exist, with varying degrees of realism. One of the most successful methods can be grouped into a "screen-to-world method" of viewing, also known as "ray tracing." This technology simulates light rays within a 3D environment. Since light rays have predictable physical properties, the ray-tracing algorithm can attempt to calculate the exact coloring of each ray/object intersection at any given pixel. Advanced levels of ray tracing allow light rays to bounce from object to object, mimicking what they do in real life.

"Local illumination" represents the basic form of ray tracing. It only takes into account the relationship between light sources and a single object, and does not consider the effects that result from the presence of multiple objects. For instance, a light source can be intersected by another surface and therefore be obscured to any point behind that surface. Similarly, light can be contributed not by a light source but by a reflection of light from some other object. The local illumination model does not visually show this reflection of light, so special techniques have to be used to represent these effects. In real life there are often multiple sources of light and multiple reflecting objects that interact with each other in many ways. "Global illumination," the more advanced form of ray tracing, adds to the local model by reflecting light from surrounding surfaces onto the object. A global illumination model is more comprehensive and more physically correct, and it produces more realistic images.

Ray tracing is an essential subject in computer graphics. It combines issues of efficiency and realism, finding a favorable balance of the time and effort involved in making realistic three-dimensional images. In the process of researching the many different ways of implementing a ray tracer, this study began with local illumination and graduated to global illumination, using some pre-established techniques along with the development of new ones.

Ray Tracing Basics

A basic model, shown in Figure 1, will shoot one ray per pixel. If an image is 800x600 pixels, then when the ray tracing is complete, 480,000 rays will have been shot. Each ray will begin at the viewer and end at its closest intersection with an object in the scene. The viewer's location is defined along with the other objects of the scene in an input file. An illumination model is then applied to figure out how much light is falling on that point and what color will be produced. An illumination model is an equation used to calculate the intensity of light that we should see at a given point on the surface of an object [2].

Objects within a scene have properties describing their color, whether they are reflective (mirror-like) or refractive (glass-like), and their location within the scene. Objects can be spheres, triangles (polygons), rings, cylinders, etc. [5]. Any one of these shapes could be a light as well, known as an area light source when the whole surface of the object emits light. For now, we will use point light sources, light coming from a single point in the scene, for our illumination model.


Figure 1. Basic Ray Tracing Scenario
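As a rough illustration of the kind of object and light description such a scene input file might carry, the Python sketch below defines minimal records for a scene object and a point light. The field names and layout are assumptions made for illustration, not the actual input format of this ray tracer.

from dataclasses import dataclass

@dataclass
class SceneObject:
    # Hypothetical record; the real input format of the ray tracer may differ.
    shape: str                 # "sphere", "triangle", "ring", "cylinder", ...
    position: tuple            # (x, y, z) location within the scene
    color: tuple               # diffuse (R, G, B), each component in [0.0, 1.0]
    reflective: bool = False   # mirror-like surface
    refractive: bool = False   # glass-like surface
    emits_light: bool = False  # True turns the whole surface into an area light source

@dataclass
class PointLight:
    # Light coming from a single point in the scene, as used by the local model.
    position: tuple
    intensity: float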

Calculating the Closest Intersection

Parametric equations for a line in 3-dimensional space are used to calculate the closest intersection with an object from the eye (viewer) [4]. These equations are shown next to Figure 2. The goal is to find the smallest t value, which gives the closest intersection to the viewer, as shown in Figure 2.

x = x0 + t*(x1 - x0)
y = y0 + t*(y1 - y0)
z = z0 + t*(z1 - z0)


Figure 2. Intersections along a ray, where t=32.3 is the closest one.

Point0 (x0, y0, z0) is the location of the viewer and the origin of the ray. Point1 (x1, y1, z1) is a point on the image plane. Point (x, y, z) is any point on the line defined by P0 and P1. Notice that (x1 - x0) is the x component of a vector, and the same holds for y and z. So a ray can be represented by the vector (x1 - x0, y1 - y0, z1 - z0).
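A minimal Python sketch of this search for the smallest t, assuming the scene objects are spheres given as (center, radius) pairs; the function names and sphere representation are illustrative, not taken from the paper's ray tracer.

import math

def ray_sphere_t(p0, p1, center, radius):
    # Smallest positive t where the ray p0 + t*(p1 - p0) hits the sphere, or None.
    d = tuple(b - a for a, b in zip(p0, p1))          # ray direction (x1-x0, y1-y0, z1-z0)
    oc = tuple(a - c for a, c in zip(p0, center))     # vector from sphere center to ray origin
    A = sum(di * di for di in d)
    B = 2.0 * sum(di * oi for di, oi in zip(d, oc))
    C = sum(oi * oi for oi in oc) - radius * radius
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None                                   # ray misses the sphere
    ts = [(-B - math.sqrt(disc)) / (2.0 * A), (-B + math.sqrt(disc)) / (2.0 * A)]
    hits = [t for t in ts if t > 0.0]
    return min(hits) if hits else None

def closest_intersection(p0, p1, spheres):
    # Return (t, sphere) for the smallest t over all objects, as in Figure 2.
    best = None
    for center, radius in spheres:
        t = ray_sphere_t(p0, p1, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, (center, radius))
    return best

# Example: closest_intersection((0, 0, 0), (0, 0, 1), [((0, 0, 5), 1), ((0, 0, 9), 1)])
# returns (4.0, ((0, 0, 5), 1)): the nearer sphere is hit first, as in Figure 2.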

Applying Illumination

Now that the calculation of what the viewer can see at a particular pixel is in place, the illumination model is applied. The local illumination model is used to figure out what color the pixel will be.

Pixel_color = Diffuse + Specular

A diffused material is a dull material, like chalk. At the point of intersection, a vector is made from the intersection to a light. This forms the light vector L. N is the normal of the surface at that intersection. L and N are shown in Figure 3. The normal is perpendicular to the surface. The formula to calculate the diffused component of the local illumination model is as follows [2]:

Diffuse = K_diffuse * Color_diffuse * cos θ

K_diffuse and Color_diffuse are pre-defined inputs of the program describing a particular object's diffused properties. The angle between L and N is θ, which is calculated and will change according to the light's location. This gives the object a shaded look dependent on the light.



Figure 3. Diffused component, L points at a light source and N is normal to the surface.

Specular = K_specular * Color_specular * cos^shiny φ

Specular color is viewer dependent. The closer the reflection vector R points towards the eye, the brighter the pixel will get. Simply put, specular color brightens a point more if the light reflects back into the viewer's eye. K_specular, Color_specular, and shiny are inputs describing the object. The shiny exponent affects the specular spot on the object, shown in Figure 5: the higher shiny is, the more concentrated the spot becomes.


Figure 4. Specular component, R is the reflection of L.


Figure 5. Specular color on a sphere; the shiny exponent will affect the area of the specular "spot".
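To make the two formulas concrete, here is a minimal Python sketch of the local illumination model (Diffuse + Specular) evaluated per color channel at one intersection. The vector helpers, the reflection formula R = 2(N·L)N - L, and the default parameter values are illustrative assumptions rather than the paper's own code.

import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def local_illumination(N, L, V, color_diffuse, color_specular,
                       k_diffuse=1.0, k_specular=1.0, shiny=32):
    # Pixel = Diffuse + Specular at one intersection, per (R, G, B) channel.
    N, L, V = normalize(N), normalize(L), normalize(V)
    cos_theta = max(dot(N, L), 0.0)                    # cos of the angle between L and N
    # R is the reflection of L about N (Figure 4): R = 2(N.L)N - L
    R = tuple(2.0 * dot(N, L) * n - l for n, l in zip(N, L))
    cos_phi = max(dot(R, V), 0.0)                      # how closely R points at the eye
    diffuse  = tuple(k_diffuse  * c * cos_theta        for c in color_diffuse)
    specular = tuple(k_specular * c * cos_phi ** shiny for c in color_specular)
    return tuple(min(d + s, 1.0) for d, s in zip(diffuse, specular))   # clamp to [0.0, 1.0]

In this sketch V is the unit vector from the intersection point back toward the viewer, and the diffuse and specular colors would come from the object's description in the scene input.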

Pixels will be in a "Red-Green-Blue" color space, known as (R, G, B) values. Each RGB component will have the range [0.0 - 1.0]. White would be (1, 1, 1) and black (0, 0, 0). The illumination model formula becomes [2]:

Pixel_(R,G,B) = Diffuse_(R,G,B) + Specular_(R,G,B)

Smoothing the Image

Antialiasing is a technique used for smoothing an image. It takes the sharp, jagged edges of an image and blends them with the colors around the edge, making them smooth [1]. For instance, where a black surface intersects a white surface, at those points of intersection the colors will blend and make a grayish color. Applying this technique to ray tracing is straightforward. Take a pixel and divide it into sub-pixels, as shown in Figure 6, and shoot the sub-pixels with rays. Add all the sub-pixel colors up and divide that by the number of sub-pixels. This gives an average color for the whole pixel. This works because the sub-rays will not all hit the same place; some will hit the black surface and some will hit the white surface. Averaging the colors will then give a gray.


Figure 6. The subdivision of a pixel to make sub-pixels. Calculate the color for each sub-pixel and then average them to get a color for the whole pixel.
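A small Python sketch of the sub-pixel averaging described above; trace_ray(x, y) stands in for the ray tracer's per-ray color computation and is an assumed hook, not part of the paper's code.

def antialiased_pixel(px, py, trace_ray, n=4):
    # Average an n x n grid of sub-pixel samples to get one smoothed pixel color.
    total = [0.0, 0.0, 0.0]
    for i in range(n):
        for j in range(n):
            # Sub-pixel centers spread evenly across the 1x1 pixel area (Figure 6).
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            colour = trace_ray(x, y)                       # shoot one ray through this sub-pixel
            total = [t + c for t, c in zip(total, colour)]
    return tuple(c / (n * n) for c in total)               # average of all sub-pixel colors

# Example: a fake tracer that returns black left of x = 9.5 and white to the right,
# so a pixel straddling the edge averages to gray: prints (0.5, 0.5, 0.5).
print(antialiased_pixel(9, 5, lambda x, y: (0, 0, 0) if x < 9.5 else (1, 1, 1)))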

Accelerated Ray Tracing

Ray tracing is a very time-consuming algorithm. The majority of the time goes to finding the intersections of a ray [1]. To find an intersection you have to test a ray against every object in the scene, and then choose the closest intersection. Therefore, if there are 86K objects in a scene and the image is 800x600 (480K rays), 41.28 billion intersection calculations are made. A way of speeding up this process is to use a 3-D grid to encompass all the objects in a scene. Now, instead of testing all 86K objects per ray, only the objects in the sub-boxes through which the ray passes are tested, as shown in Figure 7. Only objects in boxes 8, 9, 10, 11, and 12 need to be tested for intersection.


Figure 7. 2-D representation of the 3-D grid structure.
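A rough Python sketch of the binning step for such a grid, assuming sphere objects and a cubic scene volume; it shows how objects end up in cells and how a cell is looked up, but leaves out the walk of a ray through the cells it crosses. All names here are illustrative.

from collections import defaultdict

def cell_of(point, scene_min, cell_size):
    # Integer (i, j, k) index of the grid cell containing a point.
    return tuple(int((p - lo) // cell_size) for p, lo in zip(point, scene_min))

def build_grid(spheres, scene_min, scene_max, resolution):
    # Bin each sphere into every cell its bounding box overlaps.
    cell_size = (scene_max[0] - scene_min[0]) / resolution
    grid = defaultdict(list)
    for center, radius in spheres:
        lo = cell_of(tuple(c - radius for c in center), scene_min, cell_size)
        hi = cell_of(tuple(c + radius for c in center), scene_min, cell_size)
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    grid[(i, j, k)].append((center, radius))
    return grid, cell_size

# A ray is then tested only against grid[cell] for the cells it passes through
# (boxes 8-12 in Figure 7), instead of against every object in the scene.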

Another way of accelerating the rendering process is to take advantage of a computer that has multiple processors. With this capability, an image can be split into N sub-images, where N is the number of processors. Each processor then has one sub-image to work on, making the run time faster.

Interpolation of Normals

A normal is perpendicular to the surface at a particular point on that surface [4]. If a triangle has just one normal for all the points on the triangle, then that triangle will be perfectly flat. With one normal per triangle, an object made up of triangles will look patched, as seen with the teapot on the left in Figure 8. With interpolation, the goal is to have a slightly different normal for every point on the triangle, making the object curved, as seen with the teapot on the right in Figure 8 [2].

Figure 8. Left teapot is without interpolation; the right teapot was rendered with interpolation.

This new normal, N, will be calculated from three other normals, Na, Nb, and Nc, representing the normals at the three points of a triangle, A, B, and C respectively, shown in Figure 9. The three normals of the triangle are pre-defined inputs to the ray tracer, calculated by a different program based upon the averaging of the surrounding triangles and their normals. N is a linear combination of the vectors Na, Nb, and Nc:

N = α * Na + β * Nb + σ * Nc

The location of point P and its distance from each vertex determine which one of α, β, or σ is weighted more than the rest. In Figure 9, P is closer to B, therefore β will have a greater value than α and σ.

Figure 9. A triangle with three normals used to calculate N, the normal for point P.
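The paper does not spell out how the weights are computed beyond their dependence on P's distance from each corner; a common choice is to take α, β, and σ as the barycentric coordinates of P inside triangle ABC, which the Python sketch below assumes.

import math

def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def interpolated_normal(P, A, B, C, Na, Nb, Nc):
    # Blend the three vertex normals; the weight of each grows as P nears that vertex.
    # Barycentric coordinates of P with respect to triangle ABC.
    v0, v1, v2 = sub(B, A), sub(C, A), sub(P, A)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    beta  = (d11 * d20 - d01 * d21) / denom     # weight of B
    sigma = (d00 * d21 - d01 * d20) / denom     # weight of C
    alpha = 1.0 - beta - sigma                  # weight of A
    N = tuple(alpha * na + beta * nb + sigma * nc for na, nb, nc in zip(Na, Nb, Nc))
    return normalize(N)

# When P lies exactly on B, beta = 1 and the other weights vanish, matching the
# behavior described above for points near a vertex.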

Global Illumination

Global illumination will give a more realistic image. It takes into account all light, direct and indirect, to form a better lighting model on a surface. With local illumination, one ray is shot to every light to calculate how much light is falling on that surface. With global illumination, once the intersection is calculated, many rays are shot out in different directions to produce the light falling on that point. To produce the most realistic image possible, all directions of light would have to be tested. This is impossible because there are infinite directions of light falling on any particular point. Instead, sample rays are shot to produce a lighting model, shown in Figure 11. Figure 10 shows that the more samples that are taken, the better the image will come out [4].

Figure 10. Left image: 100 sample rays per intersection. Middle image: 1000 sample rays per intersection. Right image: 3500 sample rays per intersection.



Figure 11. Basic Global Illumination Scenario

Sample rays can bounce randomly off objects until they reach a light. If a ray never reaches a light within the maximum allowed bounces, then it is thrown out of the final calculation of pixel color. If a sample ray hits a light directly, the full intensity of the light goes into the calculation. If it doesn't hit the light directly, after every bounce the light intensity is decreased by a factor of the diffused component of the object it is bouncing from. In global illumination the sample ray takes the place of the light vector, L, in the pixel color calculations seen in Figures 3 and 4. The pixel color formula is now changed to the following:

Pixel_total = (1/N) * Σ_{i=1..N} (Diffused_i + Specular_i) * gl(sample_i)

N begins equal to the number of samples, but every time gl() returns 0 it is decreased by 1. The gl() function returns 0 if the sample ray never hits a light; this throws out all noncontributing sample rays. Also, gl() is a recursive function. It will recurse until the maximum number of bounces has been reached or it has hit a light. Once gl() has hit a light, it returns the light's intensity. That light intensity decreases with every bounce taken to get to the light. So, the more bounces the ray takes to get to the light, the less the light's intensity will factor into the final calculation of the pixel color. The pseudocode below shows this being done.

Pseudocode: Global Illumination

color_pixel(Vector ray_from_eye)
    intersection = find_intersection(ray_from_eye)
    N = #samples
    for(i = 0; i < #samples; ++i)
        sample_ray = generate_random_ray(intersection)
        current_color = diffuse(intersection) + specular(intersection)
        factor = gl(sample_ray)
        if(factor == 0)
            N = N - 1              // noncontributing sample; drop it from the average
            continue               // go to the next sample
        end_if
        sum_of_colors += current_color * factor
    end_for
    pixel_color = sum_of_colors / N
    return pixel_color
end_color_pixel

gl(Vector sample_ray)              // recursive function
    if(max bounces reached)
        return 0
    end_if
    intersection = find_intersection(sample_ray)
    if(intersection is a light)
        return light's intensity
    end_if
    sample_ray = generate_random_ray(intersection)
    factor = gl(sample_ray)
    return diffuse(intersection) * factor
end_gl

Generating Random Sample Rays

Generating sample rays is an important part of the global illumination algorithm. Sample rays will produce the light, so bad sample rays will render bad lighting. If a sample ray never hits a light it is thrown out of the calculation. We want all rays to have a "chance" of hitting the light. Sample rays need to point away from the surface they are coming from. A sample ray needs to have an angle with the normal between 0 and 90 degrees, and should be able to reach 360 degrees around the normal. This depicts a hemisphere, with the surface's normal going straight through the top "North Pole" of it, as in Figure 12 [1].

Figure 12. Hemisphere for sample rays.

Figure 13. P is a point on the hemisphere. r is the radius of the hemisphere. θ has a range of [0, 90] degrees. φ has a range of [0, 360] degrees.

The spherical coordinate system can be used to make random sample rays [1], shown in Figure 13. Point P on the hemisphere needs to be randomized. This can be accomplished by randomizing the angles θ and φ. The radius r can be kept constant at 1. P needs to be converted to its x, y, and z components with the following formulas:

x = r sin θ cos φ
y = r sin θ sin φ
z = r cos θ

P is oriented about the origin, so P needs to be transformed so it is oriented with the surface and its normal. To do this, an orthonormal basis is made with the surface's normal. An orthonormal basis (ONB) is a set of vectors that are mutually perpendicular and of unit length [1]. The most famous ONB is the xyz coordinate system, the natural basis. ONBs make converting to and from different coordinate systems uncomplicated. Once an orthonormal basis is made with the surface's normal, P can be transformed into P'. P' is now oriented with the new basis. To finish this process off, a vector is made from the intersection point on the surface to P', and the new sample ray is formed. The new sample ray will be used in place of the light ray in the illumination formula.
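A compact Python sketch of this sampling procedure, assuming the surface normal is given as a 3-component vector; the helper vector used to start the orthonormal basis is an implementation choice, not something specified in the paper.

import math, random

def dot(a, b):   return sum(x * y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def random_sample_ray(normal):
    # Random unit direction in the hemisphere around 'normal' (Figures 12 and 13).
    theta = math.radians(random.uniform(0.0, 90.0))    # angle from the normal, [0, 90]
    phi   = math.radians(random.uniform(0.0, 360.0))   # angle around the normal, [0, 360]
    # P on the unit hemisphere (r = 1), expressed about the origin with z as "up".
    P = (math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta))
    # Orthonormal basis (ONB) with w along the surface normal.
    w = normalize(normal)
    helper = (1.0, 0.0, 0.0) if abs(w[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = normalize(cross(helper, w))
    v = cross(w, u)
    # Transform P into the basis: P' = P.x*u + P.y*v + P.z*w, the new sample direction.
    # Note: randomizing theta uniformly matches the description above, though the
    # resulting directions are not area-uniform over the hemisphere.
    return tuple(P[0]*ui + P[1]*vi + P[2]*wi for ui, vi, wi in zip(u, v, w))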

Conclusion

The study of ray tracing can lead to interesting things in the field of computer graphics. Ray tracing is a viable technique for producing two-dimensional images of a three-dimensional world. It can be a tool that becomes more and more valued as our culture heads deeper into computer-generated worlds via games, movies, training simulators, or even architectural modeling. Ray tracing can produce images with varying degrees of realism. With its strong mathematical and physical foundations, ray tracing is and will remain a major concept of computer graphics.

Gallery

To see these images in their full size please visit: http://www.unf.edu/~rupj0001/ray/

A.) One of the first images produced. The light is coming from the top right, and shadows were turned off. Image took 13 min. to render and its size was 1024x768.
B.) Almost the same scene as image A, but with a new cylinder and two light sources with shadows turned on. This image also has antialiasing at 10 rays/pixel. Image took 10 hours to render and its size was 1024x768.
C.) Two light sources, one inside the cylinder and one coming from the top left. Shadows turned on. Image took 4 min. to render and its size was 1024x768.
D.) Two light sources coming from the bottom left and the top right. Four reflective spheres all reflecting each other's colors. Image took 1 min. to render and its size was 1024x768 with 5 rays/pixel.
E.) Same four spheres as image D, but now encased in a reflective box. One gold-colored light source in the top right front of the box. Image took 1 min. to render and its size was 400x300.
F.) The Parthenon inside a blue reflective box. Three light sources: one in the back left bottom corner, one in the front left bottom corner, and one inside the Parthenon. Image took 3 hours to render and its size was 800x600 with 10 rays/pixel.
G.) A glass sphere with a blue diffused sphere behind it. Image took 3 min. to render and its size was 1024x768 with 10 rays/pixel.
H.) 100,000 random spheres to test the 3-D grid acceleration technique. The image took about 18 min. at 512x512. Without a 3-D grid it would still be rendering today!
I.) A Sphereflake; this image took about 5 min. to render at 1024x768.
J.) The Rhinoceros logo, 86,000 triangles. Image took 30 min. on 7 processors at 1024x768.
K.) This image was the goal of the research: a globally illuminated scene at 800x600 with 3000 samples/intersection. Two area light sources at the ceiling of the room. A reflective sphere floats at the left, with two soft shadows under it. The soft shadows are one of the products of the global illumination technique. Image took 7 hours on 8 processors.
L.) A comparison of local illumination vs. global illumination.

[Gallery images A through L appear here in the original. Image L: Local vs. Global.]

References

[1] Shirley, Peter. 2000. Realistic Ray Tracing. A K Peters, Natick, Massachusetts.

[2] Baker, M. Pauline; Hearn, Donald. 1986. Computer Graphics, C Version. Prentice Hall, Upper Saddle River, New Jersey.

[3] Bourke, Paul. Personal Pages. http://astronomy.swin.edu.au/~pbourke/

[4] Larson, Roland; Hostetler, Robert; Edwards, Bruce. 1998. Calculus, Sixth Edition. Houghton Mifflin Company, Boston, New York.

[5] Ward, Greg. Materials and Geometry Format. http://radsite.lbl.gov/mgf/