University of Pennsylvania ScholarlyCommons

IRCS Technical Reports Series, Institute for Research in Cognitive Science

January 1996 Pipeline Rendering: Interaction and Realism Through Hardware-Based Multi-Pass Rendering Paul Joseph Diefenbach University of Pennsylvania


Diefenbach, Paul Joseph, "Pipeline Rendering: Interaction and Realism Through Hardware-Based Multi-Pass Rendering" (1996). IRCS Technical Reports Series. 103. http://repository.upenn.edu/ircs_reports/103

University of Pennsylvania Institute for Research in Cognitive Science Technical Report No. IRCS-96-25.

This paper is posted at ScholarlyCommons. http://repository.upenn.edu/ircs_reports/103 For more information, please contact [email protected].

Pipeline Rendering: Interaction and Realism Through Hardware-Based Multi-Pass Rendering

Abstract

While large investments are made in sophisticated graphics hardware, most realistic rendering is still performed off-line using ray trace or radiosity systems. A coordinated use of hardware-provided bitplanes and rendering pipelines can, however, approximate ray trace quality illumination effects in a user-interactive environment, as well as provide the tools necessary for a user to declutter such a complex scene. A variety of common ray trace and radiosity illumination effects are presented using multi-pass rendering in a pipeline architecture. We provide recursive reflections through the use of secondary viewpoints, and present a method for using a homogeneous 2-D projective image mapping to extend this method for refractive transparent surfaces. This paper then introduces the Dual Z-buffer, or DZ-buffer, an evolutionary hardware extension which, along with current frame-buffer functions such as stencil planes and accumulation buffers, provides the hardware platform to render non-refractive transparent surfaces in a back-to-front or front-to-back order. We extend the traditional use of shadow volumes to provide reflected and refracted shadows as well as specular light reclassification. The shadow and lighting effects are then incorporated into our recursive viewpoint paradigm. Global direct illumination is provided through a shadow blending technique. Hardware surface illumination is fit to a physically-based BRDF to provide a better local direct model, and the framework permits incorporation of a radiosity solution for indirect illumination as well. Additionally, we incorporate material properties including translucency, light scattering, and non-uniform transmittance to provide a general framework for creating realistic renderings. The DZ-buffer also provides decluttering facilities such as transparency and clipping. This permits selective scene viewing through arbitrary view-dependent and non-planar clipping and transparency surfaces in real-time.
The combination of these techniques provides for understandable, realistic scene rendering at typical rates 5-50 times that of comparable ray-traced images. In addition, the pixel-parallel nature of these methods leads to exploration of further hardware rendering engine extensions which can exploit this coherence.


This thesis or dissertation is available at ScholarlyCommons: http://repository.upenn.edu/ircs_reports/103

PIPELINE RENDERING: INTERACTION AND REALISM

THROUGH HARDWARE-BASED MULTI-PASS

RENDERING

PAUL JOSEPH DIEFENBACH

A DISSERTATION

in

COMPUTER AND INFORMATION SCIENCE

Presented to the Faculties of the University of Pennsylvania in Partial Fulfillment

of the Requirements for the Degree of Doctor of Philosophy

Norman I. Badler

Supervisor of Dissertation

Peter Buneman, Graduate Group Chairperson

Copyright 1996

by

Paul Joseph Diefenbach

Acknowledgments

Lazlo's Chinese Relativity Axiom:

No matter how great your triumphs or how tragic your defeats,

approximately one billion Chinese couldn't care less.

There are many people who contributed to this thesis, both in direct support as well as through personal support, and I am greatly indebted to them all. Foremost, I would like to thank my advisor, Dr. Norman Badler, whose confidence started me along the doctoral path, whose support enabled my research, and whose expectations helped push the final result. I am sincerely thankful to my other committee members, namely Dimitri Metaxas, Insup Lee, Julie Dorsey, and Greg Ward, who provided valuable direction in turning a disjoint proposal into a cohesive, directed thesis. I am particularly indebted to Greg Ward, whose expertise and guidance helped strengthen the rendering aspects of this work, and whose bathroom model and images provided an invaluable testbed.

I am also immensely thankful for the support, friendship, and help of the students and staff of the Center for Human Modeling and Simulation. They have always been willing to readily share their expertise, as well as their machines when needed. In particular, I am grateful to Jeff Nimeroff for his help with rendering jargon and his bibtex-like knowledge, to Doug DeCarlo for his help with SGI architectural quirks and for our discussions on shadow volumes, and to John Granieri for providing the impetus for the specular image work. I would also like to thank Min-Zhi Shao for his help with his radiosity program to garner the indirect solution.

Additionally, I am appreciative to those people outside the university who provided assistance with my research and implementation. This includes Jim Arvo, who was most generous in providing his images and the intricate butterfly environments, and Paul Heckbert, who provided additional shadowing references.

On a personal note, I am extremely lucky to have many friends and family who provided innumerable support through these many years and managed to keep me mostly sane throughout. To my best friend, Dr. Jana Kosecka, who began and ended (days early) this endeavor with me: you have been my dub strom as well, and I will miss you more than you know. Without you, I would never have finished.

I am also fortunate to have had a couple of other friends at Penn who have also gone beyond what anyone could hope for in always looking out for me, and who have meant so much to me. This includes former lab member Leanne Hwang, one of the sweetest people I know, who, with Min, kept me fed despite giving me grey hairs; and Diane Chi, who literally went out of her way to be a tremendous friend, and who, with Rox, proved to be always full of surprises. I am also grateful for my continuing friendship with Kasturi Bagchi, Esq., who suffered through my paper deadlines and was my personal advocate, inspiration, and escape throughout this work. To my longtime friends, including Robert, Andy, Gary, Glenn and Auts, Sarah, Jui and Jim, May, Susanne and Steve: I am very thankful for the all-too-infrequent times we were able to get together during this work. There are many other friends not mentioned, but who are never far from my thoughts.

Lastly, but most importantly, I am fortunate to have been raised in a loving and supportive family which provided the foundation for everything I have achieved. I have no means to truly thank my mom, Natalie, who is the mother every child would wish for, and whose unconditional love and support has always been the one constant throughout my life; I dedicate this thesis to you. I thank my dad, Joseph, for instilling in me a sense of independence and hard work, and for pushing me to do my best. I also am grateful to have my Grandma Marie, my sister Terri, brother-in-law Rob, and my amazing nephew Evan all in my life. They, along with my other relatives, always make coming home truly home.

Abstract

While large investments are made in sophisticated graphics hardware, most realistic rendering is still performed off-line using ray trace or radiosity systems. A coordinated use of hardware-provided bitplanes and rendering pipelines can, however, approximate ray trace quality illumination effects in a user-interactive environment, as well as provide the tools necessary for a user to declutter such a complex scene. A variety of common ray trace and radiosity illumination effects are presented using multi-pass rendering in a pipeline architecture. We provide recursive reflections through the use of secondary viewpoints, and present a method for using a homogeneous 2-D projective image mapping to extend this method for refractive transparent surfaces. This paper then introduces the Dual Z-buffer, or DZ-buffer, an evolutionary hardware extension which, along with current frame-buffer functions such as stencil planes and accumulation buffers, provides the hardware platform to render non-refractive transparent surfaces in a back-to-front or front-to-back order. We extend the traditional use of shadow volumes to provide reflected and refracted shadows as well as specular light reclassification. The shadow and lighting effects are then incorporated into our recursive viewpoint paradigm. Global direct illumination is provided through a shadow blending technique. Hardware surface illumination is fit to a physically-based BRDF to provide a better local direct model, and the framework permits incorporation of a radiosity solution for indirect illumination as well. Additionally, we incorporate material properties including translucency, light scattering, and non-uniform transmittance to provide a general framework for creating realistic renderings. The DZ-buffer also provides decluttering facilities such as transparency and clipping. This permits selective scene viewing through arbitrary view-dependent and non-planar clipping and transparency surfaces in real-time. The combination of these techniques provides for understandable, realistic scene rendering at typical rates 5-50 times that of comparable ray-traced images. In addition, the pixel-parallel nature of these methods leads to exploration of further hardware rendering engine extensions which can exploit this coherence.

Contents

Acknowledgments
Abstract

Introduction and Motivation
Problems with interactive rendering
Our Approach
Overview

Background
Rendering Methods
Basic Recursive Ray Tracing
Backward Ray Tracing
Radiosity
Two-Pass Methods
Beam Tracing
Hardware-based Rendering
Definitions
Stencil Planes
Accumulation Buffer
Alpha Blending
Shadow Volumes
Light Volumes
In-Out Refractions

Specular Surface Rendering
Reflections
Refractive Transparency
Non-Refractive Transparency
Definitions
Back-to-Front Transparency
Front-to-Back Transparency
Translucency

Shadow and Light Volumes
Shadow Volumes
Silhouette Volume
View in Volume
Specular Shadows
Intersection Recursion
Virtual Position Recursion
Light Volumes
Single-Bounce Light Volumes
Multiple-Bounce Light Volumes
Light Accumulation
Attenuation
Filtering

Multi-pass Process
Recursion
Stenciling

Illumination Model
Global Direct Illumination
Light Accumulation
Soft Shadows
Local Direct Illumination
Indirect Illumination
Total Scene Illumination

Performance
Diffuse Transport
Shadows
Specular Images
Scene Dynamics

Scene Uncluttering
Background
Clipping Planes
CSG
Clipping Surface
Transparency Surface
Software Emulation
Extensions
Limitations

Hardware Extensions
Hardware Platforms
Object-Order Rasterization Systems
Image-Order Rasterization Systems
Hybrid Systems

Conclusion
Contributions
Future Work
Conclusion

A Projective Maps
B Refraction Approximation

List of Tables

Frame rendering for bathroom environment
Single image/single light rendering statistics for bathroom environment
Varying Shadow Settings
Varying Specular Settings

List of Figures

Multi-pass Pipeline Rendering Process
Multi-pass Pipeline Rendering Image
Ray-Trace Rays
Caustic Polygons from Light Beam Tracing
Virtual viewpoint for reflected beam
Z-buffer based rendering pipeline
Shadow volumes
Recursive Reflection
Refracted Image vs. Camera Image
Undistorted and Distorted Refraction
Tangent Law for paraxial rays
Uncorrected Refraction
Corrected Refraction
Back-to-front rendered area
Back-to-front iterations
Front-to-back iterations
Partial Specular Image
Frosted Glass
Glossy Table
Shadow volume clipped by front clipping plane
Light interaction with refractive surface
Refracted Caustics and Shadows
Reflected light and shadows
Shadow volumes generated involving R
Light volume clipped by both refracting planes
Light volume clipped by both reflecting planes
VPR Light Volumes
VPR Lighting and Shadows
Light and view coordinate systems
Projected Textures
Projected Texture Light Pattern
Recursive Image Rendering
Recursive Stenciling
Jittered Light Shadow Volumes
Jitter Approximation Shadow Volumes
BRDF Scattering Angles
Phong BRDF Notation
Specular Scattering Components
Gaussian vs. Phong Specular Term
Simplified Gaussian vs. Fit Phong
Full Gaussian vs. Fit Phong
Partial Gaussian vs. Fit Phong
Full Gaussian vs. Phong vs. Fit Phong view
Full Gaussian vs. Phong vs. Fit Phong view
Gaussian vs. Phong Specular Highlights
Calculating Indirect Illumination Image
Gaussian vs. Phong Scene Illumination
Rendering Relationship
Pipeline Rendering Relationship
Varying Shadow Settings
Varying Specular Settings
Varying Scene Geometry and Lighting
Clipping/Transparency Surface

Chapter 1

Introduction and Motivation

"Ok, wait a moment here. Hold on. There! Now you see, the change was made in real time."

Frame show person at Seybold conference

As the power of today's graphics workstations has increased, so too have the demands of the user. Whereas realism and interaction were previously mutually exclusive, today's graphic workstations are providing the platform to develop applications with photorealistic, interactive, dynamic, comprehensible environments. Unfortunately, today's applications generally do not permit or take advantage of all of these features.

Traditional interactive graphics developed from early military applications such as project SAGE in the late 1950s. SAGE (Semi-Automatic Ground Environment) provided a CRT radar-like display of potential Soviet bomber targets and interceptors, with a light-pen interface for assigning interceptors to targets. Symbols and identifiers replaced radar blips as the graphical representation. CAD applications also began to develop around this time, with systems such as APT (Automatically Programmed Tooling) allowing specification of part geometry and milling paths. Specification of this, however, was performed entirely off-line.

In 1963, one of the first presentations of interactive graphics on a CRT system was made by Sutherland at the Joint Computer Conference [Sut]. His Sketchpad system enabled interactive creation and manipulation of parts using display primitives such as lines, arcs, etc. This system introduced the notion of hierarchies based on a set of graphic primitives. It also introduced many interaction techniques using the keyboard and light pen. This system provided the foundation of modern graphics packages and libraries such as PHIGS [Hew] and SGI's GL [SGI].

The need for computer-generated imagery for flight simulators led to the development of raster graphics systems. Raster graphics were introduced in the early 1970s with systems such as the Alto from Xerox PARC. This permitted filled, colored, solid surfaces, which had been essentially unachievable on vector displays. This system provided the foundation for the modern realistic images which are common in today's interactive applications.

Hidden surface removal in these raster systems led to a variety of sorting algorithms [NNS, War, Schb], and eventually led to the development of the Z-buffer [Cat], which resolved visibility conflicts at the pixel level. Some of these early hidden-surface removal methods [App, GN] also introduced the notion of ray casting [Rot], which later formed the basis of modern photorealistic rendering techniques.
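The Z-buffer's per-pixel resolution of visibility conflicts can be sketched in a few lines. The one-dimensional "scanline" scene below is purely illustrative, not from the thesis:

```python
# Minimal sketch of Z-buffer visibility: for each pixel, the fragment with the
# smallest depth wins, regardless of the order the surfaces are drawn in.
def zbuffer_composite(width, fragments):
    """fragments: (x, depth, color) triples in arbitrary draw order."""
    depth = [float("inf")] * width
    color = [None] * width
    for x, z, c in fragments:
        if z < depth[x]:            # the per-pixel depth test
            depth[x], color[x] = z, c
    return color
```

Because only the nearest fragment survives at each pixel, opaque surfaces may be drawn in any order; this order-independence is precisely what breaks down for transparent surfaces, as discussed later in this chapter.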

Whitted [Whi] introduced ray tracing as a means to integrate reflection, refraction, hidden surface removal, and shadows into a single model. This paradigm provided the means to build photorealistic images one pixel at a time, by casting rays from an eyepoint through the pixel into the environment and tracing their paths. As this method operates on a pixel-by-pixel basis, it is inherently non-interactive. Ray tracing techniques continued to expand in the 1980s and 1990s to include variations such as backward ray tracing and distribution ray tracing, as increasingly accurate physical properties such as caustics and shadow refraction were desired.

Other methods were developed to produce photorealistic images, the most notable being radiosity, introduced to computer graphics by Goral et al. [GTGB]. This method relies on energy exchange between surfaces in the environment, a calculation which also makes this method non-interactive. This calculation is performed as a precomputation on a static environment. Interactive view changes are possible in this environment; however, the necessity of a static environment precludes any real user manipulation of the environment.
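The energy-exchange calculation at the heart of radiosity balances each patch's radiosity B_i against its emission E_i, reflectivity rho_i, and form factors F_ij to every other patch: B_i = E_i + rho_i * sum_j F_ij * B_j. A minimal iterative solve, with a toy two-patch scene whose numbers are illustrative rather than from the thesis:

```python
# Sketch: iterate the radiosity balance B_i = E_i + rho_i * sum_j F_ij * B_j
# until the patch radiosities B settle (a Jacobi-style gathering solve).
def solve_radiosity(E, rho, F, iters=60):
    B = list(E)
    for _ in range(iters):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(len(B)))
             for i in range(len(B))]
    return B

# Two facing patches: patch 0 emits, patch 1 only reflects (F is symmetric here).
B = solve_radiosity(E=[1.0, 0.0], rho=[0.5, 0.5], F=[[0.0, 1.0], [1.0, 0.0]])
```

Every form factor F_ij depends on the scene geometry, so any geometric change invalidates F and forces a recomputation of the whole solve, which is why the precomputation demands a static environment.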

Hardware-based graphics continued to evolve, permitting sophisticated real-time features such as texture mapping, reflection mapping, transparency, and shadows. While many of these features are implemented as visual approximations instead of the physics-based ray tracing and radiosity solutions, they do provide realistic-looking images at interactive rates.

The current state of computer graphics has, in essence, diverged into two areas: one based on off-line calculations to produce non-interactive, physically-based, photorealistic images; the other based on hardware-implemented calculations to produce real-time, physically-approximated, pseudo-realistic images. Each of these approaches has its advantages and shortcomings.

Problems with interactive rendering

Much work has been devoted to photorealistic rendering techniques, as ray tracing and radiosity packages have become increasingly sophisticated. These methods provide a basic foundation of visual cues and effects to produce extremely high quality and highly accurate images, at a considerable cost: namely, computation time. Neither of these techniques has any widespread application in true interactive and dynamic environments such as animation creation and virtual worlds.

Hardware-based 3D graphics systems provide pseudo-realistic images at interactive rates through use of minute geometric detailing and other visual cues. Sophisticated graphics pipelines permit real-time texturing and shadows, in addition to a variety of basic lighting effects. These systems do not provide the sophisticated lighting effects and material properties that the photorealistic systems do.

In order to clarify the limitations of the above-described rendering approaches, each is individually addressed.

Ray Tracing

Systems based on forward ray tracing [Gla] are non-interactive and suffer from problems inherent in the technique [WW], such as costly intersection testing and incorrect specular illumination. In addition, only a few attempt to accurately handle indirect illumination [Kaj]. Backward ray tracing systems [Arv, HH, CF] more accurately handle caustics, but again these methods are very time-intensive and not remotely interactive. Even the fastest ray tracing systems require static geometry to achieve their results [SSb].

Radiosity

Many so-called interactive environments, such as Virtual Building systems [ARB, TS], rely on precomputation of static environments to form progressive radiosity solutions. Other systems dealing with lighting effects [Dor] rely on a series of images from a single viewpoint. All of these systems suffer from large computational overhead and unchangeable geometry. Even in incremental radiosity solutions [Che], geometry changes require significant recomputation time. In addition, radiosity-based solutions inhibit the use of reflective and refractive surfaces. Ray-trace/radiosity multi-pass or combined systems [WCG, PSV] enable this specularity, but only image-based systems [CW, NDR] permit any level of dynamic interaction, although they sacrifice image resolution.

Geometric detail

In systems which provide realism through minute geometric detailing, this vast amount of data itself presents several problems. Whereas previous applications used graphics to simulate individual parts of a complex environment, current applications tie the entire system together at maximal resolution. Where focus on a small CAD part or a single room used to be the level of detail supported, current architectures and techniques now permit visualization of an entire airplane or building at the same resolution of detail. Interactive rates have been maintained in systems such as walkthrough packages by using complex view dependencies to limit the amount of data needed at any particular view (e.g. [TS]). Cohesive CAD analysis and visualization tools used in Simulation-Based Design systems also require this data, in addition to sophisticated means of presentation and user selection of desired features. While these systems may be able to manage and present these vast amounts of visual data, visual overload will result if the user is not able to disregard unwanted sections.

Many techniques currently exist to minimize or declutter unwanted visual information. Two of the most frequently used are clipping surfaces and transparency. Unfortunately, today's hardware-based graphic systems do not handle either of these in a wholly satisfactory manner. While the use of arbitrarily oriented clipping planes is common in many graphics systems, their use is limited to planar visual cuts. This presents a broad elimination of visual data, whereas a fine, directed, or sculptured cut is often desired, such as in medical image reconstruction and visualization. Transparency is also available through the use of alpha blending; however, the actual rendering is often incorrect: correct transparency requires depth-ordered drawing of the surfaces, which does not comply with the Z-buffer based sorting procedure used in almost all graphics systems. In addition, with the increased reliance on hardware texture mapping to add visual complexity, semi-transparent textures will further stress the need for a correct hardware-based rendering of non-opaque surfaces.
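The order dependence of alpha blending is easy to see with the standard "over" operator; the scalar intensities and opacities below are illustrative:

```python
# Sketch: "over" blending of a semi-transparent surface onto the frame buffer.
# src_alpha is the incoming surface's opacity; dst is the color already there.
def over(src, src_alpha, dst):
    return src_alpha * src + (1.0 - src_alpha) * dst

background = 0.0
# A white surface (1.0) in front of a grey one (0.5), both 50% opaque.
correct = over(1.0, 0.5, over(0.5, 0.5, background))  # far drawn first: 0.625
wrong = over(0.5, 0.5, over(1.0, 0.5, background))    # near drawn first: 0.5
```

The two draw orders give different results, so correct blending needs the surfaces sorted back-to-front (or front-to-back with the complementary operator), which an unordered Z-buffered draw cannot guarantee.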

Graphics pipeline rendering

Advanced hardware architectures such as the SGI Reality Engine™ have brought an added level of realism and interaction to dynamic environments through the use of sophisticated graphics pipelines and added levels of screen buffer information. These features have enabled software developers to bring previously unavailable details such as shadows and mirrors to many interactive applications, as well as allow the user to selectively see desired details by clipping away unwanted portions. These and other hardware provisions have yet to be fully exploited, though clever programming techniques by several implementors have produced real-time shadows and mirrors [Hei, KSC].

Our Approach

Our research expands hardware-based pipeline rendering techniques to present a platform which does provide realism and user interaction, as well as additional means in which to manipulate and comprehend these complex scenes. It proposes evolutionary, not revolutionary, modifications of the graphics pipeline where necessary, and the techniques to use these features for the aforementioned purposes. Although slight hardware modifications may be introduced, many such modifications are based on similar features found in other architectures. This thesis is an introduction of new techniques using these features; it is not an introduction of a new rendering architecture itself. With the high investment in pipeline rendering architectures, better rendering and interactive techniques using these architectures become a necessity as the demands of the applications grow.

Our approach to rendering is to take full advantage of the provided graphics rendering pipeline to provide realistic rendering in a dynamic environment. Much of this work is focused on using multi-pass rendering techniques based on existing pipeline architectures. Through these multi-pass methods we provide a means to include not only reflection, but a technique for approximating refractive planar surfaces as well. The model presented extends the current reflection techniques to provide an arbitrary level of refraction and reflection, based on bitplane depth, for use in hall-of-mirrors type environments, and to provide a close approximation for refractive objects. An image transform is presented to correct for perspective distortions during the image mapping of the secondary refracted image. For non-refractive transparent surfaces, a display architecture modification is proposed to provide the facilities for correct sorted surface blending. This extension, the Dual Z-buffer or DZ-buffer, along with current frame-buffer functions such as stencil planes and accumulation buffers, provides the hardware platform to render correct transparent surfaces through multiple rendering passes.

Multiple rendering passes also provide the basis for shadow volume support with specular surfaces. We provide a practical shadow volume method which is extended for interaction with specular light reclassification. This multi-pass method is combined with the similar specular surface stenciling methods to provide a recursive methodology which not only preserves shadows in all reflected and refracted images, but which also accounts for refraction and reflection of the light and shadows in the primary and secondary images as well.
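For a planar mirror, the secondary viewpoint used in this kind of multi-pass reflection is simply the eye point mirrored through the reflector's plane. A minimal sketch, assuming the plane is given by a point q and a unit normal n (the helper name is illustrative, not from the thesis):

```python
# Sketch: mirror a point p through the plane containing q with unit normal n.
# Rendering the scene from this reflected eye point yields the mirror image.
def reflect_point(p, q, n):
    # Signed distance from p to the plane; step back twice that far along n.
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, n))

# A mirror in the z = 0 plane maps an eye at z = 3 to z = -3.
mirrored_eye = reflect_point((1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

Reflecting the mirrored point again returns the original eye, which is what makes the operation suitable for the recursive, hall-of-mirrors style viewpoint nesting described above.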

Our pipeline rendering platform also includes utilizing hardware-provided features such as fog and texture mapping to provide simulation of varying material properties such as translucency and filtering. Fitting of the hardware lighting model and surface attributes to a more physics-based and empirically-derived model further provides more realistic rendering. Combined with the multi-pass features, these techniques provide an alternative to ray tracing for creating fast, approximate specular lighting effects at rates on the order of 5-50 times faster, as documented in the examples. We additionally support incorporation of diffuse illumination effects, presenting full scene illumination for dynamic environments. The coordination of these processes is seen in Figure 1.1, with effects demonstrated in Figure 1.2.

Finally, we introduce scene decluttering facilities to promote user comprehension of these interactive environments. This includes selective visualization of the environment by supporting arbitrary clipping surfaces in real-time. By combining this with our sorted transparency procedure, the arbitrary clipping surface can be used as an arbitrary transparency surface, making all enclosed areas transparent.

[Flowchart: the multi-pass pipeline rendering process, coordinating specular image drawing and reclassification (Ch. 3.1, 3.2), transparent objects (Ch. 3.3), shadow volume creation and light reclassification (Ch. 4.1-4.3), and ambient, direct, and indirect illumination (Ch. 6.1-6.3).]

Figure 1.1: Multi-pass Pipeline Rendering Process

Figure 1.2: Multi-pass Pipeline Rendering Image

Overview

Chapter 2 first introduces the sophisticated illumination effects which traditionally appear only in photorealistic rendering methods. This includes discussion of the benefits and limitations of each method. Current hardware-based methods used to achieve these effects are then presented.

The next three chapters discuss the primary multi-pass contributions for pipeline rendering of specular environments. Chapter 3 discusses specular surface rendering. It first introduces the standard method of implementing reflection through secondary viewpoints. The case of refractive transparency is then investigated, using an extension of the reflection method for refractions. Non-refractive transparency is additionally supported through techniques which provide correct transparency blending. This includes introducing some required pipeline extensions and support functions, including a Dual Z-buffer. Both back-to-front and front-to-back traversal methods are included. Simulation of scattering material properties such as translucency is also included.

Chapter 4 describes our implementation of shadows, which work in conjunction with the reflections and refractions. This includes discussion of a practical implementation of shadow volumes, as well as the various methods of modeling their interaction with specular surfaces. The current implementation's use of virtual light sources, or light source reclassification, is discussed in a recursive framework, with this method eventually extended to creation of specular light volumes. Material transmission and reflection properties for these light volumes are then described, including simulation of non-uniform surfaces.

Chapter 5 brings the two previous chapters together as a composite recursive procedure. Coordinated recursion of the two multi-pass methods is first detailed, followed by allocation specifics of the primary shared resource, the stencil buffer.

As the previous chapters introduce the primary shadowing techniques, Chapter 6 discusses use of these techniques in producing physically-based global and local illumination effects. This includes global direct effects such as light accumulation and area light sources, local direct effects through fitting of the hardware lighting model, and indirect effects through incorporating a radiosity-generated diffuse-diffuse transfer solution. Both direct and indirect effects are then evaluated in toto.

The performance of all of the previously introduced features is examined in Chapter 7. This focuses on the use of pipeline rendering to bridge the quality/performance gap of traditional rendering. Quality versus timing tradeoffs are discussed in the context of both user-selected criteria and automatic selection in progressive refinement applications.

Whereas the previous chapters focus on the realism of dynamic scenes, Chapter 8 focuses on user comprehension of and interaction with these environments. The use of clipping and transparency surfaces is discussed for selective scene rendering. The DZ-buffer is used to provide arbitrary clipping surfaces; this is combined with the non-refractive transparency method for providing arbitrary transparency surfaces which, analogous to clipping surfaces, render all enclosed surfaces and volumes transparent.

Chapter 9 addresses the limitations of the system, particularly in regards to the current hardware platform, as well as possible extensions to the platform. Other graphics architectures are then examined for relevance, feasibility, and possible extensions to the multi-pass rendering process.

Finally, Chapter 10 summarizes our contributions in presenting the pipeline rendering methodology and discusses possible future work in this area.

Chapter 2

Background

Rendering Methods

Traditionally, sophisticated illumination and rendering effects have appeared only in ray-tracing and radiosity systems. This includes reflective specular surfaces, refractive transparent surfaces, shadows and caustics, and translucency. To understand the complexities of these effects, their implementation and limitations in these non-interactive systems will be examined. In addition, existing empirical methods for achieving some effects will also be examined.

Basic Recursive Ray Tracing

Although ray casting was first developed by Appel [App] and by Goldstein and Nagel [GN], its use was primarily for hidden surface removal. Appel's method did determine whether a point was in shadow, but it remained for Whitted [Whi] to extend ray casting to ray tracing to handle reflections and refractions.

In simplest terms, ray tracing determines the visibility and shading of objects in an environment by tracing imaginary rays of light from the viewer's eye to the objects. This method casts an eye ray from the center of projection (the viewer's eye) through a pixel's center in a window on an arbitrary view plane, and into the environment. The pixel through which the ray passes has its color set to that of the first intersected object, as determined by the current lighting model. In Appel's system, an object's pixel is in shadow if a surface is between the ray-object point of intersection and the light source. This is determined by firing an additional ray from this point of intersection to the light source and checking for intersections.

Whitted's extension to Appel's method fires reflection rays and refraction rays in addition to Appel's shadow rays. Reflection rays trace from the point of intersection in the direction of the incident ray reflected about the surface normal. Refraction rays trace into the object in a direction determined by the incident ray and Snell's law. Each reflection and refraction ray may recursively spawn more reflection, refraction, and shadow rays. This process is seen in the figure below.

Figure: Ray-Trace Rays (an eye-ray from the viewpoint strikes a specular object; L1, L2 are shadow rays toward the light source, N1, N2 are surface normals, R1, R2 are reflected rays, and T1, T2 are transmitted, i.e. refracted, rays)

As can be seen with this approach, intersection testing is very important. Much attention has been paid to reducing the time spent performing intersection checks; this will be addressed later in the context of our system. As the general nature of ray tracing is as a non-interactive image generator, we will only focus on the illumination aspects of ray tracing, not the computation costs involved.

There are many variations of this basic approach which attempt to account for physical properties of materials and illumination. Many reflection models have been developed for computer graphics; some are empirical models and some are based on classic wave theory. The Phong model [Pho] is the most commonly used reflection model in computer graphics and bases the bidirectional spectral reflectivity on diffuse and specular coefficients and the viewing angle to the surface. Other models [Bli, CT] generate direct illumination effects using a statistical microfacet approximation for specular effects and a classical Lambertian model for diffuse effects. Many more complex methods exist based on light dispersal from and through an object. These models are too expensive to be investigated in the context of any hardware-based solution.

Ray tracing from the eye, or forward ray tracing as it is known, has many shortcomings, especially in its model of shadows and refraction. As can be seen in the figure above, the shadow ray L2 is not refracted on its path to the light, because such refraction would cause it to miss the light source. Because of this deficiency, only images behind a refractive surface are refracted; light, and any shadow resulting from that light passing through the surface, is not refracted.

Caustics, the bright overlap of reflected, refracted, and primary light rays, are likewise impossible in standard forward ray tracing without costly random ray spawning. This is again due to the inability to trace a reflected or refracted ray to a light source.

Backward Ray Tracing

As mentioned above, standard forward ray tracing omits all indirect illumination except pure specular components resulting from refraction or reflection to the light source; reflection and refraction rays typically miss light sources. This diffuse interaction is instead approximated by a local Phong reflection and transmission term. To achieve this and other effects found in radiosity systems, backward ray tracing was developed. Arvo [Arv] first suggested this method of casting rays from the light source. It has typically been implemented as a two-pass ray-casting technique in several systems [CF, ZPL].

The necessity of this two-pass approach is seen in the complexity of a solution based on forward ray tracing. To detect these indirect illumination results, enough feeler rays would have to be spawned at each point of intersection to have a high probability of detecting illumination from indirect sources. This exponential growth of rays proves extremely prohibitive, and only a few systems have attempted to handle this [Kaj, War].

The two-pass method obviates these spawned feeler rays by first determining indirect illumination effects by casting rays from the light source. These rays reflect, refract, and intersect with surfaces, producing by spatial density the diffuse illumination of the surfaces. In addition to providing the diffuse illumination of the scene, this process also enables caustics to form where a specular-to-diffuse light transport mechanism takes place, an empirical notion termed by Wallace et al. [WCG]. Here light rays, both direct and indirect, converge and diverge on a surface, producing bright and dark patches.

In order to perform this two-step process, illumination effects from the first step must be stored for consideration in the second step. Arvo suggested using an illumination map for each object. Other methods rely not on shooting individual rays but instead on creation of caustic polygons or light volumes [WW, KG].

These methods, known as light beam tracing [HH], cast rays from the light source to each vertex of a polygon of a specular (refractive or reflective) object. Secondary transmitted light rays are created from these vertices in the direction indicated by reflection or refraction to the surface normal. Any intersection of these rays with a diffuse polygon forms a caustic polygon to be created on the plane of that polygon. The vertices of the caustic polygon are at the intersection of the transmitted rays from the specular polygon with the plane of the diffuse polygon. Examples of these caustic polygons can be seen in the figure below. During the second rendering phase, the diffuse component of a diffuse polygon is combined with the intensities of any caustic polygons associated with that polygon. This intensity of the caustic polygon is similar to the form factor from the radiosity method.

Figure: Caustic Polygons from Light Beam Tracing (rays from the light source pass through specular polygons and form caustic polygons on the diffuse polygons below)

Radiosity

Where diffuse illumination is difficult and expensive in ray-tracing systems, the nature of radiosity systems is based on calculation of these diffuse interactions through energy transfer between surfaces. Radiosity was first applied to computer graphics by Goral et al. [GTGB], based on theories of heat transfer between surfaces [SH]. In radiosity systems, all surfaces are assumed to be Lambertian diffusers, emitters, or reflectors. Surfaces are subdivided into planar patches over which the radiosity is constant. The radiosity of each patch is the total rate of energy leaving the surface, which is equal to the sum of the emitted and reflected energies. The reflected energies are the sum of the reflected energies resulting from the individual incident energies on the patch from every other patch, which is derived from a geometric relationship between any two patches known as a form factor.
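The per-patch energy balance just described can be written as B_i = E_i + rho_i * sum_j F_ij B_j and solved by simple relaxation. The sketch below (illustrative only; the patch count, data, and function names are not from the thesis) iterates a tiny form-factor system in C:

```c
#include <assert.h>
#include <math.h>
#include <string.h>

#define NPATCH 2

/* Jacobi relaxation of B_i = E_i + rho_i * sum_j F[i][j] * B_j.
   E: emitted energy, rho: diffuse reflectivity, F: form factors. */
static void solve_radiosity(const double E[NPATCH], const double rho[NPATCH],
                            const double F[NPATCH][NPATCH],
                            double B[NPATCH], int iters)
{
    memcpy(B, E, sizeof(double) * NPATCH);   /* start from pure emission */
    for (int it = 0; it < iters; it++) {
        double next[NPATCH];
        for (int i = 0; i < NPATCH; i++) {
            double gathered = 0.0;
            for (int j = 0; j < NPATCH; j++)
                gathered += F[i][j] * B[j];  /* energy arriving from patch j */
            next[i] = E[i] + rho[i] * gathered;
        }
        memcpy(B, next, sizeof next);
    }
}
```

Two patches facing each other converge in a few dozen iterations; real scenes need thousands of patches, and the form-factor computation itself is the preprocessing cost noted below.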

Inherent in the radiosity method are a variety of illumination effects which produce extraordinarily photorealistic images, including shadows, color bleeding, and color filtering. The cost of this realism is in very high preprocessing computation and storage requirements for computing the form factors. In addition, the environment is relatively static (except for view changes), as any object movement requires recomputation of the form factors. There are systems which have tried to address this static nature, but none have supported a fully dynamic environment. The backbuffer extension [BWC] relies on predefined object paths. Other methods [GSG, Che] rely on propagation of modified form factors in a progressive solution. Even with methods maintaining complex object interactions [FYT, MS], rates are near-interactive for only small changes. In addition, the view-independent nature of the radiosity computation usually precludes the support of specular reflection.

Two-Pass Methods

Because radiosity systems handle diffuse components more readily than ray tracing, and the converse is true for specular components, these two methods have been combined in another two-pass approach, originated by Wallace et al. [WCG]. In this model, diffuse lighting effects are stored implicitly in the final radiosity solution itself during stage one, with view-dependent specularities added through standard distribution ray tracing in the second stage. While producing more physically realistic images, these two-pass methods suffer from the double cost shortcomings of both methods for a dynamic environment. One noteworthy exception is the image-based rendering techniques [CW], which sacrifice some image quality for interactive view manipulation as well as some scene dynamics [NDR]. These systems create intermediate views through interpolation of selected keyframe images.

Beam Tracing

Analogous to ray tracing's method of casting rays from the eye point and spawning new rays at specular surface intersections is Heckbert and Hanrahan's method of beam tracing [HH], which uses pyramidal beams instead of rays. This method relies on the spatial coherence of neighboring rays; that is, neighboring rays tend to follow the same path.

Beam tracing starts with an initial beam defined by the viewing pyramid. Polygons are sorted from this view using a version of the Weiler-Atherton hidden surface removal algorithm. The view beam's intersection with objects causes reflection and refraction beams to be generated and the intersecting area to be subtracted from the original beam. As this process proceeds recursively, the view position is updated and the polygons are sorted from the spawned views. An intersection beam tree is created during this recursion, with links representing rays of light (a beam) and nodes representing the surfaces intersected by that beam. The resulting beam tree is recursively rendered using a polygon scan-conversion algorithm.

The beam tracing method has advantages over traditional ray tracing in that it does not spawn rays on a pixel-by-pixel basis and does not suffer from the aliasing artifacts associated with this individual pixel basis. This method does have several limitations relative to ray tracing, though; foremost is that it does not operate on non-planar surfaces. This is due to the assumption of spatial coherence of the beam. Unlike ray tracing, where a single view ray is reflected or refracted, an entire view frustum is bent. This creates a new virtual viewpoint representing the apex of the secondary view pyramid, as seen in the figure below. The reflected rays of a beam intersecting a curved surface would not intersect at a single virtual viewpoint; therefore this method is incompatible with curved surfaces. For reflection off planar surfaces, the virtual eyepoint can be represented as a linear transform (rotation and translation) of the original view.

The second limitation also stems from this necessity of a virtual eyepoint. Unlike reflection off a planar surface, refraction rays do not converge to a single point. Rays are refracted according to Snell's Law; refraction is a nonlinear phenomenon which relates the incident and refracted angles:

eta_1 sin(theta_1) = eta_2 sin(theta_2)

In beam tracing, refraction is approximated with a linear transformation. This transformation is described in Appendix B.

Figure: Virtual viewpoint for reflected beam (the incident beam from the eye point strikes a specular surface; the reflected beam emanates from a virtual reflected eye point)

Figure: Z-buffer based rendering pipeline (database traversal; transform to world coordinates; trivial accept/reject; viewing transformation; view volume clipping; divide by w and map to viewport; rasterization with lighting and Z-buffer; display)

Hardware-based Rendering

While sophisticated illumination effects have been achieved in radiosity and ray-tracing systems, these effects come with significant precomputation overhead in relatively static environments. Most graphics hardware systems provide only empirically based Phong shading, an ambient term to approximate diffuse interactions, and alpha blending to simulate partial transparency. Very few of the complex illumination effects have been achieved using the graphics hardware architectures provided by most workstation manufacturers, in essence forfeiting the advanced pipeline features for software calculations and off-line processing. A typical hardware rendering pipeline is seen in the figure above.

Opaque Object

0

+1-1=0

+1

Figure Shadowvolumes

One exception to this is the increased use of shadows in real-time systems. Real-time shadows have been achieved using hardware-provided stencil planes [Hei], as well as through the use of projective textures [SKvW]. Of course, these implementations are also susceptible to all of the usual image-space aliasing problems, such as missing pixels at coincident polygon edges or texture magnification filtering.

Heidmann uses a hardware implementation of Brotman and Badler's [BB] variation of the shadow-volume technique proposed by Crow [Cro]. This method generates shadow polygons for each silhouette edge of the object. These shadow polygons are invisible polygons which extend from the edge away from the light. For every pixel in the scene, the stencil buffer is used to keep track of how many shadow polygons are between the viewer and the pixel's object. If the number is odd, the pixel is in the middle of a shadow volume and therefore in shadow. If the value is even, the pixel is not inside a shadow volume and therefore lit. This process can be seen in the figure above.
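The parity test the stencil buffer performs can be illustrated with a small software stand-in (not the hardware implementation; depths and names here are hypothetical), counting shadow polygons crossed by the eye ray in front of the visible surface:

```c
#include <assert.h>

/* Count how many shadow polygons lie between the viewer (depth 0) and a
   visible surface point along one eye ray; an odd crossing count means
   the point is inside a shadow volume.  Smaller depth = closer. */
static int point_in_shadow(const double *shadow_poly_depths, int n,
                           double surface_depth)
{
    int crossings = 0;
    for (int i = 0; i < n; i++)
        if (shadow_poly_depths[i] < surface_depth)
            crossings++;             /* silhouette face crossed by the ray */
    return (crossings % 2) == 1;     /* odd: in shadow, even: lit */
}
```

A point behind both the entering and leaving faces of a volume counts two crossings and is lit; a point between them counts one and is shadowed.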

Projective textures were originally introduced by Williams [Wil] and have been recently implemented using texture mapping hardware [SKvW]. This method creates a light-view image of the scene and uses this as a projected texture in the environment.

In addition to the hardware provisions for shadows, non-refractive transparency has been loosely supported through the use of alpha blending. This method is incompatible with Z-buffer sorting, however, and no provision is made for refractive transparency. Non-refractive transparency methods are addressed in a later chapter.

Other approximations for illumination effects have been introduced, such as reflection effects using cubic environment maps to create specular highlights [VF]. Such methods generally rely on creation of these effects without regard to the other objects in the environment; i.e., reflections are based on an image, not the surrounding objects.

The following chapters describe new rendering and illumination techniques which are based on, and use, standard graphics hardware architectures. These effects are recalculated for each frame at interactive to near-interactive rates. All described functionality has been implemented using existing SGI hardware-based graphics support.

Definitions

For the purposes of this discussion, we shall introduce terms common to users in the GL environment and some describing implementation techniques.

Stencil Planes

Stencil planes are essentially an enhanced Z-buffer, introduced (but not named) by Brotman and Badler [BB]. In its simplest form, pixels are written only if the current stencil value (analogous to the current Z value) of the destination pixel passes the defined stencil test. Depending on the result, the pixel is written and the stencil value is changed. Stencil plane calls take the form

stencil(COMP_FUNC, COMP_VALUE, PASS_FUNC, PASS_VALUE)

where

COMP_FUNC   The compare function for the stencil operation to pass. The current pixel's stencil value is compared against the COMP_VALUE using this function to return the boolean result. Choices are EQUAL, GREATER, LESS, GTEQUAL, LTEQUAL, NONE.

COMP_VALUE  The value which the current pixel's stencil value is compared with, using the COMP_FUNC.

PASS_FUNC   The function which is applied to the current pixel's stencil value, using the PASS_VALUE as the parameter. Choices are REPLACE, CLEAR, INC, DEC, KEEP.

PASS_VALUE  The value used to update the current pixel's stencil value according to the chosen PASS_FUNC.
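The compare-then-update behavior of this call can be modeled in software. The sketch below is illustrative, not the GL implementation; in particular, it assumes INC and DEC step by one and that PASS_VALUE is only consumed by REPLACE:

```c
#include <assert.h>

typedef enum { EQUAL, GREATER, LESS, GTEQUAL, LTEQUAL, NONE } CompFunc;
typedef enum { REPLACE, CLEAR, INC, DEC, KEEP } PassFunc;

/* Compare the pixel's current stencil value against COMP_VALUE. */
static int stencil_test(unsigned cur, CompFunc f, unsigned comp_value)
{
    switch (f) {
    case EQUAL:   return cur == comp_value;
    case GREATER: return cur >  comp_value;
    case LESS:    return cur <  comp_value;
    case GTEQUAL: return cur >= comp_value;
    case LTEQUAL: return cur <= comp_value;
    default:      return 1;                /* NONE: always passes */
    }
}

/* Update the stencil value; PASS_VALUE is the parameter for REPLACE. */
static unsigned stencil_update(unsigned cur, PassFunc f, unsigned pass_value)
{
    switch (f) {
    case REPLACE: return pass_value;
    case CLEAR:   return 0;
    case INC:     return cur + 1;
    case DEC:     return cur - 1;
    default:      return cur;              /* KEEP */
    }
}
```

The shadow-volume counting described earlier is exactly this machinery with INC/DEC as the pass functions.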

Accumulation Buffer

An accumulation buffer [Car] is a secondary image buffer to which the current image can be added. The resulting image can also be divided by a constant. This enables a blending of images or image features. Accumulation buffer calls are in the form

accbuf(OPERATION)

where OPERATION is one of the following:

ACC_CLEAR       Clear the accumulation buffer.

ACC_ACCUMULATE  Add the contents of the current framebuffer to the contents of the accumulation buffer.

ACC_RETURN      Copy the contents of the accumulation buffer back to the current framebuffer.
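As a software model (function names are illustrative, not GL calls), the three operations over a grayscale framebuffer amount to clearing, summing, and a divide-on-return, which is how an average of several rendered frames is formed:

```c
#include <assert.h>
#include <math.h>

/* Software model of an accumulation buffer over a grayscale framebuffer. */
static void acc_clear(double *acc, int n)
{
    for (int i = 0; i < n; i++)
        acc[i] = 0.0;                                  /* ACC_CLEAR */
}

static void acc_accumulate(double *acc, const double *framebuf, int n)
{
    for (int i = 0; i < n; i++)
        acc[i] += framebuf[i];                         /* ACC_ACCUMULATE */
}

static void acc_return(double *framebuf, const double *acc, int n,
                       double divisor)
{
    for (int i = 0; i < n; i++)
        framebuf[i] = acc[i] / divisor;                /* ACC_RETURN */
}
```

Accumulating N frames and returning with divisor N yields their average, the basis of blending techniques used later in this work.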

Alpha Blending

Blending of the to-be-drawn pixel's RGBA components with existing values is accomplished using the blendfunction. The format of this is

blendfunction(sfactor, dfactor)

where sfactor and dfactor are the blending factors by which to scale the source and destination pixel values C_s and C_d, respectively. Each RGBA component is determined by the following specification:

C_d = C_s * SFACTOR + C_d * DFACTOR

where SFACTOR and DFACTOR are one of the following choices:

BF_SA    source alpha
BF_MSA   1 - source alpha
BF_DA    destination alpha
BF_MDA   1 - destination alpha
BF_ONE   1
BF_ZERO  0
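The per-component arithmetic of the blend equation can be sketched directly (a software model, not the GL call; the enum ordering is arbitrary):

```c
#include <assert.h>
#include <math.h>

typedef enum { BF_ZERO, BF_ONE, BF_SA, BF_MSA, BF_DA, BF_MDA } BlendFactor;

/* Resolve a blend factor given source and destination alpha. */
static double factor(BlendFactor f, double src_alpha, double dst_alpha)
{
    switch (f) {
    case BF_ONE: return 1.0;
    case BF_SA:  return src_alpha;
    case BF_MSA: return 1.0 - src_alpha;
    case BF_DA:  return dst_alpha;
    case BF_MDA: return 1.0 - dst_alpha;
    default:     return 0.0;               /* BF_ZERO */
    }
}

/* C_d = C_s * SFACTOR + C_d * DFACTOR, applied per RGBA component. */
static double blend(double c_s, double c_d, BlendFactor sf, BlendFactor df,
                    double src_alpha, double dst_alpha)
{
    return c_s * factor(sf, src_alpha, dst_alpha)
         + c_d * factor(df, src_alpha, dst_alpha);
}
```

The pair (BF_SA, BF_MSA) is the usual transparency configuration: a source of intensity 0.2 with alpha 0.5 drawn over a destination of 0.6 yields 0.4.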

Shadow Volumes

Shadow volumes are volumes of shadow cast by opaque objects. For polygonal objects, the shadow volume is comprised of silhouette faces generated from the object's silhouette edges. A silhouette edge is an edge which divides a lit (facing the light) face and an unlit (facing away from the light) face. A silhouette face is a face created for each silhouette edge of an object by extending that edge away from the light source along the light-ray direction. Pixels inside the volume are in shadow; pixels outside are lit.

Light Volumes

Light volumes are volumes of light bounded by silhouette faces of reflecting and refracting objects. As with shadow volumes, silhouette faces are created for each edge of a specular object by extending that edge away from the virtual light source position along the light-ray direction.

In-Out Refractions

In-out refractions are refractions which occur when light passes from one medium to another and back to the first, such as light traversing through a piece of glass. There is an entry refraction and an exit refraction, producing a refracted ray parallel to the incident ray where the in and out surfaces are parallel, such as in a sheet of glass.

Chapter

Specular Surface Rendering

Specular surfaces, surfaces which exhibit specular reflection or transmission, are commonplace in many real-world environments. These range from reflective mirrors and transparent refractive water to partially specular surfaces such as a shiny wet floor or frosted glass. While reflective and refractive surfaces have been supported in ray tracing since their introduction by Whitted [Whi], only non-refractive, partially transparent surfaces have been readily available in hardware-based graphics. Each of these specular surface types is further examined in regards to a multi-pass, hardware-based pipeline methodology.

Reflections

Reflective images have been generated in ray-tracing systems by tracing individual reflection rays which are spawned from intersections with reflective surfaces. Additionally, a reflective image can be seen as corresponding to an inverted image from a secondary viewpoint. In other words, the reflected image is the flipped image from a viewpoint on the other side of the mirror. This provides the basis for planar mirror reflection in several hardware-based systems [KSC, HH].

In multi-pass pipeline rendering, mirrors are implemented by rendering the entire environment exclusive of the mirrored surface. The mirrored surface is drawn with Z-buffering, creating a stencil mask of pixels where the mirror is visible. A second rendering of the environment is then performed from the reflected viewpoint, drawing only over the previously masked pixels. Because the reflected angle (the angle from the mirror plane to the reflected viewpoint) is the negative of the incident angle, and because the image is flipped, the reflected image directly fits onto the mirror.

The calculation of the virtual camera position follows from reflection of the incident line of sight with the plane on which the specular surface lies. Not only does this result in a virtual camera position, but this transform involves scaling the scene about the Y axis to flip, or mirror, the image. This scaling is implicit in the transform given in [HH], which is derived from the equation representing reflected points P_r in terms of the original points P_i and the plane equation L(P) having normal N. As L(P) also gives the distance from the plane for any point,

P_r = P_i - 2 L(P_i) N

expresses in vector form the transformation involved. This transform, expressed as a homogeneous 4x4 matrix, is included in Appendix B.
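The reflection equation above can be sketched pointwise in C, with the plane given by a unit normal N and offset d so that L(P) = N.P + d (a minimal illustration; the thesis applies the same relation as a homogeneous matrix over the whole scene):

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* Signed distance L(P) = N.P + d for a plane with unit normal N. */
static double plane_dist(Vec3 n, double d, Vec3 p)
{
    return n.x * p.x + n.y * p.y + n.z * p.z + d;
}

/* P_r = P_i - 2 L(P_i) N: reflect a point about the mirror plane. */
static Vec3 reflect_point(Vec3 p, Vec3 n, double d)
{
    double l = plane_dist(n, d, p);
    Vec3 r = { p.x - 2.0 * l * n.x,
               p.y - 2.0 * l * n.y,
               p.z - 2.0 * l * n.z };
    return r;
}
```

Applying reflect_point to the camera position yields the virtual viewpoint used for the second rendering pass.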

The virtual viewpoint transform is applied during a second rendering of the environment. This second rendering is performed only in the area masked during the first pass. Before this second rendering is performed, the Z values must be cleared in this masked area. As the necessary area is already masked, rendering a polygon with ZF_ALWAYS as the compare function resets the appropriate Z coordinates and sets the Z values for the specular surface. The second rendering can then be performed using the normal Z-buffer depth sorting. All necessary viewpoint culling can also be applied from this virtual viewpoint.

In addition, this process can be repeated recursively when multiple reflective surfaces exist. The virtual viewpoint gets transformed for each specular-specular transport, thereby producing reflections of reflections to a chosen depth. This process is demonstrated in the figure below for three specular surfaces at various recursion depths. The recursive process and accompanying stenciling method are detailed in a later chapter.

Figure: Recursive Reflection (panels (a) through (d) show the scene with three specular surfaces rendered at increasing recursion depths)

As in beam tracing, the above-described virtual viewpoint method also relies on planar reflective surfaces. Again, this is due to a single reflected virtual viewpoint resulting from a linear transformation of the original eyepoint. Where a nonlinear transformation is involved, no single virtual viewpoint exists and an approximate viewpoint must be selected. Such an approximation is required for refractive planar surfaces.

Refractive Transparency

Refractive transparency has generally only been available in ray tracing. Ray tracing implementations are based on one of several illumination models, both empirical and physically based. Hall [HG] introduced an extension of the Phong reflection model for the transmission term. As with the Phong reflectance model, this model accounts for the spread of transmitted light through a refractive medium. Distribution ray tracing [CPC] supports blurred refractions through a jittered sampling of refraction rays. Attenuation of transmitted light usually occurs based on Fresnel transmission coefficients. These applications support translucent as well as transparent materials.

While refractive transparency has been unknown in hardware-based rendering, we can provide the refractive surface rendering itself using a method analogous to the reflection method described above. Although refractive images are similar in concept to reflections, they are more complex in practice.

Whereas a mirrored image directly corresponds to the reflective surface to which it maps, a refracted image maps to a distorted image space. Simply performing a second rendering in the stenciled area does not overlay the correct image portion. This is demonstrated in the figure below. The area visible through the transparent surface in the refracted view is different than the image area from the original viewpoint; areas outside the refracting surface, and even in front of it, may be visible in the refracted image. This difference is due to two factors: the difference between incident and refracted viewpoints, and the perspective distortion.

Figure: Refracted Image vs. Camera Image

Because the incident angle does not equal the refracted angle, the refracted image is rotated with respect to the original image. This is further compounded by the rotated image plane undergoing a perspective distortion different than the perspective distortion of the original plane. The perspective transformations are the same, but because the planes have different orientations, the resulting distortions are different. The result is that a refractive square planar face, for example, maps to two different quadrilaterals in the original versus the refracted images.

The refractive image I_r does correspond to the original image I_o through a 2-D bijective projective mapping M. This mapping is the intersection of the 3-D image mapping set M' with the refractive planar surface:

I_o = I_r M

where M is M' restricted to the plane of the refracting face, and

M' = P C_r C_o^-1 P^-1

Here P is the perspective transform, and C_o and C_r are the original and refracted camera transforms, respectively.

Figure: Undistorted and Distorted Refraction

This results in a 2-D projective transform of arbitrary quadrilateral to quadrilateral, described in [Hec] and included in Appendix A. This transform, described by a 3x3 homogeneous matrix, can be applied directly to the screen-viewport mapping to distort the refractive image into the normal image space. In hardware which supports user-defined operations, this transform can be inserted directly at the end of the rendering pipeline. In systems where this is not possible, such as the Silicon Graphics architecture, this transform can be implemented as a 4x4 homogeneous transform inserted in the world-to-unit pipeline. The resulting transform is constructed with a zero scale factor for Z, so that the mapping is to the Z = 0 plane. Without this mapping, the tapering and skewing effects from the quadrilateral distortion affect the Z coordinates. Unlike the 3-D transform, the 2-D one does, however, preclude the use of the Z-buffer for hidden surface removal, as all image points now have the same Z value. This method also does not allow for the fog translucency simulation described later, due to the loss of depth.
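The quadrilateral-to-quadrilateral transform cited from [Hec] can be assembled from a unit-square-to-quadrilateral projective map. The sketch below follows a common presentation of that construction; the corner ordering and variable names are assumptions, not code from the thesis's Appendix A:

```c
#include <assert.h>
#include <math.h>

/* Projective map from the unit square (u,v) in [0,1]^2 to a quadrilateral
   with corners q[0]..q[3] listed from (u,v) = (0,0) around the square:
       x = (a*u + b*v + c) / (g*u + h*v + 1)
       y = (d*u + e*v + f) / (g*u + h*v + 1)                              */
typedef struct { double a, b, c, d, e, f, g, h; } SquareToQuad;

static SquareToQuad square_to_quad(const double qx[4], const double qy[4])
{
    SquareToQuad m;
    double sx = qx[0] - qx[1] + qx[2] - qx[3];
    double sy = qy[0] - qy[1] + qy[2] - qy[3];
    if (sx == 0.0 && sy == 0.0) {            /* affine special case */
        m.a = qx[1] - qx[0]; m.b = qx[2] - qx[1]; m.c = qx[0];
        m.d = qy[1] - qy[0]; m.e = qy[2] - qy[1]; m.f = qy[0];
        m.g = m.h = 0.0;
    } else {                                 /* true projective case */
        double dx1 = qx[1] - qx[2], dx2 = qx[3] - qx[2];
        double dy1 = qy[1] - qy[2], dy2 = qy[3] - qy[2];
        double det = dx1 * dy2 - dx2 * dy1;
        m.g = (sx * dy2 - dx2 * sy) / det;
        m.h = (dx1 * sy - sx * dy1) / det;
        m.a = qx[1] - qx[0] + m.g * qx[1];
        m.b = qx[3] - qx[0] + m.h * qx[3];
        m.c = qx[0];
        m.d = qy[1] - qy[0] + m.g * qy[1];
        m.e = qy[3] - qy[0] + m.h * qy[3];
        m.f = qy[0];
    }
    return m;
}

static void apply_map(const SquareToQuad *m, double u, double v,
                      double *x, double *y)
{
    double w = m->g * u + m->h * v + 1.0;    /* homogeneous divide */
    *x = (m->a * u + m->b * v + m->c) / w;
    *y = (m->d * u + m->e * v + m->f) / w;
}
```

Composing one such map with the inverse of another yields the general quad-to-quad mapping needed to fit the refracted rendering onto the stenciled face.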

Note also that this method does not produce true refractions, merely a close approximation to the refractive image. In a true refractive image, every ray incident with the refractive plane bends according to its angle with the plane and Snell's Law; this method, as in beam tracing, uses only one incident angle. In practice, two angles are used to provide more realistic results with the system. First, the incident ray is taken from the camera location to the refracting face center to determine whether the incident angle is greater than the critical angle. If this is the case, the surface is taken to be wholly reflective. If the angle is less than the critical angle, the incident angle for Snell's Law is taken at the point of intersection of the view vector (the camera's negative Z axis) and the plane in which the refracting face lies. This method ensures that the critical angle is reached as the plane moves tangentially to the view, yet the refracted image is seen as a smooth scrolling of the background behind the face.

In the original implementation of this work, the virtual camera position was determined by refraction (rotation) of the original camera position around the above-described point of intersection according to Snell's Law. While this approximated the bending of light along that ray, it is not the best approximation for the distortion which takes place due to the varied refraction of individual rays. In practice, a refracted image appears at n times the actual distance from the refracting medium, whose index of refraction is n. As noted by Heckbert and Hanrahan [HH] and seen in the figure below, this approximates, for paraxial rays, to a parallel displacement of the viewpoint toward the plane, termed the Tangent Law. Expressed in vector form, the equation

P_r = P_i - (1 - n) L(P_i) N

and its corresponding 4x4 transform, given in Appendix B, are analogous to the reflection equation presented above. In fact, the reflection is simply the special instance where n = -1.
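The Tangent-Law displacement as reconstructed above mirrors the reflection routine with the scale 2 replaced by (1 - n); the sketch below (illustrative, not thesis code) also checks the two sanity cases, n = -1 recovering reflection and n = 1 (no index change) leaving the viewpoint fixed:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* P_r = P_i - (1 - n) L(P_i) N, with L(P) = N.P + d for unit normal N.
   n = -1 gives the mirror reflection; n = 1 is the identity. */
static Vec3 tangent_law_viewpoint(Vec3 p, Vec3 plane_n, double d, double n)
{
    double l = plane_n.x * p.x + plane_n.y * p.y + plane_n.z * p.z + d;
    double s = (1.0 - n) * l;               /* parallel displacement amount */
    Vec3 r = { p.x - s * plane_n.x,
               p.y - s * plane_n.y,
               p.z - s * plane_n.z };
    return r;
}
```

For a water-like index the camera is displaced part of the way toward the plane, which is the paraxial displacement the figure below illustrates.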

Virtual focus

Eye Point

D2

D1

Figure TangentLaw for paraxial rays

a Snells Law b TangentLaw

Figure Uncorrected Refraction

The contrast b etween the two metho ds of determining the refractive viewp oint

can b e seen in Figure where image b is based on paraxial displacement As can

b e seen this metho d further necessitates the use of the detailed pro jective transform

as clipping discrepancies b ecome far more apparent The corrected images are seen

in Figure

a Snells Law b TangentLaw

Figure Corrected Refraction

Because of the Tangent Law's closer approximation for paraxial rays, our original implementation was modified to include this Tangent Law transform. Again, two angles are used to account for tangential movement of the refracting plane. While this method is more accurate than the Snell's Law implementation, it is still only a close approximation. Other such approximations might be possible based on further viewport manipulation, such as manipulation of the field of view rather than displacement of the viewpoint. A more flexible rendering pipeline providing additional vertex and image transformation control would enable greater accuracy in simulating such phenomena.

Note also that this recursive method provides automatic sorting of transparent faces for the alpha blending. Blended transparency requires surfaces to be drawn in sorted order, a feature not supported by current Z-buffer architectures. The recursive nature of the traversal dictates that transparency blending occurs after the refracted image, containing other transparent objects, has been rendered.

Non-Refractive Transparency

Transparency in hardware-based systems is almost always non-refractive transparency (with an exception first noted in [DB]), and many times this approximation to refractive transparency suffices. Most Z-buffer-based systems support screen-door (dithered) transparency, or simply render transparent polygons last using alpha blending based on interpolated transparency [FvDFH]. If the transparent polygons are not depth-sorted, the resulting image is incorrect. There are other issues and approaches; the following summary details existing methods.

Screen-Door Transparency

Screen-door, or dithered, transparency uses a mask to implement a mesh of the transparent object's pixels. Pixels are only written if their corresponding position in a transparency bit mask is set. Spatial proximity of neighboring pixels results in color blending, producing interpolated values.

There are many problems with screen-door transparency, foremost being mask conflicts and dithering artifacts. Two transparent objects cannot share a mask, or one will completely overwrite the other. In addition, extensive masking results in noticeable dithering effects, producing an undesirable artifact pattern on the objects. Even subpixel algorithms are not accurate for multiple transparency layers [Ake].

Hardware Sorted Transparency

Most Z-buffer based systems rely on an empirical approach to transmitted light attenuation. Non-refractive transparency is approximated using a linear transparency parameter t to blend the object pixel intensity I_o and the background pixel intensity I_b in Z space using the combination

I = t I_o + (1 - t) I_b,   0 <= t <= 1

Although this formulation requires an ordered depth blending (back to front) of overlapping transparent pixels, Z-buffered sorting does not provide this facility. In addition, presorting the transparent surfaces suffers from all the traditional hidden-surface-removal problems, such as intersecting surfaces and the need to find a possibly nonexistent correct sort order (e.g., [NNS]).
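The blend above must be applied once per layer in depth order. A minimal sketch of back-to-front compositing over an opaque background (layer data here is illustrative):

```c
#include <assert.h>
#include <math.h>

/* Composite transparent layers over a background intensity, back to front:
   I = t*I_o + (1 - t)*I_b applied once per layer.  The layers must already
   be sorted farthest-first, which is exactly what the Z-buffer does not
   supply. */
static double composite_back_to_front(const double *intensity,
                                      const double *t, int layers,
                                      double background)
{
    double result = background;
    for (int i = 0; i < layers; i++)
        result = t[i] * intensity[i] + (1.0 - t[i]) * result;
    return result;
}
```

Swapping the order of two half-transparent layers changes the result (0.25 versus 0.5 in the test case), which is why an unsorted draw order produces an incorrect image.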

In [Mam], Mammen describes a method which renders the transparent objects in the correct back-to-front order without presorting. To accomplish this, blending occurs at the pixel level in a series of iterations. At each iteration over the transparent set, the transparent pixels closest to the opaque pixels are determined and blended in with the opaque value. This farthest transparent pixel replaces the opaque pixel for the next iteration.

This method relies on secondary sort buffers used to store depth and tag information; an opaque pixel map initially maintains the opaque rendered objects. At each pass, every transparent pixel which is processed has its depth compared with the stored opaque depth and the stored sort depth. If the pixel's depth is greater than the sort depth and in front of the opaque depth, that pixel's information is stored in the transparency buffers. This information includes the depth, color, and alpha values. After all pixels have been processed, opaque colors and depths are updated on a pixel-by-pixel basis using the resulting transparency buffers. This method is repeated for the number of transparency layers.

Mammen also introduced a variation of this method in which only the depth transparency buffer is required. Two passes are made at each iteration: the first writes depth information, which acts as a tag for the second pass. During the second pass, the scene is rendered and color is blended with the opaque buffer. Each stage requires Z-buffer sorting.

Mammen fails to note the successive reduction of transparency area, which permits a simple swap of Z-buffers, and his method suffers from certain depth problems described in Section . He mentions other applications of this method, but he does not observe that he can use his two-buffer technique for clipping.



Kelley [KGP] introduces a hybrid of Mammen's method and a simple list sort. This method uses a multiple-layered Z-buffer which maintains the four closest visible layers per pixel. If overflow of the buffers occurs, layers are composited into one layer and the process continues with three free layers. This method has much additional storage requirement, as well as costs associated with compositing of the buffers during overflow.

Other multiple-buffer techniques exist [SSa, WHG] which rely on Z-associated information, but these are too expensive to act as a low-cost interactive solution.

Definitions

Dual Z-Buffer

A Dual Z-buffer, or DZ-buffer, consists of two distinct but functionally equivalent Z-buffer areas, each with its own compare function. One is designated as the current write buffer. Depth comparisons must pass both buffers' designated tests in order for the pixel's Z value to be written to the designated write area. With the two buffers designated z1 and z2, a sample configuration might be

    zfunction(z1, ZF_LESS)
    zfunction(z2, ZF_GTEQUAL)
    zwrite(z1)

providing depth sorting of each subsequent write to buffer z1, which has greater or equal depth than the contents of buffer z2.

An extension of the DZ-buffer concept is the Tri-Z-buffer, or TZ-buffer, which presents a third functionally equivalent Z-buffer for depth comparison.
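The per-pixel effect of the paired compare functions can be sketched in a few lines. This is an illustrative Python simulation of DZ-buffer depth peeling (names are ours; it is not hardware code), selecting at each iteration the furthest fragment that is still closer than the previous iteration's result:

```python
# Per-pixel simulation of DZ-buffer depth peeling, back-to-front.
# Larger z = further from the eye in this sketch.
def peel_layers(fragment_depths):
    previous = float('inf')   # analogous to zclear(PREVIOUS, ZMAX)
    layers = []
    while True:
        current = None        # analogous to zclear(CURRENT, ...)
        for z in fragment_depths:
            # pass only if closer than PREVIOUS and further than CURRENT
            if z < previous and (current is None or z > current):
                current = z
        if current is None:
            break             # nothing drawn this iteration: done
        layers.append(current)
        previous = current    # buffer swap: CURRENT becomes PREVIOUS
    return layers

print(peel_layers([0.3, 0.9, 0.6]))  # [0.9, 0.6, 0.3]
```

Each pass extracts exactly one depth layer, furthest first, which is the ordering the back-to-front blending below requires.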

Accumulation Buffer

A modified accumulation buffer is presented in this section, in which the return operation does not perform a pure copy. Instead, this operation acts as a typical framebuffer operation and makes use of the blendfunction. This can be readily simulated using the current implementation and the lrectread and lrectwrite buffer functions, which read and write the framebuffer area using the designated blending operations.

ACC_RETURN: Blend the contents of the accumulation buffer back to the current framebuffer according to the current blendfunction.

Alpha Blending

A modified blending function is also presented which provides the facilities necessary for implementing the front-to-back transparency blending. The additional SFACTOR choice is

    BF_DAxBF_MSA    (destination alpha) x (1 - source alpha)

which is necessary to implement the transparency blending equation.

The blendfunction is also split into two blending components: the RGB color components and the A transmittance component. This is necessary since the transparency blend equation is not symmetrical for the color and alpha components. The two new functions are

    cblendfunction(sfactor, dfactor)
    ablendfunction(sfactor, dfactor)

whose use is identical to the existing blendfunction, for the color and alpha components respectively.

Back-to-Front Transparency

Partially transparent surfaces can be rendered correctly using the DZ-buffer in a back-to-front manner similar to [Mam]. After all fully-opaque objects have been rendered, all pixels of the transparent objects which are visible yet furthest away are rendered. A second iteration over the transparent objects then renders all pixels second furthest away. This process repeats to the depth of visibly overlapping transparent surfaces, each iteration rendering the next closest pixel with the appropriate alpha blending. This process ensures that the transparent objects are blended in the correct order, as demonstrated in Figure .

Figure : Back-to-front rendered area, showing (a) the opaque object and (b)-(g) successive iterations.

The use of the DZ-buffer in this method is seen in each iteration. At each iteration, one Z-buffer is designated as PREVIOUS and one is designated as CURRENT. The buffer to be written to is CURRENT. Z comparisons must be closer than PREVIOUS yet further than CURRENT in order to be written to CURRENT. As CURRENT gets updated during this iteration, only the furthest pixels which are closer than the last iteration are written.

draw_scene()
    PREVIOUS = 1
    CURRENT = 2
    zclear(PREVIOUS, ZMIN)              /* init z values */
    zclear(CURRENT, ZMAX)
    zwrite(CURRENT)
    zfunction(CURRENT, ZF_LESS)
    zfunction(PREVIOUS, ZF_NONE)        /* normal Z-buffer operation */
    draw_opaque_objects()               /* draw opaque with color */
    blendfunction(BF_SA, BF_MSA)        /* alpha blending */
    do
        temp = PREVIOUS                 /* swap buffers */
        PREVIOUS = CURRENT
        CURRENT = temp
        btf(PREVIOUS, CURRENT)
    while (something was drawn)

btf(PREVIOUS, CURRENT)
    zfunction(CURRENT, ZF_GREATER)      /* furthest and */
    zfunction(PREVIOUS, ZF_LESS)        /*   closer than last */
    zwrite(CURRENT)
    if (TWO_PASS)                       /* if using only one color buffer, */
        wmpack(0x0)                     /*   then disable color */
    for (i = 0; i < nsurfs; i++)        /* loop through surfaces */
        stencil(NONE, ZERO, REPLACE, i)
        draw_surface(i)                 /* Z-sorted further than CLIPBUF */
    if (TWO_PASS)
        wmpack(0xffffffff)              /* enable color */
        zfunction(CURRENT, ZF_NONE)     /* turn off Z-buffering */
        zfunction(PREVIOUS, ZF_NONE)
        for (i = 0; i < nsurfs; i++)
            stencil(EQUAL, i, KEEP, STEN_VALUE)
            draw_surface(i)

At each iteration, two passes must be made of the transparent objects. The first pass does not write color information; its purpose is to compare and write depth values and tag the stencil buffer with an identifier of the surface to draw in the second pass. Where Mammen uses depth values to determine which objects to render in his two-pass variation, the process here uses the stencil mask during the second pass to render the colors where indicated. An object is only rendered in the second pass at pixels which were tagged in the first pass.

An extension to this method requiring only one pass is possible using the modified accumulation functionality as described above. Color information can be written to a second image buffer with pixel blending set for overwrite. This generates an image of that particular level of transparency, which is then blended with the final image based on transmittance values.

In each method, alpha values are used as a transmittance value for the alpha blending function

    I_D = (1 - k_tS) I_S + k_tS I_D

where I_S and I_D are the source and destination intensities and k_tS is the source object's transparency.
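The iterated application of this blend over depth-sorted layers can be sketched numerically. This is an illustrative Python fragment (names are ours, not part of the GL framework), treating each layer's k as its transmittance:

```python
def back_to_front(layers, opaque):
    """Apply I_D = (1 - k_tS)*I_S + k_tS*I_D for each layer,
    ordered far to near; 'opaque' is the background intensity."""
    i_d = opaque
    for i_s, k in layers:
        i_d = (1 - k) * i_s + k * i_d
    return i_d

# Opaque background 0.2; far layer (0.5, k=0.8), then near layer (1.0, k=0.5).
result = back_to_front([(0.5, 0.8), (1.0, 0.5)], 0.2)
print(result)
```

Each iteration of the btf() routine contributes one such blend per pixel, with the hardware blendfunction supplying the source and destination factors.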

The process between depth rendering iterations is what further differentiates our approach from a pixel iteration method such as Mammen's, and provides the low cost and high performance hardware feasibility. Whereas Mammen relies on individual comparison and copying of Z values from the transparency Z-buffer to the opaque Z-buffer, this method uses a simple buffer switch. Mammen and similar methods rely on copying the updated Z values from each iteration to a single buffer for maintaining a complete list of all rendered Z values. With the DZ-buffer, the CURRENT and PREVIOUS buffers are simply switched, and the CURRENT buffer is cleared so that on the next iteration all pixel values must be closer than those just written. This is accomplished by switching the two Z functions assigned to each buffer, as well as by switching the designated write buffer. Costs are minimized, as one Z-buffer must be cleared only one time for each iteration, and no pixel-by-pixel operations are necessary.

With the buffer switch method, only the pixels' Z values which were rendered during that iteration are present for comparison in the next iteration, since the CURRENT buffer is cleared at the beginning of each iteration. Maintaining only the previous iteration's Z values is possible by noting that in all later iterations, the transparent surfaces which need to be rendered will fall in the screen area of those already rendered.

Figure : Back-to-front iterations.

By limiting the draw area of the screen to only the area which was drawn during the last iteration, re-rendering of a transparent surface during a later iteration is eliminated. This limiting process is similar in concept to Weiler and Atherton's hidden-surface removal technique [WA]; limiting the rendering area is accomplished using the stencil mask, which permits a practical implementation.

One method of accomplishing this masking is the following stencil function. At each iteration, the valid rendering area contains stencil values equal to that iteration number. During that iteration's rendering of the transparent surfaces, those pixels' stencil values are incremented by one. These become the valid rendering area during the next iteration. This prevents an area drawn in an earlier iteration from being drawn again, as the viable drawing area effectively shrinks with each iteration. With this shrinking, the simple Z-buffer switch becomes feasible. While this stencil function is one of the simplest, there are other stencil methods which are not limited by the stencil buffer depth.
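The shrinking valid-area bookkeeping can be sketched per pixel. This is an illustrative Python fragment (our names; the real mechanism is the hardware stencil test, not a dictionary):

```python
# stencil[p] holds the iteration number in which pixel p may next be drawn.
def run_iterations(stencil, drawn_per_iteration):
    """drawn_per_iteration: for each iteration, the set of pixels the
    transparent surfaces would cover at that depth layer."""
    for it, covered in enumerate(drawn_per_iteration):
        for p in covered:
            if stencil[p] == it:      # pixel is in the valid draw area
                stencil[p] += 1       # becomes valid for the next iteration
    return stencil

stencil = {0: 0, 1: 0, 2: 0}
# Pixel 0 has two overlapping layers, pixel 1 one layer, pixel 2 none.
print(run_iterations(stencil, [{0, 1}, {0}]))  # {0: 2, 1: 1, 2: 0}
```

A pixel whose stencil value lags the iteration number has fallen out of the valid area, so it can never be re-drawn in a later iteration.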

Front-to-Back Transparency

While correct sorted transparency is often a requirement for visualization, approximation often suffices at times in interactive environments. For example, many walkthrough systems do not render in full detail while the view is changing; resolution increases as soon as the user stops traversal. In a situation such as this, complete sorting and display of all transparent surfaces may not be necessary at all times; only a correct-looking approximation is desired.

While the above method does provide correct rendering of transparent surfaces, it is highly dependent on the number of overlapping transparent surfaces to achieve a correct-appearing image. Since the closest of overlapping surfaces is rendered last, it is not until the last iteration over the transparent surfaces that a correct-appearing image is produced. Until the correct number of iterations is reached, surfaces may appear to have holes in them which show transparent segments beneath. This section describes a technique which renders the transparent surfaces front-to-back, thereby displaying the closest transparent segments first and successively adding additional transparency which appears behind the currently displayed transparent objects. This produces a more correct-looking image at each iteration, with no transparent surfaces ever demonstrating the hole effect. This method can provide a realistic-looking image even if the correct depth is not reached, and the contrast between the two approaches can be readily seen at each iteration in Figs. and .

The front-to-back rendering method uses the DZ-buffer analogous to the reverse of the back-to-front method, except that the blending function and iteration initialization are different. In the back-to-front method, all opaque objects are first rendered and the transparent pixels are layered on top. In the front-to-back method, opaque objects are rendered with no color to initialize the Z-buffer at each iteration (which can be reduced to one iteration, as detailed below), and a final color rendering is blended only at the end of the process. This second rendering is unnecessary if two framebuffers, such as in double-buffering, are available, as the color can be rendered initially and subsequent transparency renderings can be made to the other framebuffer. The final blending of opaque and transparent buffers can be accomplished using a modified accumulation buffer.

Figure : Front-to-back iterations.

In the first iteration, all pixels of the transparent objects which are visible and closest to the viewer are rendered. A second iteration over the transparent objects then renders all pixels second closest. This process repeats to the depth of visibly overlapping transparent surfaces, each iteration rendering the next furthest pixel with the appropriate alpha blending. As with the back-to-front method, this can be a two-pass process using the stencil planes, or a one-pass process using a second image buffer.

Again, the use of the DZ-buffer in this method is seen in each iteration. At each iteration, one Z-buffer is designated as PREVIOUS and one is designated as CURRENT. The buffer to be written to is CURRENT. Z comparisons must be closer than CURRENT yet further than PREVIOUS in order to be written to CURRENT. As CURRENT gets updated during this iteration, only the closest pixels which are further than the last iteration are written.

draw_scene()
    accbuf(ACC_CLEAR)                   /* clear accumulation buffer */
    PREVIOUS = 1
    CURRENT = 2
    zclear(PREVIOUS, ZMIN)              /* init z values */
    zclear(CURRENT, ZMAX)
    zwrite(CURRENT)
    zfunction(CURRENT, ZF_LESS)
    zfunction(PREVIOUS, ZF_NONE)        /* normal Z-buffer operation */
    blendfunction(BF_ONE, BF_ZERO)      /* normal opaque drawing */
    draw_opaque_objects()               /* draw opaque with color */
    accbuf(ACC_ACCUMULATE)              /* save image */
    clear()                             /* clear screen */
    cblendfunction(BF_DAxBF_MSA, BF_ONE) /* color-alpha blending */
    ablendfunction(BF_ZERO, BF_MSA)     /* alpha-alpha blending */
    do
        ftb(PREVIOUS, CURRENT)
        temp = PREVIOUS                 /* swap buffers */
        PREVIOUS = CURRENT
        CURRENT = temp
    while (something drawn)
    blendfunction(BF_DA, BF_ONE)        /* set final blend */
    accbuf(ACC_RETURN)                  /* blend in opaque image */

ftb(PREVIOUS, CURRENT)
    zfunction(CURRENT, ZF_LESS)         /* closest and */
    zfunction(PREVIOUS, ZF_GREATER)     /*   further than last */
    zwrite(CURRENT)
    wmpack(0x0)                         /* disable color */
    draw_opaque_objects()               /* initialize z values */
    if (TWO_PASS)                       /* if using only one color buffer, */
        wmpack(0x0)                     /*   then disable color */
    else                                /* else */
        wmpack(0xffffffff)              /*   enable color */
    for (i = 0; i < nsurfs; i++)        /* loop through surfaces */
        stencil(NONE, ZERO, REPLACE, i)
        draw_surface(i)                 /* Z-sorted further than PREVIOUS */
    wmpack(0xffffffff)                  /* enable color */
    if (TWO_PASS)
        zfunction(CURRENT, ZF_NONE)     /* turn off Z-buffering */
        zfunction(PREVIOUS, ZF_NONE)
        for (i = 0; i < nsurfs; i++)
            stencil(EQUAL, i, KEEP, STEN_VALUE)
            draw_surface(i)

Like the back-to-front method, two passes must be made of the transparent objects at each iteration. Alpha values are used as a transmission coefficient for the blending function (which is not supported in the current Silicon Graphics GL definition):

    I_D = I_D + k_tD (1 - k_tS) I_S
    k_tD = k_tD k_tS

where the destination transparency k_tD is updated at every iteration.

After the transparent surfaces have been rendered, they are blended with the opaque rendered surfaces using a variation of the interpolated transparency equation. As the colors have already been multiplied by the transmission values during the iterative steps, the resulting final blend becomes

    I_D = I_S + k_tS I_D

This is accomplished through use of the modified accumulation buffer, which permits blending on the return mode.
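The front-to-back recurrence and final blend can be checked numerically against the back-to-front formulation, since both must produce the same image. This is an illustrative Python sketch (names are ours, not part of the GL framework):

```python
def front_to_back(layers, opaque):
    """layers: (intensity, transmittance) pairs ordered near to far.
    Per layer: I_D = I_D + k_tD*(1 - k_tS)*I_S ; k_tD = k_tD*k_tS.
    Final blend adds the opaque image attenuated by total transmittance."""
    i_d, k_td = 0.0, 1.0
    for i_s, k_ts in layers:
        i_d += k_td * (1 - k_ts) * i_s
        k_td *= k_ts
    return i_d + k_td * opaque

def back_to_front(layers, opaque):
    """layers ordered far to near: I_D = (1 - k)*I_S + k*I_D."""
    i_d = opaque
    for i_s, k in layers:
        i_d = (1 - k) * i_s + k * i_d
    return i_d

layers_near_to_far = [(1.0, 0.5), (0.5, 0.8)]
a = front_to_back(layers_near_to_far, 0.2)
b = back_to_front(list(reversed(layers_near_to_far)), 0.2)
print(abs(a - b) < 1e-9)  # True: both orders give the same pixel
```

The advantage of the front-to-back form is that a, the running image, is already plausible after any prefix of the layers, which is what eliminates the hole effect.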

As can be seen in the above description, the opaque objects are re-rendered at each transparency iteration to reinitialize the CURRENT Z-buffer. This can be a very costly process in typical environments which have numerous opaque polygons. This costly step can be eliminated by noting that the Z values produced are the same at each iteration, and therefore need only be rendered once and stored. While this can be accomplished with a buffer copy at each iteration, the buffer transfer is typically expensive without specialized hardware. An extension of the DZ-buffer to include a third buffer, creating a Tri-Z-buffer or TZ-buffer, obviates this buffer transfer. This third buffer, named STATIC for our purposes, is used as the write buffer for the initial opaque rendering. The method then proceeds as above, with the addition of a third Z compare function

    zfunction(STATIC, ZF_LESS)

replacing the rendering of the opaque objects within the front-to-back sorting function. This third zfunction additionally compares the transparency Z values against the opaque values at each iteration.

Translucency

Refraction and reflection need not be limited to purely transparent or purely specular surfaces, respectively. We can create a multitude of materials by rendering the refractive surface again after the refractive image is drawn. This second rendering is alpha-blended with the refracted scene. On systems such as the Iris Indigo(TM) system, this actually requires two additional renderings of the refractive face, since lit faces cannot have a source alpha value. The first rendering is done without color and sets the destination alpha value to the appropriate value. The second rendering is with lighting and a blending function depending on the destination pixel chosen. This permits hardware shading effects and other hardware rendering features, such as textures, to be blended with the refracted scene. By adjusting the alpha blending values, the shininess or specularity of the material can be controlled. Coordinated with texturing, materials such as polished marble or wet tile can be simulated, as seen in Figure .

Figure : Partial specular image: (a) dull floor, (b) shiny floor.

In addition, translucent and other light-dispersing materials can be simulated using the hardware fog feature and the stencil planes. Translucent objects act as a filter, with closer objects more clearly visible than farther objects due to the random refractions which take place [KG]. This effect can be approximated using hardware fog features, with the minimum fog set at the refractive plane distance and the maximum at the desired distance, depending on the material property. Although fog is linear with respect to the view, the approximation is many times fairly accurate due to the limited angular displacement of the refracting plane because of the critical angle. A more versatile fog function supporting a general linear transform would provide much more accurate translucency and light scattering for reflective surfaces. By incorporating multisampled stochastic X, Y shearing about the specular surface's normal axis, light scattering through the translucent material is additionally simulated. This is accomplished by accumulating, using an accumulation buffer [HA], intermediate stochastically-sheared specular images to produce the final scattered effect.

A 4x4 transform is created which includes a stochastically generated X, Y shear. This transform is premultiplied by the global inverse transform of the specular face normal and postmultiplied by the global transform of the normal, with the resulting transform pushed onto the view matrix stack. This creates a shear linear with respect to the perpendicular distance from the specular surface.
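The construction of one jittered shear can be sketched as follows. This is an illustrative Python fragment (names are ours), assuming the normal's global frame is supplied as a 4x4 rotation matrix R with inverse R_inv, and using plain nested lists rather than any particular matrix library:

```python
import random

def matmul(a, b):
    # 4x4 matrix product over nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def jittered_shear(r, r_inv, magnitude, rng=random.random):
    """Build R_inv * Shear * R: an X,Y shear whose offset grows
    linearly with the Z distance measured in the normal's frame,
    i.e. with perpendicular distance from the specular surface."""
    sx = (rng() - 0.5) * 2 * magnitude   # stochastic X shear
    sy = (rng() - 0.5) * 2 * magnitude   # stochastic Y shear
    shear = [[1, 0, sx, 0],
             [0, 1, sy, 0],
             [0, 0, 1,  0],
             [0, 0, 0,  1]]
    return matmul(r_inv, matmul(shear, r))
```

One such transform is pushed onto the view matrix stack per stochastic sample; accumulating the resulting images produces the scattered effect.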

An overview of the entire rendering process is seen in the following code:

    mask_face(spec_face)                /* set stencil area for face */
    reclassify_camera(Camera)           /* move camera to virtual viewpoint */
    enable_clip_plane(spec_face)        /* clip geometry on wrong side of face */
    for (i = 0; i < num_samples; i++)   /* loop over all stochastic samples */
        shear_view(Camera, spec_face, sample[i])  /* shear view stochastically
                                           around specular face normal */
        draw_window(Camera)             /* recursively draw reflected/refracted view */
        accbuf(ACC_ACCUMULATE)          /* accumulate intermediate image */
    disable_clip_plane(spec_face)       /* turn off clipping plane */
    accbuf(ACC_RETURN)                  /* display composite image */

By adjusting the jitter amount and the fog parameters, this method can provide a range of effects similar to those produced by analytic methods [Arv], although at a much lower cost. Figure compares analytically-generated images (a) and (b) with our multi-pass images (c) and (d). Our frosted glass images were each generated in less than seconds, as compared to seconds in the analytic approach. Figure compares the two approaches for a scattering reflective surface. Here, the analytically-computed image (a) required minutes of rendering time, as compared to under seconds for multi-pass images (b) and (c).

Figure demonstrates the dispersing nature of a translucent scattering surface under varying shear multipliers m and fog parameters fmax. Note how the elongated blue beam is clearer the closer it is to the glass. Figure again demonstrates this effect in conjunction with texture mapping to simulate a shiny marble surface.

As this method uses hardware-based rendering, image size has minimal effect on timings, in contrast to ray tracing. In systems where rasterization is independent of polygon size (i.e. Pixel-Planes [FP]), image size is not a consideration.

Butterfly design by Elsa Schmid [Scha].

Figure : Frosted glass: (a) low-scattering glass, analytic; (b) high-scattering glass, analytic; (c) low-scattering glass, multi-pass; (d) high-scattering glass, multi-pass.

Figure : Glossy table: (a) glossy table, analytic; (b) glossy table, multi-pass; (c) glossy wood, multi-pass.

Figure : Frosted glass under varying shear multipliers m and fog parameters fmax.

Figure : Glossy marble wall.

Chapter

Shadow and Light Volumes

Whereas the previous chapter discussed one of the characteristic features usually found only in ray tracing, namely specular image generation, this chapter addresses another. This feature is shadowing, resulting both from direct and specular illumination.

Shadow Volumes

Our shadows are implemented based on Brotman and Badler's [BB] extension of Crow's shadow volume method [Cro]. This technique uses the plus/minus principle of silhouette faces to mask regions inside the shadow volume. The use of shadow volumes is more suitable to our application than the shadow buffer [RSC] method due to several factors. These include the limited field of view of shadow buffers, as well as limited resolution. Finally, the most serious drawback for a dynamic environment is that movement of one object requires recalculation of all of the shadow buffer images.

With the shadow buffer method, projective textures are used to cast shadows in the environment. This is accomplished by creating a light-view depth texture of the environment and using this image to map the shadow. Yet as this light-view analogy implies, this shadow buffer is directed along a particular view, and therefore is limited by the field of view from the light source. To produce the effects of an omnidirectional light source, multiple depth maps from several views must be used. This requires multiple renderings of the scene; one rendering must be performed for each view.

In addition to this multiple-view requirement, shadow resolution is limited by the resolution of the shadow buffer image for each view. A shadow buffer pixel is mapped into the camera-space image to determine the shadowed area in that scene. Therefore, a single shadowed pixel may be mapped to a large number of image pixels, depending on relative image ratios, location of the light source relative to the objects, and location of the camera. Even with texture sample-filtering functions, this can often result in block pixelation of the shadow.

Finally, the most serious limitation of the shadow buffer method is that since shadows are based on light-view images, object shadows are not independent. Movement of one object requires recalculation of all of the shadow buffer images. This is not conducive to a dynamic environment.

Where the shadow buffer method suffers from the above limitations, the shadow volume method obviates them. The shadow volume is an omnidirectional method, casting shadows in every direction from a light source. Shadow resolution is at view-space resolution, resulting in pixel or subpixel accuracy. Object shadows are mostly independent, which permits recalculation of shadows from only those objects which have moved.

The shadow volume technique does, however, suffer from several shortcomings of its own. These include its reliance on manifold b-rep objects and uncomplicated polygonal geometry for accurate determination of the silhouette volume, and its inability to correctly render shadows when the camera is located inside the shadow volume itself. Both of these problems were addressed within the scope of this work. The dependent case of refractive shadows is stated in the following section.

Silhouette Volume

As described previously, the shadow volume method casts a volume from an object in the direction of the light rays. This volume is drawn for the silhouette edges of the object as seen from the light source casting the shadow. These edges represent the boundary between light and dark areas of the object, and the silhouette volume generated thereby encompasses all areas in shadow. This method therefore requires both the ability to determine which edges of the object are silhouette edges, and the ability to assign a direction from the corresponding silhouette face for that edge. The latter is necessary for determining the plus/minus count of the shadowed area.

Determining the silhouette edges of a manifold convex boundary representation (b-rep) requires a simple inner product check of face normals to light rays. Each of the two faces sharing an edge has its normal dotted with the incoming light ray. A silhouette edge is found when these inner products are of opposite sign.
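The inner-product test can be sketched directly. This is an illustrative Python fragment (the mesh representation and names are ours):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def silhouette_edges(edge_faces, normals, light_dir):
    """edge_faces: edge -> pair of adjacent face indices.
    An edge is on the silhouette when the two shared faces'
    normals dotted with the light ray have opposite signs."""
    result = []
    for edge, (f0, f1) in edge_faces.items():
        d0 = dot(normals[f0], light_dir)
        d1 = dot(normals[f1], light_dir)
        if d0 * d1 < 0:
            result.append(edge)
    return result

# Lit front face (0) and unlit back face (1) share edge 'e0';
# faces 0 and 2 share edge 'e1', with face 2 edge-on to the light.
normals = {0: (0, 0, 1), 1: (0, 0, -1), 2: (1, 0, 0)}
edge_faces = {'e0': (0, 1), 'e1': (0, 2)}
print(silhouette_edges(edge_faces, normals, (0, 0, -1)))  # ['e0']
```

Note the edge-on case ('e1', product exactly zero) is not classified as a silhouette edge; this is precisely the degenerate situation, discussed below, in which the inner-product test may fail.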

If an object is not a manifold b-rep, constructing the silhouette volume is not straightforward. An object whose boundaries are manifolds has boundary points with neighborhoods of topological disks. Therefore, any point lying on an edge can be examined as a silhouette edge by examining the direction of a single topological disk. In essence, manifolds guarantee that every edge is a true boundary edge, which can therefore be tested as a silhouette edge as described above. If an object is not a manifold, an edge can lie in more than one topological disk, and therefore be shared by more than two faces. Considering that the silhouette face gets its normal direction from the normal direction of the shared faces, having more than two shared faces can present a conflict.

For objects which are not manifold b-reps, rendering the silhouette volume for every face of the object produces a correct but more costly shadow volume. This method is also necessary for certain instances where the orientation of a face of a manifold b-rep may be directly in line with the shadow ray, and therefore the inner-product test may fail.

Figure : Shadow volume clipped by front clipping plane (light source and shadow volumes).

View in Volume

Inherent in the shadow volume methodology is the assumption that the camera is not located in the shadow volume, because a positive count (as opposed to zero) represents a shadowed region. In interior environments, the light is often obscured by some geometry such as a lamp shade. If the camera is inside the shadow volume, the shadow test must be modified. In essence, we need to cap the shadow volume before the camera.

If the entire camera viewport image plane lay within the volume, capping could simply be accomplished by shifting the shadow volume pixel values by one for the entire scene. Unfortunately, the shadow volume often straddles the viewport, with part of the viewport lying within the volume and part outside the volume, as seen in Figure . In this figure, most of the rightmost cube is incorrectly shadowed.

An actual cap of the volume at the front clipping plane is most appropriate; in practice, however, it is difficult and expensive to compute. This requires performing the same perspective transform and clipping which the hardware already performs; therefore, determining the cap face is a redundant imaging process and is not assisted by the rendering pipeline. In addition, this process would introduce other errors, in that this face could not lie exactly on the clipping plane, as it would then also be clipped away. An offset would have to be added, which could produce incorrect shading in close objects.

While calculating a capping face is expensive and introduces clipping errors, there is a method which uses the shadow volume rendering itself to effectively cap the volume at the clipping surface. To perform this, note that the cap need only take place in image space, not in 3-D coordinate space. The cap must simply overlay the pixel area enclosed by the silhouette volume. Based on this, we note that the image-space area of the cap of a clipped enclosed volume is equal to the visible internal pixels of the volume. This observation was noted in the capping of clipped surfaces demonstrated by Akeley [Akeb].

While the shadow volume is not an enclosed volume, it is simple to render it as a single open-ended volume whose opening is at the clipping plane. Normally, the shadow volume for a single face (a bounded plane with the light source positioned directly behind it) is an open-ended truncated conical polyhedron, with openings at the top and base. The top opening is bounded by the opaque face itself, with the bottom bounded by the front clipping plane. If the shadow volume faces are extended to the light source, the volume becomes an open-ended pyramid whose sole opening, at the base, is defined by the clipping plane.

Using this rendering method in conjunction with the cap area relationship, the shadow volume cap can be readily generated for view-enclosed volumes by performing a modified second rendering of the affected shadow volume. This second rendering is performed replacing the shadow volume vertices along the object with the light source location. In addition, this rendering is performed with Z-buffering disabled, as the cap is not affected by other geometry, since it is located at the front clipping plane. This rendering also uses the plus/minus stencil function; however, the incrementing and decrementing functions are reversed, to thereby cap the internally-visible area.

The use of this method is valid for any shadow volume geometry from any viewpoint. This method does not suffer from any potential gapping problems which could occur in actually generating a cap. In addition, this method does not cause false shadows: if the view actually lies outside of the volume, the shadow volume remains unchanged. While this fact makes correct shadow generation possible by double-rendering every shadow volume, this method is impractical, as it is twice as costly. Instead, we perform an intersection check of the light-to-view vector with the bounding box of the generating object, and subsequently an object-intersection check if the bounding-box test passes. If an intersection occurs, a magnitude check of the view-to-light and view-to-object vectors is performed to validate that the view is indeed inside the volume and not on the other side of the light source. Only then is the second rendering of the shadow volume performed.
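The combined test can be sketched as follows. This is an illustrative Python fragment (names are ours): the bounding-box and object-intersection checks are reduced to a given predicate, and the magnitude check is simplified to comparing squared distances from the light, which distinguishes a view beyond the occluder from one on the light's side:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def norm2(v):
    return sum(x * x for x in v)

def needs_capping(view, light, obj_center, ray_hits_object):
    """The second (capping) rendering of the shadow volume is needed only
    when the light-to-view ray passes through the occluder AND the view
    lies beyond the object (inside the volume), not beyond the light."""
    if not ray_hits_object:   # bounding-box / object-intersection check failed
        return False
    # magnitude check: view must be further from the light than the object
    return norm2(sub(view, light)) > norm2(sub(obj_center, light))

# Light at the origin, occluder centered at z = 1: a view at z = 2 is inside
# the volume; a view at z = 0.5 sits between the light and the object.
print(needs_capping((0, 0, 2), (0, 0, 0), (0, 0, 1), True))    # True
print(needs_capping((0, 0, 0.5), (0, 0, 0), (0, 0, 1), True))  # False
```

Only when this predicate holds is the cost of the second shadow-volume rendering incurred.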

Specular Shadows

While the specular image methods described in Chapter do produce accurate reflective images and close approximations for refractive surfaces, they do not produce accurate lighting effects from these surfaces. Light reflects off a mirror and refracts through glass, producing different shadows than if not present. To produce a more accurate image, these effects must also be taken into account. Therefore, any shadow generation method must not only work in cooperation with the stenciling method described above, but it must also be affected by the reflective and refractive surfaces in a scene.

To understand how the shadowvolume metho d must b e extended for refractive

ys the complex shadow patterns surfaces examine Figure This gure displa

caused by ob jects on b oth sides of a refracting surface Note that this is not an exact

representation but instead a hybrid mo del used in our system to greater demonstrate

the refracting eects The rays are refracted as in a change of medium they do not

represent true inout refraction of a material with a thickness With inout refraction 1. Primary Light 5. Cube’s Refracted Shadow 2. Primary and Refracted & Sphere’s Shadow from Light (not present when Refracted Light exit refractions) 6. Sphere’s Shadow from 3. Cube’s Refracted Shadow Refracted Light & Primary Light 7. Refracted Light 4. Cube’s Refracted Shadow 8. Refracting Surface’s

Shadow

Figure Lightinteraction with refractive surface

the refracted rays are parallel to the incident rays and merely offset, thereby not permitting direct light to fall within the light volume [KG]. Although our included images were generated with this change-of-medium model, in/out refractions are achieved merely by changing the refracting function, or by placing back-to-back refracting faces with opposing indices of refraction in the current model.

To accurately model shadows, each of the above-mentioned features must be included in our shadow model. To accomplish this, we require a multi-pass rendering method to produce the effects demonstrated in Figure.

Intersection Recursion

Our first implementation of specular shadow generation, introduced in [DB], was a two-pass approach. The first phase generated all shadows and lighting falling within the refracted light area. The second pass rendered all lighting and shadows outside this area. This method creates both the shadow and caustic effects of the refractive surface.

[Figure: Refracted Caustics and Shadows]

In the first pass, a light volume [Nis] was generated for the refracting face. Shadow volumes were then generated for shadows falling inside this volume. This itself included two cases, namely objects inside the volume generating shadows, and objects outside the volume whose shadows get refracted into the volume. In the first case, the shadow volume cannot intersect the refracting plane, for to do so would place the object outside the light volume. In the second case, the shadow volume must intersect the refracting plane in order to be refracted into the light volume. Because true in/out refraction results in refracted rays parallel to incident rays, objects outside the light volume cannot cast shadows into the light volume directly from the primary light source. Both intersection cases can be checked during the shadow-volume generation. A simple pre-shadow-generation check using dot products can determine if the object is on the appropriate side of the refracting plane, and can save having to generate the shadow volume.

The case of reflection was similar, and the above holds true for single shadow levels (umbra only).

[Figure: Reflected light and shadows. Legend: 1. Primary and Reflected Shadows; 2. Reflected Shadow Only; 3. Primary and Reflected Light; 4. Reflected Light's Shadow Only.]

Reflection proved more complicated, however, due to the light volume's ability to cast shadows into the light volume of the primary light source. With this, objects both outside and inside the reflected light volume can cast reflected shadows. Additionally, objects outside the reflected light volume can cast shadows into the volume. This resulted in several levels of blending from one object, as seen in Figure.

The second pass of the Intersection Recursion method created shadows for the entire environment. Even the refracted light-volume region is included; this captured the shadow effect caused by the refractive surface itself.

Virtual Position Recursion

The original Intersection Recursion implementation of this work suffered from several shortcomings; most important were unneeded shadow-volume recomputation and incorrect multi-source blending. These issues were addressed in the current implementation.

As noted in Section, light rays, and therefore shadow rays, are refracted or reflected upon interaction with a specular surface. In the implementation described in Section, as shadow rays were traced from the generating object, they were checked for intersection with specular objects. If an intersection occurred, the ray was bent appropriately, and the intersection check recursively proceeded for the bent ray.

Unlike the refraction approximation for the secondary viewpoint from Section, each ray in the Intersection Recursion method was independently bent according to Snell's Law. This method therefore produced exact shadow tracing for both reflective and refractive surfaces. It can, however, be an expensive computation, since intersection checks must occur for each original as well as each refracted/reflected shadow ray. If the environment has many specular objects, the intersection checks can be prohibitive for an interactive system. There exist many ray-tracing-based methods to facilitate this intersection check, but most require some preprocessing subdivision of the environment [FTI]. In our dynamic environment, these methods are inappropriate.
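The per-ray bending step of the Intersection Recursion method is ordinary Snell's-law refraction of a direction vector. A minimal sketch, with our own naming conventions (the system's actual refracting function is not shown in this excerpt):

```python
import math

def refract(incident, normal, n1, n2):
    """Bend unit direction `incident` crossing from medium n1 into n2 at a
    surface with unit `normal` pointing back toward the incident side.
    Returns the refracted unit direction, or None on total internal
    reflection (Snell's law in vector form)."""
    eta = n1 / n2
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * i + (eta * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))
```

At normal incidence, or when n1 = n2, the direction passes through unchanged, which is the sanity check that makes the recursive ray bending degrade gracefully.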

As the intersection testing is very expensive, an approximate solution analogous to the secondary viewpoint method now replaces the Intersection Recursion method in the current system. This method, Virtual Position Recursion, or VPR, requires no intersection checks and performs recomputation of shadow volumes only when necessary. This method is exact for reflective surfaces, and as accurate as the specular image generation for refractive surfaces; refracted shadow rays use the same paraxial approximation previously described.

Whereas the Intersection Recursion method traced shadow rays to specular objects, the current VPR method computes shadow rays for all refracted and reflected virtual light sources. A reflected shadow ray traces the same path as a shadow ray produced from the associated light source's reflected position. For refractive surfaces, this virtual light source position approximates the refracted shadow-ray direction for paraxial rays. Therefore, instead of recursively performing intersection checks for each shadow ray, an approximate solution can be achieved by generating shadow rays for all virtual reflected/refracted light positions. In addition, since each ray is associated with a light source and a specular face, these rays can be stored with the generating object, and need be regenerated only when either the object, the specular face, or the light source has moved.
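For a planar mirror, the virtual light position is simply the real light position mirrored across the specular plane; a shadow ray cast from this virtual position traces the reflected path with no intersection test at all. A small illustrative sketch (function name assumed):

```python
def reflect_point(p, plane_point, plane_normal):
    """Mirror point p across the plane through plane_point with unit
    plane_normal: the virtual light position used by VPR in place of
    per-ray intersection tests."""
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, plane_normal))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, plane_normal))
```

Mirroring twice across the same plane recovers the original position, which is why a stored virtual position stays valid until the light or the specular face moves.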

In the actual VPR implementation, shadow silhouette faces are stored rather than individual rays. Each silhouette face is stored as four vertices and a face normal. All silhouette faces for an object are stored together, along with information on the casting object, the generating light source, the recursive level, and the specular object face creating the virtual light position. An update flag is also maintained, which is set when one of the shadow-associated objects is moved. Only shadow volumes whose flag has been set are recalculated during the appropriate image pass. While this storage does significantly speed the rendering loop, it does require significant overhead: for our bathroom environment, with four light sources at one level of recursion and two specular faces, this required megabytes of shadow storage.
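The stored silhouette record and its update flag might be organized as follows. This is a hypothetical sketch of the bookkeeping only; all field and function names are ours, not the system's:

```python
from dataclasses import dataclass, field

@dataclass
class SilhouetteFace:
    """One stored shadow-silhouette quad: four vertices plus a face normal."""
    vertices: tuple                 # four (x, y, z) tuples
    normal: tuple                   # (nx, ny, nz)

@dataclass
class ShadowRecord:
    """Cached shadow volume for one (object, light, specular-face) triple.
    `dirty` is the update flag: set when any associated object moves, so
    the volume is rebuilt only during the next relevant image pass."""
    caster_id: int
    light_id: int
    recursion_level: int
    spec_face_id: int               # face creating the virtual light (-1: none)
    faces: list = field(default_factory=list)
    dirty: bool = True

    def mark_moved(self):
        self.dirty = True

    def rebuild_if_needed(self, build):
        if self.dirty:              # recompute silhouettes only when flagged
            self.faces = build()
            self.dirty = False
```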

During each image pass, which includes the camera view as well as any reflected or refracted views, shadow volumes are stenciled for the entire environment. In addition, this process is recursively called for each specular face in the environment. This generates shadows and incrementally adds lighting from secondary sources. While this method is exact for secondary sources, tertiary and higher-order sources are mere approximations unless additional rendering passes are made, as the light volumes become much more complex.

As noted above, the current implementation now stores shadow volumes to save recomputation at each frame. While this method significantly reduces rendering times, it requires significant amounts of memory for multiple levels of transport. This is due to the exponential nature of shadow generation when specular transport is involved. To understand this, examine Figure, in which two specular surfaces are involved.

[Figure: Shadow volumes generated involving R2. Lightsource L, reflecting planes R1 and R2, virtual lights L1 = R1(L) and L12 = R2(R1(L)), object O and virtual objects O1, O2, O12, with shadow volumes: S_O = Shad(L,O); S_O1 = Shad(L1,O1) = R1(S_O); S_1O = Shad(L1,O); S_O12 = Shad(L12,O12) = R2(S_O1) = R2(R1(S_O)); S_1O2 = Shad(L12,O2) = R2(S_1O); S_12O = Shad(L12,O).]

To accurately render the lighting effects emitted from mirror R2 at two levels of recursion, it can be seen that object O produces two shadow volumes involving specular transport from mirror R1 only, and three shadow volumes involving specular transport from mirror R1 to mirror R2. Each additional specular surface or level of recursion requires an exponential increase, because it introduces shadows not only from its final virtual position, but from each of its intermediary positions at each level of recursion. In essence, this is due to the fact that a single object can block the light both reaching and coming from a mirror at each specular transport. The resulting shadow for that object at that level of recursion is the intersection of each of these intermediate shadow volumes.

Shadow Volume Reclassification

While it first seems that all of the real and virtual position shadow volumes must be independently stored, and in fact originally they were, this is not necessary, and indeed proved too costly. Instead, it can be noted that shadows which pass through a specular surface are simply a transformed version of the original shadow, and therefore a transformed version of the original shadow volume is valid. Again referring to Figure, it can be seen that shadow volumes S_O12 and S_1O2 are reflected versions of S_O1 and S_1O respectively, the former itself a reflected version of the shadow directly from the light, S_O. Only S_1O and S_12O are distinct shadow volumes, in addition to the original S_O.

This presents a basis on which to reuse, or reclassify, previously-computed shadow volumes. For each virtual light resulting from a specular surface, shadows are generated from an object's real position (i.e., S_1O) and from its virtual position (i.e., S_O1). These virtual-position shadows are transformed versions of the original shadows, as both the object and light have been transformed by the same operation. Therefore, the original shadow volume can be transformed by this same reflection or refraction. This saves recomputation of these shadow volumes, and only one additional matrix multiply of the specular transformation is required before all virtual-position specular shadow volumes are rendered.
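For a reflecting plane, the specular transformation applied to a stored shadow volume is a single 4x4 reflection matrix; pushing every stored vertex through it yields the virtual-position volume with no silhouette recomputation. A sketch for a plane n.x = d with unit normal n (names and conventions are ours):

```python
def reflect_matrix(n, d):
    """Row-major 4x4 matrix reflecting points across the plane n.x = d
    (n must be unit length). Applying it to every vertex of a stored
    shadow volume yields the reclassified virtual-position volume."""
    nx, ny, nz = n
    return [[1 - 2*nx*nx,    -2*nx*ny,    -2*nx*nz, 2*d*nx],
            [   -2*ny*nx, 1 - 2*ny*ny,    -2*ny*nz, 2*d*ny],
            [   -2*nz*nx,    -2*nz*ny, 1 - 2*nz*nz, 2*d*nz],
            [        0.0,         0.0,         0.0,    1.0]]

def apply(m, p):
    """Transform point p = (x, y, z) by the 4x4 matrix m (w assumed 1)."""
    x, y, z = p
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3] for i in range(3))
```

A reflection is an involution, so transforming a volume twice restores it, mirroring the figure's relation S_O12 = R2(R1(S_O)) built by composing such matrices.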

Whereas the current implementation generates shadows from objects on both sides of the specular face (shadows which are reflected, and shadows from light which is reflected), it also limits its generation of shadows from previous specular transports to two levels. While this method works correctly for the interaction of two specular faces, it may add light where it should be obscured in higher orders of specular-specular transport. This problem is somewhat minimized due to the r² falloff nature of light dispersion.

It must be noted that although the same shadow volume may be used at several recursion levels, view-in-volume checks must be performed at each level based on its transformed position. As mentioned in Section, the view-in-volume check is performed on the bounding box of each segment. To accomplish this, a conventional ray-versus-axis-aligned-box intersection method is used. The light-to-view vector is transformed by the inverse of the transformed global segment's position, which places it in the local coordinate system of that segment. This transformed vector can then be checked against the static, precomputed local bounding box of the segment.

In addition, further checks must be performed to omit shadows which were present in a lower level of recursion but are on the wrong side of one of the successive specular faces along the transport. Otherwise, an object behind a mirror could cast a false shadow after its volume is transformed. This is accomplished with checks of the shadow volume against the view and light positions relative to the specular surface. While this must be performed for each recursion level as the shadow volume is transformed, the check is readily accomplished with a simple inner-product test of the transformed positions. The simplicity and inexactness of this view-side check introduces further complexities, however.

As is readily apparent, specular shadows should be only those shadows which fall in the light volume. Exclusion of shadows based solely on which side of the specular face they originate fails to eliminate many shadow volumes which fall outside this light volume. These false shadow volumes currently also get rendered, although they affect only objects outside the stenciled light volume. While it first seems this would not cause a problem, as only objects in the light volume's stenciled area are re-rendered, it can, for certain views, create erroneous shadows. This is due to the reclassification of shadow volumes and the nature in which they are generated.

When shadow volumes are generated, they are assumed to be capped at one end by the generating object itself. This assumption is no longer valid when these shadow volumes are reclassified and moved for specular shadows. The generating object is no longer rendered, and therefore the transformed shadow volume is now open-ended. For certain environments where shadow volumes are rendered outside the light volume, the view may be looking into the shadow volume from where the generating object would normally be. This presents a situation analogous to the view-in-volume problem addressed previously; subsequently, an analogous and even simpler solution is available.

As the reclassified specular shadow volumes now originate on the light side of the specular surface, enclosure of objects before the specular surface can be ignored, as those pixels are not re-rendered. Therefore, we can make a closure of the open end by again extending the vertices which originated at the silhouette edges up to the light source. This extension prevents viewing the inside of the volume through the opening where the originating object would normally be. This creates an open-ended pyramid which can then be processed as normal, including full view-in-volume checks.

Light Volumes

When light interacts with a specular surface, some of the light is reflected or transmitted, thereby creating an enclosing volume of the specular light. This light volume can be viewed as light coming from a secondary source, whose position is the virtual position of the light source reflected or refracted by the specular surface, and whose intensity is modulated by the specular material. This light source reclassification [CRMT], in conjunction with an associated light volume, is the basis for specular illumination in our system.

Single-Bounce Light Volumes

Light volumes from secondary sources are generated in the same Virtual Position Recursion manner as the shadow volumes. Just as shadow volumes stencil the area in shadow, light volumes stencil the area in which light from secondary sources can be added, and subsequently omitted, thereby causing shadows. The light volume is generated using the same plus/minus shadow-volume method; however, the stencil values are in essence inverted, to permit rendering only in the shadowed area. Whereas with shadows the zero area represents the area not in shadow, and therefore the valid rendering area, with light volumes the zero area is the area outside the light volume. We therefore set all pixels of shadow-zero to be the refractive-zero value, and therefore not valid in subsequent shadow calculations, and conversely set all non-zero stencil values which are greater than the shadow minimum to be the shadow-zero value. Just as the shadow-zero value determines the valid render area for rendering shadows in a mirrored image, it also determines the valid area within a light volume. This refractive use of the shadow-zero value is further detailed in Chapter.
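The stencil inversion described above can be sketched as a per-pixel remapping. The concrete values are illustrative only, assuming an 8-bit stencil partitioned as in the Stenciling section; REFR_ZERO is our stand-in name for the refractive-zero value:

```python
SHAD_MIN, SHAD_ZERO, REFR_ZERO = 127, 191, 0   # assumed 8-bit partition

def invert_light_volume(stencil):
    """After the plus/minus pass over a light volume, pixels still at
    SHAD_ZERO lie outside the volume: retire them to REFR_ZERO so later
    shadow passes ignore them. Pixels counted above SHAD_MIN lie inside:
    they become the new valid render area (SHAD_ZERO)."""
    out = []
    for v in stencil:
        if v == SHAD_ZERO:
            out.append(REFR_ZERO)      # outside the light volume
        elif v > SHAD_MIN:
            out.append(SHAD_ZERO)      # inside: renderable for this source
        else:
            out.append(v)              # untouched (e.g. specular recursion)
    return out
```

The order of the two tests matters, since SHAD_ZERO itself is greater than SHAD_MIN.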

Multiple-Bounce Light Volumes

The original VPR implementation of this method, which eliminated the intersection check of shadow and light volumes with specular faces, generated the light volumes based solely on the virtual secondary light source position and the current specular face. This is not accurate, however, for sources involved in multiple specular-specular transports. With each specular-specular transport [WCG], the light volume is bent by each specular face through which it passes, and therefore the corresponding virtual light position is transformed as well. Furthermore, the light volume is reduced by each of the specular faces. This process is seen in the figures below. This first implementation correctly calculated the virtual light position, but the light volume was only effectively clipped by the final specular face. In cases where the visibility of the mirrors to each other is obscured, or where only a sliver of light is produced, this may cause lighting errors. These errors are in many cases minimal, considering the r² falloff of light sources as well as the non-perfect reflectance of materials.

While in many cases the described light-volume rendering is fairly accurate, it is not an exact solution. There are, however, methods to overcome this limitation which were not fully implemented within the scope of this work, due to their rendering costs, but which are briefly discussed for completeness. These methods are based on the same principles as described above, but involve further reduction of the light volume and additional rendering of shadow volumes.

As demonstrated in Figure, a light volume from a tertiary source should be clipped by both specular faces through which it passes.

[Figure: Light volume clipped by both refracting planes, showing lightsource L, refracting planes R1 and R2, and virtual lights R1(L) and R2(R1(L)).]

This includes the specular face R2 reflecting the image of a specular face R1 reflecting the light, as well as the face directly reflecting the light. This can be accomplished either by direct reduction of the light volume through intersection checks, or by multiple renderings of the light volume through each face to determine the valid overlapping region.

The first method is the basis for the earlier Intersection Recursion work, whereby shadows and light volumes were checked for intersection with specular faces. Where intersections were found, recursive processing of the spawned volumes proceeded. This method, while more exact, suffers to some degree from the same performance restrictions which ray tracers face.

The second method is based on additional rendering passes of the VPR implementation. As the correct light volume is the light which passes through each specular face along its path, this volume is also the intersection of the light volumes generated through each specular face. Therefore, the correct light volume can be created by stenciling each of the constituent light volumes to determine the overlapping area. This is the method implemented in the current system for two levels of specular transport, shown in the accompanying figures for reflective and refractive surfaces at recursive depths of one and two.

[Figure: Light volume clipped by both reflecting planes, showing lightsource L, reflecting planes R1 and R2, and virtual lights R1(L) and R2(R1(L)).]

The generated light volumes correspond to the shadowed environments shown in Figure. Whereas this method is exact for reflective surfaces, as demonstrated in Figure, it is an approximation for refractive surfaces: the intersection of disjoint light volumes fails to account for instances where only a partial light volume intersects a refracting face.

In addition, the current VPR implementation only uses two levels of light-volume clipping. While higher orders of specular transport are possible in the current system, the light-volume clipping itself is limited to the latter two specular faces. This approximation is not an inherent limitation of the methodology; rather, it was an implementation decision based on the goal of an interactive rendering system.

Light Accumulation

While the above implementation addressed the need to recompute shadow volumes at every frame, its benefit is limited without proper blending of the resulting shadow effects. The original implementation of this work relied on use of the accumulation buffer to overlay the independent shadow effects from real and virtual light sources. This not only produced incorrect blending, as ambient light was multiply added, but it also precluded the use of the accumulation buffer for other purposes, such as light jittering or antialiasing.

[Figure: VPR Light Volumes: (a)/(c) reflective surfaces and (b)/(d) reflective/refractive surfaces, at two recursion depths.]

[Figure: VPR Lighting and Shadows: (a)/(c) reflective surfaces and (b)/(d) reflective/refractive surfaces, at two recursion depths.]

The current implementation performs the compositing of independent light contributions without using the accumulation buffer. Instead, the hardware blending function is used in coordination with successive rendering passes, which rely on visible-surface computations from the initial rendering pass. The accumulation buffer is then used with jittered lights to accumulate complete images and produce soft shadows. These methods of light blending and accumulation are fully detailed in Section.

In addition to proper blending, the lighting contributions from the virtual light sources must themselves be modulated from the original source. This modulation occurs due to the partial transmittance or reflectance of the specular material, as well as other surface properties of the material. We include two means of modulation of the reclassified virtual light: attenuation and filtering.

Attenuation

As no surface is purely specular, not all light impinging on a surface is specularly transmitted or reflected. The amount of light which is transmitted or reflected is attenuated by the material properties of the surface.

For pipeline rendering, each specular surface and light source has an associated light volume which demarcates the specularly illuminated area. This area is illuminated during a successive rendering pass by a light source at its virtual position. This light source's emittance is first attenuated by the specular properties of the reflecting or refracting surface. Thus the light is effectively dimmed by each specular surface with which it interacts. In addition, further attenuation occurs simply because of the virtual position of the light source. This virtual position, along with the attenuated color, are used in the hardware lighting equation.
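The emittance attenuation along a specular transport path amounts to a per-bounce scaling of the source color before it is handed to the hardware lighting equation. An illustrative sketch, assuming one scalar coefficient per surface (a real material might attenuate each channel differently):

```python
def virtual_source_color(light_color, specular_coeffs):
    """Emittance of a reclassified virtual source: the real light's RGB
    color scaled by the specular reflectance/transmittance of each
    surface along the transport path, one coefficient per bounce."""
    r, g, b = light_color
    for k in specular_coeffs:
        r, g, b = r * k, g * k, b * k
    return (r, g, b)
```

Distance falloff from the virtual position is then applied by the hardware lighting model itself, so only the material term needs precomputing here.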

Filtering

In the previous sections, light volumes were created for light passing through transparent glass or reflecting off mirrored surfaces. These methods assumed specular surfaces of uniform coloration, density, and transparency. There are, however, many common materials which do not fall into this category, including beveled glass, prisms, and stained glass. Light interaction with these materials can, however, be simulated using a technique common to rendering complex materials, namely texture mapping.

As light passes through a surface, it may be filtered through the surface, thereby projecting a distorted image of the specular surface onto some diffuse surface. This projection of an image is the same principle used in projective texture mapping. Projective textures have been previously used for a variety of purposes, such as a slide projector onto a surface [DSG] and as a shadow caster based on a depth texture map [Wil]. In addition, this method has been used to apply a corrective perspective transformation to an image produced from a given vantage point [KNN]; conversely, Dorsey [Dor] used this principle to produce the perspectively distorted image itself.

While the principle of projective textures has appeared in computer graphics for years, it is only recently that many modern graphics architectures support some method of texture projection in hardware. Segal et al. [SKvW] demonstrated how projective coordinate transformations permit texture-coordinate assignment based on the depth maps described in [RSC]. With this facility available, we can simulate light interaction with many of the above-mentioned materials.

The principle of projective texturing relies on transformation of world coordinates to clip coordinates, as well as of world coordinates to light coordinates. In each of these cases, the transformed 3-D coordinates are mapped into a 2-D local coordinate frame of the light projector and the eye. These 2-D coordinates are given by x_s = x/w and y_s = y/w for the screen coordinates, and x_t = x^l/w^l and y_t = y^l/w^l for the texture coordinates. The corresponding transformation relating the texture coordinates to screen coordinates, as given in [SKvW], is

    Q^t = (a·Q1^l/w1 + b·Q2^l/w2) / (a·w1^l/w1 + b·w2^l/w2)

where Q^t is the texture coordinate corresponding to a linearly interpolated point along a line segment in screen coordinates, a and b are the screen-space interpolation weights of the two endpoints, and the subscripts denote the segment endpoints.
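The interpolation above can be checked numerically: with a = 1, b = 0 it reduces to the first endpoint's texture coordinate Q1^l/w1^l, and for equal screen w's it reduces to ordinary perspective division of linearly interpolated light-space values. A sketch (scalar Q per call; the full mapping applies this to each texture coordinate):

```python
def interp_texcoord(a, b, Ql1, w1, Ql2, w2, wl1, wl2):
    """Perspective-correct texture coordinate at screen-space blend
    (a, b), a + b = 1, between endpoints 1 and 2:
    Q^t = (a*Q1^l/w1 + b*Q2^l/w2) / (a*w1^l/w1 + b*w2^l/w2)."""
    num = a * Ql1 / w1 + b * Ql2 / w2    # light-space coordinate over screen w
    den = a * wl1 / w1 + b * wl2 / w2    # light-space w over screen w
    return num / den
```

Dividing the two 1/w-weighted sums is what cancels the screen-space perspective, leaving the projector's own perspective in the texture lookup.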

As seen in Figure (a), this transformation is dependent on local axial-aligned 2-D coordinate frames. Unfortunately, this is seldom the case in arbitrary environments, where the texture plane (i.e., the specular surface) is not axial-aligned with the light view. This is demonstrated in Figure (b).

As can be seen in Figure (c), there is a corresponding projection M33 of this texture plane onto the axial-aligned texture coordinates. As with the image mapping of a refractive image onto a specular plane described in Section, this transformation is a 2-D projection of quadrilateral to quadrilateral. In fact, this instance is the simpler mapping of rectangle to quadrilateral described in [Hec] and included in Appendix A.
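The unit-square-to-quadrilateral case reduces to solving for the eight coefficients of a 3x3 projective map in closed form, following the standard formulation; the coefficient naming below is ours, not the appendix's:

```python
def square_to_quad(quad):
    """Projective map taking the unit square (0,0),(1,0),(1,1),(0,1) onto
    quad = [p0, p1, p2, p3].  Returns coefficients (a..h) for the mapping
    (u,v) -> ((a*u + b*v + c)/(g*u + h*v + 1),
              (d*u + e*v + f)/(g*u + h*v + 1))."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    if sx == 0 and sy == 0:                       # parallelogram: affine case
        return (x1 - x0, x2 - x1, x0, y1 - y0, y2 - y1, y0, 0.0, 0.0)
    dx1, dx2 = x1 - x2, x3 - x2
    dy1, dy2 = y1 - y2, y3 - y2
    den = dx1 * dy2 - dx2 * dy1
    g = (sx * dy2 - dx2 * sy) / den
    h = (dx1 * sy - sx * dy1) / den
    return (x1 - x0 + g * x1, x3 - x0 + h * x3, x0,
            y1 - y0 + g * y1, y3 - y0 + h * y3, y0, g, h)

def warp(coeffs, u, v):
    """Apply the projective mapping to texture coordinates (u, v)."""
    a, b, c, d, e, f, g, h = coeffs
    w = g * u + h * v + 1.0
    return ((a * u + b * v + c) / w, (d * u + e * v + f) / w)
```

Verifying that all four square corners land on the quad corners is the quickest correctness check for such a mapping.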

The application of this transformation is seen in Figure, where the first image shows normal hardware-based texture projection, and the second shows the same projection with a non-axial-aligned texture plane. As with the refraction transformation, this requires a 3-D projective transform, which is simulated with a corresponding 2-D transform. Note that this approximation also affects Z-buffering, which in turn affects the visible-surface mapping in the environment. A more flexible pipeline architecture would enable a true and correct implementation of arbitrarily-oriented texture-plane projections.

While the above implementation is suitable for general slide-show projection, it alone is not suitable for simulation of light passing through a specular surface; this requires projecting the texture only onto surfaces receiving light. In the previously

described shadow implementation, all contributions of both real and virtual light sources occur after all shadow volumes of that source have been stenciled. This provides a convenient rendering stage for the projected texture, namely the rendering of the light volume. By applying the texture only during this stage, the light volume is thereby modulated by the texture. This produces the desired effect, which includes projection of the material properties only in areas not obscured by shadow. A detail of these effects is seen in Figure, where four real lights and four virtual sources shining through a beveled glass door produce eight partially overlapping texture patterns, which are occluded by the door-handle shadows.

[Figure: Light and view coordinate systems: (a) axial-aligned projection; (b) non-axial-aligned projection; (c) mapped non-axial-aligned projection.]

[Figure: Projected Textures: (a) axial-aligned projection; (b) non-axial-aligned projection.]

[Figure: Projected Texture Light Pattern]

Chapter

Multi-pass Process

In the previous chapters, the individual components of a multi-pass rendering system were introduced. These include shadow generation for real and virtual light sources, and specular image generation from a virtual viewpoint. Both features rely on manipulating the hardware matrix stack to create these virtual positions. Both features require multiple passes to render from these virtual positions. Additionally, both features use stenciling to mask the appropriate screen regions for each rendering pass. This chapter examines the interaction of these two similar rendering processes.

Recursion

The coordination of the separate processes is seen in the following pseudocode:

drawwindow(Camera)
    if (SPEC_ON)                        /* if reflection/refraction enabled */
        drawspecobjects(Camera)         /* draw reflected/refracted view */
    if (SHADOWS_ON)                     /* if shadows are enabled then */
        turnlightsoff()                 /* turn all lights off */
        drawobjects()                   /* draw non-specular objects unlit */
        for each light                  /* loop over each light and */
            makeshadows(light)          /* draw shadows in light volume areas */
    else                                /* else, if shadows aren't enabled, */
        drawobjects()                   /* draw non-specular objects lit */

drawspecobjects(Camera)
    for each specface                   /* loop over all reflected/refracted faces */
        maskface(specface)              /* set stencil area for face */
        reclassifycamera(Camera)        /* move camera to virtual viewpoint */
        drawwindow(Camera)              /* recursively draw reflected/refracted view */

makeshadows(light)
    envsten(AFTSPEC)                    /* objects inside light volume, after light has reflected */
    if (shadowlevel)                    /* if shadowing a light volume */
        envsten(PRESPEC)                /* objects outside light volume, before light has reflected */
    turnlighton(light)
    drawobjects()                       /* add light to scene except where shadow volumes */
    turnlightoff(light)
    makespecshadows(shadowlevel)        /* add light from virtual sources (reflected/refracted light) */

makespecshadows(light)
    for each specularface               /* loop over all reflective/refractive faces */
        speclight(light, specface)      /* move light to virtual reflected/refracted position */
        makelightvolume(light)          /* mask area where light volume falls */
        makeshadows(light)              /* create shadows in light volume area */

This process is readily apparent in Figure, which demonstrates the rendering sequence for an image with two specular faces. The entire scene is rendered for each specular face from the virtual viewpoint of that face, with the face attribute then blended with the final specular image. This represents the compounded virtual position if several levels of specular transport are involved. For each scene rendering, multiple passes are required for complete lighting and shadow generation. This process is detailed in Chapter.

Stenciling

Both the rendering of shadows and the rendering of specular images require use of the stencil planes. For specular image generation, the stencil plane not only masks the valid rendering area, but also acts as a recursive counter to enable depth-first traversal in the recursive rendering process.

We choose zero for our render-area stencil value; this is the stencil mask value for drawing at every level of recursion.

[Figure: Recursive Image Rendering, panels (a) through (r) showing the successive rendering passes.]

At each level of specular image recursion, all stencil values are incremented by one, setting the previous rendering area to one, and the new specular surface is drawn, setting its stencil value to zero. This creates a mask of zero stencil values at the pixels where the specular surface was rendered (the visible portions). The scene from the refracted or reflected virtual viewpoint is then drawn in this zero-stencil area, and the process is repeated for all other specular surfaces. Once the desired recursion level has been reached, all stencil values are decremented (with zero capped), which essentially pops back one level of recursion. The process is then repeated for the next recursive surface, with stencil values incremented by one and the surface creating a stencil mask of zero. This process is illustrated in Figure.

[Figure: Recursive Stenciling]
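The recursive stencil counting can be simulated on a plain array: pushing a level increments every value and zeroes the visible face pixels, and popping decrements with a cap at zero. A small sketch (function names are ours):

```python
def enter_specular(stencil, face_mask):
    """Push one recursion level: the previous render area (stencil == 0)
    becomes 1, then the specular face pixels visible within it become
    the new zero render area."""
    for i, v in enumerate(stencil):
        stencil[i] = v + 1
    for i, on in enumerate(face_mask):
        if on and stencil[i] == 1:       # face visible only in old render area
            stencil[i] = 0

def leave_specular(stencil):
    """Pop one level: decrement everything, capping at zero so the face's
    pixels rejoin the enclosing render area."""
    for i, v in enumerate(stencil):
        stencil[i] = max(0, v - 1)
```

The zero cap on the pop is what merges the face's pixels back into the parent level's render area, matching the depth-first traversal described above.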

At each level of recursion, shadows must be drawn in the valid area. This area may include previously rendered specular surfaces, so that shadows may be cast on partially specular objects. These surfaces are rendered first as purely specular, and then blended with their respective material properties. As described above, specular surfaces have their stencil values reset to zero after their specular image has been rendered. The valid rendering area is therefore popped back to zero, with lower recursive depths maintaining higher stencil values.

While it might seem that the zero specular stencil mask value would be a logical choice for the zero value in the plus-minus shadow algorithm, this is not the case. In order to have recursive reflections and refractions, we instead choose a value which is three-fourths of the maximum stencil value for our zero shadow value (SHAD_ZERO) and one-half of the maximum for our minimum shadow stencil value (SHAD_MIN). This provides half of the stencil buffer for shadow calculation and half for recursive specular levels. These values can be adjusted according to the recursion level needed or the shadow object complexity.

The reason for our choice of SHAD_ZERO is now apparent: it avoids conflict with our recursive refraction levels. All stencil values of zero at each level are changed to SHAD_ZERO, and shadows are then rendered as described above using the plus-minus method. SHAD_ZERO should be chosen so that the plus-minus method does not go below the shadow minimum (SHAD_MIN) or above the maximum stencil value, in order to prevent conflict with the specular recursion stencils. After all shadows are drawn, all stencil values greater than SHAD_MIN are reset to zero for continuation of the specular recursion. The basic stenciling procedure is seen in the expanded pseudocode functions below:

draw_window(Camera)
    if (SPEC_ON)
        SPEC_LEVEL++
        draw_spec_objects(Camera)
        SPEC_LEVEL--
    if (SHADOWS_ON)
        apply_stencil(EQUAL, ZERO, REPLACE, SHAD_ZERO)
        turn_lights_off()
        draw_objects()                           /* ambient only */
        for each light
            make_shadows(light)
            apply_stencil(GREATER, SHAD_MIN, REPLACE, SHAD_ZERO)
        apply_stencil(GREATER, SHAD_MIN, REPLACE, ZERO)
    else
        draw_objects()

apply_stencil(COMP_FUNC, COMP_VALUE, PASS_FUNC, PASS_VALUE)
    stencil(COMP_FUNC, COMP_VALUE, PASS_FUNC, PASS_VALUE)
    apply_to_screen()                            /* apply stencil to every pixel */

/* REF ROUTINES: Refraction and Reflection */
draw_spec_objects(Camera)
    if (SPEC_LEVEL)
        clear_stencil(ZERO)
        apply_stencil(EQUAL, ZERO, REPLACE, SPEC_LEVEL)
        for each spec_face
            stencil(EQUAL, SPEC_LEVEL, REPLACE, ZERO)
            draw_face(spec_face)                 /* stencil = 0 where face drawn */
            VirtualCamera = reclassify_camera(spec_face, Camera)
            draw_window(VirtualCamera)           /* draws only where stencil = 0 */
            apply_stencil(EQUAL, ZERO, REPLACE, SPEC_LEVEL)
        apply_stencil(EQUAL, SPEC_LEVEL, REPLACE, ZERO)
    stencil(EQUAL, ZERO, ST_KEEP, ZERO)          /* draw where stencil = 0 */

/* SHADOW ROUTINES */
make_shadows(light)
    env_sten(AFT_SPEC)                 /* objects inside light volume, after light has reflected */
    if (shadow_level)                  /* if shadowing a light volume */
        env_sten(PRE_SPEC)             /* objects outside light volume, before light has reflected */
    turn_light_on(light)
    stencil(EQUAL, SHAD_ZERO, ST_KEEP, SHAD_ZERO)
    draw_objects()                     /* add light to scene except where shadow volumes */
    turn_light_off(light)
    make_spec_shadows(shadow_level)    /* add light from virtual sources (reflected/refracted light) */

env_sten(side)
    if (side == PRE_SPEC)
        transform_shadow_volumes()     /* shadow volumes move to virtual position */
    for all silhouette faces
        stencil(GREQUAL, SHAD_MIN, facing_view(sil_face) ? ST_INCR : ST_DECR)  /* in/out method */
        draw_silhouette_face(sil_face) /* draw face without color to create stencil mask */

Chapter

Illumination Model

While the preceding chapters presented light- and shadow-volume methods for real and virtual light sources, the discussion was without regard to overall scene illumination. The illumination of a scene consists of several factors. Illumination can occur either directly from a primary light source or from a secondary virtual source, such as the reflection of a light in a mirror or refraction through glass. Illumination of this type, from a real or virtual source, is termed direct illumination. Direct illumination which is dependent on scene geometry (as shadows may occlude primary and virtual sources) is termed the global direct illumination. Light which does reach a surface is governed by the material properties of that surface and by the properties of the light source. This is called the local direct illumination. In addition, the surfaces of the environment may reflect light back into the environment, producing indirect illumination. This illumination is global in the sense that it is dependent on scene geometry and occlusions. Each of these factors, as they relate to a pipeline rendering system, are examined in turn.

Global Direct Illumination

While the recursive shadow algorithm does handle the constituent effects of direct primary- and secondary-source illumination, it alone does not guarantee correct blending of these effects. When used with traditional pixel overwrite (i.e., no blending), it produces only the umbra of the shadows. Areas in shadow from only some light sources appear as bright as those in no shadow, and only areas in shadow from all light sources appear in total shadow. In order to produce the penumbra effects, or areas only in partial shadow, an accumulation of the lighting effects from the primary as well as reflected/refracted lights must occur. Fortunately, we can use an accumulation buffer or blending functions to do just this.

There are two methods for performing this accumulation of lighting effects. The first method is to treat each shadow calculation as independent and sum each resulting image. Areas which receive light from both the source and a reflected/refracted light volume produce caustic effects. This method has limitations when using a single accumulation buffer, due to the inability to create intermediate images while preserving the final accumulated image, but this can be overcome through the use of hardware pixel blending during the rasterization phase.

The second method uses an extension of shadow volumes by [BB] for soft shadows. By processing all shadow volumes without producing intermediate images, the stencil value of each pixel represents a darkness level due to encasement in several shadow volumes. The actual lighting of the scene is performed after all shadow volumes have been generated. Assuming the darkness level is only greater than the zero stencil value, the scene is redrawn with diffuse lighting for areas whose stencil value is less than the shadow-zero value. All stencil values are then decremented and the image is redrawn. This process is repeated for each darkness level, accumulating each intermediate image. This produces a final image with intensities based on the number of enclosing shadow volumes. This method does not suffer from the problems inherent in the first method; however, extensive stencil-value juggling of the image is necessary.
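The darkness-level variant can be sketched numerically. This is a hypothetical software model, not the hardware path: each pixel's counter stands in for its stencil value (the number of enclosing shadow volumes), and the scene is re-lit once per darkness level, decrementing the counters between passes and accumulating the partial images.

```python
# Each pixel's counter = number of shadow volumes enclosing it.
darkness = [0, 1, 2, 1, 0]            # e.g. two overlapping shadow volumes
ambient, diffuse = 0.2, 0.8
levels = max(darkness)

image = [ambient] * len(darkness)     # base, unlit rendering pass
counts = list(darkness)
for _ in range(levels + 1):           # one lit pass per darkness level
    for i, c in enumerate(counts):
        if c == 0:                    # outside all remaining volumes: lit
            image[i] += diffuse / (levels + 1)
    counts = [max(0, c - 1) for c in counts]

# Unshadowed pixels received every lighting pass; deeper pixels fewer.
print([round(v, 2) for v in image])
```

Pixels enclosed by more volumes accumulate fewer lit passes, producing the graded penumbra intensities.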

Light Accumulation

The VPR implementation described in Chapter uses the above method of accumulating individual lighting contributions by rendering each separately. The original Intersection Recursion implementation was limited in the shadow effects which are possible, as it relied on blending of final images using the accumulation buffer as described in Section . Refracted shadows from a single light source, with caustics and multiple shading levels, were supported by the Intersection Recursion method; however, reflected light, or light from multiple sources, results in incorrect blending (see Section ). This was due to the limitations of accumulation-buffer blending, which provides only addition to a single image; multiple lights require blending of several final images.

To overcome this shortcoming, blending was removed as a final-stage process, and instead individual light contributions are added in RGB space at every iteration, based on the principle of linearity of light transport [Dor]. The first-pass rendering is performed with ambient light only; successive iterations are performed with no ambient light. The resulting light contributions are added to the composite image through use of the blending function. This modification required changing the Z-buffer comparisons of lit rendering passes.

With the original implementation, each rendering pass re-rendered the entire scene using Z-value sorting for hidden-surface calculations. While this method is sufficient when blending occurs using final visible-surface images, it is not valid when blending on the fly: with Z-buffered sorting, hidden surfaces may be temporarily rendered until later overwritten by a closer surface. Because of this, hidden pixels would be blended into the final image even though they are overwritten later in the rendering loop. To produce a correct image, only visible pixels should be added to the accumulated image, and therefore only visible pixels should be rendered in the lit stages of the rendering cycle. Luckily, this can be accomplished, as these pixels are effectively tagged after the first unlit rendering iteration. This enables successive rendering passes to effectively skip Z-buffer sorting.

The initial rendering with only ambient light is performed using normal Z-buffer sorting. This produces not only an unlit rendering of the scene but also a Z-buffer containing all rendered pixel values. By performing all successive renderings with a Z-buffer comparison function checking for equal values (ZF_EQUAL), only those visible object pixels are re-rendered and thereby blended with the final image.
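In OpenGL terms this corresponds to a first pass with a less-than depth test, followed by per-light passes with an equal depth test and additive blending. A minimal software sketch of the idea, with hypothetical scene data:

```python
# Per pixel: overlapping fragments as (depth, ambient, per-light contributions).
fragments = {0: [(5.0, 0.1, [0.3, 0.2]),     # far surface (hidden)
                 (2.0, 0.1, [0.5, 0.1])]}    # near surface (visible)

zbuf, image = {}, {}

# Pass 1: ambient only, normal less-than Z test; tags the visible depths.
for px, frags in fragments.items():
    for depth, amb, _ in frags:
        if depth < zbuf.get(px, float("inf")):
            zbuf[px] = depth
            image[px] = amb                  # overwrite, no blending yet

# Lit passes: equal-depth comparison, so only visible pixels blend additively.
for light in range(2):
    for px, frags in fragments.items():
        for depth, _, contribs in frags:
            if depth == zbuf[px]:            # ZF_EQUAL: visible surface only
                image[px] += contribs[light]

assert abs(image[0] - (0.1 + 0.5 + 0.1)) < 1e-9   # hidden surface never blended
```

Only the near fragment's light contributions reach the composite, exactly because its depth matches the tagged Z value from the unlit pass.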

This method relies on the assumption that all successive renderings of the same object will generate exactly the same Z values as the initial rendering. This implies that the camera position must not move (as in jittering) between iterations. Also, because of modern hardware rasterization, polygons must be rendered in the same vertex order as the original rendering; rendering the same polygon in a different vertex order may result in slightly different Z-value rasterization. Even with these constraints, certain hardware may still generate different Z values, as the rasterization is affected by the lighting-mode path. This is a serious hardware limitation when it arises, in that to handle it effectively, pixel tagging must be performed by some other method, such as the stencil buffer, which is already severely taxed by the other pipeline rendering components.

Soft Shadows

While the accumulation buffer proved unsuitable for generating partial shadowing effects, due to its reliance on blending final images, this functionality is well suited for blending light-jittered images to produce soft shadows.

In the original implementation, the accumulation buffer was used for accumulation of shadow effects. With the replacement of the accumulation buffer by the rasterization blend function, the accumulation buffer was freed to perform final image blending. While the initial use was for antialiasing, this same jittered multi-rendering process was modified to perform soft-shadow blending.

Whereas antialiasing is performed by jittering the camera position in a stochastic manner, soft shadows are created by jittering the light source [BB]. This in effect approximates an area light source with a stochastic sample of point light sources.

The actual implementation of this source jittering is an approximation to an actual movement of the source point. Recall that shadow volumes are computed once and stored with the corresponding object. For each silhouette edge, a quadrilateral shadow-volume face exists. These faces contain no information on the original light position. Therefore, modification of the light position would require recomputing every shadow-volume face at each jitter iteration, or initial storage of the entire set of jittered faces. Neither of these options is an attractive proposition for an interactive system.

The alternative solution notes that with a jittered source, only two of the shadow-volume face's four vertices are moved, and that their movement is a linear transformation of the stochastic jittering of the source. As the distribution is centered around the point-source position, any rotation of this distribution also produces a valid stochastic sample.

Another factor in this approximation is the assumption that the jitter distance is relatively small compared to the distance of the light source to the object. The object effectively acts as a pivot point in the shadow-volume calculation, whereby if the object is close, a lever effect produces a large movement in the two moving shadow-volume face vertices. Assuming all ratios of jitter distances to light-to-object distances are relatively equal, we can approximate the source jittering with a linear multiple jittering of the two distant vertices.

As can be seen in Figure , a movement of the light produces a linearly multiplied movement of the end of the shadow volume. Figure demonstrates the above approximation, along with the error introduced by this method. This approximation can be made exact by including with each silhouette face the distance of the light to each of the two nodes bordering the segment. As the segment acts as a pivot, the

Figure : Jittered Light Shadow Volumes (a light displacement d from position B to jittered position A produces end-vertex displacements d*(a2/a1) and d*(b2/b1))

Figure : Jitter Approximation Shadow Volumes (both end vertices displaced by a constant multiple d*C, with the resulting approximation positions)

ratios b2/b1 and a2/a1 relate the displacement of the two generated nodes of the shadow face to the two nodes bordering the generating face edge itself. In practice, the jitter size as compared to the distances involved is relatively minute, and a constant multiplier usually suffices. The current implementation of this work uses this constant multiplier to reduce memory requirements.

The above jitter approximation therefore provides a simple way to jitter the shadow volume without recomputing it. In addition, the jitter amount can be scaled according to the number of samples being blended, through a simple multiplier. The same multiplier can be used to control the area light source, and the shape can be directed by the jitter sample distribution itself.
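The constant-multiplier approximation can be sketched numerically. This is an illustrative model, not the thesis code: a shadow-volume far vertex is the silhouette-edge point extruded away from the light, and under a light jitter it is displaced by the multiplier times the jitter, in the opposite direction (the edge acts as the pivot).

```python
def jittered_far_vertex(edge_pt, light, jitter, C):
    """Approximate a shadow-volume far vertex under a light jitter:
    pivot at the silhouette edge, lever arm scaled by multiplier C."""
    # Unjittered far vertex: extrude the edge point away from the light.
    far = [e + C * (e - l) for e, l in zip(edge_pt, light)]
    # Linear approximation: the light moves by `jitter`, so the far end
    # swings the opposite way, magnified by the constant multiplier C.
    return [f - C * j for f, j in zip(far, jitter)]

edge, light, jitter = [0.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.5, 0.0, 0.0]
exact = [e + 3.0 * (e - l) for e, l in zip(edge, [0.5, 10.0, 0.0])]
approx = jittered_far_vertex(edge, light, jitter, 3.0)
assert approx == exact        # coincides when the pivot point is held fixed
```

Because only the multiplier and the jitter vector are needed, no per-face light information has to be stored and no shadow volume is recomputed.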

Local Direct Illumination

In Section we demonstrated techniques for overlaying and blending multiple renderings to create more realistic images. This involved the linear combination of independent lighting contributions, as well as shadowing, to provide global illumination effects. We have not yet considered the local illumination model. The local model represents the surface shading from direct illumination from real and virtual light sources. These lighting contributions are generated using the available graphics hardware shading model, which is generally Gouraud shading using the Phong lighting model. In this model, the cosine-based Phong lighting model is used to determine vertex intensities, while Gouraud shading interpolates between neighboring vertices. While the use of the Phong model is widespread due to its simplicity, it is, however, a purely empirical model (and not a particularly good one) and is widely known for its inaccuracies, foremost being its non-reciprocity.

There has been much study devoted to both realistic and physically accurate lighting models. This work is generally centered around the rendering equation

[Footnote: Although we (and the hardware designers themselves) refer to the various implementations of exponent-based models as Phong's model, many are actually Blinn's [Bli] model. Usually mistakenly thought of as equivalent, the two models are different, albeit closely related [FW].]

introduced by Kajiya [Kaj]. This equation, expressed in terms of reflectance,

L_r(\theta_r, \phi_r) = \int \int L_i(\theta_i, \phi_i) \, \rho_{bd}(\theta_i, \phi_i; \theta_r, \phi_r) \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i

states the relationship between incident light and reflected light from a surface, based on a bidirectional reflectance distribution function (BRDF) [NRH]. The BRDF itself is a function dependent on the two incident and two reflected angles, as seen in Figure . It is bidirectional in that it depends on both the incident and reflected angles, and these directions can be reversed without changing the evaluation of the function. This implies the reciprocity property of light scattering, which the Phong model lacks.

Figure : BRDF Scattering Angles (incident direction (theta_i, phi_i) and reflected direction (theta_r, phi_r) about the surface normal z)

Many BRDF formulations have been developed to try to represent this complex, high-order function. This includes the creation of theoretical physics-based models [HTSG, TS, CT] and the use of spherical harmonics [CMS, SAWG, KV]. More recently, these methods have been extended for anisotropic surfaces [War, WAT].

While these advances have continued to improve ray-traced and radiosity images, hardware rendering quality has been relatively static, with its basis in the Phong model. The hardware Phong model is not a true BRDF:

C_p = C_{me} + C_{sa} C_{ma} + \sum_l \frac{C_{md} C_{pl} (N_p \cdot N_{pl}) + C_{ms} (N_b \cdot N_p)^{E_{mss}}}{K_{sdaf} + K_{sdav} D_{lp}}

where the C values are the scene, material, and light colors; the N vectors represent the normal, bisector vector, and direction to light, as seen in Figure ; the K values are fixed and variable attenuation factors; and E_{mss} is the glossiness exponent. In fact, the Phong model does not guarantee conservation of energy, as the specular term actually acts like a second diffuse term for low glossiness terms.

Figure : Phong BRDF Notation (surface normal N_p, bisector N_b, direction to viewer N_pv, direction to light N_pl, and angle delta)

While the traditional Phong representation is not founded in the physical transport of light, it can be made more accurate if fit to a physically-based model. While the more-general anisotropic models are far beyond the capabilities of the Phong model, isotropic models provide a good reference scattering function close to the range of the Phong model. In particular, we used the isotropic Gaussian model presented by Ward [War] to perform a Chi-Square fit of the Phong lighting model. This model provides a metric against real-world illumination, in that it incorporates actual measured material parameters and has itself been tested against gathered data. The specular component of the Gaussian BRDF is

\rho_s \, \frac{1}{\sqrt{|\cos\theta_i \cos\theta_r|}} \, \frac{e^{-\tan^2\delta / \alpha^2}}{4\pi\alpha^2}

and the corresponding Phong specular term, rewritten from Equation , is

\rho_s \cos^{E_{mss}} \delta

where \delta is the polar angle between the half-vector and the surface normal, as seen in Figure .

Figure : Specular Scattering Components: (a) Gaussian, (b) Phong (plotted against theta_r)
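The two specular lobes can be evaluated directly for comparison. A small sketch, assuming the Ward form above with the 1/(4 pi alpha^2) normalization and a pure cosine-power Phong lobe; the parameter values are illustrative:

```python
import math

def ward_spec(rho_s, alpha, delta, theta_i, theta_r):
    """Isotropic Gaussian (Ward) specular term."""
    norm = 1.0 / math.sqrt(abs(math.cos(theta_i) * math.cos(theta_r)))
    return rho_s * norm * math.exp(-math.tan(delta) ** 2 / alpha ** 2) \
           / (4.0 * math.pi * alpha ** 2)

def phong_spec(rho_s, e_mss, delta):
    """Hardware Phong specular term."""
    return rho_s * math.cos(delta) ** e_mss

# Both lobes peak at delta = 0 (the mirror direction) and fall off with delta.
for d in (0.0, 0.3, 0.6):
    g = ward_spec(1.0, 0.25, d, d, d)
    p = phong_spec(1.0, 40, d)
    print(f"delta={d:.1f}  gaussian={g:.3f}  phong={p:.3f}")

# Phong approaches rho_s everywhere as E_mss -> 0; Ward tightens with alpha.
assert phong_spec(1.0, 0, 1.0) == 1.0
```

The printout makes the text's point concrete: the Gaussian lobe is normalized and falls to zero, while the Phong lobe is bounded by rho_s and flattens as the exponent shrinks.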

As can be seen by comparison of the specular components of the Phong versus the Gaussian model in Figure , both functions peak where the incident and reflection directions are opposite, although the off-peak phenomena detailed in the Torrance-Sparrow model are not demonstrated. The Gaussian model uses a roughness term to modulate the concentration of the specular reflection, while the Phong model uses a glossiness exponent E_{mss} to accomplish the same. A low roughness value

Figure : Gaussian vs. Phong Specular Term: (a) Gaussian (over delta and alpha), (b) Phong (over delta and E_mss)

is analogous to a high glossiness term, with the respective values for a pure specular mirror. But while the Gaussian model falls off to zero as the roughness increases, the Phong model actually approaches one as E_{mss} decreases (see Figure ).

By examining the Phong specular term, it can be seen that the glossiness is not the only parameter which determines the specular component. The specular reflectance \rho_s also modulates the specular component, and through modification of the reflectance based on the glossiness, the Phong reflectance term can approximate the isotropic Gaussian model. By performing a two-parameter Chi-Square fit of the Phong specular term, using a parameterized glossiness value as well as a linear scale of the glossiness value as a specular reflectance multiplier, in

\rho_s P_2 P_1 \cos^{P_1} \delta

we can achieve an almost exact approximation, at any given roughness, of the following modified Gaussian specular term:

\rho_s \, \frac{e^{-\tan^2\delta / \alpha^2}}{4\pi\alpha^2}

This simplified term omits the parameters unavailable in the Phong representation, as seen in Figure . Even with the full Gaussian specular term, a two-parameter

[Footnote: This falloff to zero is actually incorrect behavior. Ideally, the Gaussian model should fall off to \rho_s. For the purposes of our approximation, where roughnesses are under , a zero limit suffices.]

Chi-Square fit can be performed, using the following model

\rho_s P_2 P_1 \cos^{P_2} \delta

with the glossiness parameter replaced with a second parameter which additionally relates the Phong glossiness term to the Gaussian roughness. This is demonstrated for several roughness terms in Figure .

Figure : Simplified Gaussian vs. Fit Phong: (a) wireframe, (b) contours (over delta and alpha)

It can be seen from this comparison that the models diverge as the angles become more obtuse. This behavior, and the inadequacy of the Phong model, was vividly noted by Blinn [Bli].

While the Phong model can be fit to the Gaussian BRDF, this fit model is not directly applicable to the hardware lighting model. If we examine the full diffuse and specular components of the hardware model, we note that \cos\theta_i is used to modulate the light received, by the angle incident to the surface normal, from the source. The rendering equation (Eq. ) similarly uses a differential solid angle to represent the projected solid angle subtended by the source. This introduces an additional \sin\theta_i term not included in the hardware equation. While this term integrates out for a true point light source, it is present in Monte Carlo sampling systems such as Radiance [War], where there are no true point sources. Therefore, to provide

some consistency between the diffuse and specular computations as related to a

Figure : Full Gaussian vs. Fit Phong, for alpha = 0.04, 0.8, 0.15, and 0.20 (over theta_i and theta_r)

physically-based system, the \sin\theta_i term is dropped from the rendering equation, and the Phong model is fit to the rendering equation. The specular term therefore becomes

\cos\theta_i \, \rho_s \, \frac{1}{\sqrt{|\cos\theta_i \cos\theta_r|}} \, \frac{e^{-\tan^2\delta / \alpha^2}}{4\pi\alpha^2}

A Chi-Square two-parameter fit of this model produces the following Phong term

\rho_s P_2 P_1 \cos^{P_2} \delta

with a corresponding value of . The exactness of this fit is seen graphically for several \theta_i in Figure .

Figure : Partial Gaussian vs. Fit Phong, for theta_i = pi/6, pi/4, pi/3, and pi/2.1 (over theta_r and alpha)

This model provides a much better physical approximation than the unnormalized Phong model, as seen in Figures and . As can be seen in the figures, the intensity is modulated according to \theta_i in both the Gaussian and Fit Phong, yet remains constant in the standard Phong model. Note, however, that the hardware Phong model is limited in its accepted values for E_{mss}; values are capped at . This is seen in overly broad specular highlights for the smallest roughness values. Even with the Phong model limitations, a true implementation permitting a full range of exponents would provide a better illumination model than is currently feasible.

Figure : Full Gaussian vs. Phong vs. Fit Phong (view ): rendered comparisons for several roughness values, for the Gaussian, Phong, and Fit Phong models

While the Fit Phong model is founded on a more physical basis, it also does not include the grazing effects of the original Gaussian. It tends to overstate specular highlights for acute \theta_i's, and conversely understate obtuse, grazing angles. This behavior is apparent in Figure , which shows contrasts of the specular highlights of Figure . Because the Phong model has only a notion of \delta, and not the corresponding \theta's, for specularity, the model itself is severely limited as to the specular

Figure : Full Gaussian vs. Phong vs. Fit Phong (view ): rendered comparisons for several roughness values, for the Gaussian, Phong, and Fit Phong models

effects it can provide. Even with these limitations, the benefits of the fit model are further magnified by the ability to now incorporate measured material parameters, as in [War], thereby providing physics-based material properties.

Figure : Gaussian vs. Phong Specular Highlights: Gaussian vs. overstated Phong (small theta_i's); Gaussian vs. understated Phong (large theta_i's)

In addition, the hardware lighting model is also limited in its lighting calculations and display in general. All lighting calculations are generally limited to bits per channel, and even when its unitless calculations are fit to a physical model as above, display conversion is itself severely limited. We have not attempted to overcome these hardware limitations. Work by [War] and [TR] suggests how they could be addressed in an expanded real-value lighting model incorporating tone reproduction in its calculations.

Indirect Illumination

As mentioned previously, radiosity systems are well suited for the calculation of global illumination effects. This includes direct diffuse illumination from light sources, as well as indirect illumination reflected from surfaces in the environment. It does not, however, capture the specular illumination which ray tracing and pipeline rendering are capable of. Ray tracing and pipeline rendering can additionally capture the direct diffuse illumination which the radiosity solution includes.

While a radiosity solution does include the direct diffuse illumination of the environment, this is simply the first iteration, or bounce, of the total radiosity solution. If this first iteration is discarded from the final solution, the radiosity renderer produces the global indirect illumination of the environment. If no ambient term is included, this provides a linearly independent lighting calculation which can simply be added to the lighting calculations as defined above. This method is better suited for dynamic environments than other two-pass methods [WCG, PSV], in that the global direct component is not incorporated in the first-pass solution and the indirect component is not as affected by moving geometry.

The addition of the radiosity contribution readily occurs in image space, as demonstrated by Dorsey et al. [DAG]. This method, however, can prove more complicated if specular surfaces are present. The radiosity image typically will not contain secondary images (as in a mirrored image), and therefore the entire image cannot be added to the pipeline rendering image. This region needs first to be masked, to prevent blending of one image containing the specular image (the pipeline rendering one) with one image without specular reflections (the radiosity one). Another solution is to bring the radiosity solution into the pipeline rendering system and add only secondary specular images there. This produces an image containing the indirect contributions of the scene itself, as well as of the specular images of the scene as visible in mirrors, etc. This image can then be directly added to the pipeline rendering image, which contains all of the direct illumination effects, based on the linearity of light transport.

In fact, the converse of this image-combination method can be used to obtain the indirect radiosity image contribution, by differencing the final radiosity image with the image produced after shooting only the emitting patches. This method is demonstrated in Figure , where image (a) is generated from a radiosity solution after patch shootings, and image (b) is generated from shooting only the emitting light-source patches. Image (c) is the difference of the first two, thereby representing only the indirect contributions. This image can then be combined with image (d), produced with the pipeline rendering methods as described above, producing the last composite image (e).
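Per pixel, this composition reduces to linear arithmetic on radiance values. A minimal sketch with hypothetical two-pixel "images":

```python
# Linearity of light transport: indirect = full - direct, per channel.
full_radiosity   = [0.80, 0.55]       # (a) all patches shot
direct_radiosity = [0.60, 0.30]       # (b) emitting patches only
pipeline_direct  = [0.65, 0.35]       # (d) pipeline direct illumination

indirect  = [f - d for f, d in zip(full_radiosity, direct_radiosity)]   # (c)
composite = [i + p for i, p in zip(indirect, pipeline_direct)]          # (e)

assert all(abs(i - e) < 1e-9 for i, e in zip(indirect, [0.20, 0.25]))
assert all(abs(c - e) < 1e-9 for c, e in zip(composite, [0.85, 0.60]))
```

The subtraction isolates the bounced light, and the addition recombines it with the pipeline's direct (and specular) contributions.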

A more flexible solution is to incorporate the indirect radiosity solution into the hardware rendering process, instead of adding it to the final image. The indirect radiosity vertex coloring is rendered using the hardware shading in the first pass of the pipeline process. This replaces the ambient-only rendering stage. This obviates the specular-image problem described above, since specular images are also generated during the ambient-only stage, which the indirect solution replaces. The direct solutions then use the original attribute information for all subsequent calculations, ignoring the radiosity vertex colorings. The hardware blending functions detailed previously perform the composition of the individual illumination effects.

Total Scene Illumination

The previous sections detailed the individual components of the scene illumination model. It is the combination of these techniques which produces the final images. While some discussion has been made about the individual errors which result from hardware limitations, physical approximations, and performance tradeoffs, it is only through evaluation of the images produced by these methods that we can gauge the effectiveness of these techniques as a system.

Figure compares an image produced through the Radiance [War] system

Figure : Calculating Indirect Illumination Image: (a) full radiosity; (b) direct radiosity; (c) indirect radiosity (a - b); (d) direct pipeline; (e) full solution (c + d)

and one produced through the pipeline rendering process. The pipeline image uses the above techniques, with a constant ambient lighting replacing the inclusion of a radiosity-solution indirect component. Image (a) shows the reference Radiance image, which required sec, and image (b) shows the pipeline image, generated in sec. Images (c) and (d) are the difference images between the two, with a gray RGB value (from a range of ) representing the zero value (equal intensities). The RMS error between the two images, as shown in image (e), is under over the range of luminance values, versus over error for traditional single-pass rendering.
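The RMS comparison can be sketched as follows, assuming luminance images stored as flat lists; the sample values are hypothetical.

```python
import math

def rms_error(img_a, img_b):
    """Root-mean-square difference between two luminance images."""
    n = len(img_a)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / n)

reference = [0.10, 0.40, 0.90, 0.25]  # hypothetical Radiance luminances
pipeline  = [0.12, 0.38, 0.85, 0.27]  # hypothetical pipeline luminances

print(f"RMS error = {rms_error(reference, pipeline):.4f}")
assert rms_error(reference, pipeline) < rms_error(reference, [0.0] * 4)
```

A lower RMS value against the reference indicates the multi-pass result tracks the physically-based rendering more closely than a single-pass image would.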

Further accuracy can be gained using the radiosity-generated indirect component as described above; however, the preprocessing computation cost often outweighs the increased realism benefit. The indirect component required patch shootings, for a total of CPU seconds. While this indirect term is not as affected by a dynamic environment as the direct components, it can require recomputation if the scene geometry changes too drastically. This high computation cost, as with any radiosity-based system, prevents incorporation into a fully dynamic interactive system. For a low-ambient scene which is dominated by specular reflections, a constant ambient term may be adequate for interactive perusal.

Figure : Gaussian vs. Phong Scene Illumination: (a) Gaussian illumination; (b) Phong direct illumination; (c) Phong - Gaussian; (d) Gaussian - Phong; (e) RMS difference

Chapter

Performance

There is a tradeoff between performance and realism in computer graphics. While hardware-based rendering permits real-time interaction, it has been limited in the quality of the image it produces. Ray tracing systems produce very accurate scenes of specular environments, but each image requires significant computation time. Radiosity systems provide accurate 3-D diffuse-lighting representations of relatively static environments, and additionally require considerable precomputation. Graphically, the relationship between performance, dynamics, and quality is seen in Figure . Note that this does not include any notion of associated overhead.

Pipeline rendering introduces a series of related multi-pass techniques which provide added quality. It in essence fills the gap of Figure between the hardware-based techniques and the software-intensive processes, as well as providing for a more dynamic environment at a high level of quality. Because it relies on multiple rendering passes to add additional levels of detail in each pass, it can easily be tailored to suit the particular user's desired performance/quality needs. This thereby provides a broad spectrum of image quality and realism (Figure ).

We have detailed the coordination between the pipeline rendering techniques, including shadow generation and specular surfaces. As both of these features proceed recursively, performance is primarily determined by the number of rendering passes, including both scene and shadow-volume rendering; image size has minimal effect

[Figure …: Rendering Relationship. Axes: Image Quality, Scene Dynamics, Framerate; plotted techniques: hardware, image-based, radiosity, ray-trace, real-world.]

[Figure …: Pipeline Rendering Relationship. The same axes and techniques, with pipeline rendering added.]

Bathroom Rendering Iterations

Pass                       Time/Iteration   # of Iterations   Total Time
Shadow-Volume Rendering    …                …                 …
Object Rendering           …                …                 …
View-in-volume Check       …                …                 …
Misc. Checks               …                …                 …
Total (stored shadows)     N/A              N/A               …
Shadow-Volume Generation   …                …                 …
Total (calc. shadows)      N/A              N/A               …

Table …: Frame rendering statistics for the bathroom environment

as compared to ray tracing. For the rendering of Figure …, consisting of … polygons, the scene required … shadow passes and … scene passes, which together represent … of the total rendering time of … seconds for the …x… image. A …x… image required … seconds. The corresponding Radiance images without textures required … and … seconds, respectively. All renderings were performed on a single-processor … MHz R… Onyx Reality Engine. A breakdown of the rendering process is seen in Table …. A detailed breakdown of the rendering process for one of the four light sources illuminating the mirrored image is included in Table …. This represents the lighting of the mirror specular image (the scene view of the bathroom looking through the mirror) from one light source. This is composed of four scene passes encompassing eight shadow passes.

As can be seen from this data, specular image generation and shadow generation can become very costly for highly specular environments and for large depths of recursion. While much of this cost could be eliminated by more accurate viewpoint culling (i.e., [Mea, Air]) for both specular image and shadow generation, some is inherent in the nature of the process. Yet even this cost can be reduced if some sacrifice of image quality or accuracy is permitted. The following discusses this quality/performance tradeoff.

Bathroom Single Lighting Rendering

Level   Specular Object   Pass                      Iterations   Time (sec)
…       N/A               Shadow-Volume Rendering   …(z)         …
                          Object Rendering          …(z)         …
                          View-in-volume Check      …(x)         …
                          Misc. Checks              …(x)         …
…       mirror            Shadow-Volume Rendering   …            …
                          Object Rendering          …            …
                          View-in-volume Check      …            …
                          Misc. Checks              …            …
…       shower door       Shadow-Volume Rendering   …            …
                          Object Rendering          …            …
                          View-in-volume Check      …            …
                          Misc. Checks              …            …
…       shower door       Shadow-Volume Rendering   …            …
                          Object Rendering          …            …
                          View-in-volume Check      …            …
                          Misc. Checks              …            …
        Total(y)                                                 …

Table …: Single-image/single-light rendering statistics for the bathroom environment

Number of objects checked depends on which side of the specular object.
(y) Total does not equal the sum of the independent times, due to overhead and functions not mentioned, as well as round-off error.
(z) Number of polygons.
(x) Number of objects.

Diffuse Transport

Section … discussed incorporating the diffuse lighting calculations from a radiosity shooter to include full global and local specular and diffuse lighting components in the pipeline rendering system. While this produces the most accurate images, this additional step is often unwarranted. While useful in adding specularity or specular transport to a radiosity scene, the diffuse components may provide very little additional information in environments with little diffuse-diffuse transport, low ambient lighting, or dominating specular surfaces. In addition, a highly dynamic environment with significant diffuse-diffuse transport might require several radiosity recalculations, which make inclusion extremely costly.

Shadows

Shadows are known to be an important visual cue for producing realistic and understandable images. In a specular environment, or an environment with many light sources, shadow patterns can become extremely complex, as seen in Chapter …. For an environment with n specular surfaces, each visible to the others, shadows are highly dependent on the depth d of the recursive light reflections and refractions. As described in Section …, shadow generation is required for all virtual light sources resulting from the specular bounces, as well as for shadows which occur from the original light source and are subsequently bent. This requires

    O(n^d)

shadow generations per image per light.
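Counting these generations directly illustrates the O(n^d) growth. A minimal sketch (the function is ours, and it assumes the worst case in which every one of the n mutually visible specular surfaces spawns a new virtual light at every bounce, with no visibility culling):

```python
def shadow_generations(n, d):
    """Worst-case shadow-volume generations per image per light: the
    original light (n**0) plus the virtual lights produced by each of
    up to d specular bounces among n surfaces -- O(n**d) overall."""
    return sum(n ** i for i in range(d + 1))
```

With two specular surfaces and two bounces this already gives 1 + 2 + 4 = 7 generations per light.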

Often, however, shadows are desired simply to provide visual cues and some level of realism, and are not required to be totally accurate. In these situations, the number of rendering passes as well as the memory overhead can be greatly reduced. In addition, selective rendering of only significant light sources (and therefore shadow volumes) can also provide a means to increase performance.

(a) … sec   (b) … sec   (c) … sec
(d) … sec   (e) … sec

Figure …: Varying Shadow Settings

Figure (a) shows the lighting passes for a single light source with one reflective and one refractive surface. In Figure (b), shadow volumes which encounter more than one specular surface are eliminated. This effect is apparent in the shadow pattern on the tub. Figure (c) shows the original situation without reflected and refracted shadows (objects before the specular surface). Note the light volume is larger because the frame no longer blocks light from reaching the mirror, and the shower door handle no longer casts a shadow through the door. Figure (d) replaces the refractive shower door with a purely transparent, non-refracting surface.

Bathroom Rendering Iterations

Fig   Shad.   Spec. Bounces   Passes (scene/shad.)   Total (sec)
(a)   YES     YES             …/…                    …
(b)   YES     NO              …/…                    …
(c)   NO      NO              …/…                    …
(d)   used    N/A             N/A                    …
(e)   YES     N/A             …/…                    …

Table …: Varying Shadow Settings

The first set of numbers represents the actual number of passes made. The second set represents the predicted number, based on Eq. …, if no additional visibility testing were performed.

Finally, Figure (e) demonstrates the original scene with a shadow recursion depth of one instead of two. A summary of the required number of rendering passes is seen in Table ….

Specular Images

Like shadows, specular images also provide a measure of depth perception as well as added realism. In a complex specular environment, the interaction of specular surfaces can require many rendering iterations. Yet, like shadows, total physical realism is often not required. This tradeoff of realism for performance can again be accomplished by reducing the recursive depth. Partial specular surfaces can be selectively rendered with the secondary specular image. Furthermore, simple non-refractive transparency can often replace refractive surfaces as an approximation, as we did with shadows in Figure ….

Because shadows are generated within each specular image, as well as recursively for each specular surface, the number of specular surfaces as well as the recursion depth proves doubly important. For complete shadow rendering, shadow-volume passes are required for shadows on each side of every specular surface encountered, for the light source at that recursive level. For n specular faces at a recursive image depth of d,

    O(n^d)

images are generated. Again, viewpoint culling can greatly reduce the actual number of passes required.
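The same kind of worst-case count applies to the secondary viewpoints themselves. As an illustrative sketch (our function, again ignoring the viewpoint culling mentioned above):

```python
def specular_images(n, d):
    """Worst-case number of secondary specular images for n specular
    faces at recursive image depth d: each face spawns a new viewpoint
    at every level -- O(n**d) before any viewpoint culling."""
    return sum(n ** i for i in range(1, d + 1))
```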

The images in Figure … and the timings in Table … demonstrate the relationship between image quality and performance for varying specular environments. Figure (a) shows three specular surfaces (mirror, shower door, and floor) at a specular depth of two, and Figure (b) at a specular depth of one. In Figure (c), the partial specular floor image has been excluded at a depth of two. This is then repeated at a depth of one in Figure (d). In Figure (e), the refractive shower door is replaced with a non-refractive transparent surface at a depth of two.

(a) … sec   (b) … sec   (c) … sec
(d) … sec   (e) … sec

Figure …: Varying Specular Settings

Fig   n   d   Scene Iter. (sec)   Shad. Iter. (sec)   Time (sec)   Rad. Time (sec)   Perf.
(a)   …   …   …                   …/…                 …            …                 …x
(b)   …   …   …                   …/…                 …            …                 …x
(c)   …   …   …                   …/…                 …            …                 …x
(d)   …   …   …                   …/…                 …            …                 …x
(e)   …   …   …                   …/…                 …            …                 …x

Table …: Varying Specular Settings

The first number represents only the shadow-volume rendering time; the second number represents the total shadow rendering time.

As can be seen, the rendering times range greatly, yet many of the images are similar. In an interactive environment, complete rendering may not be needed until motion stops. This method is ideal for a progressive refinement situation, which could substitute from (e) to (d) to (c) to (a) as the necessary framerate drops (i.e., as the camera slows).
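One plausible way to drive such progressive refinement is to map camera speed to a rendering preset, stepping down to the cheaper settings while the camera moves quickly. The policy below (and its thresholds) is purely illustrative, not part of the thesis system:

```python
def choose_level(camera_speed, thresholds):
    """Return a quality level for the next frame: 0 is full quality
    (e.g. Figure (a)); each threshold crossed selects a progressively
    cheaper preset, e.g. (c), (d), then (e)."""
    for level, limit in enumerate(thresholds):
        if camera_speed <= limit:
            return level
    return len(thresholds)
```

A stationary camera renders at level 0; fast motion falls through to the cheapest preset.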

Scene Dynamics

As detailed in Section …, shadows maintain object-associativity information. Therefore, unlike in radiosity, moving scene geometry has little effect on rendering times. For the sequence of images in Fig. …, taken from an animated sequence, the movement of the mirror required the most recomputation (… sec/frame), as it affected virtual light source shadows. The movement of the animated human required less recomputation (… sec/frame). The entire animated sequence of … frames required less than … hours of rendering time.

(a) … sec   (b) … sec
(c) … sec   (d) … sec
(e) … sec   (f) … sec

Figure …: Varying Scene Geometry and Lighting

Chapter

Scene Uncluttering

The methodologies presented thus far have been directed toward increasing scene realism at varying levels of performance. Yet with the added details introduced by additional scene geometry and effects comes an added level of image complexity. It is often desired to concentrate on a particular aspect of a scene with little or no regard to the remaining portions. With current systems able to display vast amounts of data, scene uncluttering becomes an essential tool in a user-friendly environment.

Background

There has been much use of transparency and clipping throughout computer graphics. In addition to the added realism which transparency can bring, both features are powerful tools for the interactive minimization of environment details. Unwanted or unneeded details can be made clear or eliminated completely through these two methods. These features are readily available in ray-tracing software, yet their functionality is severely limited in hardware-based Z-buffer rendering systems, where their use is of potentially more benefit. The following discusses hardware-related implementations.

Clipping Planes

Clipping planes are an essential part of any 3D graphics rendering pipeline. 3D world coordinates are generally normalized and clipped against a canonical view volume before being submitted to the Z-buffer algorithm during rasterization. In a typical pipeline, clipping is performed in homogeneous coordinates for Z clipping; X and Y clipping may take place after the homogeneous divide for efficiency.

Clipping of the canonical view volume can use extensions of 2D clipping algorithms such as the Cohen-Sutherland and Cyrus-Beck [CB] methods. Although clipping may be performed on a perspective-projection canonical view volume, it is often convenient to transform it to a parallel-projection canonical view volume and clip against the trivial plane equations

    X = -W,  X = W,  Y = -W,  Y = W,  Z = -W,  Z = 0

In some hardware systems, additional user-defined clipping planes are provided for clipping in eye coordinates. SGI provides up to six arbitrary clipping planes for specifying half-spaces based on a user-supplied plane equation. This plane is transformed to eye coordinates using the inverse of the ModelView matrix, and each vertex is dotted with the transformed plane equation; negative inner products are clipped. Both the user-defined and the parallel-projection clips occur in parallel processors [Cla], which implement the Cohen-Sutherland algorithm when necessary.
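The per-vertex half-space test reduces to a dot product with the plane equation. A minimal sketch (our helper; it assumes the plane coefficients (a, b, c, d) have already been transformed into the space of the vertices, e.g. by the inverse-ModelView step described above):

```python
def clip_by_plane(vertices, plane):
    """Keep vertices in the half-space a*x + b*y + c*z + d >= 0;
    negative inner products are clipped."""
    a, b, c, d = plane
    return [v for v in vertices
            if a * v[0] + b * v[1] + c * v[2] + d >= 0.0]
```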

Although these user-defined clipping surfaces do permit some user selection of scene geometry, even a single clip eliminates an entire geometric half-space. Selection of multiple clipping surfaces provides limited control, as the resulting clipping region is the interior of up to … half-spaces and is therefore always convex.

CSG

Constructive Solid Geometry (CSG) and similar boolean operation systems require geometric clipping for differencing, intersection, and half-spacing. Direct CSG rendering from the object's binary-tree representation can be accomplished with either image-order or object-order rasterization. Although a direct image-order rasterizer has been developed [KE], a more general solution uses object-order rasterization based on Z-buffer algorithms [GHF]. Kelley et al.'s [KGP] extension of [Mam] stores multiple ZRGBA values for visibility clipping of CSG objects.

Clipping Surface

Clipping surfaces are a powerful method for conveying depth information of a scene. As noted in [FvDFH], dynamic modification of the view volume gives the viewer a good sense of spatial relationships between parts of the environment, as well as serving to eliminate extraneous objects and allowing the viewer to concentrate on a particular portion of the world.

As can be seen in the previous section, the DZ-buffer permits two individual Z-value comparisons with two distinct buffers. This makes it ideal for implementing a view-dependent arbitrary clipping surface. This is accomplished by writing the furthest Z values of the clipping surface to one buffer, designated as CLIPBUFFER. All objects are then rendered to the other buffer using normal Z-buffer rendering. The additional comparison is made on CLIPBUFFER so that all rendered pixels have Z values greater than CLIPBUFFER. The entire process is seen in the following pseudocode:

clipsurface(Segment clipsurface)
{
    zclear(CLIPBUF, ZMIN);          /* init CLIPBUF to minimum Z */
    zfunction(CLIPBUF, ZF_GREATER);
    zfunction(DRAWBUF, ZF_NONE);
    zwrite(CLIPBUF);
    wmpack(0x0);                    /* don't write color */
    drawseg(clipsurface);           /* reverse Z sort into CLIPBUF */

    zclear(DRAWBUF, ZMAX);          /* init DRAWBUF to maximum Z */
    zfunction(CLIPBUF, ZF_GREATER);
    zfunction(DRAWBUF, ZF_LESS);
    zwrite(DRAWBUF);
    wmpack(0xffffffff);             /* write color too */
    drawallsegs();                  /* draw segments with Z further than CLIPBUF */
}

For a visual representation of the clipping surface, it is often useful to render the surface in wireframe after all other segments have been drawn and clipped.
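The effect of the two comparisons can be emulated in software over a one-dimensional strip of pixels. The sketch below is ours, not the GL implementation: CLIPBUF holds the furthest Z of the clipping surface per pixel, and a scene fragment survives only if it lies behind the clip surface and in front of the current DRAWBUF contents:

```python
def render_with_clip_surface(clip_depths, fragments, z_max=1.0):
    """Dual Z-buffer clip emulation. clip_depths: per-pixel furthest Z
    of the clipping surface (0.0 where the surface is absent).
    fragments: (pixel, z, color) triples in arbitrary order.
    Returns the resulting color buffer."""
    clipbuf = list(clip_depths)          # pass 1: clip-surface depths
    drawbuf = [z_max] * len(clipbuf)     # pass 2: init draw buffer to far Z
    colorbuf = [None] * len(clipbuf)
    for pixel, z, color in fragments:    # pass 2: normal Z-buffered scene
        if clipbuf[pixel] < z < drawbuf[pixel]:
            drawbuf[pixel] = z
            colorbuf[pixel] = color
    return colorbuf
```

A fragment in front of the clip surface is discarded even if it would win the ordinary depth test.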

Transparency Surface

The arbitrary clipping surface can be a powerful tool for the elimination of unwanted geometry. It is often desirable, however, to simply minimize this geometry while still using it for positional reference. Transparency is ideally suited for this purpose; it provides a powerful mechanism for uncluttering complex worlds. In combining the clipping surface detailed above with correct transparency rendering, an arbitrary transparency surface is achieved. Instead of clipping away geometry on one side of the transparency surface, the enclosing geometry becomes transparent, thereby displaying the opaque geometry beneath it. This, like clipping surfaces, permits the user to concentrate on desired aspects of the environment. Correct sorted blending is, however, still required to properly display these relationships.

To perform a transparency clip, an ordinary surface clip is performed as described above. The back-to-front transparency process is then performed using the clip rendering in place of the initial opaque rendering. The resulting DRAW buffer from the clip is used as the initial PREVIOUS buffer in the transparency routine, and the subsequent transparent layers are rendered on top of the clip rendering. Rendering the clipping surface in wireframe after the clip will produce a valid image, as the transparent layers will layer on top of it also.
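The subsequent layering is ordinary back-to-front "over" compositing on top of the clip result. A single-channel, single-pixel sketch (our helper; layers are assumed pre-sorted farthest-first, as the transparency routine guarantees):

```python
def composite_back_to_front(base_color, layers):
    """Blend transparent layers over an opaque base value (one channel
    of the DRAW buffer left by the clip). Each layer is (color, alpha)."""
    result = base_color
    for color, alpha in layers:          # farthest layer first
        result = alpha * color + (1.0 - alpha) * result
    return result
```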

Software Emulation

All of the clip and transparency surface features were implemented in software on the SGI GL platform. Each surface was individually rendered to its own bounding-box-size buffer, and Z comparison and composition were performed on a pixel-by-pixel basis. Even with software dual Z-buffering of the bounding box of each drawn face,

(a) Clip Surface   (b) Cutaway Area
(c) Clipped Area   (d) Transparent Area

Figure …: Clipping/Transparency Surface

the rendering proved quite efficient due to the simple buffer swap provision. The clip in Fig. …, of over … polygons, was performed in under one minute of real time on an SGI VGX (… MHz R…) system for an NTSC-size window, and in under one and a half minutes at full screen. The transparency clip took under … minutes. Front-to-back had slightly lower performance due to the additional software implementation of the unsupported accumulation buffer function.

A hardware implementation would have eliminated the cost of these pixel operations, and all cost would have been in the multiple iterations, which is governed by the number of overlapping transparency layers. A typical unsorted Z-buffered frame of the same scene is rendered in … ms. Using this as the metric, the transparency clip, which contains a transparency depth of …, would render in approximately … seconds, assuming both Z comparisons could be made in parallel. Only minute details were added in the latter transparency stages; therefore, for many applications the full depth may not be necessary and the required number of iterations can be reduced.

Chapter

Extensions

We have presented a collection of hardware-based multi-pass rendering techniques which can provide many ray-trace quality effects in a user-interactive dynamic environment. While these techniques are not applicable to all types of environments, and are not desirable when absolute physical accuracy is necessary, they do provide a wide range of solutions in terms of cost versus performance. In order to demonstrate the value and application of the pipeline rendering system, a discussion of the current limitations and possible extensions follows.

Limitations

While the techniques described can produce fast, complex images, they suffer from several shortcomings stemming from their use of empirical methods, hardware limitations, and approximations. Because pipeline rendering relies on multiple viewpoints, refractive and reflective surfaces must be planar. Shadows may suffer from aliasing effects due to the use of image-space precision in the calculation. In addition, hardware lighting (typically the Phong model [Pho]) is used for the illumination model, which is widely known for its inadequacies [War]. In the same regard, the system is hardware-dependent on the number of stencil and accumulation bits, as well as on the viewport/screen transforms supported.

The rendering phase is also very time-dependent on the complexity of the environment, as well as on the recursion level for refractions and reflections. For an entire scene rendering at a given shadow recursive depth ds and specular recursive depth di,

    O(n^ds · n^di)

shadow passes are required for complete scene rendering and illumination.
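As a rough illustration of how the two recursion depths compound, the worst-case pass count can be tabulated by multiplying the per-view shadow generations by the number of specular views. This is our sketch, assuming no visibility culling, where ds is the shadow recursion depth and di the specular image depth:

```python
def total_shadow_passes(n, ds, di):
    """Worst-case shadow-volume passes per light for a full frame:
    O(n**ds) shadow generations for each of the O(n**di) views
    (the primary view plus every recursive specular image)."""
    views = sum(n ** i for i in range(di + 1))
    shadows_per_view = sum(n ** i for i in range(ds + 1))
    return views * shadows_per_view
```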

For complex, recursively refractive and reflective surfaces at an arbitrary depth, this expense can quickly become prohibitive, even when compared to ray tracing. This method is more applicable for environments with few mirrored surfaces, such as a virtual building, or where recursion is limited to a minimal depth.

For scenes with a limited number of planar refractive and reflective surfaces, or with a low recursive depth, this system is very effective even with minimal hardware support. The system currently runs effectively on an Iris Indigo XS™ with … stencil bits and a …-bit accumulation buffer.

Hardware Extensions

Shadows, reflection, and refraction are all provided using currently available rendering pipeline support. These methods require stencil bitplanes, an accumulation buffer, alpha channels, and a standard Z-buffer implementation. For refractive transparency, additional pipeline transformation control (i.e., 2D image mapping) would greatly facilitate implementation. The projective texture mapping, while partially hardware-supported, could also benefit from additional matrix stack operations, as with the refractive surfaces. Even without changing the Phong lighting model, greater realism could also be accomplished through better hardware support for it, through a more dynamic range of glossiness exponents.

The DZ-buffer (and TZ-buffer) extension is hardware-implementable with little or no performance loss. This facility requires only one (or, for the TZ-buffer, two) additional Z-value comparators, which could be implemented in parallel with the current comparison, plus additional Z-buffer bitplanes. With this evolutionary extension, sorted transparency compositing is possible at interactive rates on a pixel-by-pixel basis, without the pixel-by-pixel buffer operations of other multiple Z-buffer methods. This compositing can be performed both back-to-front and front-to-back, with current and slightly modified stenciling functions and a slight modification of the accumulation buffer [HA] return function. The DZ-buffer extension also provides arbitrary clipping surfaces, which, combined with the transparency methods, enable arbitrary transparency probes of the environment geometry.

This work is not only an introduction of new methods; it also serves as a platform from which to incorporate other hardware techniques to build a complete interactive renderer, as described in [tHKT]. There are many issues which could be addressed at the hardware level, such as more control for direct manipulation of the rendering pipeline, as well as more complex hardware lighting models. With advanced hardware features finding exotic uses in producing effects such as texture-mapped shadows, there is a clear need for greater flexibility in user access to these features.

Additional hardware support would provide greater facilities for creating more complex images. Additional pipeline control, such as viewport transforms or additional fog features, would enable distorted refractions in conjunction with translucency. Multiple accumulation buffers would enable handling partially reflective and partially refractive surfaces, instead of merely switching at the critical angle. While many of these hardware support features are not found on our chosen hardware platform, some are readily available in other graphics architectures.

Hardware Platforms

The rendering techniques we have introduced are based on an implementation of a serial pipeline architecture such as the SGI platform [Ake, Akea]. While the SGI rendering pipeline is currently dominant in 3D graphics hardware rendering, there are alternative systems which have addressed 3D rendering by other means. Although many different graphics rendering architectures have been developed, most of the rasterization approaches can be categorized in regards to their concurrency in both the front-end model traversal and the back-end primitive rasterization. To date, most systems rely on serial traversal, with parallelism found only in the rasterization stages. This back-end parallelism can be found in both object-order rasterization (i.e., Z-buffer, depth-sort) and image-order rasterization (i.e., scanline algorithms). In addition, hybrid systems exist which make use of both object-order and image-order parallelism.

We examine these architectures with respect to their rendering capabilities and limitations. We then address using our pipeline rendering methods to expand the rendering capabilities of the various systems, and suggest ways to overcome inherent architectural limitations preventing even greater realism.

Object-Order Rasterization Systems

Object-order rasterization systems perform rendering object by object, without regard to which pixels they affect. Parallelism is achieved by partitioning image memory so that several processing elements simultaneously rasterize the same object primitive.

The SGI platform [Akea] provides serial pipelined traversal of the display model; parallelism is provided in its object-order, image-parallel rasterization and multi-stage pipeline. Parallel span processors each handle a fraction of the screen columns. Pixel-Planes [FP] also provides image-parallel rasterization, although it replaces SGI's parallel span rasterization with massive pixel-level parallelism in the rasterization of serially-processed primitives, through the use of SIMD pixel processors. Pixel-Planes obviates the polygon rasterization limitations of the SGI platform by providing scan conversion which is independent of a polygon's size. Systems also exist which have made enhancements and modifications to the above Gouraud-shaded polygon-rendering architectures. Real-time antialiasing, alpha-blending, and texturing have become commonplace through frame-buffer extensions, and some systems (i.e., the Stellar GS [ABM]) have generalized this concept by providing virtual pixel maps on which a variety of post-processing can occur.

While these systems do generally provide fast antialiased polygonal rendering, they have little direct support for the rendering features detailed in our methodology. Shadows are supported on the SGI platform through both texture mapping and shadow-volume stenciling; however, neither is readily available for true interactive systems. The shadow buffer method requires fast recomputation of shadow textures, which is currently not supported. Shadow volumes currently require duplication of effort, in that front-face/back-face determination is required for stencil-function specification, yet this is automatically performed by the hardware rasterization algorithm. Great speedup could be achieved with a stencil function based on primitive normal direction.

The Stellar GS [ABM] family provides a much more open architecture for implementation of our multi-pass techniques. In fact, the DZ-buffer features we describe are based on a multi-Z-buffer implementation on the GS [Mam]. Their Virtual Pixel Map concept permits the definition of a pixel to be whatever the application requires. This in essence permits frame buffers, Z buffers, stencil buffers, accumulation buffers, and any other type of related pixel information to be stored, operated on, and retrieved. The flexibility of this system provides a superior testbed for a multi-pass implementation of image-masking and composition techniques. The openness of this platform does, however, have its cost: display requires transfer of image data from the virtual pixel maps to the frame buffer, requiring extra bandwidth and time.

Despite varying support for our advanced rendering features, image quality on each system is limited by more basic factors. Although the object-order rasterization systems have vastly different architectures, there are many similarities in regards to the physical accuracy of the images they can generate. Most of the mentioned systems rely on a Z-buffer to perform visible surface rendering, and therefore may suffer from coincident polygons resulting from round-off error. Most also use a hardware-based Gouraud-shaded Phong illumination model and perform their calculations in RGB space, thereby severely limiting their luminance dynamic range.

Image resolution can be improved, however, by addressing the limitations of each of these factors. Z-buffer resolution is dependent on the depth of the bitplanes, where each additional bit doubles the resolution. Current architectures such as Silicon Graphics' RealityEngine provide …-bit Z-buffer depth, compared to … bits in their older systems [Akea], thereby producing … times the resolution. Hierarchical Z-buffers [GK] have also been proposed to minimize the floating-point limitations. In addition, aliasing problems in hidden surface elimination have been addressed with subpixel masks in expanded Z-buffer systems [SS, Car].
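The doubling-per-bit relationship is simple arithmetic. For example (our illustration, not a figure from the text), moving from a 24-bit to a 32-bit Z-buffer multiplies the number of distinct depth values by 2^8 = 256:

```python
def depth_values(bits):
    """Distinct depth values representable by a Z-buffer of the given
    bit depth; each additional bit doubles the resolution."""
    return 2 ** bits
```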

Other improvements can be made through the use of better illumination and shading models. As demonstrated in Section …, even the current Phong model can be made more physically based. Other lighting models have been developed for improved speed [Sch] and accuracy [HTSG]. Much better shading methods exist, including Phong shading [Pho, BW], which, despite interpolation inaccuracies (such as from perspective foreshortening), can produce much better specular highlights, especially for large primitives. This itself can reduce the need for polygonal tessellation, and thereby recapture some performance lost by the costlier method.

Finally, the calculation space of these illumination evaluations can be expanded to better handle the wide disparity of real-world luminance values. Typical radiance values in a scene can often vary over a range of … [War], with real-world luminances ranging from … to … cd/m^2 [TR]; however, most lighting calculations are performed in …-bit integer space per color channel, with monitor luminances in the range of … to … cd/m^2. Ward [War] presents a simple extension for …-bit real-valued pixels, already available in most architectures, which have a far greater dynamic range.
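A shared-exponent format of this kind keeps an 8-bit mantissa per channel plus one common 8-bit exponent. The following is a simplified sketch of the idea, not the exact Radiance/Ward encoding (rounding and edge cases are handled more carefully there):

```python
import math

def rgbe_encode(r, g, b):
    """Pack three non-negative reals into 8-bit mantissas plus a shared
    biased exponent, extending the dynamic range far beyond 8-bit RGB."""
    m = max(r, g, b)
    if m <= 0.0:
        return (0, 0, 0, 0)
    e = math.frexp(m)[1]                 # shared exponent: m = f * 2**e
    scale = 256.0 / math.ldexp(1.0, e)
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_decode(rm, gm, bm, e):
    """Invert rgbe_encode (up to mantissa quantization)."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    scale = math.ldexp(1.0, e - 128) / 256.0
    return (rm * scale, gm * scale, bm * scale)
```

Powers of two survive the round trip exactly; other values are quantized to 8-bit mantissa precision.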

Image-Order Rasterization Systems

Whereas object-order rasterization performs rendering based on object traversal, image-order rasterization systems perform rendering on a pixel-by-pixel basis. Little or no regard is paid to neighboring pixels.

Variations on scanline rasterizers have been implemented in hardware which make use of screen partitioning [Fuc, NIM] as well as scanline partitioning [KWG]. Systems such as the NASA II flight simulator [BE] have focused on object-parallel rasterization, where each primitive is assigned to an object processor and a priority multiplexor combines the resulting rasterizations. Other primitive-processor strategies have been designed, including pipelined architectures (i.e., by Cohen and Demetrescu [Dem]), which successively filter color and Z streams, and tree-structured architectures (i.e., by Fussell [FR]), which hierarchically merge these streams. Other image-order systems have abandoned primitive-based rasterization completely in favor of ray-tracing-based architectures (i.e., the Ray Casting Machine [KE] and LINKS [NOK]), which access primitives only on a per-need basis.

Because most object-parallel image-order systems perform image composition based on Z values and color blending, their structure is very similar to the multi-pass techniques presented. There is therefore a natural extension of our methods for object-parallel architectures. Special attention, however, must be paid to composition order, especially for recursive specular images. Our methods rely on depth-first traversal of the images, which does not directly result from object-parallel rendering. Multiple-light-source illumination and accumulation, as well as shadow and view jittering, does lend itself to this parallel implementation, as these processes are already focused on image composition.

Ray tracing and partitioning image-order architectures which do not support object parallelism are not as well suited for our multi-pass techniques, in that they make no use of primitive association. Ray tracing systems do, however, typically have full support for the features we detailed, in that they proceed by object-space tracing, and therefore our methods provide little or no benefit.

As these systems operate on an independent per-pixel basis, they all tend to suffer from problems inherent in such discrete rendering algorithms. These include aliasing artifacts such as jaggies, jagged highlights, and Moiré patterns in textures [WW]. These aliasing problems, as in distribution ray tracing, can be addressed by supersampling, adaptive sampling, and stochastic sampling. In addition, the problems and solutions addressed in Section … for object-order systems also apply to many of these systems.

Hybrid Systems

Hybrid systems attempt to make use of both object- and image-parallelism at various stages in the architecture. This usually entails object-parallel rasterization into separate screen regions or yet-to-be-merged frame buffers.

Hybrid graphics systems generally either exploit bucket sorting (i.e., Pixel-Planes [FPE]) to sort primitives according to screen region for parallel processing, or use image composition (i.e., PixelFlow [MEP]) to combine separately generated frame buffers which contain the distributed primitive renderings.

The Pixel-Planes system sorts primitives into bins corresponding to patch-sized regions of the screen. The transformation engine maintains the full list of sorted primitives, which are transformed, sorted, and stored, and assigns all available Renderers the corresponding bins for their screen patch. Once a renderer is completed with a patch, it transfers the color information to a backing store and is assigned a new patch and associated bin. While this bucket sorting approach is compatible with our multi-pass method, the backing store approach is not. The Pixel-Planes backing store typically maintains only color information; Z values and other pixel-associated information are discarded by the Renderer. This precludes the masking and color overlays which comprise our methods. Fuchs does mention other implementations of the backing store for volume rendering, so an extension for additional bitplanes should be feasible.
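The bucket-sorting step described above can be sketched as binning each primitive by the patch-sized screen regions its bounding box overlaps. The patch size, primitive representation, and function names here are illustrative assumptions, not Pixel-Planes' actual interfaces.

```python
# Sketch of Pixel-Planes-style bucket sorting: primitives are binned by
# the patch-sized screen regions their bounding boxes overlap, so each
# Renderer can be handed one bin (screen patch) at a time.
# PATCH and the primitive dict layout are illustrative assumptions.

PATCH = 128  # patch side length in pixels (illustrative)

def bucket_sort(primitives, width, height):
    cols, rows = -(-width // PATCH), -(-height // PATCH)   # ceil division
    bins = {(i, j): [] for i in range(cols) for j in range(rows)}
    for prim in primitives:
        x0, y0, x1, y1 = prim["bbox"]
        # a primitive lands in every patch its bounding box touches
        for i in range(x0 // PATCH, min(x1 // PATCH, cols - 1) + 1):
            for j in range(y0 // PATCH, min(y1 // PATCH, rows - 1) + 1):
                bins[(i, j)].append(prim["id"])
    return bins

bins = bucket_sort([{"id": "tri0", "bbox": (10, 10, 300, 40)}], 640, 512)
```

Note that a primitive spanning several patches is duplicated into each bin, which is why the transformation engine must keep the full sorted list.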

In contrast, the PixelFlow system also performs object-parallel rendering but does not rely on screen partitioning. It instead assigns a separate processor for each primitive and renders that primitive to a separate frame buffer. As with the processor-per-primitive approaches described earlier, the PixelFlow image composition method is directly analogous to our multi-pass image method. This system, however, performs deferred shading, which composites pixels based on attributes instead of color. This provides an even more general framework for our methods, as composition operations need not take place in RGB space.
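The image-composition idea can be sketched as a per-pixel depth merge of independently rendered partial images. This is a minimal color+Z sketch, not PixelFlow's actual deferred-shading attribute composition or hardware protocol.

```python
# Sketch of PixelFlow-style image composition: two independently rendered
# partial images (color + Z per pixel) are merged by keeping, at each
# pixel, the fragment nearest the viewer. This composes color rather than
# deferred shading attributes, purely for illustration.

def composite(image_a, image_b):
    """Merge two lists of (color, z) pixels; smaller z wins (nearer)."""
    out = []
    for (color_a, z_a), (color_b, z_b) in zip(image_a, image_b):
        out.append((color_a, z_a) if z_a <= z_b else (color_b, z_b))
    return out

# Two 4-pixel partial renderings from separate processors:
a = [((255, 0, 0), 0.3), ((0, 255, 0), 0.9),
     ((0, 0, 255), 0.5), ((0, 0, 0), 1.0)]
b = [((9, 9, 9), 0.7), ((8, 8, 8), 0.2),
     ((7, 7, 7), 0.5), ((6, 6, 6), 0.4)]
merged = composite(a, b)
```

Composing attributes instead of color, as PixelFlow's deferred shading does, simply means carrying normals, material IDs, and so on through the same merge before a final shading pass.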

With specialized rendering processors, these systems have been able to implement some more advanced rendering features, such as Bishop and Weimer's fast Phong shading [BW]. In addition, the generality of the processor-enhanced memories permits presorting transparent polygons when possible and using a multi-pass approach for complex intersecting surfaces. They do, however, introduce some additional problems and limitations beyond those described in the preceding sections. This includes shadow-volume processing, which does not readily fit into Pixel-Planes' screen-division bucket sorting strategy [FPE].

Chapter

Conclusion

With multi-pass pipeline rendering, we have presented a platform for bridging the gap between static off-line rendering systems and dynamic hardware-based graphics. We demonstrated a practical implementation of shadow and light volumes and incorporated this into a recursive paradigm permitting interaction with specular surfaces. This includes the specular direct component, similar to Radiance's [War] virtual lights for planar specular surfaces. We showed an extension to projected textures to approximate complex material transmission, and have tried to wrestle as much physical realism out of the lighting model itself without compromising performance. Consistent with the broad spectrum of achievable quality, we also presented a method to even include indirect lighting effects. Even with minimal viewpoint culling, this pipeline rendering method demonstrated typical performance rates many times that of ray tracing for our test environments. Future work will focus on taking full advantage of available culling strategies for dynamic environments.

Contributions

We have described a series of techniques for adding realism to interactive environments and making these environments more visually comprehensible. These techniques have the common thread of using the hardware rendering pipeline itself to produce illumination effects commonly found only in non-interactive renderers such as ray tracers. These techniques exploit implicit pixel-parallel operations and therefore have application not only in the current multi-pass pipelined method, but also in a multi-threaded operation on rendering architectures which support such coherence.

In summary, the primary individual contributions of this work are:

- Specular surface rendering, including:
  - Recursive specular surfaces through secondary viewpoint image mapping
  - Corrective image transform for refractive surfaces
  - Back-to-front and front-to-back transparency blending through an extended Dual (Tri) Z-buffer
  - Light-scattering (i.e., translucent) specular surfaces
- Practical shadow-volume rendering techniques:
  - Recursive specular shadow and light volume rendering via virtual light sources
  - Light-volume filtering through texture projection
  - Coordination of shadow-volume and specular surface rendering techniques
- Presentation of a pipeline-based illumination model, including:
  - Global Direct Illumination: jitter and accumulation effects
  - Local Direct Illumination: Gaussian-fit Phong model
  - Indirect Illumination: integration of an indirect radiosity component
- Scene uncluttering through geometry clipping and transparency surfaces

Future Work

While these contributions have been demonstrated in a practical, realistic, and dynamic rendering system, there are future extensions and areas of research to be explored. While many of these extensions rely on hardware-based innovations, some derive from research from other rendering techniques. Two of the most important topics of research for interactive rendering include:

- Fast intersection methods for view-in-volume checks and for subdivision for partial intersection of silhouette faces with specular surfaces
- Viewpoint culling methods for dynamic environments

Intersection determination is a much-studied area of research for ray tracing systems [Gla] as well as for collision detection [GF]. Viewpoint culling and environment partitioning methods are used in architectural walkthrough systems [TS, ARB] and in ray tracing subdivision [FTI] strategies, but most of this work relies on having a static environment or limitations on the geometric layout. Use of these methods to aid in visibility checks between specular surfaces can also be used to reduce the number of scene renderings.

Other extensions focus on the rendering quality of the methods, and include anisotropic reflections [War], which could be simulated based on the reflecting plane's orientation. In this vein, the lighting model itself could be further modified to be modulated by the surface orientation to include grazing-effect specularity.

In addition to the improvements which are possible in speed and rendering, further research is needed into the perceptual aspects of the constituent effects in a progressive refinement implementation. More specifically, the importance of each of the described effects at varying levels of detail and recursive depths needs to be investigated based on psychovisual perception metrics. While we have gained some insight into the desired tradeoffs, such as the number of specular surfaces being generally more important than the recursive depth, we have not focused on ascertaining the qualitative weight of these varying factors based on given timing criteria. As our framework mirrors a progressive refinement approach, adjustment of the system parameters is critical in creation of the most realistically-perceived image in an interactive application. Adapting this system to real-time constraints is still an open-ended topic.

This work also demonstrates the need to develop more open graphics architectures which permit pipeline and image control while providing parallelism to exploit the independent pass rendering and image coherence of this technique. As described earlier, this technique is readily applicable to both image- and object-parallel architectures, and is especially useful in virtual pixel architectures such as the extinct Stellar GS family. A more concentrated effort on an open transformation pipeline, a better illumination model, and increased pixel operation support would potentially provide for greater realism than the focus on simply more polygons and faster texture mapping. Exploiting the inherent parallelism of our methods further makes this approach desirable for increased realism in future hardware systems.

Conclusion

In summary, we have focused on using multi-pass pipeline-based rendering techniques from existing or slightly modified architectures to provide highly interactive, comprehensible, and realistic environments. With the high investment in existing architectures, both monetarily and in terms of the software base, the motivation exists to use the provided platform for greater realism and interaction.

While there will always be a need for complex, very accurate rendering packages, many situations require fast, approximate solutions. The techniques outlined in this thesis provide such solutions for creating realistic illumination features in complex, interactive, and user-comprehensible environments.

This work intends not only to demonstrate the quality of effects achievable through pipeline rendering, but also to serve as a call for more focus on the use of the graphics hardware to perform realistic rendering. Indeed, many more extensions of this methodology are possible with current hardware, from achieving a better lighting model through individual vertex normal modulation, to use of multiple processors. With other hardware platforms and future hardware systems, parallel rendering pipelines may be able to exploit the independent nature of our multi-pass process. In this vein, we also call for more open, accessible hardware pipelines which provide access in ways which the developers may never have imagined or intended.

Appendix A

Projective Maps

2-D Quadrilateral Projective Map

The mapping of a quadrilateral to a quadrilateral can be accomplished by composing the mapping of a quadrilateral-to-square with a square-to-quadrilateral. The two mappings are adjoints of each other symbolically, and therefore only the square-to-quadrilateral mapping will be given:

    M_sq = | a  d  g |
           | b  e  h |
           | c  f  i |

where, writing the quadrilateral corners (x_0, y_0), (x_1, y_1), (x_2, y_2), (x_3, y_3) for the images of the unit-square corners (0,0), (1,0), (1,1), (0,1), and

    Σx = x_0 − x_1 + x_2 − x_3        Σy = y_0 − y_1 + y_2 − y_3

    g = ( Σx·(y_3 − y_2) − (x_3 − x_2)·Σy ) / ( (x_1 − x_2)·(y_3 − y_2) − (x_3 − x_2)·(y_1 − y_2) )

    h = ( (x_1 − x_2)·Σy − Σx·(y_1 − y_2) ) / ( (x_1 − x_2)·(y_3 − y_2) − (x_3 − x_2)·(y_1 − y_2) )

    a = x_1 − x_0 + g·x_1        d = y_1 − y_0 + g·y_1
    b = x_3 − x_0 + h·x_3        e = y_3 − y_0 + h·y_3
    c = x_0                      f = y_0                      i = 1

This mapping is also equal to the perspective transform of the camera rotation with respect to the normal of the refracting plane, which is available directly from the matrix stack.
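The square-to-quadrilateral construction can be written directly in code. The following sketch follows the formulas above under the stated corner-ordering convention; the function names are illustrative.

```python
# Square-to-quadrilateral projective map (the construction given above).
# quad = [(x0,y0), (x1,y1), (x2,y2), (x3,y3)] are the images of the
# unit-square corners (0,0), (1,0), (1,1), (0,1). Returns the 3x3 matrix
# [[a,d,g],[b,e,h],[c,f,i]] in the row-vector convention
# [x', y', w'] = [u, v, 1] . M, with (x, y) = (x'/w', y'/w').

def square_to_quad(quad):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    sx, sy = x0 - x1 + x2 - x3, y0 - y1 + y2 - y3
    if sx == 0 and sy == 0:            # affine case: no foreshortening
        g = h = 0.0
    else:
        dx1, dx2 = x1 - x2, x3 - x2
        dy1, dy2 = y1 - y2, y3 - y2
        det = dx1 * dy2 - dx2 * dy1
        g = (sx * dy2 - dx2 * sy) / det
        h = (dx1 * sy - sx * dy1) / det
    a, d = x1 - x0 + g * x1, y1 - y0 + g * y1
    b, e = x3 - x0 + h * x3, y3 - y0 + h * y3
    return [[a, d, g], [b, e, h], [x0, y0, 1.0]]

def apply(M, u, v):
    """Map unit-square point (u, v) through M, with perspective divide."""
    xp = M[0][0] * u + M[1][0] * v + M[2][0]
    yp = M[0][1] * u + M[1][1] * v + M[2][1]
    w = M[0][2] * u + M[1][2] * v + M[2][2]
    return xp / w, yp / w
```

By construction the four unit-square corners map exactly to the four quadrilateral corners, which is a convenient sanity check on the matrix.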

 

3-D Quadrilateral Projective Map

Given the 2-D quadrilateral projective transform

    M_qq = | a  d  g |
           | b  e  h |
           | c  f  i |

we create the 3-D transform

    M'_qq = | a  d  0  g |
            | b  e  0  h |
            | 0  0  0  0 |
            | c  f  0  i |

which clears depth values; Z-buffering and fog are disabled during this pass.
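The promotion of the 2-D map to the 4x4 form above is mechanical: the third row and column are zeroed so every transformed point comes out with zero depth. A sketch, with illustrative helper names:

```python
# Promote the 2-D projective map M2 = [[a,d,g],[b,e,h],[c,f,i]] to the
# 4x4 form shown above: the third row and column are zeroed, so every
# transformed point has z' = 0 (depth cleared). Row-vector convention:
# [x', y', z', w'] = [x, y, z, 1] . M4.

def promote_to_3d(M2):
    (a, d, g), (b, e, h), (c, f, i) = M2
    return [[a, d, 0.0, g],
            [b, e, 0.0, h],
            [0.0, 0.0, 0.0, 0.0],
            [c, f, 0.0, i]]

def apply4(M, x, y, z):
    """Transform (x, y, z) and perform the perspective divide."""
    p = [x, y, z, 1.0]
    xp, yp, zp, w = (sum(p[k] * M[k][j] for k in range(4))
                     for j in range(4))
    return xp / w, yp / w, zp / w
```

Note that the input z contributes nothing (its row is zero), which is exactly the depth-clearing behavior the text describes.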

Appendix B

Refraction Approximation

Linear Refraction Approximation

Heckbert and Hanrahan [HH] base their approximation of the refraction transformation on limiting its scope to paraxial rays. For these rays, objects viewed through a surface of relative refractive index η appear to be 1/η times their actual distance. Included is a reproduction of their diagram demonstrating their approximation as a scaling transformation perpendicular to the plane:

    P_t = P − (1 − 1/η_t)(L · P) N = M_t P

where L is the vector of coefficients of the plane equation, N is the normal to the plane, and P and P_t are the real and refracted points.

The corresponding transform M_t gives the paraxial approximation for the virtual focus point. This transform represents the scaling transformation perpendicular to the plane:

    M_t = I − (1 − 1/η_t) · | A²  AB  AC  AD |
                            | AB  B²  BC  BD |
                            | AC  BC  C²  CD |
                            |  0   0   0   0 |

where Ax + By + Cz + D = 0 is the plane equation, with unit normal N = (A, B, C), and η_t is the relative index of refraction.

Note that because this transformation is indeed perpendicular to the plane, the reflection transform M_r is simply determined by substituting η_t = −1:

    M_r = I − 2 · | A²  AB  AC  AD |
                  | AB  B²  BC  BD |
                  | AC  BC  C²  CD |
                  |  0   0   0   0 |
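Following the construction above, the matrix M_t (and M_r, via η_t = −1) can be built directly from the plane coefficients. The helper names below are illustrative; the sketch assumes a unit plane normal and column-vector convention.

```python
# Build the paraxial refraction matrix M_t = I - (1 - 1/eta) * S for the
# plane Ax + By + Cz + D = 0 with unit normal (A, B, C), as derived
# above; eta = -1 yields the reflection matrix M_r. Column-vector
# convention: p' = M . [x, y, z, 1].

def scaling_matrix(plane, eta):
    A, B, C, D = plane
    k = 1.0 - 1.0 / eta
    S = [[A * A, A * B, A * C, A * D],   # S = N . L^T, padded with a
         [A * B, B * B, B * C, B * D],   # zero fourth row so w' = w
         [A * C, B * C, C * C, C * D],
         [0.0, 0.0, 0.0, 0.0]]
    return [[(1.0 if r == c else 0.0) - k * S[r][c] for c in range(4)]
            for r in range(4)]

def transform(M, p):
    """Apply M to point p = (x, y, z); w stays 1, so no divide needed."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(M[r][c] * v[c] for c in range(4)) for r in range(3))
```

As a sanity check against the 1/η scaling claim: for the plane z = 0 and η = 1.5, a point at depth −3 should map to depth −3/1.5 = −2, and η = −1 should reflect it to +3.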

Bibliography

[ABM] B. Apgar, B. Bersack, and A. Mammen. A display system for the Stellar graphics supercomputer model GS. In Computer Graphics (SIGGRAPH Proceedings), August.

[Air] J. Airey. Increasing Update Rates in the Building Walkthrough System with Automatic Model-Space Subdivision and Potentially Visible Set Calculations. PhD thesis, UNC Chapel Hill.

[Ake] K. Akeley. The Silicon Graphics DGTX superworkstation. IEEE Computer Graphics and Applications, July.

[Akea] K. Akeley. RealityEngine graphics. In Computer Graphics (SIGGRAPH Proceedings), August.

[Akeb] K. Akeley. Simple capping demonstration. Appeared in SGI Developer's Toolbox Release, January.

[App] A. Appel. Some techniques for shading machine renderings of solids. In AFIPS Spring Joint Computer Conference.

[ARB] J. Airey, J. Rohlf, and F. Brooks, Jr. Towards image realism with interactive update rates in complex virtual building environments. Computer Graphics (Symposium on Interactive 3D Graphics), March.

[Arv] J. Arvo. Backward ray tracing. In SIGGRAPH Developments in Ray Tracing seminar notes, August.

[Arv] J. Arvo. Applications of irradiance tensors to the simulation of non-Lambertian phenomena. In Robert Cook, editor, Computer Graphics (SIGGRAPH Proceedings).

[BB] L. Brotman and N. Badler. Generating soft shadows with a depth buffer algorithm. IEEE Computer Graphics and Applications, October.

[BE] M. Bunker and R. Economy. Evolution of GE CIG systems. SCSD publication, General Electric Company, Daytona Beach, FL.

[Bli] J. Blinn. Models of light reflection for computer synthesized pictures. Computer Graphics (SIGGRAPH Proceedings), July.

[BW] G. Bishop and D. Weimer. Fast Phong shading. Computer Graphics (SIGGRAPH Proceedings), August.

[BWC] D. Baum, J. Wallace, and M. Cohen. The back-buffer algorithm: an extension of the radiosity method to dynamic environments. The Visual Computer.

[Car] L. Carpenter. The A-buffer, an antialiased hidden surface method. In Computer Graphics (SIGGRAPH Proceedings), July.

[Cat] E. Catmull. A Subdivision Algorithm for Computer Display of Curved Surfaces. PhD thesis, Department of Computer Science, University of Utah, December.

[CB] M. Cyrus and J. Beck. Generalized two- and three-dimensional clipping. Computers & Graphics.

[CF] S. Chattopadhyay and A. Fujimoto. Bi-directional ray tracing. In Tosiyasu L. Kunii, editor, Computer Graphics (Proceedings of CG International), Tokyo. Springer-Verlag.

[Che] S. E. Chen. Incremental radiosity: An extension of progressive radiosity to an interactive image synthesis system. Computer Graphics (SIGGRAPH Proceedings), August.

[Cla] J. Clark. The Geometry Engine: A VLSI geometry system for graphics. Computer Graphics (SIGGRAPH Proceedings), July.

[CMS] B. Cabral, N. Max, and R. Springmeyer. Bidirectional reflection functions from surface bump maps. In Computer Graphics (SIGGRAPH Proceedings), July.

[CPC] R. Cook, T. Porter, and L. Carpenter. Distributed ray tracing. Computer Graphics (SIGGRAPH Proceedings), July.

[CRMT] S. E. Chen, H. Rushmeier, G. Miller, and D. Turner. A progressive multi-pass method for global illumination. Computer Graphics (SIGGRAPH Proceedings), July.

[Cro] F. Crow. Shadow algorithms for computer graphics. Computer Graphics (SIGGRAPH Proceedings), July.

[CT] R. Cook and K. Torrance. A reflectance model for computer graphics. ACM Transactions on Graphics, January.

[CW] S. E. Chen and L. Williams. View interpolation for image synthesis. Computer Graphics (SIGGRAPH Proceedings), August.

[DAG] J. Dorsey, J. Arvo, and D. Greenberg. Interactive design of complex time-dependent lighting. IEEE Computer Graphics and Applications, March.

[DB] P. Diefenbach and N. Badler. Pipeline rendering: Interactive refractions, reflections, and shadows. Displays (Special Issue on Interactive Computer Graphics).

[Dem] S. Demetrescu. A VLSI-based real-time hidden-surface elimination display system. Master's thesis, Department of Computer Graphics, California Institute of Technology, May.

[Dor] J. Dorsey. Computer Graphics Techniques for Opera Lighting Design and Simulation. PhD thesis, Program of Computer Graphics, Cornell University, January.

[DSG] J. Dorsey, F. Sillion, and D. Greenberg. Design and simulation of opera lighting and projection effects. Computer Graphics (SIGGRAPH Proceedings), July.

[FP] H. Fuchs and J. Poulton. Pixel-Planes: a VLSI-oriented design for 3-D raster graphics. Proc. of the Canadian Man-Computer Communications Conf.



[FPE] H. Fuchs, J. Poulton, J. Eyles, T. Greer, J. Goldfeather, D. Ellsworth, S. Molnar, G. Turk, B. Tebbs, and L. Israel. Pixel-Planes 5: A heterogeneous multiprocessor graphics system using processor-enhanced memories. In Computer Graphics (SIGGRAPH Proceedings), July.

[FR] D. Fussell and B. Rathi. A VLSI-oriented architecture for real-time raster display of shaded polygons. In Proceedings of Graphics Interface. Canadian Information Processing Society, May.

[FTI] A. Fujimoto, T. Tanaka, and K. Iwata. ARTS: Accelerated ray-tracing system. IEEE Computer Graphics and Applications, April.

[Fuc] H. Fuchs. Distributing a visible surface algorithm over multiple processors. In Proceedings of the ACM Annual Conference, Seattle, WA, October.

[FvDFH] J. Foley, A. van Dam, S. Feiner, and J. Hughes. Computer Graphics: Principles and Practice, 2nd Edition. Addison-Wesley.

[FW] F. Fisher and A. Woo. R.E versus N.H specular highlights. In Paul Heckbert, editor, Graphics Gems IV. Academic Press, Boston.

[FYT] D. Forsyth, C. Yang, and K. Teo. Efficient radiosity in dynamic environments. Fifth Eurographics Workshop on Rendering, June.

[GF] E. Gilbert and C. Foo. Computing the distance between general convex objects in three-dimensional space. IEEE Transactions on Robotics and Automation, February.

[GHF] J. Goldfeather, J. Hultquist, and H. Fuchs. Fast constructive solid geometry display in the Pixel-Powers graphics system. In Computer Graphics (SIGGRAPH Proceedings), August.

[GK] N. Greene and M. Kass. Hierarchical Z-buffer visibility. In Computer Graphics (SIGGRAPH Proceedings).

[Gla] A. Glassner, editor. Introduction to Ray Tracing. Academic Press, Palo Alto, CA.

[GN] R. Goldstein and R. Nagel. 3-D visual simulation. Simulation, January.

[GSG] D. George, F. Sillion, and D. Greenberg. Radiosity redistribution for dynamic environments. IEEE Computer Graphics and Applications, July.

[GTGB] C. Goral, K. Torrance, D. Greenberg, and B. Battaile. Modeling the interaction of light between diffuse surfaces. Computer Graphics (SIGGRAPH Proceedings), July.

[HA] P. Haeberli and K. Akeley. The accumulation buffer: Hardware support for high-quality rendering. Computer Graphics (SIGGRAPH Proceedings), August.

[Hec] P. Heckbert. Fundamentals of texture mapping and image warping. Master's thesis, University of California, Berkeley.

[Hei] T. Heidmann. Real shadows, real time. IRIS Universe, November-December.

[Hew] W. Hewitt. PHIGS: programmer's hierarchical interactive graphics system. Computer Graphics Forum, December.

[HG] R. Hall and D. Greenberg. A testbed for realistic image synthesis. IEEE Computer Graphics and Applications, November.

[HH] P. Heckbert and P. Hanrahan. Beam tracing polygonal objects. Computer Graphics (SIGGRAPH Proceedings), July.

[HTSG] X. He, K. Torrance, F. Sillion, and D. Greenberg. A comprehensive model for light reflection. Computer Graphics (SIGGRAPH Proceedings), July.

[Kaj] J. Kajiya. The rendering equation. Computer Graphics (SIGGRAPH Proceedings), August.

[KE] G. Kedem and J. Ellis. The ray casting machine. In Proceedings of the IEEE International Conference on Computer Design: VLSI in Computers (ICCD). IEEE Computer Society Press.

[KG] D. Kay and D. Greenberg. Transparency for computer synthesized images. Computer Graphics (SIGGRAPH Proceedings), August.

[KGP] M. Kelley, K. Gould, B. Pease, S. Winner, and A. Yen. Hardware accelerated rendering of CSG and transparency. In Computer Graphics (SIGGRAPH Proceedings), July.

[KNN] K. Kaneda, E. Nakamae, T. Nishita, H. Tanaka, and T. Noguchi. Three-dimensional terrain modeling and display for environment assessment. Computer Graphics (SIGGRAPH Proceedings), July.

[KSC] E. C. Kingsley, N. A. Schofield, and K. Case. SAMMIE. Computer Graphics (SIGGRAPH Proceedings), August.

[KV] J. Kajiya and B. Von Herzen. Ray tracing volume densities. In Computer Graphics (SIGGRAPH Proceedings), July.

[KWG] M. Kelley, S. Winner, and K. Gould. A scalable hardware render accelerator using a modified scanline algorithm. In Computer Graphics (SIGGRAPH Proceedings), July.

[Mam] A. Mammen. Transparency and antialiasing algorithms implemented with the virtual pixel maps technique. IEEE Computer Graphics and Applications, July.

[Mea] D. Meagher. The Octree Encoding Method for Efficient Solid Modeling. PhD thesis, Electrical Engineering Dept., Rensselaer Polytechnic Institute.

[MEP] S. Molnar, J. Eyles, and J. Poulton. PixelFlow: High-speed rendering using image composition. In Computer Graphics (SIGGRAPH Proceedings), July.

[MS] S. Muller and F. Schoeffel. Fast radiosity repropagation for interactive virtual environments using a shadow-form-factor list. In Fifth Eurographics Workshop on Rendering, Darmstadt, Germany, June.

[NDR] J. Nimeroff, J. Dorsey, and H. Rushmeier. A framework for global illumination in animated environments. In Sixth Eurographics Workshop on Rendering, Dublin, Ireland, June.

[NIM] H. Niimi, Y. Imai, M. Murakami, S. Tomita, and H. Hagiwara. A parallel processor system for three-dimensional color graphics. In Computer Graphics (SIGGRAPH Proceedings), July.

[Nis] T. Nishita. A shading model for atmospheric scattering considering luminous intensity distribution of light sources. Computer Graphics (SIGGRAPH Proceedings), July.

[NNS] M. Newell, R. Newell, and T. Sancha. A solution to the hidden surface problem. Proceedings of the ACM National Meeting.

[NOK] H. Nishimura, H. Ohno, T. Kawata, I. Shirakawa, and K. Omura. LINKS: A parallel pipelined multi-microcomputer system for image creation. Proc. of the Symposium on Computer Architecture, SIGARCH, June.

[NRH] F. Nicodemus, J. Richmond, J. Hsia, I. Ginsberg, and T. Limperis. Geometrical considerations and nomenclature for reflectance. NBS Monograph, National Bureau of Standards, US Dept. of Commerce, Washington, DC, October.

[Pho] B. T. Phong. Illumination for computer generated pictures. Communications of the ACM, June.

[PSV] C. Puech, F. Sillion, and C. Vedel. Improving interaction with radiosity-based lighting simulation programs. In Computer Graphics (Symposium on Interactive 3D Graphics), March.

[Rot] S. Roth. Ray casting for modelling solids. Computer Graphics and Image Processing, February.

[RSC] W. Reeves, D. Salesin, and R. Cook. Rendering antialiased shadows with depth maps. In Computer Graphics (SIGGRAPH Proceedings), July.

[SAWG] F. Sillion, J. Arvo, S. Westin, and D. Greenberg. A global illumination solution for general reflectance distributions. Computer Graphics (SIGGRAPH Proceedings), July.

[Scha] E. Schmid. Beholding as in a Glass. Herder and Herder, New York.

[Schb] L. Schumaker. Some algorithms for the computation of interpolating and approximating spline functions. In T. Greville, editor, Theory and Applications of Spline Functions. Academic Press.

[Sch] C. Schlick. A fast alternative to Phong's specular model. In Paul Heckbert, editor, Graphics Gems IV. Academic Press, Boston.

[SGI] Graphics Programmer's Guide. Silicon Graphics, Inc., Mountain View, CA.

[SH] R. Siegel and J. Howell. Thermal Radiation Heat Transfer. Hemisphere Publishing Corp., Washington, DC.

[SKvW] M. Segal, C. Korobkin, R. van Widenfelt, J. Foran, and P. Haeberli. Fast shadows and lighting effects using texture mapping. Computer Graphics (SIGGRAPH Proceedings), July.

[SSa] D. Salesin and J. Stolfi. The ZZ-buffer: a simple and efficient rendering algorithm with reliable antialiasing. In Proceedings of the PIXIM.

[SSb] C. Sequin and E. Smyrl. Parameterized ray tracing. Computer Graphics (SIGGRAPH Proceedings), July.

[SS] A. Schilling and W. Strasser. EXACT: Algorithm and hardware architecture for an improved A-buffer. In Computer Graphics (SIGGRAPH Proceedings), August.

[Sut] I. Sutherland. Sketchpad: A man-machine graphical communication system. In Proceedings of the Spring Joint Computer Conference.

[tHKT] P. ten Hagen, A. Kujik, and C. Trienekens. Display architecture for VLSI-based graphics workstations. Advances in Computer Graphics Hardware I (Record of First Eurographics Workshop on Graphics Hardware).

[TR] J. Tumblin and H. Rushmeier. Tone reproduction for realistic images. IEEE Computer Graphics and Applications, November.

[TS] K. Torrance and E. Sparrow. Theory for off-specular reflection from roughened surfaces. Journal of the Optical Society of America.

[TS] S. Teller and C. Sequin. Visibility preprocessing for interactive walkthroughs. Computer Graphics (SIGGRAPH Proceedings), July.

[VF] D. Voorhies and J. Foran. Reflection vector shading hardware. Computer Graphics (SIGGRAPH Proceedings), July.

[WA] K. Weiler and P. Atherton. Hidden surface removal using polygon area sorting. Computer Graphics (SIGGRAPH Proceedings), July.

[War] J. Warnock. A hidden-surface algorithm for computer generated halftone pictures. Technical Report TR, NTIS AD, University of Utah Computer Science Department.

[War] G. Ward. Real pixels. In J. Arvo, editor, Graphics Gems II. Academic Press, San Diego, CA.

[War] G. Ward. Measuring and modeling anisotropic reflection. Computer Graphics (SIGGRAPH Proceedings), July.

[War] G. Ward. The RADIANCE lighting simulation and rendering system. Computer Graphics (SIGGRAPH Proceedings), July.

[WAT] S. Westin, J. Arvo, and K. Torrance. Predicting reflectance functions from complex surfaces. Computer Graphics (SIGGRAPH Proceedings), July.

[WCG] J. Wallace, M. Cohen, and D. Greenberg. A two-pass solution to the rendering equation: A synthesis of ray tracing and radiosity methods. Computer Graphics (SIGGRAPH Proceedings), July.

[WHG] H. Weghorst, G. Hooper, and D. Greenberg. Improved computational methods for ray tracing. ACM Transactions on Graphics.

[Whi] T. Whitted. An improved illumination model for shaded display. Communications of the ACM, June.

[Wil] L. Williams. Casting curved shadows on curved surfaces. Computer Graphics (SIGGRAPH Proceedings), August.

[WW] A. Watt and M. Watt. Advanced Animation and Rendering Techniques: Theory and Practice. Addison-Wesley, New York, NY.

[ZPL] Y. Zhu, Q. Peng, and Y. Liang. PERIS: A programming environment for realistic image synthesis. Computers & Graphics.