
To appear in the SIGGRAPH 98 conference proceedings

Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-based Graphics with Global Illumination and High Dynamic Range Photography

Paul Debevec

University of California at Berkeley¹

ABSTRACT

We present a method that uses measured scene radiance and global illumination in order to add new objects to light-based models with correct lighting. The method uses a high dynamic range image-based model of the scene, rather than synthetic light sources, to illuminate the new objects. To compute the illumination, the scene is considered as three components: the distant scene, the local scene, and the synthetic objects. The distant scene is assumed to be photometrically unaffected by the objects, obviating the need for reflectance model information. The local scene is endowed with estimated reflectance model information so that it can catch shadows and receive reflected light from the new objects. Renderings are created with a standard global illumination method by simulating the interaction of light amongst the three components. A differential rendering technique allows for good results to be obtained when only an estimate of the local scene reflectance properties is known.

We apply the general method to the problem of rendering synthetic objects into real scenes. The light-based model is constructed from an approximate geometric model of the scene and by using a light probe to measure the incident illumination at the location of the synthetic objects. The global illumination solution is then composited into a photograph of the scene using the differential rendering technique. We conclude by discussing the relevance of the technique to recovering surface reflectance properties in uncontrolled lighting situations. Applications of the method include visual effects, interior design, and architectural visualization.

CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding - Intensity, color, photometry and thresholding; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Radiosity; I.4.1 [Image Processing]: Digitization - Scanning; I.4.8 [Image Processing]: Scene Analysis - Photometry, Sensor Fusion.

1 Introduction

Rendering synthetic objects into real-world scenes is an important application of computer graphics, particularly in architectural and visual effects domains. Oftentimes, a piece of furniture, a prop, or a digital creature or actor needs to be rendered seamlessly into a real scene. This difficult task requires that the objects be lit consistently with the surfaces in their vicinity, and that the interplay of light between the objects and their surroundings be properly simulated. Specifically, the objects should cast shadows, appear in reflections, and refract, focus, and emit light just as real objects would.

Figure 1: The General Method. In our method for adding synthetic objects into light-based scenes, the scene is partitioned into three components: the distant scene (light-based, no reflectance model), the local scene (estimated reflectance model), and the synthetic objects (known reflectance model). Global illumination is used to simulate the interplay of light amongst all three components, except that light reflected back at the distant scene is ignored. As a result, BRDF information for the distant scene is unnecessary. Estimates of the geometry and material properties of the local scene are used to simulate the interaction of light between it and the synthetic objects.

Currently available techniques for realistically rendering synthetic objects into scenes are labor intensive and not always successful. A common technique is to manually survey the positions of the light sources, and to instantiate a virtual light of equal color and intensity for each real light to illuminate the synthetic objects. Another technique is to photograph a reference object (such as a gray sphere) in the scene where the new object is to be rendered, and use its appearance as a qualitative guide in manually configuring the lighting environment. Lastly, the technique of reflection mapping is useful for mirror-like reflections. These methods typically require considerable hand-refinement and none of them easily simulates the effects of indirect illumination from the environment.

¹ Computer Science Division, University of California at Berkeley, Berkeley, CA 94720-1776. Email: [email protected]. More information and additional results may be found at: http://www.cs.berkeley.edu/~debevec/Research

Accurately simulating the effects of both direct and indirect lighting has been the subject of research in global illumination. With a global illumination algorithm, if the entire scene were modeled with its full geometric and reflectance (BRDF) characteristics, one could correctly render a synthetic object into the scene simply by adding it to the model and recomputing the global illumination solution. Unfortunately, obtaining a full geometric and reflectance model of a large environment is extremely difficult. Furthermore, global illumination solutions for large complex environments are extremely computationally intensive.

Moreover, it seems that having a full reflectance model of the large-scale scene should be unnecessary: under most circumstances, a new object will have no significant effect on the appearance of most of the distant scene. Thus, for such distant areas, knowing just its radiance (under the desired lighting conditions) should suffice.

Recently, [9] introduced a high dynamic range photographic technique that allows accurate measurements of scene radiance to be derived from a set of differently exposed photographs. This technique allows both low levels of indirect radiance from surfaces and high levels of direct radiance from light sources to be accurately recorded. When combined with image-based modeling techniques (e.g. [22, 24, 4, 10, 23, 17, 29]), and possibly active techniques for measuring geometry (e.g. [35, 30, 7, 27]), these derived radiance maps can be used to construct spatial representations of scene radiance.

We will use the term light-based model to refer to a representation of a scene that consists of radiance information, possibly with specific reference to light leaving surfaces, but not necessarily containing material property (BRDF) information. A light-based model can be used to evaluate the 5D plenoptic function [1] P(θ, φ, V_x, V_y, V_z) for a given virtual or real subset of space¹. A material-based model is converted to a light-based model by computing an illumination solution for it. A light-based model is differentiated from an image-based model in that its light values are actual measures of radiance², whereas image-based models may contain pixel values already transformed and truncated by the response function of an image acquisition or synthesis process.

In this paper, we present a general method for using accurate measurements of scene radiance in conjunction with global illumination to realistically add new objects to light-based models. The synthetic objects may have arbitrary material properties and can be rendered with appropriate illumination in arbitrary lighting environments. Furthermore, the objects can correctly interact with the environment around them: they cast the appropriate shadows, they are properly reflected, they can reflect and focus light, and they exhibit appropriate diffuse interreflection. The method can be carried out with commonly available equipment and software.

In this method (see Fig. 1), the scene is partitioned into three components. The first is the distant scene, which is the visible part of the environment too remote to be perceptibly affected by the synthetic object. The second is the local scene, which is the part of the environment which will be significantly affected by the presence of the objects. The third component is the synthetic objects. Our approach uses global illumination to correctly simulate the interaction of light amongst these three elements, with the exception that light radiated toward the distant environment will not be considered in the calculation. As a result, the BRDF of the distant environment need not be known; the technique uses BRDF information only for the local scene and the synthetic objects. We discuss the challenges in estimating the BRDF of the local scene, and methods for obtaining usable approximations. We also present a differential rendering technique that produces perceptually accurate results even when the estimated BRDF is somewhat inaccurate.

We demonstrate the general method for the specific case of rendering synthetic objects into particular views of a scene (such as background plates) rather than into a general image-based model. In this method, a light probe is used to acquire a high dynamic range panoramic radiance map near the location where the object will be rendered. A simple example of a light probe is a camera aimed at a mirrored sphere, a configuration commonly used for acquiring environment maps. An approximate geometric model of the scene is created (via surveying, photogrammetry, or 3D scanning) and mapped with radiance values measured with the light probe. The distant scene, local scene, and synthetic objects are rendered with global illumination from the same point of view as the background plate, and the results are composited into the background plate with a differential rendering technique.

1.1 Overview

The rest of this paper is organized as follows. In the next section we discuss work related to this paper. Section 3 introduces the basic technique of using acquired maps of scene radiance to illuminate synthetic objects. Section 4 presents the general method we will use to render synthetic objects into real scenes. Section 5 describes a practical technique based on this method using a light probe to measure incident illumination. Section 6 presents a differential rendering technique for rendering the local environment with only an approximate description of its reflectance. Section 7 presents a simple method to approximately recover the diffuse reflectance characteristics of the local environment. Section 8 presents results obtained with the technique. Section 9 discusses future directions for this work, and we conclude in Section 10.

2 Background and Related Work

The practice of adding new objects to photographs dates to the early days of photography in the simple form of pasting a cut-out from one picture onto another. While the technique conveys the idea of the new object being in the scene, it usually fails to produce an image that as a whole is a believable photograph. Attaining such realism requires a number of aspects of the two images to match. First, the camera projections should be consistent, otherwise the object may seem too foreshortened or skewed relative to the rest of the picture. Second, the patterns of film grain and film response should match. Third, the lighting on the object needs to be consistent with other objects in the environment. Lastly, the object needs to cast realistic shadows and reflections on the scene. Skilled artists found that by giving these considerations due attention, synthetic objects could be painted into still photographs convincingly.

In optical film compositing, the use of object mattes to prevent particular sections of film from being exposed made the same sort of cut-and-paste compositing possible for moving images. However, the increased demands of realism imposed by the dynamic nature of film made matching camera positions and lighting even more critical. As a result, care was taken to light the objects appropriately for the scene into which they were to be composited. This would still not account for the objects casting shadows onto the scene, so often these were painted in by an artist frame by frame [13, 2, 28]. Digital film scanning and compositing [26] helped make this process far more efficient.

Work in global illumination [16, 19] has recently produced algorithms (e.g. [31]) and software (e.g. [33]) to realistically simulate lighting in synthetic scenes, including indirect lighting with both specular and diffuse reflections. We leverage this work in order to create realistic renderings.

Some work has been done on the specific problem of compositing objects into photography. [25] presented a procedure for

¹ Time and wavelength dependence can be included to represent the general 7D plenoptic function as appropriate.
² In practice, the measures of radiance are with respect to a discrete set of spectral distributions such as the standard tristimulus model.
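The exposure-fusion idea behind the radiance maps of [9] can be illustrated with a simplified sketch (this is an illustration of the principle, not the algorithm of [9] itself). It assumes the inverse of the imaging system response function is already known, linearizes each exposure, divides by integration time, and blends the per-exposure radiance estimates with a weighting that favors mid-range pixel values, where the response is most reliable:

```python
import numpy as np

def fuse_exposures(images, exposure_times, inv_response):
    """Fuse differently exposed images of a static scene into a radiance map.

    images:         arrays of pixel values in [0, 255]
    exposure_times: sensor integration time of each image, in seconds
    inv_response:   maps a pixel value to relative linear exposure
                    (the inverse of the system response function f)
    """
    num = np.zeros(np.shape(images[0]), dtype=float)
    den = np.zeros(np.shape(images[0]), dtype=float)
    for img, t in zip(images, exposure_times):
        z = np.asarray(img, dtype=float)
        w = 1.0 - np.abs(z - 127.5) / 127.5   # hat weighting: zero at the extremes
        num += w * inv_response(z) / t        # radiance estimate from this exposure
        den += w
    return num / np.maximum(den, 1e-9)
```

With an ideal linear sensor (identity inverse response), a pixel reading 100 at 1/2 second and 200 at 1 second fuses to a relative radiance of 200, since both exposures agree once normalized by integration time.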

rendering architecture into background photographs using knowledge of the sun position and measurements or approximations of the local ambient light. For diffuse buildings in diffuse scenes, the technique is effective. The technique of reflection mapping (also called environment mapping) [3, 18] produces realistic results for mirror-like objects. In reflection mapping, a panoramic image is rendered or photographed from the location of the object. Then, the surface normals of the object are used to index into the panoramic image by reflecting rays from the desired viewpoint. As a result, the shiny object appears to properly reflect the desired environment³. However, the technique is limited to mirror-like reflection and does not account for objects casting light or shadows on the environment.

A common visual effects technique for having synthetic objects cast shadows on an existing environment is to create an approximate geometric model of the environment local to the object, and then compute the shadows from the various light sources. The shadows can then be subtracted from the background image. In the hands of professional artists this technique can produce excellent results, but it requires knowing the position, size, shape, color, and intensity of each of the scene's light sources. Furthermore, it does not account for diffuse reflection from the scene, and light reflected by the objects onto the scene must be handled specially.

To properly model the interaction of light between the objects and the local scene, we pose the compositing problem as a global illumination computation as in [14] and [12]. As in this work, we apply the effect of the synthetic objects in the lighting solution as a differential update to the original appearance of the scene. In the previous work an approximate model of the entire scene and its original light sources is constructed; the positions and sizes of the light sources are measured manually. Rough methods are used to estimate diffuse-only reflectance characteristics of the scene, which are then used to estimate the intensities of the light sources. [12] additionally presents a method for performing fast updates of the illumination solution in the case of moving objects. As in the previous work, we leverage the basic result from incremental radiosity [6, 5] that making a small change to a scene does not require recomputing the entire solution.

3 Illuminating synthetic objects with real light

In this section we propose that computer-generated objects be lit by actual recordings of light from the scene, using global illumination. Performing the lighting in this manner provides a unified and physically accurate alternative to manually attempting to replicate incident illumination conditions.

Accurately recording light in a scene is difficult because of the high dynamic range that scenes typically exhibit; this wide range of brightness is the result of light sources being relatively concentrated. As a result, the intensity of a source is often two to six orders of magnitude larger than the intensity of the non-emissive parts of an environment. However, it is necessary to accurately record both the large areas of indirect light from the environment and the concentrated areas of direct light from the sources since both are significant parts of the illumination solution.

Using the technique introduced in [9], we can acquire correct measures of scene radiance using conventional imaging equipment. The images, called radiance maps, are derived from a series of images with different sensor integration times and a technique for computing and accounting for the imaging system response function f. We can use these measures to illuminate synthetic objects exhibiting arbitrary material properties.

Fig. 2 shows a high-dynamic range lighting environment with electric, natural, and indirect lighting. This environment was recorded by taking a full dynamic range photograph of a mirrored ball on a table (see Section 5). A digital camera was used to acquire a series of images in one-stop exposure increments from 1 to 1/10000 second. The images were fused using the technique in [9]. The environment is displayed at three exposure levels (-0, -3.5, and -7.0 stops) to show its full dynamic range. Recovered RGB radiance values for several points in the scene and on the two major light sources are indicated; the color difference between the tungsten lamp and the sky is evident. A single low-dynamic range photograph would be unable to record the correct colors and intensities over the entire scene.

Fig. 3(a-e) shows the results of using this panoramic radiance map to synthetically light a variety of materials using the RADIANCE global illumination algorithm [33]. The materials are: (a) perfectly reflective, (b) rough gold, (c) perfectly diffuse gray material, (d) shiny green plastic, and (e) dull orange plastic. Since we are computing a full illumination solution, the objects exhibit self-reflection and shadows from the light sources as appropriate. Note that in (c) the protrusions produce two noticeable shadows of slightly different colors, one corresponding to the ceiling light and a softer shadow corresponding to the window.

The shiny plastic object in (d) has a 4 percent specular component with a Gaussian roughness of 0.04 [32]. Since the object's surface both blurs and attenuates the light with its rough specular component, the reflections fall within the dynamic range of our display device and the different colors of the light sources can be seen. In (e) the rough plastic diffuses the incident light over a much larger area.

To illustrate the importance of using high dynamic range radiance maps, the same renderings were produced using just one of the original photographs as the lighting environment. In this single image, similar in appearance to Fig. 2(a), the brightest regions had been truncated to approximately 2 percent of their true values. The rendering of the mirrored surface (f) appears similar to (a) since it is displayed in low-dynamic range printed form. Significant errors are noticeable in (g-j) since these materials blur the incident light. In (g), the blurring of the rough material darkens the light sources, whereas in (b) they remain saturated. Renderings (h-j) are very dark due to the missed light; thus we have brightened by a factor of eight on the right in order to make qualitative comparisons to (c-e) possible. In each it can be seen that the low-dynamic range image of the lighting environment fails to capture the information necessary to simulate correct color balance, shadows, and highlights.

Fig. 4 shows a collection of objects with different material properties illuminated by two different environments. A wide variety of light interaction between the objects and the environment can be seen. The (synthetic) mirrored ball reflects both the synthetic objects as well as the environment. The floating diffuse ball shows a subtle color shift along its right edge as it shadows itself from the windows and is lit primarily by the incandescent lamp in Fig. 4(a). The reflection of the environment in the black ball (which has a specular intensity of 0.04) shows the colors of the light sources, which are too bright to be seen in the mirrored ball. A variety of shadows, reflections, and focused light can be observed on the resting surface.

The next section describes how the technique of using radiance maps to illuminate synthetic objects can be extended to compute the proper photometric interaction of the objects with the scene. It also describes how high dynamic range photography and image-based modeling combine in a natural manner to allow the simulation of arbitrary (non-infinite) lighting environments.

4 The General Method

This section explains our method for adding new objects to light-based scene representations. As in Fig. 1, we partition our scene into three parts: the distant scene, the local scene, and the synthetic objects.

³ Using the surface normal indexing method, the object will not reflect itself. Correct self-reflection can be obtained through ray tracing.
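The darkening seen in renderings (h-j) can be reproduced with a toy calculation (the numbers below are illustrative assumptions, not measurements from the paper). A diffuse surface integrates incident radiance over all directions, so clipping a concentrated source to a low dynamic range before integrating discards most of the irradiance it contributes:

```python
import numpy as np

# Hypothetical 1D "environment": mostly dim surfaces plus one concentrated
# source four orders of magnitude brighter, as described in the text.
env = np.full(1000, 1.0)
env[500] = 10_000.0                       # concentrated light source

# A diffuse surface integrates incident radiance over all directions
# (equal solid-angle weights in this toy setup).
full_irradiance = env.mean()

# A single low dynamic range photograph truncates the source; here we
# clip at 255 "display" units before integrating.
ldr_irradiance = np.clip(env, 0, 255).mean()

ratio = ldr_irradiance / full_irradiance  # fraction of the light that survives
```

In this example the clipped environment retains only about 11 percent of the true irradiance, which is why the low dynamic range renderings come out far too dark despite the source still looking "bright" in the photograph.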


[Figure 2 annotations: recovered RGB radiance values, e.g. (11700, 7300, 2600) and (5700, 8400, 11800) on the two major light sources, and (18, 17, 19), (60, 40, 35), (620, 890, 1300) at other points in the scene.]

Figure 2: An omnidirectional radiance map. This full dynamic range lighting environment was acquired by photographing a mirrored ball balanced on the cap of a pen sitting on a table. The environment contains natural, electric, and indirect light. The three views of this image adjusted to (a) +0 stops, (b) -3.5 stops, and (c) -7.0 stops show that the full dynamic range of the scene has been captured without saturation. As a result, the image usefully records the direction, color, and intensity of all forms of incident light.

Figure 3: Illuminating synthetic objects with real light. (Top row: a,b,c,d,e) With full dynamic range measurements of scene radiance from Fig. 2. (Bottom row: f,g,h,i,j) With low dynamic range information from a single photograph of the ball. The right sides of images (h,i,j) have been brightened by a factor of six to allow qualitative comparison to (c,d,e). The high dynamic range measurements of scene radiance are necessary to produce proper lighting on the objects.

Figure 4: Synthetic objects lit by two different environments. (a) A collection of objects is illuminated by the radiance information in Fig. 2. The objects exhibit appropriate interreflection. (b) The same objects are illuminated by different radiance information obtained in an outdoor urban environment on an overcast day. The radiance map used for the illumination is shown in the upper left of each image. Candle holder model courtesy of Gregory Ward Larson.

We describe the geometric and photometric requirements for each of these components.

1. A light-based model of the distant scene

The distant scene is constructed as a light-based model. The synthetic objects will receive light from this model, so it is necessary that the model store true measures of radiance rather than low dynamic range pixel values from conventional images. The light-based model can take on any form, using very little explicit geometry [23, 17], some geometry [24], moderate geometry [10], or be a full 3D scan of an environment with view-dependent texture-mapped [11] radiance. What is important is for the model to provide accurate measures of incident illumination in the vicinity of the objects, as well as from the desired viewpoint. In the next section we will present a convenient procedure for constructing a minimal model that meets these requirements.

In the global illumination computation, the distant scene radiates light toward the local scene and the synthetic objects, but ignores light reflected back to it. We assume that no area of the distant scene will be significantly affected by light reflecting from the synthetic objects; if that were the case, the area should instead belong to the local scene, which contains the BRDF information necessary to interact with light. In the RADIANCE [33] system, this exclusively emissive behavior can be specified with the "glow" material property.

2. An approximate material-based model of the local scene

The local scene consists of the surfaces that will photometrically interact with the synthetic objects. It is this geometry onto which the objects will cast shadows and reflect light. Since the local scene needs to fully participate in the illumination solution, both its geometry and reflectance characteristics should be known, at least approximately. If the geometry of the local scene is not readily available with sufficient accuracy from the light-based model of the distant scene, there are various techniques available for determining its geometry through active or passive methods. In the common case where the local scene is a flat surface that supports the synthetic objects, its geometry is determined easily from the camera pose. Methods for estimating the BRDF of the local scene are discussed in Section 7.

Usually, the local scene will be the part of the scene that is geometrically close to the synthetic objects. When the local scene is mostly diffuse, the rendering equation shows that the visible effect of the objects on the local scene decreases as the inverse square of the distance between the two. Nonetheless, there is a variety of circumstances in which synthetic objects can significantly affect areas of the scene not in the immediate vicinity. Some common circumstances are:

- If there are concentrated light sources illuminating the object, then the object can cast a significant shadow on a distant surface collinear with it and the light source.

- If there are concentrated light sources and the object is flat and specular, it can focus a significant amount of light onto a distant part of the scene.

- If a part of the distant scene is flat and specular (e.g. a mirror on a wall), its appearance can be significantly affected by a synthetic object.

- If the synthetic object emits light (e.g. a synthetic laser), it can affect the appearance of the distant scene significantly.

These situations should be considered in choosing which parts of the scene should be considered local and which parts distant. Any part of the scene that will be significantly affected in its appearance from the desired viewpoint should be included as part of the local scene.

Since the local scene is a full BRDF model, it can be added to the global illumination problem as would any other object. The local scene may consist of any number of surfaces and objects with different material properties. For example, the local scene could consist of a patch of floor beneath the synthetic object to catch shadows as well as a mirror surface hanging on the opposite wall to catch a reflection. The local scene replaces the corresponding part of the light-based model of the distant scene.

Since it can be difficult to determine the precise BRDF characteristics of the local scene, it is often desirable to have only the change in the local scene's appearance be computed with the BRDF estimate; its appearance due to illumination from the distant scene is taken from the original light-based model. This differential rendering method is presented in Section 6.

3. Complete material-based models of the objects

The synthetic objects themselves may consist of any variety of shapes and materials supported by the global illumination software, including plastics, metals, emitters, and dielectrics such as glass and water. They should be placed in their desired geometric correspondence to the local scene.

Once the distant scene, local scene, and synthetic objects are properly modeled and positioned, the global illumination software can be used in the normal fashion to produce renderings from the desired viewpoints.

5 Compositing using a light probe

This section presents a particular technique for constructing a light-based model of a real scene suitable for adding synthetic objects at a particular location. This technique is useful for compositing objects into actual photography of a scene.

In Section 4, we mentioned that the light-based model of the distant scene needs to appear correctly in the vicinity of the synthetic objects as well as from the desired viewpoints. This latter requirement can be satisfied if it is possible to directly acquire radiance maps of the scene from the desired viewpoints. The former requirement, that the model appear photometrically correct in all directions in the vicinity of the synthetic objects, arises because this information comprises the incident light which will illuminate the objects.

To obtain this part of the light-based model, we acquire a full dynamic range omnidirectional radiance map near the location of the synthetic object or objects. One technique for acquiring this radiance map is to photograph a spherical first-surface mirror, such as a polished steel ball, placed at or near the desired location of the synthetic object⁴. This procedure is illustrated in Fig. 7(a). An actual radiance map obtained using this method is shown in Fig. 2.

The radiance measurements observed in the ball are mapped onto the geometry of the distant scene. In many circumstances this model can be very simple. In particular, if the objects are small and resting on a flat surface, one can model the scene as a horizontal plane for the resting surface and a large dome for the rest of the environment. Fig. 7(c) illustrates the ball image being mapped onto a table surface and the walls and ceiling of a finite room; Fig. 5 shows the resulting light-based model.

5.1 Mapping from the probe to the scene model

To precisely determine the mapping between coordinates on the ball and rays in the world, one needs to record the position of the ball

⁴ Parabolic mirrors combined with telecentric lenses [34] can be used to obtain hemispherical fields of view with a consistent principal point, if so desired.
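The plane-plus-dome scene model described above amounts to a simple ray intersection: each radiance sample from the probe is carried along its world-space direction until it hits either the resting surface or the surrounding dome. A minimal sketch follows; the probe location, table height, and dome radius are illustrative assumptions, not values from the paper:

```python
import numpy as np

def map_ray_to_model(origin, direction, table_height=0.0, dome_radius=50.0):
    """Intersect a radiance-sample ray with a plane-plus-dome scene model.

    origin:    3D probe location (z is up)
    direction: ray direction in world space
    Returns the 3D point on the model that receives this radiance sample.
    """
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    if d[2] < -1e-9:                       # ray heading downward: try the table plane
        t = (table_height - o[2]) / d[2]
        if t > 0:
            return o + t * d
    # otherwise intersect the dome: a large sphere centered at the model origin
    b = np.dot(o, d)
    t = -b + np.sqrt(b * b - (np.dot(o, o) - dome_radius ** 2))
    return o + t * d
```

For a probe one unit above the table, a straight-down sample lands on the table directly beneath it, while an upward sample lands on the dome at the chosen radius.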

relative to the camera, the size of the ball, and the camera parameters such as its location in the scene and focal length. With this information, it is straightforward to trace rays from the camera center through the pixels of the image, and reflect rays off the ball into the environment. Often a good approximation results from assuming the ball is small relative to the environment and that the camera's view is orthographic.

The data acquired from a single ball image will exhibit a number of artifacts. First, the camera (and possibly the photographer) will be visible. The ball, in observing the scene, interacts with it: the ball (and its support) can appear in reflections, cast shadows, and can reflect light back onto surfaces. Lastly, the ball will not reflect the scene directly behind it, and will poorly sample the area nearby. If care is taken in positioning the ball and camera, these effects can be minimized and will have a negligible effect on the final renderings. If the artifacts are significant, the images can be fixed manually in an image editing program or by selectively combining images of the ball taken from different directions; Fig. 6 shows a relatively artifact-free environment constructed using the latter method. We have found that combining two images of the ball taken ninety degrees apart from each other allows us to eliminate the camera's appearance and to avoid poor sampling.

5.2 Creating renderings

To render the objects into the scene, a synthetic local scene model is created as described in Section 4. Images of the scene from the desired viewpoint(s) are taken (Fig. 7(a)), and their position relative to the scene is recorded through pose-instrumented cameras or (as in our work) photogrammetry. The location of the ball in the scene is also recorded at this time. The global illumination software is then run to render the objects, local scene, and distant scene from the desired viewpoint (Fig. 7(d)).

The objects and local scene are then composited onto the background image. To perform this compositing, a mask is created by rendering the objects and local scene in white and the distant scene in black. If objects in the distant scene (which may appear in front of the objects or local scene from certain viewpoints) are geometrically modeled, they will properly obscure the local scene and the objects as necessary. This compositing can be considered as a subset of the general method (Section 4) wherein the light-based model of the distant scene acts as follows: if $(\theta, \phi, V_x, V_y, V_z)$ corresponds to an actual view of the scene, return the radiance value looking in direction $(\theta, \phi)$. Otherwise, return the radiance value obtained by casting the ray $(\theta, \phi, V_x, V_y, V_z)$ onto the radiance-mapped distant scene model.

In the next section we describe a more robust method of compositing the local scene into the background image.
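The reflection of viewing rays off the mirrored ball, under the small-ball, orthographic-camera approximation described above, can be sketched as follows. The function name and the normalized coordinate convention are ours, not from the paper's system:

```python
import numpy as np

def probe_direction(u, v):
    """Map normalized mirror-ball image coordinates (u, v), with the
    ball silhouette filling the unit circle, to a world-space
    reflection direction.

    Assumes the approximation from the text: the ball is small relative
    to the environment and the camera is orthographic, so every viewing
    ray travels along -z.
    """
    r2 = u * u + v * v
    if r2 > 1.0:
        raise ValueError("point lies outside the ball silhouette")
    # Sphere surface normal at this pixel (camera looks along -z).
    n = np.array([u, v, np.sqrt(1.0 - r2)])
    d = np.array([0.0, 0.0, -1.0])  # orthographic viewing ray
    # Mirror reflection: r = d - 2 (d . n) n
    return d - 2.0 * np.dot(d, n) * n
```

The center of the ball (u = v = 0) reflects straight back toward the camera, while the silhouette edge reflects to directly behind the ball; directions near the edge are therefore sampled very sparsely, which is the poor-sampling artifact noted above.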

Figure 6: Rendering with a Combined Probe Image. The full dynamic range environment map shown at the top was assembled from two light probe images taken ninety degrees apart from each other. As a result, the only visible artifact is a small amount of the probe support visible on the floor. The map is shown at -4.5, 0, and +4.5 stops. The bottom rendering was produced using this lighting information, and exhibits diffuse and specular reflections, shadows from different sources of light, reflections, and caustics.

6 Improving quality with differential rendering

The method we have presented so far requires that the local scene be modeled accurately in both its geometry and its spatially varying material properties. If the model is inaccurate, the appearance of the local scene will not be consistent with the appearance of the adjacent distant scene. Such a border is readily apparent in Fig. 8(c), since the local scene was modeled with a homogeneous BRDF when in reality it exhibits a patterned albedo (see [21]). In this section we describe a method for greatly reducing such effects.

Suppose that we compute a global illumination solution for the local and distant scene models without including the synthetic objects. If the BRDF and geometry of the local scene model were perfectly accurate, then one would expect the appearance of the rendered local scene to be consistent with its appearance in the light-based model of the entire scene. Let us call the appearance of the local scene from the desired viewpoint in the light-based model $LS_b$. In the context of the method described in Section 5, $LS_b$ is simply the background image. We will let $LS_{noobj}$ denote the appearance of the local scene, without the synthetic objects, as calculated by the global illumination solution. The error in the rendered local scene (without the objects) is thus: $Err_{ls} = LS_{noobj} - LS_b$. This error results from the difference between the BRDF characteristics of the actual local scene as compared to the modeled local scene.

Let $LS_{obj}$ denote the appearance of the local environment as calculated by the global illumination solution with the synthetic objects in place. We can compensate for the error if we compute our final rendering $LS_{final}$ as:

$$LS_{final} = LS_{obj} - Err_{ls}$$

Equivalently, we can write:

$$LS_{final} = LS_b + (LS_{obj} - LS_{noobj})$$

In this form, we see that whenever $LS_{obj}$ and $LS_{noobj}$ are the same (i.e. the addition of the objects to the scene had no effect on the local scene) the final rendering of the local scene is equivalent to $LS_b$ (e.g. the background plate). When $LS_{obj}$ is darker than $LS_{noobj}$, light is subtracted from the background to form shadows.
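Per pixel, this differential compositing can be sketched as follows, assuming linear-radiance images stored as numpy arrays. The function and argument names are ours, and the clamping of negative values is a safeguard we add, since the error term can exceed $LS_{obj}$:

```python
import numpy as np

def differential_composite(bg, ls_obj, ls_noobj, obj_mask):
    """Differential rendering composite; names are illustrative.

    bg       -- background plate (the light-based appearance LS_b)
    ls_obj   -- global illumination solution with the synthetic objects
    ls_noobj -- the same solution without the objects
    obj_mask -- 1.0 where a synthetic object covers the pixel, else 0.0

    All arguments are float arrays of linear radiance values with the
    same shape.
    """
    # LS_final = LS_b + (LS_obj - LS_noobj): shadows subtract light
    # from the background, reflections and caustics add light to it.
    ls_final = bg + (ls_obj - ls_noobj)
    # The error term can exceed LS_obj, driving pixels negative; clamp.
    ls_final = np.maximum(ls_final, 0.0)
    # The objects themselves are copied directly from the
    # object-inclusive solution using the object matte.
    return obj_mask * ls_obj + (1.0 - obj_mask) * ls_final
```

Where the object mask is zero, only the change caused by the objects reaches the background, so errors in the modeled local scene BRDF cancel to first order.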

Figure 5: A Light-Based Model. A simple light-based model of a room is constructed by mapping the image from a light probe onto a box (faces labeled Up, Down, North, South, East, and West). The box corresponds to the upper half of the room, with the bottom face of the box being coincident with the top of the table. The model contains the full dynamic range of the original scene, which is not reproduced in its entirety in this figure.
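The direction-to-face lookup such a box model implies, finding which face a ray from the probe location hits, can be sketched as follows. The box half-extents used here are arbitrary placeholders, not the measured dimensions of the room in the paper:

```python
import numpy as np

def box_face(d, half=(2.0, 1.5, 1.25)):
    """Return which face of an axis-aligned box a ray from the box
    center hits, and the hit point, as when mapping a probe direction
    onto the room box of Figure 5.

    d    -- direction from the center of the box (need not be unit)
    half -- box half-extents (placeholder values)
    """
    d = np.asarray(d, dtype=float)
    half = np.asarray(half, dtype=float)
    # Parametric distance to each pair of slabs; zero components of d
    # yield inf, which argmin then ignores.
    with np.errstate(divide="ignore"):
        t = half / np.abs(d)
    axis = int(np.argmin(t))       # the slab the ray reaches first
    sign = 1 if d[axis] > 0 else -1
    hit = d * t[axis]              # point on the box surface
    return axis, sign, hit
```

The returned face and hit point would then index into the corresponding region of the remapped probe image.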

When $LS_{obj}$ is lighter than $LS_{noobj}$, light is added to the background to produce reflections and caustics.

Stated more generally, the appearance of the local scene without the objects is computed with the correct reflectance characteristics lit by the correct environment, and the change in appearance due to the presence of the synthetic objects is computed with the modeled reflectance characteristics as lit by the modeled environment. While the realism of $LS_{final}$ still benefits from having a good model of the reflectance characteristics of the local scene, the perceptual effect of small errors in albedo or specular properties is considerably reduced. Fig. 8(g) shows a final rendering in which the local environment is computed using this differential rendering technique. The objects are composited into the image directly from the $LS_{obj}$ solution shown in Fig. 8(c).

It is important to stress that this technique can still produce arbitrarily wrong results depending on the amount of error in the estimated local scene BRDF and the inaccuracies in the light-based model of the distant scene. In fact, $Err_{ls}$ may be larger than $LS_{obj}$, causing $LS_{final}$ to be negative. An alternate approach is to compensate for the relative error in the appearance of the local scene: $LS_{final} = LS_b \, (LS_{obj} / LS_{noobj})$. Inaccuracies in the local scene BRDF will also be reflected in the objects.

In the next section we discuss techniques for estimating the BRDF of the local scene.

7 Estimating the local scene BRDF

Simulating the interaction of light between the local scene and the synthetic objects requires a model of the reflectance characteristics of the local scene. Considerable recent work [32, 20, 8, 27] has presented methods for measuring the reflectance properties of materials through observation under controlled lighting configurations. Furthermore, reflectance characteristics can also be measured with commercial radiometric devices.

It would be more convenient if the local scene reflectance could be estimated directly from observation. Since the light-based model contains information about the radiance of the local scene as well as its irradiance, it actually contains information about the local scene reflectance. If we hypothesize reflectance characteristics for the local scene, we can illuminate the local scene with its known irradiance from the light-based model. If our hypothesis is correct, then the appearance should be consistent with the measured appearance. This suggests the following iterative method for recovering the reflectance properties of the local scene:

1. Assume a reflectance model for the local scene (e.g. diffuse only, diffuse + specular, metallic, or arbitrary BRDF, including spatial variation)
2. Choose approximate initial values for the parameters of the reflectance model
3. Compute a global illumination solution for the local scene with the current parameters using the observed lighting configuration or configurations.
4. Compare the appearance of the rendered local scene to its actual appearance in one or more views.
5. If the renderings are not consistent, adjust the parameters of the reflectance model and return to step 3.

Efficient methods of performing the adjustment in step 5 that exploit the properties of particular reflectance models are left as future work. However, assuming a diffuse-only model of the local scene in step 1 makes the adjustment in step 5 straightforward. We have:

$$L_{r1}(\theta_r, \phi_r) = \rho_d \int_0^{2\pi} \int_0^{\pi/2} L_i(\theta_i, \phi_i) \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i$$

If we initialize the local scene to be perfectly diffuse ($\rho_d = 1$) everywhere, we have:

$$L_{r2}(\theta_r, \phi_r) = \int_0^{2\pi} \int_0^{\pi/2} L_i(\theta_i, \phi_i) \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i$$

The updated diffuse reflectance coefficient for each part of the local scene can be computed as:

$$\rho_d' = \frac{L_{r1}(\theta_r, \phi_r)}{L_{r2}(\theta_r, \phi_r)}$$

In this manner, we use the global illumination calculation to render each patch as a perfectly diffuse reflector, and compare the resulting radiance to the observed value. Dividing the two quantities yields the next estimate of the diffuse reflection coefficient $\rho_d'$. If there is no interreflection within the local scene, then the $\rho_d'$ estimates will make the renderings consistent. If there is interreflection, then the algorithm should be iterated until there is convergence.

For a trichromatic image, the red, green, and blue diffuse reflectance values are computed independently. The diffuse characteristics of the background material used to produce Fig. 8(c) were computed using this method.

It was assumed, however, that the entire local scene had the same diffuse reflectance.

In the standard "plastic" illumination model, just two more coefficients, those for specular intensity and roughness, need to be specified. In Fig. 8, the specular coefficients for the local scene were estimated manually based on the specular reflection of the window in the table in Fig. 2.

8 Compositing Results

Fig. 5 shows a simple light-based model of a room constructed using the panoramic radiance map from Fig. 2. The room model begins at the height of the table and continues to the ceiling; its measurements and the position of the ball within it were measured manually. The table surface is visible on the bottom face. Since the room model is finite in size, the light sources are effectively local rather than infinite. The stretching on the south wall is due to the poor sampling toward the silhouette edge of the ball.

Figs. 4 and 6 show complex arrangements of synthetic objects lit entirely by a variety of light-based models. The selection and composition of the objects in the scene was chosen to exhibit a wide variety of light interactions, including diffuse and specular reflectance, multiple soft shadows, and reflected and focused light. Each rendering was produced using the RADIANCE system with two diffuse light bounces and a relatively high density of ambient sample points.

Fig. 8(a) is a background plate image into which the synthetic objects will be rendered. In 8(b) a calibration grid was placed on the table in order to determine the camera pose relative to the scene and to the mirrored ball, which can also be seen. The poses were determined using the photogrammetric method in [10]. In 8(c), a model of the local scene as well as the synthetic objects is geometrically matched and composited onto the background image. Note that the local scene, while the same average color as the table, is readily distinguishable at its edges and because it lacks the correct variations in albedo.

Fig. 8(d) shows the results of lighting the local scene model with the light-based model of the room, without the objects. This image will be compared to 8(c) in order to determine the effect the synthetic objects have on the local scene. Fig. 8(e) is a mask image in which the white areas indicate the location of the synthetic objects. If the distant or local scene were to occlude the objects, such regions would be dark in this image.

Fig. 8(f) shows the difference between the appearance of the local scene rendered with (8(c)) and without (8(d)) the objects. For illustration purposes, the differences in radiance values have been offset so that zero difference is shown in gray. The objects have been masked out using image 8(e). This difference image encodes both the shadowing (dark areas) and reflected and focused light (light areas) imposed on the local scene by the addition of the synthetic objects.

Fig. 8(g) shows the final result using the differential rendering method described in Section 6. The synthetic objects are copied directly from the global illumination solution 8(c) using the object mask 8(e). The effects the objects have on the local scene are included by adding the difference image 8(f) (without offset) to the background image. The remainder of the scene is copied directly from the background image 8(a). Note that in the mirror ball's reflection, the modeled local scene can be observed without the effects of differential rendering, a limitation of the compositing technique.

In this final rendering, the synthetic objects exhibit a consistent appearance with the real objects present in the background image 8(a) in both their diffuse and specular shading, as well as the direction and coloration of their shadows. The somewhat speckled nature of the object reflections seen in the table surface is due to the stochastic nature of the particular global illumination algorithm used.

Figure 7: Using a light probe. (a) The background plate of the scene (some objects on a table) is taken. (b) A light probe (in this case, the camera photographing a steel ball) records the incident radiance near the location of where the synthetic objects are to be placed. (c) A simplified light-based model of the distant scene is created as a planar surface for the table and a finite box to represent the rest of the room. The scene is texture-mapped in high dynamic range with the radiance map from the light probe. The objects on the table, which were not explicitly modeled, become projected onto the table. (d) Synthetic objects and a BRDF model of the local scene are added to the light-based model of the distant scene. A global illumination solution of this configuration is computed with light coming from the distant scene and interacting with the local scene and synthetic objects. Light reflected back to the distant scene is ignored. The results of this rendering are composited (possibly with differential rendering) into the background plate from (a) to achieve the final result.

Figure 8: Compositing synthetic objects into a real scene using a light probe and differential rendering. (a) Background photograph. (b) Camera calibration grid and light probe. (c) Objects and local scene matched to background. (d) Local scene, without objects, lit by the model. (e) Object matte. (f) Difference in local scene between (c) and (d). (g) Final result with differential rendering.

The differential rendering technique successfully eliminates the border between the local scene and the background image seen in 8(c). Note that the albedo texture of the table in the local scene area is preserved, and that a specular reflection of a background object on the table (appearing just to the left of the floating sphere) is correctly preserved in the final rendering. The local scene also exhibits reflections from the synthetic objects. A caustic from the glass ball focusing the light of the ceiling lamp onto the table is evident.

9 Future work

The method proposed here suggests a number of areas for future work. One area is to investigate methods of automatically recovering more general reflectance models for the local scene geometry, as proposed in Section 7. With such information available, the program might also be able to suggest which areas of the scene should be considered as part of the local scene and which can safely be considered distant, given the position and reflectance characteristics of the desired synthetic objects.

Some additional work could be done to allow the global illumination algorithm to compute the illumination solution more efficiently. One technique would be to have an algorithm automatically locate and identify concentrated light sources in the light-based model of the scene. With such knowledge, the algorithm could compute most of the direct illumination in a forward manner, which could dramatically increase the efficiency with which an accurate solution could be calculated. To the same end, use of the method presented in [15] to expedite the solution could be investigated. For the case of compositing moving objects into scenes, greatly increased efficiency could be obtained by adapting incremental radiosity methods to the current framework.

10 Conclusion

We have presented a general framework for adding new objects to light-based models with correct illumination. The method leverages a technique of using high dynamic range images of real scene radiance to synthetically illuminate new objects with arbitrary reflectance characteristics. We leverage this technique in a general method to simulate the interplay of light between synthetic objects and the light-based environment, including shadows, reflections, and caustics. The method can be implemented with standard global illumination techniques.

For the particular case of rendering synthetic objects into real scenes (rather than general light-based models), we have presented a practical instance of the method that uses a light probe to record incident illumination in the vicinity of the synthetic objects. In addition, we have described a differential rendering technique that can convincingly render the interplay of light between objects and the local scene when only approximate reflectance information for the local scene is available. Lastly, we presented an iterative approach for determining reflectance characteristics of the local scene based on measured geometry and observed radiance in uncontrolled lighting conditions. It is our hope that the techniques presented here will be useful in practice as well as comprise a useful framework for combining material-based and light-based graphics.

Acknowledgments

The author wishes to thank Chris Bregler, David Forsyth, Jianbo Shi, Charles Ying, Steve Chenney, and Andrean Kalemis for the various forms of help and advice they provided. Special gratitude is also due to Jitendra Malik for helping make this work possible. Discussions with Michael Naimark and Steve Saunders helped motivate this work. Tim Hawkins provided extensive assistance on improving and revising this paper and provided invaluable assistance with image acquisition. Gregory Ward Larson deserves great thanks for the RADIANCE lighting simulation system and his invaluable assistance and advice in using RADIANCE in this research, for assisting with reflectance measurements, and for very helpful comments and suggestions on the paper. This research was supported by a Multidisciplinary University Research Initiative on three dimensional direct visualization from ONR and BMDO, grant FDN00014-96-1-1200.

References

[1] Adelson, E. H., and Bergen, J. R. Computational Models of Visual Processing. MIT Press, Cambridge, Mass., 1991, ch. 1. The Plenoptic Function and the Elements of Early Vision.
[2] Azarmi, M. Optical Effects Cinematography: Its Development, Methods, and Techniques. University Microfilms International, Ann Arbor, Michigan, 1973.
[3] Blinn, J. F. Texture and reflection in computer generated images. Communications of the ACM 19, 10 (October 1976), 542-547.
[4] Chen, E. QuickTime VR - an image-based approach to virtual environment navigation. In SIGGRAPH '95 (1995).
[5] Chen, S. E. Incremental radiosity: An extension of progressive radiosity to an interactive synthesis system. In SIGGRAPH '90 (1990), pp. 135-144.
[6] Cohen, M. F., Chen, S. E., Wallace, J. R., and Greenberg, D. P. A progressive refinement approach to fast radiosity image generation. In SIGGRAPH '88 (1988), pp. 75-84.
[7] Curless, B., and Levoy, M. A volumetric method for building complex models from range images. In SIGGRAPH '96 (1996), pp. 303-312.
[8] Dana, K. J., Ginneken, B., Nayar, S. K., and Koenderink, J. J. Reflectance and texture of real-world surfaces. In Proc. IEEE Conf. on Comp. Vision and Patt. Recog. (1997), pp. 151-157.
[9] Debevec, P. E., and Malik, J. Recovering high dynamic range radiance maps from photographs. In SIGGRAPH '97 (August 1997), pp. 369-378.
[10] Debevec, P. E., Taylor, C. J., and Malik, J. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In SIGGRAPH '96 (August 1996), pp. 11-20.
[11] Debevec, P. E., Yu, Y., and Borshukov, G. D. Efficient view-dependent image-based rendering with projective texture-mapping. Tech. Rep. UCB//CSD-98-1003, University of California at Berkeley, 1998.
[12] Drettakis, G., Robert, L., and Bougnoux, S. Interactive common illumination for computer augmented reality. In 8th Eurographics workshop on Rendering, St. Etienne, France (May 1997), J. Dorsey and P. Slusallek, Eds., pp. 45-57.
[13] Fielding, R. The Technique of Special Effects Cinematography. Hastings House, New York, 1968.
[14] Fournier, A., Gunawan, A., and Romanzin, C. Common illumination between real and computer generated scenes. In Graphics Interface (May 1993), pp. 254-262.
[15] Gershbein, R., Schroder, P., and Hanrahan, P. Textures and radiosity: Controlling emission and reflection with texture maps. In SIGGRAPH '94 (1994).
[16] Goral, C. M., Torrance, K. E., Greenberg, D. P., and Battaile, B. Modeling the interaction of light between diffuse surfaces. In SIGGRAPH '84 (1984), pp. 213-222.
[17] Gortler, S. J., Grzeszczuk, R., Szeliski, R., and Cohen, M. F. The Lumigraph. In SIGGRAPH '96 (1996), pp. 43-54.
[18] Heckbert, P. S. Survey of texture mapping. IEEE Computer Graphics and Applications 6, 11 (November 1986), 56-67.
[19] Kajiya, J. The rendering equation. In SIGGRAPH '86 (1986), pp. 143-150.
[20] Karner, K. F., Mayer, H., and Gervautz, M. An image based measurement system for anisotropic reflection. In EUROGRAPHICS Annual Conference Proceedings (1996).
[21] Koenderink, J. J., and van Doorn, A. J. Illuminance texture due to surface mesostructure. J. Opt. Soc. Am. 13, 3 (1996).
[22] Laveau, S., and Faugeras, O. 3-D scene representation as a collection of images. In Proceedings of 12th International Conference on Pattern Recognition (1994), vol. 1, pp. 689-691.
[23] Levoy, M., and Hanrahan, P. Light field rendering. In SIGGRAPH '96 (1996), pp. 31-42.
[24] McMillan, L., and Bishop, G. Plenoptic modeling: An image-based rendering system. In SIGGRAPH '95 (1995).
[25] Nakamae, E., Harada, K., and Ishizaki, T. A montage method: The overlaying of the computer generated images onto a background photograph. In SIGGRAPH '86 (1986), pp. 207-214.
[26] Porter, T., and Duff, T. Compositing digital images. In SIGGRAPH '84 (July 1984), pp. 253-259.
[27] Sato, Y., Wheeler, M. D., and Ikeuchi, K. Object shape and reflectance modeling from observation. In SIGGRAPH '97 (1997), pp. 379-387.
[28] Smith, T. G. Industrial Light and Magic: The Art of Special Effects. Ballantine Books, New York, 1986.
[29] Szeliski, R. Image mosaicing for tele-reality applications. In IEEE Computer Graphics and Applications (1996).
[30] Turk, G., and Levoy, M. Zippered polygon meshes from range images. In SIGGRAPH '94 (1994), pp. 311-318.
[31] Veach, E., and Guibas, L. J. Metropolis light transport. In SIGGRAPH '97 (August 1997), pp. 65-76.
[32] Ward, G. J. Measuring and modeling anisotropic reflection. In SIGGRAPH '92 (July 1992), pp. 265-272.
[33] Ward, G. J. The RADIANCE lighting simulation and rendering system. In SIGGRAPH '94 (July 1994), pp. 459-472.
[34] Watanabe, M., and Nayar, S. K. Telecentric optics for computational vision. In Proceedings of Image Understanding Workshop (IUW 96) (February 1996).
[35] Chen, Y., and Medioni, G. Object modeling from multiple range images. Image and Vision Computing 10, 3 (April 1992), 145-155.