The 10th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2009), State of the Art Reports. M. Ashley and F. Liarokapis (Editors). © The Eurographics Association 2009.

Illuminating the Past - State of the Art

Jassim Happa1†, Mark Mudge2, Kurt Debattista1, Alessandro Artusi3,1, Alexandrino Gonçalves4, and Alan Chalmers1.

1International Digital Laboratory, University of Warwick, UK 2Cultural Heritage Imaging, San Francisco, USA 3CASToRC Cyprus Institute, Cyprus 4Research Center for Informatics and Communications, Polytechnic Institute of Leiria, Portugal

Abstract
Virtual reconstruction and representation of historical environments and objects have been of research interest for nearly two decades. Physically-based and historically accurate illumination allows archaeologists and historians to authentically visualise a past environment to deduce new knowledge about it. This report reviews the current state of illuminating cultural heritage sites and objects for scientific, preservation and research purposes. We present the most noteworthy and up-to-date examples of reconstructions employing appropriate illumination models in object and image space, and in the visual perception domain. Finally, we discuss the difficulties in rendering, documentation and validation, and identify probable research challenges for the future. The report is aimed at researchers new to cultural heritage reconstruction who wish to learn about existing methods and examples of illuminating the past.

Categories and Subject Descriptors (according to ACM CCS): Computer Graphics [I.3.7]: Three-Dimensional Graphics and Realism—, Computer Graphics [I.3.8]: Applications—, Image Processing and Computer Vision [I.4.1]: Digitization and Image Capture—, Image Processing and Computer Vision [I.4.8]: Scene Analysis—, Image Processing and Computer Vision [I.4.9]: Applications—

1. Introduction

Past societies relied entirely on daylight and flames for lighting before electricity became available. Under modern lighting conditions, ancient environments may appear significantly different from when they were first built and used in the past. Furthermore, the site itself may have changed several times over the years. Computer graphics makes it possible to reconstruct a site, and its lighting, as it may have appeared at any stage in time.

Employing computer graphics to reconstruct cultural heritage environments and objects is no longer a novelty [Rei91, FSR97]. The first few examples discussing virtual reconstruction of cultural heritage dealt primarily with displaying geometry using simple lighting models. Today, realistic virtual reconstructions are used by the media, the film and computer games industries, museums and researchers. In order for a photorealistic visualisation to be of scientific value, it is necessary for the recreated illumination of a cultural heritage reconstruction to be as physically and historically accurate as possible. This can be described as Authentic Illumination [CRL06], and can highlight aspects of a reconstruction that would otherwise not be present using simpler lighting models. Great care must be taken to ensure the illumination model adopted is validated. Due to the lack of historical reference material, any model will always be a scientific best guess, and must be updated each time new data regarding the site or object is found.

This report consists of two main parts. Part 1 reviews enabling methods and technologies that can be employed for illuminating virtual reconstructions. In this first part, section 2 presents a summary of existing common technologies for acquiring light data. Section 3 discusses existing methods of removing lighting artefacts such as specular highlights and shadows in photographs. Section 4 details the uses of High Dynamic Range Imaging (HDRI). Rendering approaches are presented in section 5. Methods of modelling participating media (fog, smoke, dust, etc.) are also summarised in this section.

† e-mail: [email protected]

Section 6 describes techniques for recreating sunlight. Reflectance Transformation Imaging (RTI) and its applications are reviewed in section 7. Finally, methods to model flames are described in section 8.

Part 2 of this report presents an overview of existing virtual reconstructions that have a focus on authentic illumination, but also overviews examples of image-space approaches and applications of visual perception in the cultural heritage domain. These are all reviewed in section 9. Section 10 discusses the recurring primary difficulties in rendering, documentation and validation of reconstructions in a cultural heritage context, and overviews additional applications of authentic illumination for potential future work.

PART ONE
Enabling Methods & Technologies

2. Common Equipment

Several technologies are available today for capturing and rendering light. A short list of the most common items is given here, while more specialised equipment is detailed in the respective sections:

Single-lens reflex (SLR) cameras are useful for capturing photographic references of a scene. These can be used for geometry reference purposes, but also for capturing incident lighting from the scene. A light meter is capable of determining what shutter speed and aperture are best suited for a camera under the current lighting conditions in the real world. Spectroradiometers/spectrophotometers are devices capable of measuring attributes of sampled light; this includes spectral radiance (wavelengths), luminance (the amount of light emitted from or passing through a particular area) and chromaticity (the quality of a colour sample, disregarding its luminance). These devices, however, do not provide information about ideal camera settings.

There are currently many rendering packages available; Radiance [WS03], 3DS Max [Aut09a], Maya [Aut09b], Mental Ray [Men09] and PBRT [PH04] are some of the most common rendering applications today. Radiance is a suite of programs made for the analysis and visualisation of lighting in design [War94, WS03]. It is open source, has been used for several cultural heritage reconstructions, and is widely employed in academic settings. In this report, several examples will be highlighted with Radiance in mind.

3. Removing Existing Lighting and Artefacts

The acquisition of texture maps using a camera does not capture all material properties, and the images may contain specularities (highlights) or shadows that need to be removed before these textures can be included in a realistic relighting of the virtual reconstruction. [Sha84, TI04, LTQ06] discuss specular removal problems in greater detail. The presence of specular reflections in the real world is inevitable, as there exist several materials that show both diffuse and specular reflections. Shadows are also common artefacts, determined by the position of a light source when the photograph of the texture is acquired. Failure to remove highlights and shadows from an acquired texture map can significantly alter its appearance. Specular reflections, for instance, hide surface details and may appear as additional features of the object's surface material [PLQ06].

Specularity and shadow removal can be viewed as the problem of extracting information contained in an image and transforming it into a certain meaningful representation [AC07]. This representation can describe the intrinsic properties of the input image [BT78]. For example, in the context of specularity removal, the interface and body reflections are two possible intrinsic image properties.

Several specularity removal techniques have been published. These differ in the information they use and how this is used. There are two main categories based on the type of input data: Single-Image and Multi-Image methods [LLK∗02, Wei01]. The first category performs the separation of the reflection components using only a single image. The second category makes use of a sequence of images, benefiting from the fact that under varying viewing directions diffuse and specular reflections show different behaviour [AC07].

The single-image category is further subdivided into techniques based on the information they use: Neighbourhood Analysis [TI03, PLQ06, TFA05, YCK06, MZK∗06], Colour Space Analysis [KSK87, KSK90b, KSK90a, ST95b, ST95a], and Polarisation [NFB97, KLHS02]. The first group uses neighbourhood information to compute the diffuse colour. The second group considers colour space to analyse the distribution of the diffuse and specular components and uses this information for the separation. The techniques applying polarisation filters rely on the fact that the specular component is polarised, and thus the colour of the specular component can be identified. Besides this classification, one can categorise the approaches as local or global, depending on how they use the information contained in the image data. The local methods utilise local pixel interactions to remove the specular reflection, while the global methods estimate diffuse colours of image regions, which implies segmentation [AC07].

Several shadow removal techniques have been proposed in the past based on colour space analysis, such as the RGB [BA03] and HVS [CGP∗01] colour spaces. Invariant colour models [ESE01] and the measure of brightness [BMA04] (shadow density) have also been used to classify shadow regions, which are afterwards removed by modifying the brightness and colour of the input image.

For texture mapping purposes it is necessary to colour-calibrate existing photographs.
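To make the colour-space family of single-image methods concrete, the following toy sketch exploits the dichromatic reflection model: under a white light source the specular (interface) component is achromatic, so subtracting each pixel's minimum channel yields a rough "specular-free" image. This is a deliberate simplification of the techniques cited above, not a faithful implementation of any of them; the function name and test values are our own.

```python
import numpy as np

def specular_free(image):
    """Return a crude 'specular-free' version of an RGB image.

    Under the dichromatic reflection model with a white light source,
    the specular component adds the same offset to all three channels,
    so subtracting the per-pixel minimum channel removes (most of) it.
    `image` is a float array of shape (H, W, 3) in [0, 1].
    """
    offset = image.min(axis=2, keepdims=True)  # achromatic (specular) estimate
    return image - offset

# A purely diffuse red pixel keeps its chroma; adding a white specular
# offset to it changes nothing in the specular-free result.
diffuse = np.array([[[0.6, 0.2, 0.0]]])
glossy = diffuse + 0.3  # same pixel with a white highlight added
print(np.allclose(specular_free(diffuse), specular_free(glossy)))  # True
```

Published methods go considerably further, e.g. restoring the diffuse magnitude and handling chromatic illumination, but the underlying separation idea is the same.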

This can be done by computing colour compensation based on a colour chart, such as the Macbeth Colour Checker, using for instance the macbethcal program in Radiance.

4. High Dynamic Range Imaging

The dynamic range of lighting available in the real world is vast, and the Human Visual System (HVS) is able to adapt to various lighting conditions. Current conventional capturing and display technologies do not support the full dynamic range our eyes are capable of processing. This High Dynamic Range (HDR) of luminance needs to be captured and integrated to increase the reproduced level of realism in any virtual reconstruction. The research field in computer graphics that deals with all aspects of acquiring, processing and displaying HDR content is called High Dynamic Range Imaging (HDRI). HDRI has become an important research topic for illuminating the past because it adds greater accuracy in terms of lighting than Low Dynamic Range (LDR) content is capable of producing.

Humans are capable of seeing a large range of light intensities, ranging from daylight levels of around 10^8 cd/m² to night light conditions of approximately 10^-6 cd/m². Currently available consumer cameras are limited to capturing only 8-bit images, or 12-bit images in RAW format, which do not cover the full dynamic range of irradiance values in most real-world environments. The only option today is to take a number of exposures of the same scene to capture details from the very dark to the very bright regions, as proposed in [MP95, DM97, MN99, RBS99].

Conventional display technologies today are not capable of displaying HDR luminance. The HDR values can, however, be converted to better fit LDR displays. This is known as Tone Mapping. Tone Mapping is an important last step in the reproduction of realistic images, and many operators have been proposed, most of which are detailed in an EG STAR report by Devlin et al. [DCWP02]. There are two categories of TM operators: global and local. These algorithms allow transforming the same pixel intensity of the input image to different display values, or different pixel intensities to the same display value [DW00]. Recently the novel colour appearance model iCAM [KJF07] was introduced. It is able to predict a range of colour appearance phenomena that allow simulating more precisely how the HDR scene will appear to the user. Although HDR display technology is appearing [SHS∗04], it will be a few years before it becomes commonly available. Despite being 10 times darker and 30 times brighter than traditional LDR displays, even HDR displays do not deliver the full range of real-world luminance.

5. Rendering

Computer graphics research has provided a vast number of methods for computing and displaying virtual environments. Most methods can be categorised as being based on one of two techniques: Rasterisation or Ray Tracing [Whi80].

Of the two methods, rasterisation is the most popular, particularly for interactive environments, due to the performance that can now be achieved by employing dedicated graphics hardware. For this reason, and the amount of support there is for it in terms of libraries [SWND04, Bly06], rasterisation is the de facto standard for rendering in the computer graphics industry. Rasterisation renders images by projecting primitives, which are always polygon based, or geometry that can be tessellated, onto the image plane using a series of transformations. Once the geometry is projected, the rasterisation phase computes aspects of lighting, texture and visible surface detection before rasterising the primitives. When more complex lighting effects such as shadows, caustics, indirect lighting etc. are required, these are computed using specialised algorithms. Akenine-Möller et al. [AMHH08] provide a comprehensive overview of rasterisation methods and their extensions.

Ray tracing [Whi80] methods for rendering were originally based on a ray casting method for visible surface detection [App68]. More recent ray tracing methods model the geometric optics properties of light by calculating the interactions of photons with geometry, and can reproduce more complex lighting effects with little modification. While ray tracing is computationally expensive, recent algorithmic and hardware advances are making it possible to render scenes at interactive rates [WMG∗07].

5.1. Physically-based Rendering

The quality of illumination is highly dependent on the model of the lighting simulation. In order to accurately recreate lighting conditions in virtual environments, physically-based rendering is required. Physically-based rendering involves the process of modelling physical properties of the materials and lighting in the scene and simulating the local and global components of the light transport. Light reflectance models describe the emission and reflectance of a material. These are responsible for the local illumination component in rendering. The reflectance model describes how light interacts on, around and through a surface, including very small surfaces such as particles in a volume. Complex reflectance functions termed Bi-directional Scattering Surface Reflectance Distribution Functions (BSSRDF) provide a model that describes surfaces that are both reflective and partially translucent [JMLH01].

More commonly used functions are the simplified Bi-directional Reflectance Distribution Functions (BRDFs), which account for surface interactions at the same point, without scattering. Physically-inspired reflectance functions commonly used in illumination for cultural heritage include the Ward BRDF [War92] and the Lafortune BRDF [LFTG97]. Other analytical solutions also exist, such as the Cook-Torrance model [CT82]. Dorsey et al. [DRS08] present a detailed discussion on material modelling.
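As an illustration of such reflectance models, the isotropic form of the Ward BRDF [War92] can be evaluated in a few lines. The sketch below is a minimal evaluation assuming unit vectors, with parameter names of our own choosing; production renderers pair this with importance sampling and the anisotropic variant.

```python
import numpy as np

def ward_isotropic(wi, wo, n, rho_d=0.7, rho_s=0.2, alpha=0.15):
    """Evaluate the isotropic Ward BRDF [War92] for unit vectors wi
    (towards the light), wo (towards the viewer) and surface normal n.
    rho_d and rho_s are the diffuse and specular albedos; alpha is the
    surface roughness."""
    wi, wo, n = (np.asarray(v, float) for v in (wi, wo, n))
    cos_i, cos_o = n @ wi, n @ wo
    if cos_i <= 0 or cos_o <= 0:
        return 0.0                              # below the horizon
    h = wi + wo
    h /= np.linalg.norm(h)                      # half vector
    cos_h = np.clip(n @ h, 1e-8, 1.0)
    tan2_h = (1.0 - cos_h**2) / cos_h**2        # tan^2 of half angle
    spec = np.exp(-tan2_h / alpha**2) / (4.0 * np.pi * alpha**2
                                         * np.sqrt(cos_i * cos_o))
    return rho_d / np.pi + rho_s * spec

# Normal incidence gives the strongest specular lobe; setting rho_s to
# zero leaves only the Lambertian diffuse term rho_d / pi.
n = [0.0, 0.0, 1.0]
print(ward_isotropic([0, 0, 1], [0, 0, 1], n))          # diffuse + peak specular
print(ward_isotropic([0, 0, 1], [0, 0, 1], n, rho_s=0))  # diffuse only
```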

5.2. Global Illumination

The computation of global illumination at any point in a scene, without participating media, is governed by the rendering equation [Kaj86]. The radiance at a point p in direction Θ is given by:

L(p → Θ) = L_e(p → Θ) + ∫_{Ω_p} f_r(p, Θ ↔ Ψ) cos(N_p, Ψ) L_i(p ← Ψ) dω_Ψ

where L_e is the emitted radiance, L_i is the incoming radiance from across the hemisphere Ω_p, and f_r represents the BRDF. Various solutions have been proposed for calculating the rendering equation. These can be subdivided into two broad categories: finite-element solutions and point-sampling methods.

Finite-element methods were first used to solve the rendering equation with the introduction of Radiosity [GTGB84]. Radiosity methods are, for the most part, computed in conjunction with rasterisation for computing the effects of the indirect lighting due to diffuse inter-reflections. A comprehensive overview of radiosity is provided by Cohen et al. [CWH93]. Recent extensions of radiosity have made it possible to achieve interactive frame rates [DSDD07].

Point-sampling methods are based on the ray tracing method. These methods, for the most part, solve the rendering equation using stochastic methods. Distributed Ray Tracing [CPC84] extended classic ray tracing to render aspects of global illumination effects such as soft shadows, indirect lighting, motion blur etc. Naive distributed ray tracing is characterised by a ray explosion. Path Tracing [Kaj86] uses a Markov chain random walk to solve the rendering equation. Path tracing does not suffer from a ray explosion; instead, only one ray is shot at each intersection. Metropolis Light Transport (MLT) [VG97] uses Metropolis-Hastings sampling by mutating paths (from the eye and the light) over subsequent iterations. These three methods are considered unbiased, in that the expected value of the error is zero. Other, more recent unbiased methods such as Energy-Redistribution Path Tracing (ERPT) [CTE05] and Population Monte Carlo [LFCD07] are beginning to demonstrate further improvements in performance, and additional algorithmic improvements may yet bring unbiased methods to the fore of rendering.

Other approaches have been used to accelerate ray-tracing methods; these are considered biased but consistent, meaning that the expected value of the error converges towards zero as the number of samples increases. Two of the most popular methods are irradiance caching and photon mapping. Irradiance Caching [WRC88] accelerates the computation of indirect diffuse interreflections by caching computations and interpolating them in object space. Irradiance caching has influenced a large number of new caching schemes for rendering. Methods have been created to re-use computation for glossy as well as diffuse interreflections, a method known as the radiance cache [KGPB05]. Caching methods have also been used to accelerate the computation of participating media [JDZJ08], translucency [KLC06] and temporal coherence [GBP07]. Photon Mapping [Jen01] is a bi-directional method that computes indirect interreflections and has been very successful at computing caustics. In photon mapping, photons shot from light sources are stored in kd-trees, and density estimation is used to compute illumination at points visible from the eye. Photon mapping continues to be extended and improved [HOJ08, SJ09]. In practice, deterministic approaches are used in conjunction with stochastic approaches, such as in Radiance (the rendering application), which uses a combination of distributed ray tracing, irradiance caching and deterministic ray tracing, and Mental Ray [Men09], which uses photon mapping.

Other recent methods which can also accelerate the computation of global illumination are Instant Radiosity techniques [Kel97], light/illumination cuts [WFA∗05] and Precomputed Radiance Transfer (PRT) [SKS02]. Instant Radiosity is a bi-directional method in which a first phase emits photons, termed Virtual Point Lights (VPLs), from the light sources, and a subsequent rendering phase computes illumination using the VPLs as point light sources. Using a small number of VPLs (< 1024), extensions of instant radiosity, such as Imperfect Shadow Maps [RGK∗08], have been shown to compute global illumination in real time on modern graphics hardware. Other methods have achieved interactive rates on multi-core CPUs [DDC09, DDB∗09]. Light Cuts/Illumination Cuts accelerate computation by clustering together similar components (usually light sources or VPLs) that are not deemed important, to reduce computational costs. Light cuts have often been used in conjunction with instant radiosity methods to further improve performance [WABG06, WA09]. PRT methods use a precomputation step to compute illumination and visibility, which can subsequently be evaluated efficiently in real time on modern graphics hardware. PRT extensions have made it possible to obtain global illumination for dynamic scenes [PWL∗07, IDYN07].

5.2.1. Participating Media

The assumption is made in many computer graphics applications that light travels through a vacuum. In a number of cases this is a reasonable assumption for realistic synthesised images. However, especially in the past, environments had the intentional and unintentional presence of fog, smoke and visible dust particles. Smoke can stem from burning candles and wood, while dust particles can become highly visible in interior environments, for example those in Egypt, with sun rays, see Figure 21. Examples like this can alter the visual perception of the environment significantly [Rus95, HED05, SGGC05, GSGC06, GSGC08].

Scattering in participating media needs to account for a number of interactions: Emission, Absorption, Out-Scattering and In-Scattering. The rendering equation can be extended to account for participating media [Gla94, Cha60, PH04]. A number of extensions to the global illumination algorithms have been developed to render global illumination with participating media; such methods include extensions to photon mapping [JC98] and irradiance caching [JDZJ08].

6. Sky Illumination

Simulating physically-based sky illumination is computationally expensive. This is due to all the sky elements that need to be addressed during rendering. Extensive work has been done on modelling the sky, some of it for atmospheric science analysis purposes, such as [PSI93, IKMN04], and some for realistic image synthesis [DSSD97, PSS99, JDS∗01, SJW∗04].

Preetham et al. [PSS99] present an analytic model that approximates the spectrum of daylight for various atmospheric conditions. The paper also presents a model that approximates the effects of aerial perspective (atmosphere). Atmospheric data is easily accessible for the atmospheric science community. However, this data is not widely accessible to computer graphics professionals, and has therefore been provided as an appendix to the paper. Jensen et al. [JDS∗01] present a physically-based model of the night sky for realistic image synthesis, rendered with a Monte Carlo ray tracer. The authors describe how to model the appearance and illumination of the night sky, with the exception of rare or unpredictable phenomena such as aurorae, comets and novas. Similarly, the paper also contains an appendix that presents useful formulas for rendering the night sky.

Several commercially available and academic rendering packages allow physically-based simulations of light from the sun, given fixed sets of attributes such as time and position. gensky, for instance, is a program in Radiance that produces a scene description for the Commission Internationale de l'Eclairage (CIE) standard sky distribution at a given time and position [WS03]. mkillum is another program in Radiance that computes light sources. Input to the program contains surfaces that act as light sources during rendering. This is useful for computing interior renders for which it is not necessary to compute the outdoor light distribution.

6.1. Image-Based Lighting

The emergence of HDRI has made it possible to extend the use of HDR photographs to capture radiance maps and use them as light sources for a scene. Such images are referred to as Light Probes or Environment Maps, and their application is often referred to as Image-Based Lighting (IBL) [DM97, Deb98]. Instead of using synthetic light sources, it is possible to use sampled light values from HDR images mapped onto a sphere that acts as a skydome to represent the distant illumination. This is useful in order to relight synthetic objects that occupy the same location as the light probe.

There are several methods of capturing these HDR environment maps. Debevec [Deb98] details the use of a mirrored sphere. Stumpfel et al. [SJW∗04] discuss the capturing process of HDR hemispherical images of the sun and sky by using an SLR camera with a 180° fish-eye lens. In the commercial field, a few companies provide HDR cameras based on automatic multiple-exposure capturing. The two main cameras are the Spheron HDR VR camera [Sph02] and the Panoscan MK-3 [Pan02]. For example, the Spheron HDR VR can capture 26 f-stops of dynamic range at 50 Megapixels resolution in 30 minutes (depending on illumination conditions).

Light probe images can be represented in several formats, including the mirrored sphere format [Deb98], see Figure 1, the vertical cross cube format [Deb01] and the latitude-longitude panoramic representation [BN76, Deb06], see Figure 2. The conversions between these different representations are described in great detail by Reinhard et al. [RWPD05], and are easily done using existing software such as HDRShop [HDR]. Publicly available galleries of HDR environment maps exist in the above-mentioned formats in [Deb01, Deb06]. A tutorial for using IBL in Radiance can be found in [Deb02]; however, most commercial and open source renderers today also support IBL in the latitude/longitude format.

Figure 1: Examples of Light Probes. Left: St. Peter's Basilica, Rome. Middle: Galileo's Tomb, Santa Croce, Florence. Right: The Uffizi Gallery, Florence, [Deb01].

The primary concern regarding lighting using light probe images is that a single light probe image is only valid for one point in space. Incident Light Fields, presented by Unger [Ung09], allow the capture and rendering of synthetic objects with spatially varying illumination.

In cultural heritage, there are several applications of IBL. It is possible to relight objects with illumination that exists within the site. If the aim is to estimate a light probe image that illuminates objects as they appeared in the past, it is necessary to recreate the distant scene to appear as it once did. This includes switching off modern light sources, removing recent additions to the scene, and allowing light to be projected into the scene as it would have been in the past.

Hawkins et al. [HCD01] extend the application of IBL by presenting a photometry-based approach to digitally document cultural heritage artefacts. Each artefact is represented in terms of how it transforms light into images; this is also known as its Reflectance Field. The technique is data intensive, and requires a light stage [DHT∗00] and thousands of photographs of the artefact.

Figure 2 illustrates the differences between two spherical panoramas in the latitude/longitude format that show two different illumination conditions of the sky. The top image shows a largely diffuse lighting environment, while the bottom image is an example of an outdoor cloudless morning outside Tanum Church. Rendering objects with the bottom environment map produces much stronger shadows. The presence of shadows is far less noticeable using images similar to the top image. This is similar to how shadows are far more noticeable on sunny days than on cloudy ones.

Figure 2: Examples of Environment Maps. Top: Mount Vesuvius, [HWT∗09]. Bottom: A cloudless morning outside Tanum Church, a medieval church in Norway.

7. Reflectance Transformation Imaging

Reflectance data of a real-world object can be empirically acquired through a digital photography process called Reflectance Transformation Imaging (RTI). Given a viewpoint, the illumination on an object's surface varies according to how the surface is illuminated. RTIs are derived from multiple digital photographs of an object shot from a stationary camera position. In each photograph, light is projected from a different position and direction. This process produces a series of images of the same object with different highlights and shadows. This information can be captured with conventional digital photographic equipment.

After the light has been projected from a representative sample of directions, all lighting information from the images is mathematically synthesised to generate a viewpoint-specific, per-pixel reflectance function, enabling a user to interactively re-light the object's surface. For each RTI-acquired viewpoint of the object, the resulting synthesised file resembles a single 2D photographic image, but with added reflectance information. Each 2D pixel, representing an area on the object's 3D surface, visualises the 3D direction that is the perpendicular (normal) vector at that location of the surface. This ability to document colour and true 3D shape information by using normals is the source of RTI's documentary power.

RTI information can also be mathematically enhanced. Enhancements have been shown to disclose surface features that are difficult or impossible to discern under direct physical examination. RTIs can be captured over a large size range, from several square metres to millimetres, and can acquire a sample density and precision that most 3D scanners are unable to reach. RTIs can capture the surface features of a wide variety of material types, from diffuse objects to highly reflective materials such as jade or gold.

Polynomial Texture Mapping (PTM) was introduced by Malzbender et al. [MGW01]. This is considered the original approach to RTI and is a widely available method to mathematically synthesise RTI images. The paper presents a mathematical model describing the luminance information for each pixel in an image in terms of a function representing the direction of incident illumination. The illumination direction function is approximated in the form of a bi-quadratic polynomial whose six coefficients are stored along with the colour information of each pixel. PTMFitter, a tool to build PTMs from an image sequence, and PTMViewer, an interactive viewer for PTMs, are freely available for non-commercial use under license from Hewlett-Packard [HP09].

7.1. Recent RTI Advances

The new RTI file format, .rti, provides the ability to select, from a menu of polynomial basis functions, ways to mathematically synthesise the input set of digital photographs into the optimal type of RTI image. The menu of polynomial basis functions, which includes Polynomial Texture Mapping (PTM), Spherical Harmonics (SH) [SKS02], and Hemispherical Harmonics (HSH) [PGB04], offers greater user control of information richness vs. storage/processing cost determinations.

PTMs offer excellent representations with a compact file size. The SH and HSH alternatives offer more accurate representations of a variety of materials, particularly specular and translucent materials, but produce larger files. SH estimates coefficients over an entire sphere, whether there is information from all directions or not, while HSH, a special case of SH, estimates coefficients over half a sphere. As each input image for an RTI represents the reflectance of a light located within a hemispheric sample range, when these images are fitted to the HSH polynomial reflectance function, the resulting coefficients are more accurate. This is helpful, especially in combination with RTIs of the same material captured from different viewpoints, in estimating a material's BRDF. HSH coefficients have also been demonstrated to enable more accurate classification of different materials, such as shiny or matte versions of the same colour, than SH [WGSD09].
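The per-pixel fitting step that a PTM fitter performs can be illustrated directly from the bi-quadratic model described above: given the projected light direction (lu, lv) for each photograph and the luminances observed at one pixel, the six coefficients follow from linear least squares. This is a minimal sketch of the model in [MGW01], not the PTMFitter implementation; the function names and sample values are our own.

```python
import numpy as np

def ptm_basis(lu, lv):
    """Bi-quadratic PTM basis of Malzbender et al. [MGW01] for a light
    direction projected onto the image plane as (lu, lv)."""
    return np.array([lu * lu, lv * lv, lu * lv, lu, lv, 1.0])

def fit_ptm(light_dirs, luminances):
    """Least-squares fit of the six PTM coefficients for one pixel,
    given the projected light directions and the luminance observed in
    each input photograph (the per-pixel step of a PTM fitter)."""
    A = np.array([ptm_basis(lu, lv) for lu, lv in light_dirs])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(luminances), rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Re-light the pixel from a new, unobserved light direction."""
    return coeffs @ ptm_basis(lu, lv)

# Six or more photographs with known light directions determine the fit.
rng = np.random.default_rng(0)
dirs = rng.uniform(-0.7, 0.7, size=(12, 2))
true = np.array([0.1, -0.2, 0.05, 0.3, 0.4, 0.5])   # hypothetical pixel
lums = np.array([true @ ptm_basis(lu, lv) for lu, lv in dirs])
c = fit_ptm(dirs, lums)
print(np.allclose(c, true))  # noise-free data recovers the coefficients
```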

© The Eurographics Association 2009. J. Happa, M. Mudge, K. Debattista, A. Artusi, A. Gonçalves, and A. Chalmers / Illuminating the Past - State of the Art
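A useful property of biquadratic per-pixel reflectance functions, exploited for example by the RTI enhancement modes discussed later in this report, is that the light direction maximising the fitted polynomial can be solved in closed form and used as a surface normal estimate [MGW01]. A minimal sketch, assuming synthetic coefficients and illustrative names:

```python
import math

def ptm_normal(a0, a1, a2, a3, a4, a5):
    """Surface-normal estimate from biquadratic PTM coefficients:
    set the partial derivatives of
    L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    to zero and solve the resulting 2x2 linear system for (lu0, lv0).
    (a5 does not affect the location of the maximum.)"""
    det = 4.0 * a0 * a1 - a2 * a2
    lu0 = (a2 * a4 - 2.0 * a1 * a3) / det
    lv0 = (a2 * a3 - 2.0 * a0 * a4) / det
    # Lift (lu0, lv0) onto the unit hemisphere to form the normal.
    nz = math.sqrt(max(0.0, 1.0 - lu0 * lu0 - lv0 * lv0))
    return (lu0, lv0, nz)

# A pixel that is brightest when lit slightly from one side:
# the maximum lies at lu0 = 0.25, lv0 = -0.1.
n = ptm_normal(-1.0, -1.0, 0.0, 0.5, -0.2, 0.3)
```

Per-pixel normals recovered this way are the input to effects such as specular enhancement, where the measured shading is replaced by a synthetic highlight.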

All methods of building .rti files use the same input photographs. Legacy image sequences used to build PTMs in the past can be reused to generate new .rtis using a different basis function. Selecting the HSH basis function when building an .rti image permits the use of 4, 9, 16, or 25 coefficients per pixel. The more coefficients, the more closely the resulting RTI resembles the initial photorealistic ground truth, at the cost of larger RTI file size and more input images.

Figure 3 shows the previously introduced Merovingian Triens rendered in alternating stripes. The stripes with the black bars on the left show a 9-coefficient HSH rendering; the stripes with the bars on the right show the PTM rendering. The same 24 input photos were used for each rendering.

Figure 3: Merovingian Triens rendered in alternating stripes.

The .rti format also contains structures for the management of process history and object metadata. New tools to automate metadata management, consistent with international standard ISO 21127, are under development. The tools for generating and viewing .rti files are freely available under a GPL license.

Single-view RTIs can be captured using two methods. All forms of RTIs require knowledge of the angle of incidence of each light relative to the object. Originally, RTIs were captured with the position of the lights illuminating the object known before the capture session. In Highlight RTI, the position of the lights could instead be calculated after a capture session rather than relying on knowledge of pre-determined light positions obtained before the session [MMSL06]. Highlight RTI calculates the incident light angle using the position of the specular highlight, generated by the incident light, on the surface of a black, reflective sphere introduced into the scene.

Previously, RTI capture methods relied on a prior knowledge of the light positions that are used to illuminate the imaged object. This knowledge is encapsulated either in a physical equipment structure such as shown in [MGW01, MMSL06] or an instructional template that gives directions for the placement of free-standing lights in pre-determined locations [DCCS06].

The use of fixed-light position equipment has several advantages. Automatic control of lights and camera can acquire an RTI with great speed, frequently between 2 to 5 minutes. These efficiencies are valuable when large numbers of objects must be captured. This equipment can also support the optics for controlled wavelength imaging of photonically fragile cultural artefacts [MMSL06] and multi-spectral imaging.

There are also limitations associated with fixed-light position equipment. The light distance from the object limits the object's maximum size. Bigger objects require proportionately larger light distances. As equipment size increases, light power needs, structural requirements, transport difficulties, and costs increase.

The Highlight RTI method differs from traditional RTI approaches in that the 3D locations of the lights used to construct the RTI need not be known or determined at the time the photographic images are recorded. Software can automatically find the black spheres in the RTI input photographs, locate the highlights and centres, and determine the incident direction of the light [BSP07].

Highlight RTI has several advantages: it is relatively inexpensive, as much of the required photo equipment may already be owned by the user. The staff of the Tauric Preserve of Chersoneses was outfitted with full RTI equipment for on-going documentation of their collection for $3,000 (excluding the computer). Most of the equipment necessary is already in widespread use. It is easy to learn and is compatible with existing working culture. Highlight RTI can capture a wide range of object sizes, from 1 cm to several square meters.

Figure 4: Left: Highlight RTI setup for small objects. Right: Multi-view Capture Device.

In multi-view RTI, see Figure 4, the .rti format organises a set of RTIs captured from multiple viewing angles of the object, together with associated metadata, into a form that permits a unified, interactive viewing experience. The user captures single-view RTIs at known, equal-angle increments around part or all of the object. The set of single-view RTIs

is bundled with a file indicating their order and relationship, and optical flow files (if they were created), into a single package that is accessible by the RTIViewer.

Multi-view RTI acquisition hardware and software exist to speed the multi-view RTI acquisition process. The software permits user configuration of the number and position of illumination samples captured at each viewpoint. Prototype hardware, called the Multi-view Photography Apparatus (MPA), provides a platform for 2 cameras to capture RTI input images anywhere along a 150 degree inclination arc. An integrated, computer-controlled object rotation turntable permits 360 degrees of object motion relative to the cameras. The camera-object distance is also configurable, permitting the camera to move closer to or farther away from the object to allow optimum camera-lens optics for a variety of object sizes. To illuminate the object, there are 36 lights set in a 42-inch diameter hemispheric array, which moves along the inclination arc. For each viewpoint, the control software turns on a light, trips the camera shutter, turns off the light, downloads the photograph to the control computer, and turns on the next light in the sequence.

The multi-view RTI processing software collects the input images from each viewpoint, generates .rti files for each viewpoint in the sequence, prepares the file describing their order, generates a log file for each action, and bundles the files for the RTIViewer, see Figure 5.

Figure 5: A screen capture of RTIViewer software.

When in single-view mode, the RTIViewer can read a high-resolution .rti or .ptm file locally or remotely over the Internet and then permit the user to zoom in dynamically, receiving portions of higher and higher resolution images. A software module pre-processes RTI content on the repository side so that it can be accessed by remote users using the RTIViewer.

7.2. Other Image-based lighting approaches

Debevec [Deb03] presents an overview of various image-based techniques for reconstruction purposes, some for modelling, others for relighting. These examples include the previously discussed photometric approach to digitising cultural artefacts [HCD01], but also Linear Light Source Reflectometry [GTHD03], which presents a technique for estimating the spatially-varying reflectance properties of a surface based on its appearance during a single pass of a linear light source. By using a linear light source (rather than a point light), it is possible to estimate the diffuse colour, specular colour and roughness of each point of the surface.

8. Flame Illumination

Up until the 19th century, the only light that can be considered artificial came from flames. Flame light was a requirement for visibility in dark and enclosed environments, including caves, grottos and the interiors of houses at night or where there were limited windows. It is clearly documented, mainly by archaeological findings, that candles and oil lamps were commonly used by past societies [For66, Ega99, Cha02, RC03, SCM04, GMMC08]. This is likely because they were more stable, movable and easy to manipulate. Hearths, lamps, candles and torches were among the common placeholders for flames [BLLR06, DCB02, For66]. Flame illumination is therefore a crucial part of authentic cultural heritage visualisation.

8.1. Flame Data Collection

The visual appearance of past environments mostly depends on the physical properties of the flames that lit the scene. These properties depended on the components the flame was created from. Careful attention must be paid to how the flames were created, as well as how they were maintained, including the flame's colour, luminance and shape. It is also necessary to replicate all data related to the geometry, materials and illumination from and surrounding the flame, such as luminaires.

Figure 6: Left: Beeswax and tallow candles [Cha02]. Right: Materials for Roman illumination: Roman lucerna, olive oil and salt, [GMMC07].

Experimental archaeology techniques may be used to build a variety of reconstruction candles and oils for lamps using appropriate materials, see Figure 6. Data should be based on historical records and gathered methodically, as

incorrect data collection can produce erroneous spectral results, see examples in Figure 8. At this stage the aid of historian experts is crucial to the reliability of the reconstructed luminaire.

8.1.1. Luminaires

Ideally, historical luminaires of a period should be used to maintain the highest level of authenticity. If there are no authentic luminaires available, these can be made using methods and materials available from that period and location in time. This is important as the physical shape and the manufacture material of the luminaire have an impact on the properties of the flame. In oil lamps, the nozzle temperature causes a preheating of the fuel, ensuring a more complete combustion [Ega99]. Archaeological evidence indicates that lamps with wicks and fuel became more common at the close of the Bronze Age and passed between civilisations through time. Egyptians, Greeks, Romans and Byzantines, among others, used them for specific purposes, see Figure 7.

Figure 7: Left: A Roman oil lamp (lucerna) manufactured in clay [GMMC08]. Right: A North African bronze oil lamp from the 7th century A.D. (Photo by George Post © 2009), [Ega99].

8.1.2. Fuel materials

Another important element in the spectral properties of the burning flame is the fuel used to light the luminaire. Solid fuel types can be utilised for firewood, torches and candles. Examples of materials for candles include beeswax and animal fat/tallow, see Figure 6. Liquid fuels for lamps include olive oil, sesame oil, fish oil and castor oil. Vegetable oils were the most common fuels used by civilisations established around the Mediterranean Sea due to the abundance of fuel supplies such as olives and sesame seeds in that region [KJW01]. Oils produce flames with better quality compared to those from animal sources [For66]. The reconstructed fuel materials should consist of the same chemical compounds and be prepared as the past fuel was, or else the experimental data obtained could be inaccurate, see Figure 8.

S1, S2 and S3 in Figure 8 present the spectral radiance of oil lamp lighting measurements using three distinct samples of pure olive oil as fuel, without any kind of additives during the manufacture procedure. The green and blue curves (S1 and S2) illustrate the results obtained from two samples manufactured using old traditional methods. The third sample, shown by the red curve, S3, is of an olive oil closer to a common commercial olive oil, which can be purchased at an ordinary modern grocery store. In certain regions of the visible electromagnetic spectrum, this sample has an increase of approximately 50% in light intensity. This happens because the viscosity and chemical components of the olive oil differ with the manufacture process. The fuel manufacture procedure (of olive oil, in this example) has major implications for the light intensities produced. Ancient manufacturing methods tend to produce inferior values of light intensity; this should be taken into account whenever these kinds of simulations are made.

Figure 8: S1, S2 and S3 show measurements of three samples of olive oil from different origins and manufacture procedures. S1 and S1-S are the measurements of the same olive oil; with no salt (S1) and with salt (S1-S), [GMMC08, GMMC09].

8.1.3. Additives in fuel materials

All chemical components change the attributes of the flame. Salt, for instance, is a component used in Roman times because it produces a more stable and brighter flame and reduces the amount of smoke generated by a flame [For66, DC01, GMMC08]. The plot in Figure 8 also compares the spectral values of two different configurations of the same olive oil sample. The mixture of salt in the olive oil has a significant impact on light intensity (brown curve), with an increase of more than 60%, see Figure 20. This illustrates the importance of studying all additives in fuel materials.

8.1.4. Wicks

Wicks also have a large influence on the quality and attributes of the flame. Woven or braided wicks were generally made from fibrous materials such as flax, linen, papyrus, tow, dry reeds, hemp or ordinary rushes [For66, Ega99, GMMC07]. Other issues which must be considered include the length and thickness of the wick from the nozzle. This thickness is of importance as thin wicks burn fuel more slowly than thick ones without having a significant influence on the size of the flame. Figure 9 shows a variety of examples of wicks that can be used in such experiments.

8.1.5. Measuring Lighting

To acquire the experimental data it is necessary to measure the absolute values of the spectral properties from the light


Figure 9: Examples of several wicks with different diameters for use in experiments. The bottom right three consist of linen while the others are made of cotton.

Figure 10: Spectral readings of an oil lamp. The light emitted by the lamp is almost entirely reflected on the white board, which is then captured by the spectroradiometer, [GMMC08].

radiated by the flame under combustion. Spectroradiometers and spectrophotometers are able to measure the emission range of a light source in the visible gamut of the electromagnetic spectrum, from 380nm to 760nm. The captured wavelength data can subsequently be rendered as spectral radiance or converted to RGB. Below is a compiled list of suggestions for an experiment setup:

• A fully dark and sealed room, with no winds to disturb the flame.
• Avoid single readings, as they may lead to misleading values. Multiple readings can be averaged.
• The burning luminaire should be placed against a perfectly diffuse board with a reflectance factor close to 100%.
• A calibrated spectroradiometer placed perpendicularly to the board, see Figure 10.
• The reading must be made solely from the light reflected on the board, and not of the flame.
• Measurements should be made in small wavelength intervals, typically 4-5nm.
• Avoid specular surfaces near the experiment.

8.2. Flame Simulation

There are several approaches to simulating flames. One of the earliest examples is the use of particle systems [Ree83, PP94, SF95, TTC97]. Particle systems refer to common techniques in computer graphics in which it is possible to simulate certain phenomena that are otherwise difficult to replicate through conventional rendering approaches. A particle system typically has an emitter, which acts as the source of the particles. Sets of parameters define the behaviour of the emitted particles; these can include initial velocity vectors, particle lifetime and colour.

Inakage [Ina90] proposed one of the first still candle flame models. It was based on the physical scattering of light emission and transmission in the regions of combustion. Rendering is achieved by applying a texture map to a geometrical primitive similar to a flame, which is then volume rendered. [Rac96] extended this work to incorporate the dynamic nature of the flame.

Figure 11: Left: Simple luminaire model. Right: Real flame in virtual environment, [DC01].

An alternative approach is to incorporate video footage of a real flame into a virtual environment, as suggested by Devlin et al. [DC01], see Figure 11. The illumination of the flame is computed from the size and position of the flame, and not recorded in the video. Filming the flame against an evenly green coloured, matt background enables a threshold for each frame, which can be used to identify and dismiss the background colour. This effectively separates the flame from any unwanted parts of the scene. In Radiance, the illumination of the flame can consist of a series of illum spheres, as detailed by Devlin et al. [DC01, CDB∗02], see Figure 11. When viewed directly, the spheres are invisible but emit light in the area they occupy. The number and size of the spheres can vary to achieve a better fit to the shape of the flame for each frame in the video sequence.

Melek et al. [MK02] developed a method to simulate flames combusted by fuel gases. A modified interactive grid


fluid dynamics equation solver was used to describe the motion of a 3-gas system (fuel, air and exhaust gases). The burning process is simulated by the amounts of fuel and air consumption in each grid cell, which allows them to gain interactivity in the process while maintaining realistic modelling and visualisation of fire flames.

Nguyen et al. [NFJ02] present a physically-based model to reproduce both realistic smooth and turbulent flames from solid and gaseous fuels. The method innovates in the reproduction of the interaction of fire with flammable and non-flammable objects, and in the expansion of the fuel as it reacts to form hot gaseous products. This was achieved by using implicit surfaces to represent the reaction zone where the gaseous fuel is combusted.

Figure 12: Left: Comparison of same shapes for differing degrees of gaseous expansion. Right: Two burning logs are placed on the ground and used to emit fuel. The top, unlit crossways log forces the flame to flow around it, [NFJ02].

Hasinoff et al. and Ihrke et al. [HK03, IM04] proposed to employ a tomographic methodology for reconstructing volumetric flames from several pictures of a fire. Pegoraro et al. [PP06] use molecular physics properties of the various chemical compounds of fire to compute realistic fire flames. Their rendering algorithm employs detailed simulation of the radiative emission and refractive transfer which occur in real flames.

Bridault-Louchez et al. [BLLR06] and [BLRR07] generate particles that can be used to control the Non-Uniform Rational B-Spline (NURBS) surfaces that model the flame, on which a transparent 2D texture is mapped. This is done to compute the flame flickering in real-time. The authors compute the illumination by encoding the photometric distribution of a real flame directly per pixel on the GPU.

PART TWO: Illuminating the Past

9. Cultural Heritage Reconstructions

Forte et al. [FSR97] present an extensive overview of the earliest cultural heritage reconstructions. Despite attempting accurate rendering, few of the reconstructions stand out as close to accurate models compared to today's standards. However, this is primarily due to the vast advancements in hardware and software solutions that have emerged in the past 12 years. Some well-known historical examples from the book include Stonehenge, the Parthenon, the Sphinx and the Pyramids at Giza.

9.1. Reconstruction with Sky Illumination Models

Foni et al. [FPMT02] detail a virtual reconstruction of the Hagia Sofia in Istanbul. Virtual texture restoration is a key topic in this paper due to the severely damaged decorative walls and ceilings. This was done using commercial photo editing software. While this may not be a physically correct approach to restoring texture detail, there still remains research to reverse engineer details as texture maps and BRDFs. A render plug-in for 3DS Max [Aut09a] was used in the reconstruction to create realistic interior lighting.

Kalabsha is a temple in Egypt that was dismantled and moved in the 1960s to save it from the rising water of Lake Nasser. The purpose of the reconstruction [SCM04] was to relight the temple in its previous position as it stood when it was first built 2000 years ago. Figure 13 shows two images of the temple of Kalabsha in Egypt employing gensky in Radiance, compared with a photograph of the temple today.

Figure 13: Top: Photograph and Render comparison. Bottom Left: Daylight at the Temple of Kalabsha at 9am on 21 January, photograph 2003. Bottom Right: Daylight simulation 30BC, [SCM04].

Debevec et al. [DTG∗04, Deb05] detail a reconstruction of the Parthenon in Greece as it stands today and in its ancient form. In the former paper, the authors estimated diffuse surface colours of the scene lit by sunlight illumination. The paper describes the approach to calculate the spatially-varying diffuse surface reflectance given the scene's geometry, a set of photographs taken under natural illumination,

and corresponding measurements of the illumination. Debevec [Deb05] then overviews the technology and production processes used to create the model. Einarsson et al. [EHD04] described a low-cost system for acquiring high-resolution geometry and reflectance properties using photometric stereo, which was used to model ancient inscriptions on the Parthenon. The reconstruction is expanded upon to cover real-time realistic rendering through precomputed lighting on the GPU by Callieri et al. [CDPS06], and relit using the approach to capture an HDR environment map by Stumpfel et al. [SJW∗04]. Sander [San06] uses level of detail approaches to render the scene for real-time purposes.

Figure 14: Various images of the Parthenon. Top Left: Evening render. Top Right: The restored version on the ancient Acropolis. Bottom Left: Modern day photograph. Bottom Right: Daylight modern render, [DTG∗04, Deb05].

CAVE-like systems surround the user with large displays. Gutierrez et al. [GSM∗04] present the use of realistic lighting in a CAVE-like system for their virtual 10-12th century Muslim suburb Sinhaya. The lighting was calculated using the authors' in-house Monte-Carlo global illumination software. The user could specify a time and a day, and atmospheric conditions, and obtain a complete sun and sky model to light the scene. However, only natural light could be used in the application. The authors also discuss the inherent difficulty of adding presence in virtual environments.

A domestic building from Roman Italica in Andalucia, Spain is presented by Earl [Ear05]. Earl uses standard textures to provide an initial impression of the scene. In addition, however, illumination is captured through automatic unwrapping of many thousands of textures produced under varying lighting conditions. These lighting conditions in turn simulate the visual prominence of different parts of the site.

Rome Reborn is a project that aims to reconstruct ancient Rome. While the initial reconstruction was based on laser-scanned models, version 2.0 was a procedurally generated city, and uses an HDR environment map of the sky captured in Rome to illuminate the city in a realistic fashion [FAG∗08].

Earl et al. [EKB08] discuss the uses of computer graphics, including illumination, for now-demolished spaces in and around the proposed Imperial Palace at Portus, Italy. The authors illustrate their approach with three alternative architectural scenarios.

Corsini et al. [CCC08] introduce Stereo Light Probes to relight a scanned model of Michelangelo's David, presented by Koller et al. [KTL∗04]. This is done to illustrate the significant visual difference of using two light probes compared to a single one, and shows how multiple copies of David appear significantly different, with statues casting shadows on one another to a much greater extent with the use of two light probes.

Happa et al. [HWT∗09, HAD∗09] illustrate two case studies of the pipeline for the data acquisition, post-processing and rendering of cultural heritage objects and environments: the first, a Roman statue head of an Amazon (further discussed by Earl et al. [EBH∗09, Ear09b]); the second, a reconstruction of the Panagia Angeloktisti, a Byzantine church in Cyprus, lit at various times in the day, see Figure 15. Both papers emphasise how unbiased rendering for cultural heritage reconstructions can be useful for interpreting the past, making use of Path Tracing and MLT.

Figure 15: Top Left: A Roman statue head of an Amazon lit under light captured on top of Mount Vesuvius. Top Right: The head, added interior environment and a directional light source. Full Environment Map seen in Figure 2, [HWT∗09]. Bottom Left: Interior render of the Panagia Angeloktisti at mid-day. Bottom Right: Exterior render of the Panagia Angeloktisti at mid-day, [HAD∗09].

A reconstruction of the Minoan Cemetery at Phourni in Crete is presented by Papadopoulos and Earl [Ear09a]. The authors use a physical sky generated in Mental Ray to evaluate the tombs' architecture, use, visual impact and capacity, as well as the contribution of illumination to their interior.

9.2. Image-based Techniques and Examples

The application of Reflectance Fields is described by Hawkins et al. [HCD01] and employed to a number of


Native American cultural artefacts, including an otter fur headband, a feathered headdress, an animal-skin drum, and several pieces of neckwear and clothing. Gardner et al. [GTHD03] illustrate recovering reflectance parameters using Linear Light Source Reflectometry for a medieval illuminated manuscript and relight it with IBL.

9.2.1. Reflectance Transformation Imaging Examples

RTI's documentary usefulness has been demonstrated in several natural science and cultural heritage object areas and offers significant advantages [MGW01, Mud04, MVS∗05, FBM∗06, MMSL06, MO05, ZSMA07, MMC∗08, Mal, Cul09]. Some of the advantages of RTI in the cultural heritage domain include:

• Non-contact acquisition of data.
• Clear representation of 3D shape characteristics through interactive viewing tools.
• Data discernment improvements over direct physical examination through RTI enhancement functions.
• No data loss due to shadows and specular highlights.
• High resolution sample densities [MO05].
• Automatic determination of the most information-rich illumination directions [MO05].
• Simple and achievable image processing pipeline.
• Easy online communication [Arc09].

Figure 16: Top Left: Unenhanced 'Julian Star'; the coin is seen in its natural, primarily diffuse mode. Top Right: 'Julian Star' with Specular Enhancement. Bottom Left: Highlighted surface areas are hidden. Bottom Right: Surfaces visible.

The top two images in Figure 16 show two images of the coin commemorating Julius Caesar's Funeral Games. The comet appearing during these games is circled in its full context and then magnified. Due to heavy wear, there is significant abrasion of the coin's surface features. On the right of the figure, the coin is seen in its Specular Enhancement mode. The specular enhancement mode shows, with superior clarity, the four rays of light surrounding the comet, along with the triangular comet's tail extending behind it at the 'five o'clock' position.

Mudge et al. [MVS∗05] confirmed previous observations that RTIs were able to robustly document the surfaces of highly specular materials, particularly gold. The bottom images in Figure 16 show two images of the obverse of the Merovingian Triens. The bottom left image of the same figure is one of the twenty-four input images for the PTM and shows where specular reflections have caused data loss.

9.3. Reconstruction of Ancient Flames

While several societies in the past may have created fire from arbitrary items in their surroundings, others carefully selected the content of their fuel to ignite the flames in their environments [DC01, DCB02, SCM04, ZCBRC07, GMMC08]. The resources were needed for other purposes, in domestic uses for example, or because the lighting properties of some specific fuels can affect the perception of the artefacts that they illuminate.

Devlin et al. [DC01] investigate Pompeii frescoes viewed under olive oil lamp light. Figure 17 illustrates how virtual scenarios appear significantly different when viewed with modern lighting, revealing details and perceptions not possible by other means [CDB∗02].

Chalmers et al. [CGH00, Cha02] present a reconstruction of a prehistoric rock shelter site at Cap Blanc, France, where animal sculptures are engraved in the cave walls. Viewing these sculptures illuminated by an animal fat tallow candle creates a warmer ambiance and increases the amount of shadow in the whole scene. The perception of the dynamic flame, coupled with the 3D structure of the carving, suggests how people may have used this combination of the flickering flame and 3D structure to create animations 15,000 years ago, see Figure 18.

A computer-generated model based on a half-timbered structure in Southampton was generated by Devlin et al. [DCB02]. The paper investigates the bright colour of pottery in medieval times: how the top halves of some jugs appear both glazed and decorated. This is likely due to deliberate use of light, suggesting several pots would have looked brightest when lit from above, and were made so for practical reasons.

Roussou et al. [RC03] present a reconstruction of a section of the Knossos palace, a Bronze Age archaeological site on Crete. The paper included accurate modelling of a flame that may have been used to light the past environment. A realistic and efficient flickering of flames was created. However, the method is limited to small flames from candles.
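Reconstructions like these drive their virtual flames with measured spectra (Section 8.1.5). To use such a measurement in a renderer, the spectral power distribution is commonly integrated against the CIE colour matching functions and mapped to RGB. The sketch below is illustrative only: it uses crude Gaussian stand-ins for the CIE 1931 curves and a synthetic warm, flame-like spectrum; real conversions should use tabulated CMF data.

```python
import numpy as np

def gauss(lam, mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Crude Gaussian approximations of the CIE 1931 colour matching
# functions (an assumption for brevity; use tabulated CMFs in practice).
def cmf_xyz(lam):
    x = 1.06 * gauss(lam, 600.0, 38.0) + 0.36 * gauss(lam, 442.0, 22.0)
    y = 1.01 * gauss(lam, 556.0, 47.0)
    z = 1.78 * gauss(lam, 447.0, 28.0)
    return np.stack([x, y, z])

def spectrum_to_rgb(lam, radiance):
    """Integrate a sampled spectrum (e.g., 380-760nm in 5nm steps)
    against the CMFs, then map XYZ to linear sRGB primaries."""
    dlam = lam[1] - lam[0]
    X, Y, Z = (cmf_xyz(lam) * radiance).sum(axis=1) * dlam
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])  # XYZ -> linear sRGB (D65)
    return m @ np.array([X, Y, Z])

lam = np.arange(380.0, 761.0, 5.0)
# A warm, flame-like spectrum: power rising towards long (red) wavelengths.
flame = np.interp(lam, [380, 760], [0.1, 1.0])
r, g, b = spectrum_to_rgb(lam, flame)
```

For a warm spectrum like this, the red channel dominates, which is exactly the reddish-yellow cast that the flame-lit renderings in this section exhibit compared to modern lighting.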

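The Specular Enhancement mode referred to in Section 9.2.1 and Figure 16 discards the measured, largely diffuse shading and re-shades each pixel with a synthetic specular lobe driven by per-pixel normals recovered from the RTI data. The following is a minimal Blinn-Phong-style sketch of that idea, with synthetic normals and illustrative names; it is not the RTIViewer's actual implementation.

```python
import numpy as np

def specular_enhance(normals, light, view, shininess=50.0):
    """Re-shade per-pixel normals with a specular lobe only, so fine
    surface relief (scratches, incised lines) stands out."""
    h = light + view                     # Blinn-Phong halfway vector
    h = h / np.linalg.norm(h)
    ndoth = np.clip(normals @ h, 0.0, None)
    return ndoth ** shininess

normals = np.zeros((2, 2, 3))
normals[...] = [0.0, 0.0, 1.0]           # flat patch facing the camera
normals[1, 1] = [0.6, 0.0, 0.8]          # one tilted 'scratch' pixel
light = np.array([0.0, 0.0, 1.0])
view = np.array([0.0, 0.0, 1.0])
img = specular_enhance(normals, light, view)
```

The flat pixels light up fully while the tilted pixel goes nearly black, which is why worn features such as the comet's rays on the 'Julian Star' coin become legible in this mode.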

Figure 17: Top Left: Medieval painting lit by modern lighting. Top Right: The same painting lit by simulated medieval candles [CDB∗02]. Bottom Left: Frescoes in Pompeii lit by modern lighting. Bottom Right: Same room with authentic candles and olive oil lamp, [DC01, DCB02].

Figure 18: Top Left: The simulation under a 55W incandescent bulb. Top Right & Bottom: The simulation under animal fat lamplight, [CGH00].

Figure 19: Egyptian hieroglyphics from the Temple of Kalabsha illuminated under various conditions. Left: Modern light (painted). Middle: Sesame oil lamp light (unpainted). Right: Sesame oil lamp light (painted), [SCM04].

Figure 20: Illuminated frescoes and mosaics by Roman lamps and candlestick holders. The same sample of olive oil is the fuel source in both images. Left: mixed with salt. Right: no salt, [GMMC08, GMMC09].

Colourful Egyptian hieroglyphs viewed under a sesame oil candle are presented by Sundstedt et al. [SCM04]. In this case the blue paint appears almost green, see Figure 19. The interior was also modelled to allow exploration and reconstruction of the hieroglyphics under contemporary lighting. Properties of real sesame oil lamps were measured in the reconstruction.

A reconstruction of Roman frescoes and mosaics from Conimbriga in Portugal is presented by Gonçalves et al. [GMMC08, GMMC09]. The visual perception of Roman oil lamp illumination through HDRI rendering is also explored. Figure 20 shows Roman oil lamps using the same sample of olive oil as fuel mixture, but with different fuel additives.

Kider et al. [KFY∗09] investigate the use of experimental data to recreate early Islamic glass lamp lighting, and validated how various water levels and glass fixture shapes used during early Islamic times changed the overall light patterns and downward caustics.

9.4. Participating Media

Less dense participating media, such as dust particles present in an environment, can greatly alter the visual perception of an environment. Sundstedt et al. [SGGC05] discuss sunlight in the Temple of Kalabsha and show the visual impact difference in simulations with and without dust, see Figure 21. Gutierrez et al. extend this work for the use of participating media in cultural heritage reconstructions [GSGC06, GSGC08].

9.5. Visual Perception

The accurate simulation of the distribution of light energy in scenes does not guarantee that images generated from this simulation will have a correct visual appearance. This is due to two main reasons. Firstly, it may be a limitation of the display technology used. Secondly, it is also necessary to model the behaviour of the HVS to capture and then visualise the correct appearance of the original scene virtually. The eye adjusts to the ambient illumination in the environ-

c The Eurographics Association 2009. J. Happa, M. Mudge, K. Debattista, A. Artusi, A. Gonçalves, and A. Chalmers / Illuminating the Past - State of the Art

virtual tourism applications, however it also assumes wall writings have already been examined and interpreted by ar- chaeologists. Zányi et al. [ZCBRC07] compare images of the Icon of Christ Arakiotis from Cyprus illuminated with simu- lated modern lighting and candle light which was present in Byzantine times on both a traditional LCD and a novel High Dynamic Range display, see Figure 23. The authors argue that in the Byzantine period, the layout and lighting of the icons, frescoes and mosaics were carefully regulated to achieve a spiritual response from the viewer and illustrate how such ideas can be investigated through HDR display technologies. Figure 21: Top Left: real picture of a slit. Top Middle: simulation without participating media. Top Right: simula- tion with participating media. Bottom Left: Chamber with- out participating media, Bottom Right: Chamber including participating media, [GSGC08].

Figure 23: Left: Modern Lighting. Middle: Approximated Candle Lighting on an HD LCD screen. Right: Approxi- mated Candle Lighting on an HDR display, [ZCBRC07].

Surface Depth Hallucination is a method developed by Glencross et al. [GWM∗08] to recover models of predomi- nantly diffuse textured surfaces that can be relit and viewed from any angle under various illumination models. Using standard digital cameras from a single view the technique takes a diffuse-lit (cloudy day) image and a flashlit image to produce an albedo map and textured height field. The paper uses experimental validation to compare Mayan Glyphs at Chichén Itzá relit using this method to rendering of laser- Figure 22: Top four images: Simulating dark adaptation. scanned depth map in Radiance. After some time, sensitivity increases, allowing the HVS to recover some visibility. Bottom left: Kalabsha tone-mapped The use of physical light to reconstruct physical artefacts with a local model of eye adaptation, [LSC04]. Bottom right; in situ is an emerging application as shown by Aliaga et linear mappings at different exposure times. al. [ALY08]. The authors present a system to restore and alter the appearance of old and deteriorated objects. The system allows altering the appearances of objects to that of a synthetic restoration. This in turn allows the virtual re- ment. The timecourse of light and dark adaptation is well- illuminations of the objects. The authors use an ancient Chi- known and it is different for the cones and rods [HF86]. Dur- nese vase as an example. ing the eye adaptation process, it is important to simulate ef- Ritschel et al. [RIF∗09] argue that the temporal properties fects including colour sensitivity and visual acuity variation. of glare are a strong means to increase perceived brightness The eye is not capable of seeing colours in darkness. A lo- and therefore increase realism of bright light sources. The cal model of eye adaptation for HDR images is presented by paper proposes a model based on the anatomy of the human Ledda et al. [LSC04], see Figure 22. eye, enabling the simulation of dynamic glare on a GPU. Beraldin et al. 
[BPEH∗02] discuss problems encountered It does not react to the user’s eye, but rather a human eye in visualising weathered materials in a Byzantine crypt. En- model. The glare is dependent on type of illumination and hancing texture maps over realistic lighting can be useful for can be seen pulsating realistically in video. This can be used
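The adaptation-based display mapping discussed above can be illustrated with a minimal sketch. The following is not the local eye adaptation model of Ledda et al. [LSC04], but a simple global photographic-style operator in which the log-average luminance of the HDR image stands in for the eye's adaptation level; the function name and the `key` parameter are our own illustrative choices.

```python
import numpy as np

def tone_map_global(rgb, key=0.18):
    """Map a linear HDR image (H, W, 3) to displayable [0, 1] values.

    A simplified global operator: the log-average luminance acts as
    the adaptation level, and high luminances are smoothly compressed.
    """
    # Luminance via Rec. 709 weights.
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    # Log-average luminance approximates the eye's adaptation level.
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = key * lum / log_avg
    # Compress high luminances towards 1.
    display = scaled / (1.0 + scaled)
    # Rescale the colour channels by the luminance ratio.
    ratio = np.where(lum > 0, display / (lum + 1e-6), 0.0)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```

A local model such as [LSC04] would instead compute the adaptation level per region of the image, which is what allows detail to survive in both the bright and the dark parts of a lamp-lit interior.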

to increase the realism of light sources for cultural heritage items such as lit candles.

10. Discussion

We will, of course, never know precisely how any ancient environment was lit. In addition, our modern preconceptions and cultural backgrounds preclude a complete understanding of the experience of an ancient viewer. However, authentic illumination of reconstructions offers a chance to get closer to a perception of the past, thereby facilitating new modes of engagement and interpretation [GG00].

10.1. Illumination for Virtual Archaeology

The importance of illumination is often overshadowed by the far more noticeable visual impact of modelling geometry and materials. Authentic illumination is an element of a reconstruction that, unless present, will leave the viewer unaware of information that may be present in the scene. The question of realism in virtual archaeology remains the object of much discussion [Rei91, FSR97, RR97, Mar01, CDB∗02]. With this topic also comes the concern regarding the authenticity of the illumination. While light transport itself has not changed from ancient to modern times, the manner in which people have used light has changed many times over the past thousands of years. Additionally, there are changes in the environment that may affect the perception of light at the remnants of these sites. Changes in pollution, climate, materials and the physical environment (neighbouring buildings) all need to be addressed in the authentic reconstruction of a site.

10.2. Rendering

It is surprising that, for rendering methods that require authentic lighting, unbiased methods have not been fully explored in the cultural heritage domain. The reasons for this may vary: possibly such methods are still too computationally expensive, or possibly the quality difference offered by such methods is not significant enough to warrant the increased cost. Yet newer methods such as ERPT [CTE05] and PMC [LFCD07] may offer distinct advantages at reduced costs. Other, more deterministic methods such as radiance caching [KGPB05], lightcuts [WFA∗05] and extensions such as bi-directional importance sampling [WA09] could well provide suitable alternatives to the more traditional photon mapping and irradiance caching methods.

In terms of interactive global illumination, it is now clear that this is becoming achievable with the development of interactive ray tracing methods [WMG∗07], the extension of interactive ray tracers to interactive global illumination [DDC09, DDB∗09], and new GPU techniques such as Imperfect Shadow Maps [RGK∗08] and the method presented by Wang et al. [WWZ∗09].

An aspect that remains largely unexplored in cultural heritage reconstruction is the rendering of dark or predominantly indirectly lit environments. Low-lit environments are significantly difficult to render accurately for several reasons: the lack of HDR display systems, but also the absence of the computational power necessary to render such environments.

Spectral rendering refers to the rendering of light with wavelength data. Traditionally, the spectral nature of light is ignored in favour of a simpler RGB model. Spectral rendering, and its subtle but important visual impact and implications, is another unexplored topic in cultural heritage settings.

There also exist models for night-time rendering; however, to our knowledge there are no papers that explore physically-based rendering of cultural heritage objects at night. With the recent advancements in HDRI capture methods and display technologies, it will become possible to render night light and to explore the importance of moon and star light in the past.

10.3. Documentation & Validation

Once a reconstruction is considered complete, it may be re-examined in the future. Without comprehensive documentation of the data capture and reconstruction process, vital information about the reconstruction might be lost. This includes approximations made by the modellers and laser-scanning technicians, and difficulties encountered during data collection and in post-processing. While research papers give a general idea of the process, they rarely detail problems with the final reconstructions. This is important if the reconstruction is to be of any scientific and historic merit for future generations. While there are no standardised approaches to documenting the progress of a reconstruction, this is likely to become a topic in the future.

There is increasing research interest in modelling uncertainty in cultural heritage site reconstructions, and in defining standards for meta-data [AG07]. This is likely to continue to be an active area of research in the coming years. A major problem with illumination models is determining the historical correctness of the approach. For instance, a model may be correct for a given state in time, but not for other periods in time, or vice versa.

The Cedarberg, for instance, is an area in South Africa with a large number of prehistoric cave art sites. Figure 24 shows an example of the impact of direct sunlight on a site at different times of the year. The image on the left shows a photograph of a painting of a woman in purple, taken in November. It looks nothing special; however, on the right is the same painting taken in March. The shadow from the natural overhang of the rock shelter almost perfectly frames the painting on three sides. This shows how much of an impact correct illumination can have on a site, and how easily it can be overlooked depending on the time at which the site is examined.
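The importance of the spectral rendering noted above can be illustrated numerically. As a rough sketch, a flame can be treated as a blackbody emitter (a simplification; real flames are not perfect blackbodies, and ~1900 K is only an assumed colour temperature for candle-like light), and Planck's law then shows how little blue energy flame light carries relative to daylight:

```python
import numpy as np

# Physical constants (SI units).
H = 6.626e-34   # Planck constant
C = 2.998e8     # speed of light
KB = 1.381e-23  # Boltzmann constant

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody at the given wavelength (Planck's law)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

# Blue/red energy balance: candle-like emitter (~1900 K, assumed)
# versus a daylight-like emitter (~6500 K).
blue, red = 450e-9, 650e-9
candle_ratio = planck_radiance(blue, 1900.0) / planck_radiance(red, 1900.0)
daylight_ratio = planck_radiance(blue, 6500.0) / planck_radiance(red, 6500.0)
```

Under these assumptions the blue-to-red ratio of the flame-like source is orders of magnitude smaller than that of daylight, which is consistent with the colour shifts reported for lamp-lit reconstructions (e.g. the blue paint appearing green under sesame oil lamps); a three-channel RGB light source cannot reproduce such wavelength-dependent interactions with pigments exactly.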


[Bly06] BLYTHE D.: The Direct3D 10 system. ACM Transactions on Graphics (2006).

[BMA04] BABA M., MUKUNOKI M., ASADA N.: Shadow removal from a real image based on shadow density. In SIGGRAPH '04: ACM SIGGRAPH Posters (2004).

Figure 24: Cedarberg cave art. Left: November. Right: March.

[BN76] BLINN J. F., NEWELL M. E.: Texture and reflection in computer generated images. Communications of the ACM (1976).
[BPEH∗02] BERALDIN J.-A., PICARD M., EL-HAKIM S. F., GODIN G., LATOUCHE C., VALZANO V., BANDIERA A.: Exploring a byzantine crypt through a high-resolution texture mapped 3d model: combining range data and photogrammetry. In Proceedings of the ISPRS/CIPA Int. Workshop Scanning for Cultural Heritage Recording (2002).
[BSP07] BARBOSA J., SOBRAL J. L., PROENÇA A. J.: Imaging techniques to simplify the ptm generation of a bas-relief. In VAST '07: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2007).

[BT78] BARROW H., TENENBAUM J.: Recovering intrinsic scene characteristics from images. In Computer Vision Systems (1978).

Not all new information is obtainable through methodical scientific approaches. The validation of authentic illumination is difficult for objects we have no prior knowledge of, except their current location. Occasionally, hypotheses can spring up by chance, or coincidentally by combining existing information. An authentic reconstruction should always critically examine existing scientific and historical evidence, or otherwise remain an interpretation of the past.

Acknowledgements

Images in Figures 1 and 14 courtesy of Paul Debevec [Deb01, DTG∗04, Deb05]. Figure 3 courtesy of Oliver Wang, Prabath Gunawardane, and James Davis. Left image in Figure 7 courtesy of George Post [Ega99]. Images in Figure 12 courtesy of Henrik Wann Jensen [NFJ02]. Figures 11, 13, 19 and 22 courtesy of the University of Bristol. Thanks also to Graeme Earl for comments. We would also like to thank the anonymous reviewers of this VAST-STAR for their comments.

References

[AC07] ARTUSI A., CHETVERIKOV D.: A survey of specularity removal methods. Technical Report, 2007.
[AG07] ARNOLD D., GESER G.: Research agenda for the applications of ICT to cultural heritage. EPOCH publications, 2007.
[ALY08] ALIAGA D. G., LAW A. J., YEUNG Y. H.: A virtual restoration stage for real-world objects. ACM Transactions on Graphics (2008).
[AMHH08] AKENINE-MÖLLER T., HAINES E., HOFFMAN N.: Real-Time Rendering. A. K. Peters, Ltd., 2008.
[App68] APPEL A.: Some techniques for machine renderings of solids. In Proceedings of the Spring Joint Computer Conference (1968).
[Arc09] ARCHAEOLOGICAL COMPUTING RESEARCH GROUP, UNIVERSITY OF SOUTHAMPTON: A polynomial texture map of an amazon statue, interactive demo. http://www.soton.ac.uk/archaeology/acrg/acrg_research_amazon.html, 2009.
[Aut09a] AUTODESK: 3DS Max website. http://www.autodesk.com/3dsmax, 2009.
[Aut09b] AUTODESK: Maya website. http://www.autodesk.com/maya, 2009.
[BA03] BABA M., ASADA N.: Shadow removal from a real picture. In SIGGRAPH '03: ACM SIGGRAPH Sketches & Applications (2003).
[BLLR06] BRIDAULT-LOUCHEZ F., LEBLOND M., ROUSSELLE F.: Enhanced illumination of reconstructed dynamic environments using a real-time flame model. In AFRIGRAPH '06: Proceedings of the 4th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (2006).
[BLRR07] BRIDAULT F., LEBLOND M., ROUSSELLE F., RENAUD C.: Real-time rendering and animation of plentiful flames. In Proceedings of the 3rd Eurographics Workshop on Natural Phenomena (2007).
[CCC08] CORSINI M., CALLIERI M., CIGNONI P.: Stereo light probe. Computer Graphics Forum (2008).
[CDB∗02] CHALMERS A., DEVLIN K., BROWN D., DEBEVEC P., MARTINEZ P., WARD G.: Recreating the Past. SIGGRAPH Course, 2002.
[CDPS06] CALLIERI M., DEBEVEC P., PAIR J., SCOPIGNO R.: A realtime immersive application with realistic lighting: The Parthenon. Computers & Graphics (2006).
[CGH00] CHALMERS A., GREEN C., HALL M.: Firelight: Graphics and Archaeology. SIGGRAPH Electronic Theatre, 2000.
[CGP∗01] CUCCHIARA R., GRANA C., PICCARDI M., PRATI A., SIROTTI S.: Improving shadow suppression in moving object detection with HSV color information. In Proceedings of Intelligent Transportation Systems (2001).
[Cha60] CHANDRASEKHAR S.: Radiative Transfer. New York: Dover Publications, 1960.
[Cha02] CHALMERS A.: Very realistic graphics for visualising archaeological site reconstructions. In Proceedings of the 18th Spring Conference on Computer Graphics (2002).
[CPC84] COOK R. L., PORTER T., CARPENTER L.: Distributed ray tracing. In SIGGRAPH '84: Proceedings of the 11th annual conference on Computer graphics and interactive techniques (1984).
[CRL06] CHALMERS A., ROUSSOS I., LEDDA P.: Authentic illumination of archaeological site reconstructions. In CGIV'2006: IS&T's Third European Conference on Color in Graphics, Imaging and Vision (2006).
[CT82] COOK R. L., TORRANCE K. E.: A reflectance model for computer graphics. ACM Transactions on Graphics (1982).
[CTE05] CLINE D., TALBOT J., EGBERT P.: Energy redistribution path tracing. In SIGGRAPH '05: Proceedings of the 32nd annual conference on Computer graphics and interactive techniques (2005).
[Cul09] CULTURAL HERITAGE IMAGING: CHI webpage. http://www.c-h-i.org/featured_projects/featured_projects.html, 2009.
[CWH93] COHEN M. F., WALLACE J., HANRAHAN P.: Radiosity and Realistic Image Synthesis. Academic Press Professional, Inc., 1993.
[DC01] DEVLIN K., CHALMERS A.: Realistic visualisation of the pompeii frescoes. In AFRIGRAPH '01: Proceedings of the 1st International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (2001).
[DCB02] DEVLIN K., CHALMERS A., BROWN D.: Predictive lighting and perception in archaeological representations. In UNESCO "World Heritage in the Digital Age" 30th Anniversary Digital Congress, UNESCO World Heritage Centre (2002).
[DCCS06] DELLEPIANE M., CORSINI M., CALLIERI M., SCOPIGNO R.: High quality PTM acquisition: Reflection transformation imaging for large objects. In VAST '06: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2006).
[DCWP02] DEVLIN K., CHALMERS A., WILKIE A., PURGATHOFER W.: STAR report on tone reproduction and physically-based spectral rendering. In Eurographics (2002).
[DDB∗09] DEBATTISTA K., DUBLA P., BANTERLE F., SANTOS L. P., CHALMERS A.: Instant caching for interactive global illumination. Computer Graphics Forum (2009).
[DDC09] DUBLA P., DEBATTISTA K., CHALMERS A.: Adaptive interleaved sampling for interactive high-fidelity rendering. Computer Graphics Forum (2009).


[Deb98] DEBEVEC P.: Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography. In SIGGRAPH '98: Proceedings of the 25th annual conference on Computer graphics and interactive techniques (1998).
[Deb01] DEBEVEC P.: Light probe image gallery. http://www.debevec.org/probes/, 2001.
[Deb02] DEBEVEC P.: Image-based lighting. IEEE Computer Graphics and Applications (2002).
[Deb03] DEBEVEC P.: Image-based techniques for digitizing environments and artifacts. Invited paper for 3DIM: The 4th International Conference on 3-D Digital Imaging and Modeling, 2003.
[Deb05] DEBEVEC P.: Making 'The Parthenon'. Invited paper: VAST '05: International Symposium on Virtual Reality, Archaeology, and Cultural Heritage, 2005.
[Deb06] DEBEVEC P.: High resolution light probe gallery. http://gl.ict.usc.edu/Data/HighResProbes/, 2006.
[DHT∗00] DEBEVEC P., HAWKINS T., TCHOU C., DUIKER H.-P., SAROKIN W., SAGAR M.: Acquiring the reflectance field of a human face. In SIGGRAPH '00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques (2000).
[DM97] DEBEVEC P., MALIK J.: Recovering high dynamic range radiance maps from photographs. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques (1997).
[DRS08] DORSEY J., RUSHMEIER H., SILLION F.: Digital modeling of the appearance of materials. Morgan Kaufmann, 2008.
[DSDD07] DACHSBACHER C., STAMMINGER M., DRETTAKIS G., DURAND F.: Implicit visibility and antiradiance for interactive global illumination. In SIGGRAPH '07: Proceedings of the 34th annual conference on Computer graphics and interactive techniques (2007).
[DSSD97] DAUBERT K., SCHIRMACHER H., SILLION F., DRETTAKIS G.: Hierarchical lighting simulation for outdoor scenes. In Proceedings of the Eurographics Workshop on Rendering (1997).
[DTG∗04] DEBEVEC P., TCHOU C., GARDNER A., HAWKINS T., POULLIS C., STUMPFEL J., JONES A., YUN N., EINARSSON P., LUNDGREN T., FAJARDO M., MARTINEZ P.: Estimating surface reflectance properties of a complex scene under captured natural illumination. USC ICT Technical Report ICT-TR-06.2004, 2004.
[DW00] DICARLO J. C., WANDELL B. A.: Rendering high dynamic range images. SPIE conferences, 2000.
[Ear05] EARL G. P.: Wandering the house of the birds: reconstruction and perception at roman italica. In VAST '05: Symposium on Virtual Reality, Archaeology and Cultural Heritage, Short Papers (2005).
[Ear09a] EARL G. P.: Structural and lighting models for the minoan cemetery at Phourni, crete. In VAST '09: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2009).
[Ear09b] EARL G. P.: Physical and photo-realism: The herculaneum amazon. In press: Plenary session: Fundamentos teóricos de la Arqueología virtual, Proceedings of Arqueologica 2.0, Seville (2009).
[EBH∗09] EARL G. P., BEALE G., HAPPA J., WILLIAMS M., TURLEY G., MARTINEZ K., CHALMERS A.: A repainted amazon. In Proceedings of the EVA London Conference (2009).
[Ega99] EGAN F.: Fine bronze oil lamps. http://www.eganbronze.com/Pages/lamps.html, 1999.
[EHD04] EINARSSON P., HAWKINS T., DEBEVEC P.: Photometric stereo for archeological inscriptions. In SIGGRAPH '04: ACM SIGGRAPH 2004 Sketches (2004).
[EKB08] EARL G. P., KEAY S. J., BEALE G.: Computer graphic modelling at portus: Analysis, reconstruction and representation of the claudian and trajanic harbours. In Proceedings of EARSEL SIG Remote Sensing for Archaeology and Cultural Heritage (2008).
[ESE01] SALVADOR E., EBRAHIMI T.: Shadow identification and classification using invariant color models, 2001.
[FAG∗08] FRISCHER B., ABERNATHY D., GUIDI G., MYERS J., THIBODEAU C., SALVEMINI A., MÜLLER P., HOFSTEE P., MINOR B.: Rome reborn. In SIGGRAPH '08: ACM SIGGRAPH new tech demos (2008).
[FBM∗06] FREETH T., BITSAKIS Y., MOUSSAS X., SEIRADAKIS J., TSELIKAS A., MANGOU H., ZAFEIROPOULOU M., HADLAND R., BATE D., RAMSEY A., ALLEN M., CRAWLEY A., HOCKLEY P., MALZBENDER T., GELB D., AMBRISCO W., EDMUNDS M.: Decoding the ancient greek astronomical calculator known as the antikythera mechanism. Nature (2006).
[For66] FORBES J. R.: Studies in Ancient Technology. Leiden & Brill, Netherlands, 1966.
[FPMT02] FONI A., PAPAGIANNAKIS G., MAGNENAT-THALMANN N.: Virtual hagia sophia: Restitution, visualization and virtual life simulation. Presented at the UNESCO World Heritage Congress, 2002.
[FSR97] FORTE M., SILIOTTI A., RENFREW C.: Virtual Archaeology: Re-Creating Ancient Worlds. Harry N. Abrams, 1997.
[GBP07] GAUTRON P., BOUATOUCH K., PATTANAIK S.: Temporal radiance caching. IEEE Transactions on Visualization and Computer Graphics (2007).
[GG00] GOODRICK G., GILLINGS M.: Constructs, simulations and hyperreal worlds: the role of virtual reality (vr) in archaeological research. On the Theory and Practice of Archaeological Computing, 2000.
[Gla94] GLASSNER A.: Principles of Digital Image Synthesis. Morgan Kaufmann, 1994.
[GMMC07] GONÇALVES A., MAGALHÃES L., MOURA J., CHALMERS A.: Metodologia para geração de imagens high dynamic range em iluminação romana [A methodology for generating high dynamic range images under Roman illumination]. In Proceedings of the International Association for the Scientific Knowledge InterTIC'07 (2007).
[GMMC08] GONÇALVES A., MAGALHÃES L., MOURA J., CHALMERS A.: Accurate modelling of roman lamps in conimbriga using high dynamic range. In VAST '08: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2008).
[GMMC09] GONÇALVES A., MAGALHÃES L., MOURA J., CHALMERS A.: High dynamic range - a gateway for predictive ancient lighting. ACM Journal on Computing and Cultural Heritage (2009).
[GSGC06] GUTIERREZ D., SUNDSTEDT V., GOMEZ F., CHALMERS A.: Dust and light: predictive virtual archaeology. Journal of Cultural Heritage (2006).
[GSGC08] GUTIERREZ D., SUNDSTEDT V., GOMEZ F., CHALMERS A.: Modeling light scattering for virtual heritage. ACM Journal on Computing and Cultural Heritage (2008).
[GSM∗04] GUTIERREZ D., SERON F., MAGALLON J., SOBREVIELA E., LATORRE P.: Archaeological and cultural heritage: Bringing life to an unearthed muslim suburb in an immersive environment. Journal of Cultural Heritage (2004).
[GTGB84] GORAL C. M., TORRANCE K. E., GREENBERG D. P., BATTAILE B.: Modeling the interaction of light between diffuse surfaces. In SIGGRAPH '84: Proceedings of the 11th annual conference on Computer graphics and interactive techniques (1984).
[GTHD03] GARDNER A., TCHOU C., HAWKINS T., DEBEVEC P.: Linear light source reflectometry. In SIGGRAPH '03: Proceedings of the 30th annual conference on Computer graphics and interactive techniques (2003).
[GWM∗08] GLENCROSS M., WARD G., MELENDEZ F., JAY C., LIU J., HUBBOLD R.: A perceptually validated model for surface depth hallucination. In SIGGRAPH '08: Proceedings of the 35th annual conference on Computer graphics and interactive techniques (2008).
[HAD∗09] HAPPA J., ARTUSI A., DUBLA P., BASHFORD-ROGERS T., DEBATTISTA K., HULUSIĆ V., CHALMERS A.: The virtual reconstruction and daylight illumination of the panagia angeloktisti. In VAST '09: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2009).
[HCD01] HAWKINS T., COHEN J., DEBEVEC P.: A photometric approach to digitizing cultural artifacts. In VAST '01: Proceedings of the Symposium on Virtual Reality, Archeology, and Cultural Heritage (2001).
[HDR] HDRSHOP: Example software to research HDRI. http://gl.ict.usc.edu/HDRShop/.
[HED05] HAWKINS T., EINARSSON P., DEBEVEC P.: Acquisition of time-varying participating media. In SIGGRAPH '05: Proceedings of the 32nd annual conference on Computer graphics and interactive techniques (2005).
[HF86] HOOD D., FINKELSTEIN M.: Sensitivity to light. In Handbook of Perception and Human Performance. John Wiley & Sons, 1986.
[HK03] HASINOFF S. W., KUTULAKOS K. N.: Photo-consistent 3d fire by flame-sheet decomposition. In ICCV '03: Proceedings of the Ninth IEEE International Conference on Computer Vision (2003).
[HOJ08] HACHISUKA T., OGAKI S., JENSEN H. W.: Progressive photon mapping. In SIGGRAPH Asia '08: ACM SIGGRAPH Asia papers (2008).
[HP09] HEWLETT-PACKARD: Polynomial texture mapping - interactive relighting software license. http://www.hpl.hp.com/research/ptm/downloads/agreement.html, 2009.


[HWT∗09] HAPPA J., WILLIAMS M., TURLEY G., EARL G. P., DUBLA P., BEALE G., GIBBONS G., DEBATTISTA K., CHALMERS A.: Virtual relighting of a roman statue head from herculaneum: a case study. In AFRIGRAPH '09: Proceedings of the 6th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (2009).
[IDYN07] IWASAKI K., DOBASHI Y., YOSHIMOTO F., NISHITA T.: Precomputed radiance transfer for dynamic scenes taking into account light interreflection. In Eurographics Symposium on Rendering (2007).
[IKMN04] IGAWA N., KOGA Y., MATSUZAWA T., NAKAMURA H.: Models of sky radiance distribution and sky luminance distribution. Solar Energy (2004).
[IM04] IHRKE I., MAGNOR M.: Image-based tomographic reconstruction of flames. In SCA '04: Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2004).
[Ina90] INAKAGE M.: A simple model of flames. In CG International '90: Proceedings of the Eighth International Conference of the Computer Graphics Society (1990).
[JC98] JENSEN H. W., CHRISTENSEN P. H.: Efficient simulation of light transport in scenes with participating media using photon maps. In SIGGRAPH '98: Proceedings of the 25th annual conference on Computer graphics and interactive techniques (1998).
[JDS∗01] JENSEN H. W., DURAND F., STARK M., PREMOZE S., DORSEY J., SHIRLEY P.: A physically-based night sky model. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques (2001).
[JDZJ08] JAROSZ W., DONNER C., ZWICKER M., JENSEN H. W.: Radiance caching for participating media. ACM Transactions on Graphics (2008).
[Jen01] JENSEN H. W.: Realistic Image Synthesis Using Photon Mapping. A K Peters, 2001.
[JMLH01] JENSEN H. W., MARSCHNER S. R., LEVOY M., HANRAHAN P.: A practical model for subsurface light transport. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques (2001).
[Kaj86] KAJIYA J.: The rendering equation. In SIGGRAPH '86: Proceedings of the 13th annual conference on Computer graphics and interactive techniques (1986).
[Kel97] KELLER A.: Instant radiosity. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques (1997).
[KFY∗09] KIDER J. T., FLETCHER R. L., YU N., HOLOD R., CHALMERS A., BADLER N. I.: Recreating early islamic glass lamp lighting. In VAST '09: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2009).
[KGPB05] KŘIVÁNEK J., GAUTRON P., PATTANAIK S., BOUATOUCH K.: Radiance caching for efficient global illumination computation. IEEE Transactions on Visualization and Computer Graphics (2005).
[KJF07] KUANG J., JOHNSON G. M., FAIRCHILD M. D.: iCAM06: A refined image appearance model for HDR image rendering. Journal of Visual Communication (2007).
[KJW01] KIMPE K., JACOBS P. A., WAELKENS M.: Analysis of oil used in late roman oil lamps with different mass spectrometric techniques revealed the presence of predominantly olive oil together with traces of animal fat. Journal of Chromatography A (2001).
[KLC06] KENG S.-L., LEE W.-Y., CHUANG J.-H.: An efficient caching-based rendering of translucent materials. The Visual Computer: International Journal of Computer Graphics (2006).
[KLHS02] KIM D., LIN S., HONG K., SHUM H.: Variational specular separation using color and polarization. In IAPR Workshop on Machine Vision Applications (2002).
[KSK87] KLINKER G. J., SHAFER S. A., KANADE T.: Using a color reflection model to separate highlights from object color. In Proceedings of the 1st International Conference on Computer Vision, IEEE, London (1987).
[KSK90a] KLINKER G. J., SHAFER S. A., KANADE T.: The measurement of highlights in color images. International Journal of Computer Vision (1990).
[KSK90b] KLINKER G. J., SHAFER S. A., KANADE T.: A physical approach to color image understanding. International Journal of Computer Vision (1990).
[KTL∗04] KOLLER D., TURITZIN M., LEVOY M., TARINI M., CROCCIA G., CIGNONI P., SCOPIGNO R.: Protected interactive 3d graphics via remote rendering. In SIGGRAPH '04: Proceedings of the 31st annual conference on Computer graphics and interactive techniques (2004).
[LFCD07] LAI Y.-C., FAN S. H., CHENNEY S., DYER C.: Photorealistic image rendering with population monte carlo energy redistribution. In Rendering Techniques (2007), Kautz J., Pattanaik S., (Eds.).
[LFTG97] LAFORTUNE E., FOO S.-C., TORRANCE K., GREENBERG D.: Non-linear approximation of reflectance functions. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques (1997).
[LLK∗02] LIN S., LI Y., KANG S. B., TONG X., SHUM H. Y.: Diffuse-specular separation and depth recovery from image sequences. In European Conference on Computer Vision (2002).
[LSC04] LEDDA P., SANTOS L. P., CHALMERS A.: A local model of eye adaptation for high dynamic range images. In AFRIGRAPH '04: Proceedings of the 3rd International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (2004).
[LTQ06] LIN S., TAN P., QUAN L.: Separation of highlight reflections on textured surfaces. In IEEE Conference on Computer Vision and Pattern Recognition (2006).
[Mal] MALZBENDER T.: Tom Malzbender publication list. http://www.hpl.hp.com/personal/Tom_Malzbender/papers/papers.htm.
[Mar01] MARTINEZ P.: Digital realities and archaeology: a difficult relationship or a fruitful marriage? In VAST '01: Proceedings of the Symposium on Virtual Reality, Archeology, and Cultural Heritage (2001).
[Men09] MENTAL IMAGES: Mental ray company website. http://www.mentalimages.com/, 2009.
[MGW01] MALZBENDER T., GELB D., WOLTERS H.: Polynomial texture maps. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques (2001).
[MK02] MELEK Z., KEYSER J.: Interactive simulation of fire. Technical Report 2002-7-1, Texas A&M University, Department of Computer Science, 2002.
[MMC∗08] MUDGE M., MALZBENDER T., CHALMERS A., SCOPIGNO R., DAVIS J., WANG O., GUNAWARDANE P., ASHLEY M., DOERR M., PROENCA A., BARBOSA J.: Image-based empirical acquisition, scientific reliability, and long-term digital preservation for the natural sciences and cultural heritage. Eurographics Tutorial Notes, 2008.
[MMSL06] MUDGE M., MALZBENDER T., SCHROER C., LUM M.: New reflection transformation imaging methods for rock art and multiple viewpoint display. In VAST '06: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2006).
[MN99] MITSUNAGA T., NAYAR S.: Radiometric self calibration. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (1999).
[MO05] MALZBENDER T., ORDENTLICH E.: Maximum entropy lighting for physical objects. Hewlett-Packard Technical Report HPL-2005-68, 2005.
[MP95] MANN S., PICARD R. W.: Being "undigital" with digital cameras: Extending dynamic range by combining differently exposed pictures. In Proceedings of the IS&T 46th Annual Conference (1995).
[Mud04] MUDGE M.: Implementing digital technology adoption by cultural heritage professionals. SIGGRAPH '04: Conference Presentations for Cultural Heritage and Computer Graphics Panel, 2004.
[MVS∗05] MUDGE M., VOUTAZ J.-P., SCHROER C., LUM M.: Reflection transformation imaging and virtual representations of coins from the hospice of the grand st. bernard. In VAST '05: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2005).
[MZK∗06] MALLICK S. P., ZICKLER T., KRIEGMAN D. J., BELHUMEUR P. N.: Specularity removal in images and videos: A pde approach. In Proceedings of the European Conference on Computer Vision (2006).
[NFB97] NAYAR S. K., FANG X., BOULT T.: Separation of reflection components using color and polarization. International Journal of Computer Vision (1997).
[NFJ02] NGUYEN D. Q., FEDKIW R., JENSEN H. W.: Physically based modeling and animation of fire. In SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques (2002).
[Pan02] PANOSCAN: Panoscan MK-3, company website. http://www.panoscan.com/, 2002.
[PGB04] GAUTRON P., KŘIVÁNEK J., PATTANAIK S. N., BOUATOUCH K.: A novel hemispherical basis for accurate and efficient rendering. In Rendering Techniques 2004, Eurographics Symposium on Rendering (2004).
[PH04] PHARR M., HUMPHREYS G.: Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann, 2004.
[PLQ06] TAN P., LIN S., QUAN L.: Separation of highlight reflections on texture surfaces. In IEEE Conference on Computer Vision and Pattern Recognition (2006).
[PP94] PERRY C. H., PICARD R. W.: Synthesizing flames and their spreading. In Proceedings of the 5th Eurographics Workshop on Animation and Simulation (1994).

© The Eurographics Association 2009.

[PP06] PEGORARO V., PARKER S. G.: Physically-based realistic fire rendering. In Proceedings of the 2nd Eurographics Workshop on Natural Phenomena (2006).
[ST95a] SCHLÜNS K., TESCHNER M.: Analysis of 2D color spaces for highlight elimination in 3D shape reconstruction. In Proceedings ACCV (1995).

[PSI93] PEREZ R., SEALS R., INEICHEN P.: An all-weather model for sky luminance distribution. Solar Energy (1993).
[PSS99] PREETHAM A., SHIRLEY P., SMITS B.: A practical analytic model for daylight. In SIGGRAPH '99: Proceedings of the 26th annual conference on Computer graphics and interactive techniques (1999).
[PWL∗07] PAN M., WANG R., LIU X., PENG Q., BAO H.: Precomputed radiance transfer field for rendering interreflections in dynamic scenes. Computer Graphics Forum (2007).
[ST95b] SCHLÜNS K., TESCHNER M.: Fast separation of reflection components and its application in 3D shape recovery. In Proceedings 3rd Color Imaging Conference (1995).
[SWND04] SHREINER D., WOO M., NEIDER J., DAVIS T.: OpenGL(R) 1.4 Reference Manual (4th Edition). Addison Wesley Longman Publishing Co., Inc., 2004.
[TFA05] TAPPEN M. F., FREEMAN W. T., ADELSON E. H.: Recovering intrinsic images from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence (2005).

[Rac96] RACZKOWSKI J.: Visual simulation and animation of a laminar candle flame. In International Conference on Image Processing and Computer Graphics (1996).
[RBS99] ROBERTSON M. A., BORMAN S., STEVENSON R. L.: Dynamic range improvement through multiple exposures. In Proceedings of the 1999 International Conference on Image Processing (ICIP-99) (1999).
[TI03] TAN R., IKEUCHI K.: Separating reflection components of textured surfaces using a single image. In Proceedings IEEE International Conference on Computer Vision ICCV (2003).
[TI04] TAN R., IKEUCHI K.: Intrinsic properties of an image with highlights. Meeting on Image Recognition and Understanding MIRU 2004 (2004).

[RC03] ROUSSOS I., CHALMERS A.: High fidelity lighting of knossos. In VAST '03: Proceedings of the Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage (2003).
[TTC97] TAKAHASHI J.-Y., TAKAHASHI H., CHIBA N.: Image synthesis of flickering scenes including simulated flames. IEICE Transactions on Information and Systems (1997).

[Ree83] REEVES W. T.: Particle systems - a technique for modeling a class of fuzzy objects. ACM Transactions on Graphics (1983).
[Rei91] REILLY P.: Towards a virtual archaeology. Computer Applications and Quantitative Methods in Archaeology (1991).
[RGK∗08] RITSCHEL T., GROSCH T., KIM M. H., SEIDEL H.-P., DACHSBACHER C., KAUTZ J.: Imperfect shadow maps for efficient computation of indirect illumination. SIGGRAPH Asia '08: ACM SIGGRAPH Asia papers (2008).
[RIF∗09] RITSCHEL T., IHRKE M., FRISVAD J. R., COPPENS J., MYSZKOWSKI K., SEIDEL H.-P.: Temporal glare: Real-time dynamic simulation of the scattering in the human eye. In Eurographics (2009).
[RR97] ROBERTS J., RYAN N.: Alternative archaeological representations within virtual worlds. In Proceedings of the 4th UK Virtual Reality Specialist Interest Group Conference, Brunel University (1997).
[Rus95] RUSHMEIER H.: Rendering participating media: Problems and solutions from application areas. In Proceedings of the 5th Eurographics Workshop on Rendering (1995), Springer-Verlag.
[RWPD05] REINHARD E., WARD G., PATTANAIK S., DEBEVEC P.: High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, 2005.
[San06] SANDER P.: The Parthenon demo: preprocessing and real-time rendering techniques for large datasets. SIGGRAPH, 2006.
[SCM04] SUNDSTEDT V., CHALMERS A., MARTINEZ P.: High fidelity reconstruction of the ancient egyptian temple of kalabsha. In AFRIGRAPH '04: Proceedings of the 3rd international conference on Computer graphics, virtual reality, visualisation and interaction in Africa (2004).
[SF95] STAM J., FIUME E.: Depicting fire and other gaseous phenomena using diffusion processes. In SIGGRAPH '95: Proceedings of the 22nd annual conference on Computer graphics and interactive techniques (1995).
[Ung09] UNGER J.: Incident Light Fields. PhD thesis, Linköping University, 2009.
[VG97] VEACH E., GUIBAS L. J.: Metropolis light transport. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques (1997).
[WA09] WANG R., AKERLUND O.: Bidirectional importance sampling for unstructured illumination. In Eurographics (2009).
[WABG06] WALTER B., ARBREE A., BALA K., GREENBERG D. P.: Multidimensional lightcuts. ACM Transactions on Graphics (2006).
[War92] WARD G. J.: Measuring and modeling anisotropic reflection. SIGGRAPH '92: Proceedings of the 19th annual conference on Computer graphics and interactive techniques (1992).
[War94] WARD G.: The radiance lighting simulation and rendering system. In SIGGRAPH '94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques (1994).
[Wei01] WEISS Y.: Deriving intrinsic images from image sequences. In Proceedings IEEE International Conference on Computer Vision ICCV (2001).
[WFA∗05] WALTER B., FERNANDEZ S., ARBREE A., BALA K., DONIKIAN M., GREENBERG D. P.: Lightcuts: a scalable approach to illumination. ACM Transactions on Graphics (2005).
[WGSD09] WANG O., GUNAWARDANE P., SCHER S., DAVIS J.: Material classification using BRDF slices. IEEE Conference on Computer Vision and Pattern Recognition (2009).
[Whi80] WHITTED T.: An improved illumination model for shaded display. Communications of the ACM (1980).
[WMG∗07] WALD I., MARK W. R., GÜNTHER J., BOULOS S., IZE T., HUNT W., PARKER S. G., SHIRLEY P.: State of the art in ray tracing animated scenes. In Eurographics (2007).

[SGGC05] SUNDSTEDT V., GUTIERREZ D., GOMEZ F., CHALMERS A.: Participating media for high-fidelity cultural heritage. In VAST '05: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2005).
[WRC88] WARD G., RUBINSTEIN F. M., CLEAR R. D.: A ray tracing solution for diffuse interreflection. In SIGGRAPH '88: Proceedings of the 15th annual conference on Computer graphics and interactive techniques (1988).

[Sha84] SHAFER S.: Using color to separate reflection components. Technical Report, 1984.
[SHS∗04] SEETZEN H., HEIDRICH W., STUERZLINGER W., WARD G., WHITEHEAD L., TRENTACOSTE M., GHOSH A., VOROZCOVS A.: High dynamic range display systems. ACM Transactions on Graphics (2004).
[WS03] WARD G., SHAKESPEARE R.: Rendering with Radiance: The Art and Science of Lighting Visualisation (Revised edition). Morgan Kaufmann Publishers, 2003.
[WWZ∗09] WANG R., WANG R., ZHOU K., PAN M., BAO H.: An efficient GPU-based approach for interactive global illumination. In SIGGRAPH '09: Proceedings of the 36th annual conference on Computer graphics and interactive techniques (2009).

[SJ09] SPENCER B., JONES M. W.: Into the blue: Better caustics through photon relaxation. Computer Graphics Forum (2009).
[SJW∗04] STUMPFEL J., JONES A., WENGER A., TCHOU C., HAWKINS T., DEBEVEC P.: Direct HDR capture of the sun and sky. In AFRIGRAPH '04: Proceedings of the 3rd international conference on Computer graphics, virtual reality, visualisation and interaction in Africa (2004).
[SKS02] SLOAN P.-P., KAUTZ J., SNYDER J.: Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. In SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques (2002).
[YCK06] YOON K. J., CHOI Y., KWEON I. S.: Fast separation of reflection components using a specularity-invariant image representation. In IEEE International Conference on Image Processing ICIP (2006).
[ZCBRC07] ZÁNYI E., CHRYSANTHOU Y., BASHFORD-ROGERS T., CHALMERS A.: High dynamic range display of authentically illuminated byzantine art from Cyprus. In VAST '07: Proceedings of the Symposium on Virtual Reality, Archaeology and Cultural Heritage (2007).
[ZSMA07] ZÁNYI E., SCHROER C., MUDGE M., CHALMERS A.: Lighting and byzantine glass tesserae. In Proceedings of the EVA London Conference (2007).

[Sph02] SPHERON: SpherOn HDR, Company website. http://www.spheron.com/, 2002.
