A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering

Daniel Jönsson, Erik Sundén, Anders Ynnerman and Timo Ropinski

Linköping University Post Print

N.B.: When citing this work, cite the original article.

Original Publication:

Daniel Jönsson, Erik Sundén, Anders Ynnerman and Timo Ropinski, A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering, 2014, Computer graphics forum (Print), (33), 1, 27-51. http://dx.doi.org/10.1111/cgf.12252 Copyright: Wiley http://eu.wiley.com/WileyCDA/

Postprint available at: Linköping University Electronic Press http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-105757


A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering

Daniel Jönsson, Erik Sundén, Anders Ynnerman, and Timo Ropinski

Scientific Visualization Group, Linköping University, Sweden

Abstract
Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shadowing and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behavior as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation—Volume Rendering

1. Introduction

Interactive volume rendering as a subdomain of scientific visualization has become mature in the last decade. Several approaches exist, which allow a domain expert to visualize and explore volumetric datasets at interactive frame rates on standard graphics hardware. While the varying rendering paradigms underlying these approaches directly influence image quality [SHC∗09], in most real-world use cases the standard emission absorption model is used to incorporate illumination effects [Max95]. However, the emission absorption model is only capable of simulating local lighting effects, where light is emitted and absorbed locally at each sample point processed during rendering. This results in volume rendered images, which convey the basic structures represented in a volumetric dataset (see Figure 1 (left)). Though all structures are clearly visible in these images, information regarding the arrangement and size of structures is sometimes hard to convey.

In the computer graphics community, more precisely the real-time rendering community, the quest for realistic interactive image synthesis is one of the major goals addressed since its early days [DK09]. Originally motivated mainly by the goal to produce more realistic imagery for computer games and movies, the perceptual benefits of more realistic representations could also be shown [WFG92, Wan92, GP06]. Consequently, in the past years the scientific visualization community started to develop interactive volume rendering techniques, which do not only allow simulation of local lighting effects, but also more realistic global effects [RSHRL09]. The existing approaches span a wide range in terms of underlying rendering paradigms, consumed hardware resources as well as lighting capabilities, and thus have different perceptual impacts [LR11]. Due to the wide range of techniques and tradeoffs it can be hard for both application developers and new researchers to know which technique to use in a certain scenario, what has been done, and what the challenges are within the field. Therefore, we provide a survey of techniques dealing with volumetric illumination, which allow interactive data exploration, i.e., rendering parameters can be changed interactively. When developing such techniques, several challenges need to be addressed:

• A volumetric optical model must be applied, which usually results in more complex computations due to the global nature of the illumination.
• Interactive transfer function updates must be supported. This requires that the resulting changes to the 3D structure of the data must be incorporated during illumination.
• Graphics processing unit (GPU) algorithms need to be developed to allow interactivity.

© 2013 The Author(s). Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.

The remainder of this article is structured as follows. In the next section we will discuss the models used for simulating volumetric illumination. We will have our starting point in the seminal work presented by Max [Max95], with the emission absorption model which we successively enrich to later on support global volumetric illumination effects. The reader interested mainly in the concepts underlying the implementations of the approaches covered in this article may skip Section 2 and directly proceed to Section 3, where we classify the covered techniques. We then propose usage guidelines in Section 4, which motivates the selected criteria and can help the reader in comparing the different techniques. These guidelines have been formulated with the goal to support an application expert choosing the right algorithm for a specific use case scenario. Based on the classification, Section 5 discusses all covered techniques in greater detail. Since one of the main motivations for developing volumetric illumination models in the scientific visualization community is improved visual perception, we will discuss recent findings in this area with respect to interactive volume rendering in Section 6. Furthermore, based on the observations made in Section 6 and the limitations identified throughout the article, we will present future challenges in Section 7. Finally, the article concludes in Section 8.

Figure 1: Comparison of a volume rendered image using the local emission absorption model [Max95] (left), and the global half angle slicing technique [KPHE02] (right). Apart from the illumination model, all other image parameters are the same.

When successfully addressing these challenges, increasingly realistic volume rendered images can be generated interactively. As can be seen in Figure 1 (right), these images not only increase the degree of realism, but also support improved perceptual comprehension. While Figure 1 (left) shows the overall structure of the rendered computed tomography (CT) dataset of a human heart, the shadows added in Figure 1 (right) provide additional depth information.

We will review and compare the existing techniques, seen in Table 1, for advanced volumetric illumination with respect to their applicability, which we derive from the illumination capabilities, the performance behavior as well as the technical realization. Since this report addresses interactive advanced volumetric illumination techniques, potentially applicable in scientific visualization, we do not consider methods requiring precomputations that do not permit change of the transfer function during rendering. The goal is to provide the reader with an overview of the field and to support design decisions when exploiting current concepts. We have decided to fulfill this by adopting a classification which takes into account the technical realization, performance behavior as well as the supported illumination effects. Properties considered in the classification are, for instance, the usage of preprocessing, because precomputation-based techniques are usually not applicable to large volumetric datasets as the precomputed illumination volume would consume too much additional graphics memory. Other approaches might be bound to a certain rendering paradigm, which serves as another property, since it might hamper usage of a technique in a general manner. We have selected the covered properties such that the classification allows an applicability-driven grouping of the existing advanced volumetric illumination techniques. To provide the reader with a mechanism to choose which advanced volumetric illumination model to be used, we will describe these and other properties as well as their implications along with our classification. Furthermore, we will point out the shortcomings of the current approaches to demonstrate possibilities for future research.

2. Volumetric Illumination Model

In this section we derive the volume illumination model frequently exploited by the advanced volume illumination approaches described in the literature. The model is based on the optical model derived by Max [Max95] as well as the extensions more recently described by Max and Chen [MC10]. For clarity, we have included the definitions used within the model as a reference below.

Mathematical Notation

$L(\vec{x},\vec{\omega}_o)$: Radiance scattered from $\vec{x}$ into direction $\vec{\omega}_o$.
$L_i(\vec{x},\vec{\omega}_i)$: Radiance reaching $\vec{x}$ from direction $\vec{\omega}_i$.
$L_e(\vec{x},\vec{\omega}_o)$: Radiance emitted from $\vec{x}$ into direction $\vec{\omega}_o$.
$s(\vec{x},\vec{\omega}_i,\vec{\omega}_o)$: Shading function used at position $\vec{x}$. Models radiance coming from direction $\vec{\omega}_i$ and scattered into direction $\vec{\omega}_o$.
$\sigma_a(\vec{x})$: Absorption coefficient at $\vec{x}$.
$\sigma_s(\vec{x})$: Scattering coefficient at $\vec{x}$.
$\sigma_t(\vec{x})$: Extinction coefficient at $\vec{x}$. Sum of absorption and out-scattering.
$T(\vec{x}_i,\vec{x}_j)$: Transparency between position $\vec{x}_i$ and position $\vec{x}_j$.
$L_l$: Initial radiance as emitted by the light source.
$L_0(\vec{x}_0,\vec{\omega}_o)$: Background intensity.



Figure 2: Before reaching the eye, radiance along a ray may change due to absorption, emission, or scattering.
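The absorption component illustrated in Figure 2 follows an exponential (Beer-Lambert) falloff along the ray. As a tiny numeric sketch, assuming a homogeneous medium with constant extinction coefficient (function and variable names are ours):

```python
import math

def transparency(sigma_t, distance):
    """Beer-Lambert transparency T = exp(-sigma_t * d) through a
    homogeneous medium with constant extinction coefficient sigma_t."""
    return math.exp(-sigma_t * distance)

# Doubling the optical depth squares the transparency:
t1 = transparency(0.5, 1.0)
t2 = transparency(0.5, 2.0)
assert abs(t2 - t1 * t1) < 1e-12
```

The multiplicative behavior over concatenated ray segments is exactly what the transparency term $T$ in the volume rendering integral below formalizes for spatially varying extinction.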

The simplest form of the volume rendering integral describes the attenuation of radiance from all samples $\vec{x}'$ along the ray $\vec{\omega}_o$ and only incorporates emission and absorption, such that it can be written as:

$$L(\vec{x},\vec{\omega}_o) = L_0(\vec{x}_0,\vec{\omega}_o) \cdot T(\vec{x}_0,\vec{x}) + \int_{\vec{x}_0}^{\vec{x}} \sigma_a(\vec{x}') \cdot L_e(\vec{x}',\vec{\omega}_o) \cdot T(\vec{x}',\vec{x}) \, d\vec{x}'. \quad (1)$$

While $L_0(\vec{x}_0,\vec{\omega}_o)$ depicts the background energy entering the volume, $L_e(\vec{x}',\vec{\omega}_o)$ and $\sigma_a(\vec{x}')$ describe the radiance at position $\vec{x}'$ that is emitted and absorbed into direction $\vec{\omega}_o$ inside the volume, and $T(\vec{x}_i,\vec{x}_j)$ is dependent on the optical depth and describes how radiance is attenuated when traveling through the volume:

$$T(\vec{x}_i,\vec{x}_j) = e^{-\int_{\vec{x}_i}^{\vec{x}_j} \sigma_t(\vec{x}') \, d\vec{x}'}, \quad (2)$$

where $\sigma_t(\vec{x}') = \sigma_a(\vec{x}') + \sigma_s(\vec{x}')$ is the extinction coefficient, which describes the amount of radiance absorbed and scattered out at position $\vec{x}'$ (see Figure 2). According to Max, the standard volume rendering integral simulating absorption and emission can be extended with these definitions to support single scattering (shadows):

$$L(\vec{x},\vec{\omega}_o) = L_0(\vec{x}_0,\vec{\omega}_o) \cdot T(\vec{x}_0,\vec{x}) + \int_{\vec{x}_0}^{\vec{x}} \left( \sigma_a(\vec{x}') \cdot L_e(\vec{x}',\vec{\omega}_o) + \sigma_s(\vec{x}') \cdot L_{ss}(\vec{x}',\vec{\omega}_o) \right) \cdot T(\vec{x}',\vec{x}) \, d\vec{x}'. \quad (3)$$

Here, $L_{ss}(\vec{x}',\vec{\omega}_o)$ represents the radiance scattered into direction $\vec{\omega}_o$ from all incoming directions $\vec{\omega}_i$ on the unit sphere towards position $\vec{x}'$:

$$L_{ss}(\vec{x}',\vec{\omega}_o) = \int_{\Omega} s(\vec{x}',\vec{\omega}_i,\vec{\omega}_o) \cdot L_i(\vec{x}',\vec{\omega}_i) \, d\vec{\omega}_i. \quad (4)$$

$s(\vec{x}',\vec{\omega}_i,\vec{\omega}_o)$ is the material shading function, which describes how much radiance coming from direction $\vec{\omega}_i$ is scattered into direction $\vec{\omega}_o$ (see Section 2.1). When simplifying the above equation to only deal with one light source at position $\vec{x}_l$, the integral over all directions vanishes and $L_i(\vec{x}',\vec{\omega}_i)$ is the attenuated incident radiance $L_l$ traveling through the volume, defined as

$$L_i(\vec{x}',\vec{\omega}_i) = L_l \cdot T(\vec{x}_l,\vec{x}'), \quad (5)$$

where $\vec{\omega}_i = \vec{x}_l - \vec{x}'$.

Figure 3: Cornell box with a blue transparent medium rendered with and without multiple scattering: (a) single scattering, (b) multiple scattering. Multiple scattering causes photons to bounce to the walls and inside the blue medium (images taken from [JKRY12]).

The single scattering formulation in Equation 3 can be extended to support multiple scattering by replacing $L_{ss}(\vec{x}',\vec{\omega}_o)$ with a term that recursively evaluates the incoming light (see Figure 3 for a comparison):

$$L_{ms}(\vec{x}',\vec{\omega}_o) = \int_{\Omega} s(\vec{x}',\vec{\omega}_i,\vec{\omega}_o) \cdot L(\vec{x}',\vec{\omega}_i) \, d\vec{\omega}_i. \quad (6)$$

Since volumetric data is discrete, the volume rendering integral is in practice approximated numerically, usually by exploiting a Riemann sum.

2.1. Material scattering

Scattering is the physical process which forces light to deviate from its straight trajectory. The reflection of light at a surface point is thus a scattering event. Depending on the material properties of the surface, incident photons are scattered in different directions. When using the ray-casting analogy, scattering can be considered as a redirection of a ray penetrating an object. Since scattering can also change a photon's frequency, a change in color becomes possible. Max [Max95] presents accurate solutions for simulating light scattering in participating media, i.e., how the light trajectory is changed when penetrating translucent materials. In classical computer graphics, single scattering accounts for light emitted from a light source directly onto a surface and reflected unimpededly into the observer's eye. Incident light thus comes directly from a light source. Multiple scattering describes the same concept, but incoming photons are scattered multiple times (see Figure 4).
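The Riemann-sum approximation mentioned above is typically evaluated with front-to-back compositing during ray casting. The following is a minimal sketch under simplifying assumptions (grayscale emission, per-sample extinction values already looked up through a transfer function; all names are ours):

```python
import math

def composite_ray(samples, step):
    """Front-to-back compositing of the emission-absorption model:
    a Riemann-sum approximation of Equation 1 along one viewing ray.
    samples: (emission, sigma_t) pairs ordered front to back."""
    radiance = 0.0
    opacity = 0.0  # accumulated opacity, i.e. 1 - T(x0, x)
    for emission, sigma_t in samples:
        a = 1.0 - math.exp(-sigma_t * step)      # opacity of this segment
        radiance += (1.0 - opacity) * a * emission
        opacity += (1.0 - opacity) * a
        if opacity > 0.99:                       # early ray termination
            break
    return radiance, opacity

# A homogeneous gray medium saturates towards full opacity:
r, o = composite_ray([(1.0, 1.0)] * 100, step=0.1)
```

The early termination test is a common optimization: once the accumulated opacity is nearly one, further samples cannot contribute visibly.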


To generate realistic images, both single (direct light) and multiple (indirect light) scattering events have to be taken into account. While in classical polygonal computer graphics, light reflection on a surface is often modeled using bidirectional reflectance distribution functions (BRDFs), in volume rendering phase functions are used to model the scattering behavior. Such a phase function can be thought of as being the spherical extension of a hemispherical BRDF. It defines the probability of a photon changing its direction of motion by an angle of $\theta$. In interactive volume rendering, the Henyey-Greenstein model

$$G(\theta,g) = \frac{1 - g^2}{2 \, (1 + g^2 - 2g\cos\theta)^{3/2}}$$

is often used to incorporate phase functions. The parameter $g \in [-1,1]$ describes the anisotropy of the scattering event. A value $g = 0$ denotes that light is scattered equally in all directions. A positive value of $g$ increases the probability of forward scattering. Accordingly, with a negative value backward scattering will become more likely. If $g = 1$, a photon will always pass through the point unaffectedly. If $g = -1$ it will deterministically be reflected into the direction it came from. Figure 5 shows the Henyey-Greenstein phase function model for different anisotropy parameters $g$. In order to incorporate both BRDFs and phase functions into the DVR equation, we use a loose definition for the shading $s(\vec{x}',\vec{\omega}_i,\vec{\omega}_o)$, which can be a phase function, a BRDF, or a combination of both.

Figure 4: Single scattering accounts for light that scatters at one location towards the eye, while multiple scattering involves more than one bounce before traveling towards the eye.

Figure 5: The Henyey-Greenstein phase function plotted for different anisotropy parameters g (g = 0, g = 0.3, g = 0.5).

3. Algorithm Classification

To be able to give a comprehensive overview of current interactive advanced volumetric illumination techniques, we will introduce a classification within this section, which allows us to group and compare the covered approaches. The classification has been developed with the goal to provide application designers with decision support when choosing advanced volumetric illumination techniques for their needs. Thus, we will first cover the most relevant algorithm properties that need to be considered when making such a decision.

The most limiting property of advanced volumetric illumination algorithms is the dependence on an underlying rendering paradigm. In the past, different paradigms allowing interactive volume rendering have been proposed. These paradigms range from a shear-warp transformation [LL94], over splatting [MMC99], and texture slicing [CCF94] to volume ray-casting [KW03]. Besides their different technical realizations, the visual quality of the rendered image also varies with respect to the used rendering paradigm [SHC∗09]. Since efficiency is of great importance in the area of interactive volumetric illumination, many approaches have been developed which allow a performance gain by being closely bound to a specific rendering paradigm (e.g., [KPHE02, ZC02, ZC03, SPH∗09, vPBV10, SYR11]). Especially when integrating volumetric illumination into existing visualization systems, the underlying rendering paradigm of the existing approaches might be limiting when deciding which algorithm to be used. Therefore, we will discuss this dependency when presenting the covered algorithms in Section 5.

Another property of major importance is which illumination effects are supported by a certain algorithm. This is on the one hand specified based on the supported lighting and on the other hand on the light interaction and propagation inside the volume. The supported illumination can vary with respect to the number and types of the light sources. Supported light source types range from classical point and directional light sources to area and textured light sources. Since several algorithms focus on ambient visibility computation, we also consider an ambient light source which is omnidirectional and homogeneous. As some algorithms are bound to certain rendering paradigms, they may also be subject to constraints regarding the light source position, e.g., [BR98, SPH∗09, vPBV10]. Finally, the number of supported light sources varies, where either one or many light sources are supported. The discussed algorithms also vary with respect to the light interaction and propagation inside the volume. While all algorithms support single scattering through either local or global light propagation, though at different frequencies producing soft to hard shadows, multiple scattering is not supported by all approaches. We therefore differentiate between local and global scattering as well as single and multiple scattering. We will discuss these capabilities for each covered technique and relate the supported illumination effects to Max's optical models [Max95].

As in many other fields of computer graphics, applying advanced illumination techniques in volume rendering involves a tradeoff between rendering performance and memory consumption. The two most extreme cases on this scale are entire precomputation of the illumination information, and recomputation on the fly for each frame. We will compare the techniques covered within this article with respect to this tradeoff by describing the performance impact of these techniques as well as the required memory consumption. Since different application scenarios vary with respect to the degree of interactivity, we will review their performance capabilities with respect to rendering and illumination update. Some techniques trade illumination update times for frame rendering times and thus allow higher frame rates, as long as no illumination critical parameter has been changed. We discuss rendering times as well as illumination update times, whereby we differentiate whether they are triggered by lighting or transfer function updates. Recomputations performed when the camera changes are considered as rendering time. The memory footprint of the covered techniques is also an important factor, since it is often limited by the available graphics memory. Some techniques precompute an illumination volume, e.g., [RDRS10b, SMP11], while others store much less or even no illumination data, e.g., [KPHE02, DEP05, SYR11].

Finally, it is important if a volumetric illumination approach can be combined with clipping planes and geometry. While clipping is frequently used in many standard volume rendering applications, the integration of polygonal geometry data is for instance important in flow visualization, when integrating pathlines in a volumetric dataset.

To enable easier comparison of the existing techniques and the capabilities described above, we have decided to classify them into five groups based on the algorithmic concepts exploited to achieve the resulting illumination effects. The thus obtained groups are: first, local region based techniques, which consider only the local neighborhood around a voxel; second, slice based techniques, which propagate illumination by iteratively slicing through the volume; third, light space based techniques, which project the illumination as seen from the light source; fourth, lattice based techniques, which compute the illumination directly on the volumetric data grid without applying sampling; and fifth, basis function based techniques, which use a basis function representation of the illumination information. While this classification into five groups is solely performed based on the concepts used to compute the illumination itself, and not for the image generation, it might in some cases go hand in hand, e.g., when considering the slice based techniques which are also bound to the slice based rendering paradigm. A comparison of the visual output of one representative technique of each of the five groups is shown in Figure 6. As it can be seen, when changing the illumination model, the visual results might change drastically, even though most of the presented techniques are based on Max's formulation of a volumetric illumination model [Max95]. The most prominent visual differences are the intensities of shadows as well as their frequency, which is visible based on the blurriness of the shadow borders. Apart from the used illumination model, all other rendering parameters are the same in the shown images. Besides the five main groups supporting fully interactive volume rendering, we will also briefly cover ray-tracing based techniques, and those limited to isosurface illumination.

While the introduced groups allow a sufficient classification for most available techniques, it should also be mentioned that some approaches could be classified into more than one group. For instance, the shadow volume propagation approach [RDRS10b], which we have classified as lattice-based, could also be classified as light space based, since the illumination propagation is performed based on the current light source position.

4. Usage Guidelines

To support application developers when choosing an advanced volumetric shading method, we provide a comparative overview of some capabilities within this section. Table 1 contains an overview of the properties of each algorithm. However, in some cases more detailed information is necessary to compare and select techniques for specific application cases. Therefore, we provide the comparative charts shown in Figure 25. Note that distances between methods in Figure 25 are not quantifiable and are only there to give a rough estimate of how the methods behave relative to each other. Figure 25 (a) provides a comparative overview of the techniques belonging to the discussed groups with respect to illumination complexity and underlying rendering paradigm. To be able to assess the illumination capabilities of the incorporated techniques, the illumination complexity has been depicted as a continuous scale along the x-axis. It ranges from low complexity lighting, e.g., local shading with a single point light source, to highly complex lighting, which could for instance simulate multiple scattering as obtained from multiple area light sources. The y-axis in Figure 25 (a) on the other hand depicts which algorithmic rendering paradigm is supported. Thus, Figure 25 (a) depicts for each technique with which rendering paradigm it has been used and what lighting complexity has been achieved. We provide this chart as a reference, since we believe that the shown capabilities are important criteria to be assessed when choosing an advanced volumetric illumination model.


Figure 6: A visual comparison of different volumetric illumination models. From left to right: gradient-based shading [Lev88], directional occlusion shading [SPH∗09], image plane sweep volume illumination [SYR11], shadow volume propagation [RDRS10b] and spherical harmonic lighting [KJL∗12]. Apart from the used illumination model, all other rendering parameters are constant.

spect to performance. Since the x-axis depicts scalability with respect to data size and the y-axis depicts scalability with respect to image size, the top right quadrant for in- stance contains all algorithms which can achieve good per- formance in both aspects. When selecting techniques in Fig- ure 25 (a) based on the illumination complexity and the un- derlying rendering paradigm, Figure 25 (b) can be used to compare them regarding their scalability. To be able to depict more details for comparison reasons, we will cover the properties of the most relevant techniques in Table1 at the end of this article. The table shows the properties with respect to supported light sources, scattering capabilities, render and update performance, memory con- sumption, and the expected impact of a low signal to noise ratio. For rendering performance and memory consumption, Figure 7: A CT scan of an elk rendered with emission and the filled circles represent the quantity of rendering time or absorption only (top), and with gradient-based diffuse shad- memory usage. For memory consumption the zero to three ing [Lev88] (bottom). Due to the incorporation of the gradi- filled circles represents no additional memory, a few addi- ent, surface structures emerge (bottom). tional 2D images, an additional volume smaller than the in- tensity volume, and memory consumption equal or greater than the intensity volume is required, respectively. An ad- properties are discussed based on the property description ditional icon represents if illumination information needs to given in the previous section. We will start with a brief ex- be updated when changing the lighting or the transfer func- planation of each algorithm, where we relate the exploited tion. Thus, the algorithm performance for frame rendering illumination computations to Max’s model (see Section2) and illumination updates becomes clear. and provide a conceptual overview of the implementation. 
Then, we will discuss the illumination capabilities with re- spect to supported light sources and light interactions inside 5. Algorithm Description the volume before addressing performance impact and mem- In this section, we will describe all techniques covered ory consumption. within this article in a formalized way. The section is struc- tured based on the groups introduced in the previous section. 5.1. Local Region Based Techniques ( ) For each group, we will provide details regarding the tech- niques themselves with respect to their technical capabilities As the name implies, the techniques described in this sub- as well as the supported illumination effects. All of these section perform the illumination computation based on a lo-

c 2013 The Author(s) c 2013 The Eurographics Association and Blackwell Publishing Ltd. Jönsson et al. / Survey: Interactive Volumetric Illumination

given at position~x. Thus, we can define the shading function as:

…constrained to a local region. Thus, when relating volumetric illumination techniques to conventional polygonal computer graphics, these techniques can best be compared to local shading models. While the still most frequently used gradient-based shading [Lev88] exploits the concepts underlying the classical Blinn-Phong illumination model [Bli77], other techniques are more focused on the volumetric nature of the data to be rendered. The described techniques also vary with respect to their locality. Gradient-based shading relies on a gradient and thus takes, when for instance using finite difference gradient operators, only adjacent voxels into account. Other techniques, such as dynamic ambient occlusion [RMSD∗08], take bigger, though still local, regions into account. Thus, by using local region based techniques only direct illumination is computed, i. e., local illumination not influenced by other parts of the scene. Hence, not every other part of the scene has to be considered when computing the illumination for the current object, and the rendering complexity is reduced from O(n²) to O(n), with n being the number of voxels.

Gradient-based volumetric shading [Lev88] is today still the most widely used shading technique in the context of interactive volume rendering. It exploits voxel gradients as surface normals to calculate local lighting effects. The actual illumination computation is performed in a similar way as when using the Blinn-Phong illumination model [Bli77] in polygonal-based graphics, whereas the surface normal is substituted by the gradient derived from the volumetric data. Figure 7 shows a comparison of gradient-based volumetric shading and the application of the standard emission and absorption model. As can be seen, surface details become more prominent when using gradient-based shading. The local illumination at a point ~x is computed as the sum of the three supported reflection contributions: diffuse reflection, specular reflection and ambient lighting. Diffuse and specular reflections both depend on the normalized gradient ∇σ_t(f(~x)) / |∇σ_t(f(~x))|, denoted ~N, where f(~x) is the intensity value:

s_phong(~x,~ω_i,~ω_o) = s_amb(~x) + s_diff(~x) · (~N · ~ω_i) + s_spec(~x) · (~N · (~ω_i + ~ω_o)/2)^α.   (7)

The material ambient, diffuse, and specular components at location ~x are represented by s_amb(~x), s_diff(~x), and s_spec(~x), respectively. They are often referred to as k_a, k_d, and k_s in the computer graphics literature. The ambient term approximates indirect illumination effects, which ensures that voxels with gradients pointing away from the light source do not appear pitch black. In this case s_amb is simply a constant, but more advanced techniques, such as the ambient occlusion approaches covered below, replace it with a spatially varying term. α is used to influence the shape of the highlight seen on surface-like structures. A rather large α results in a small sharp highlight, while a smaller α results in a bigger, smoother highlight.

To also incorporate attenuation based on the distance between ~x and ~x_l, the computed light intensity can be modulated. However, this distance based weighting does not incorporate any voxels possibly occluding the way to the light source, which would result in shadowing. To solve this issue, volume illumination techniques not constrained to a local region need to be applied.

As the gradient ∇σ_t(f(~x)) is used in the computation of the diffuse and specular parts, its quality is crucial for the visual quality of the rendering. Especially when dealing with a high degree of specularity, an artifact free image can only be generated when the gradients are continuous along a boundary. However, often local difference operators are used for gradient computation, and thus the resulting gradients are subject to noise. Therefore, several approaches have been developed which improve the gradient quality. Since these techniques are beyond the scope of this article, we refer to the work presented by Hossain et al. [HAM11] as a starting point.

Figure 8: Gradient-based shading applied to a magnetic resonance imaging (MRI) dataset (left). As can be seen in the close-up, noise is emphasized in the rendering (right).

Gradient-based shading can be combined with all existing volume rendering paradigms, and it supports an arbitrary number of point and directional light sources. Due to its local nature, no shadows or scattering effects are supported. The rendering time, which depends on the used gradient computation scheme, is in general quite low. To further accelerate the rendering performance, gradients can also be precomputed and stored in a gradient volume accessed during rendering. When this precomputation is not performed, no additional memory is required. Clipping is directly supported, though the gradient along the clipping surfaces needs to be replaced with the clipping surface normal to avoid rendering artifacts [HLY09]. Besides its local nature, the gradient computation makes gradient-based shading highly dependent on the signal-to-noise ratio (SNR) of the rendered data. Therefore, gradient-based shading is not recommended for modalities suffering from a low SNR, such as magnetic resonance imaging (MRI) (see Figure 8), 3D ultrasound or seismic data [PBVG10].
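As a concrete illustration, the shading of Equation 7 can be sketched in a few lines of NumPy. The central-difference gradient, the clamping of back-facing contributions, and the normalization of the half-angle vector are implementation choices of this sketch, not prescribed by the original formulation:

```python
import numpy as np

def gradient_normal(vol, p):
    # Central-difference gradient at integer voxel p = (x, y, z),
    # used as a surface normal in Equation 7.
    x, y, z = p
    g = np.array([
        vol[x + 1, y, z] - vol[x - 1, y, z],
        vol[x, y + 1, z] - vol[x, y - 1, z],
        vol[x, y, z + 1] - vol[x, y, z - 1],
    ]) * 0.5
    n = np.linalg.norm(g)
    return g / n if n > 0 else g

def blinn_phong(vol, p, w_i, w_o, s_amb=0.1, s_diff=0.6, s_spec=0.3, alpha=32.0):
    # Gradient-based Blinn-Phong shading: the voxel gradient replaces the
    # surface normal; w_i points towards the light, w_o towards the eye.
    N = gradient_normal(vol, p)
    H = w_i + w_o
    H = H / np.linalg.norm(H)          # normalized half-angle vector
    diff = max(float(N @ w_i), 0.0)    # clamp back-facing contributions
    spec = max(float(N @ H), 0.0) ** alpha
    return s_amb + s_diff * diff + s_spec * spec
```

For a synthetic ramp volume with the light and view directions aligned with the gradient, the three coefficients simply sum up.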

© 2013 The Author(s). © 2013 The Eurographics Association and Blackwell Publishing Ltd. Jönsson et al. / Survey: Interactive Volumetric Illumination

Local Ambient Occlusion [HLY07] is a local approximation of ambient occlusion, which considers the voxels in a neighborhood around each voxel. Ambient occlusion goes back to the obscurance rendering model [ZIK98], which relates the luminance of a scene element to the degree of occlusion in its surroundings. To incorporate this information into the volume rendering equation, Hernell et al. replace the standard ambient term s_amb(~x) in Equation 7 by:

s_amb(~x) = ∫_Ω ∫_{δ(~x,~ω_i,a)}^{δ(~x,~ω_i,R_Ω)} g_A(~x') · T(~x', δ(~x,~ω_i,a)) d~x' d~ω_i,   (8)

where δ(~x,~ω_i,a) offsets the position by a along the direction ~ω_i, to avoid self occlusion. R_Ω specifies the radius of the local neighborhood for which the occlusion is computed, and g_A(~x') denotes the light contribution at each position ~x' along a ray. Since the occlusion should be computed in an ambient manner, it needs to be integrated over all rays in the neighborhood Ω around a position ~x (see Figure 9).

Figure 9: Local ambient occlusion is computed by, for each voxel, integrating the occlusion over all directions Ω within a radius R_Ω.

Different light contribution functions g_A(~x') are presented. To achieve sharp shadow borders, the usage of a Dirac delta is proposed, which ensures that light contributes only at the boundary of Ω. Smoother results can be achieved when the ambient light source is considered volumetric. This results in adding a fractional light emission at each sample point:

g_A(~x') = 1/(R_Ω − a) + L_e(~x',~ω_o),   (9)

where L_e(~x',~ω_o) is a spatially varying emission term that can be used for different tissue types. An application of the occlusion only illumination is shown in Figure 10 (left), while Figure 10 (right) shows the application of the additional emissive term, which is set proportional to the signal strength of a co-registered functional magnetic resonance imaging (fMRI) signal [NEO∗10].

Figure 10: An application of local ambient occlusion in a medical context. Usage of the occlusion term allows the 3D structure to be conveyed (left), while multimodal emissive lighting allows fMRI to be integrated as an additional modality (right). (images taken from [NEO∗10])

A straightforward implementation of local ambient occlusion would result in long rendering times, as for each ~x multiple rays need to be cast to determine the occlusion factor. Therefore, it has been implemented by exploiting a multiresolution approach [LLY06]. This flat blocking data structure represents different parts of the volume at different levels of detail. Thus, empty homogeneous regions can be stored at a very low level of detail, which requires fewer sampling operations when computing s_amb(~x).

Local ambient occlusion is not bound to a specific rendering paradigm. When combined with gradient-based lighting, it supports point and directional light sources, while the actual occlusion simulates an exclusive ambient light only. The rendering time is interactive, while the multiresolution data structure needs to be updated whenever the transfer function has been changed. The memory size of the multiresolution data structure depends on the size and homogeneity of the dataset. In a follow-up work, more results and the usage of clipping planes during the local ambient occlusion computation are discussed [HLY09].

Ruiz et al. also exploit ambient illumination in their obscurance based volume rendering framework [RBV∗08]. To compute the ambient occlusion, Ruiz et al. perform a visibility test along the occlusion rays and evaluate different distance weightings. Besides advanced illumination, they also facilitate the occlusion to obtain illustrative renderings as well as for the computation of saliency maps.

Dynamic Ambient Occlusion [RMSD∗08] is a histogram based approach, which allows integration of ambient occlusion, color bleeding and basic scattering effects, as well as a simple glow effect that can be used to highlight regions within the volume.
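A minimal CPU sketch of the occlusion integral in Equations 8 and 9; nearest-neighbour sampling, a fixed direction set, and unit step size stand in for the GPU ray casting of the original method:

```python
import numpy as np

def local_ambient_occlusion(opacity, x, directions, a=1.0, R=8.0, dt=1.0):
    # Simplified sketch of Equations 8 and 9: for each ray direction,
    # march from the offset a (avoiding self occlusion) to the radius R,
    # accumulating the volumetric ambient emission g_A = 1/(R - a)
    # attenuated by the transparency accumulated so far.
    total = 0.0
    g_a = 1.0 / (R - a)                  # fractional ambient emission (Eq. 9)
    for d in directions:
        T = 1.0                          # transparency towards ~x
        s_dir = 0.0
        t = a
        while t < R:
            p = np.round(x + t * d).astype(int)  # nearest-neighbour sampling
            if np.any(p < 0) or np.any(p >= np.array(opacity.shape)):
                sigma = 0.0              # outside the volume: empty space
            else:
                sigma = float(opacity[tuple(p)])
            s_dir += g_a * T * dt        # ambient light arriving through T
            T *= max(1.0 - sigma * dt, 0.0)
            t += dt
        total += s_dir
    return total / len(directions)       # average over the neighbourhood Ω
```

In a fully transparent neighbourhood the result approaches one (no occlusion), while opaque surroundings drive it towards zero.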


While the techniques presented in [HLY07] and [HLY09] exploit a ray-based approach to determine the degree of occlusion in the neighborhood of ~x, dynamic ambient occlusion uses a precomputation stage, in which intensity distributions are computed for each voxel's neighborhood. The main idea of this approach is to modulate these intensity distributions with the transfer function during rendering, such that integration over the modulated result allows the occlusion to be obtained. The intensity distributions are stored as normalized local histograms. Storing one normalized local histogram for each voxel would result in a large amount of data which needs to be stored and accessed during rendering. Therefore, a similarity based clustering is performed on the histograms, whereby a vector quantization approach is exploited. After clustering, the local histograms representing a cluster are available as a 2D texture table, which is accessed during rendering. Additionally, a scalar volume containing a cluster ID for each voxel is generated, which associates the voxel with the corresponding row storing the local histogram in the 2D texture (see Figure 11).

Figure 11: Dynamic ambient occlusion computes and stores an index to a clustered histogram that is based on the local neighborhood for each voxel.

During rendering, the precomputed information is used to support four different illumination effects: ambient occlusion, color bleeding, scattering and a simple glow effect. Therefore, the representative histogram is looked up for the current voxel and then modulated with the transfer function color to produce the final color. To integrate ambient occlusion into an isosurface based volume renderer, the gradient-based shading formulation described above is modulated, such that the ambient intensity s_amb(~x) is derived from the precomputed data:

s_amb(~x) = 1.0 − O_env(~x),   (10)

where O_env is the local occlusion factor, which is computed as:

O_env(~x) = 1/((2/3) · π · R_Ω³) · Σ_{0 ≤ j < 2^b} σ_α(j) · LH_j(~x).   (11)

Here, R_Ω depicts the radius of the region for which the local histograms LH have been precomputed. b is the data bit depth, and thus 2^b is the number of bins of the local histogram. The bins LH_j are modulated based on the opacity σ_α(j) assigned to each intensity j. This operation is performed through texture blending on the GPU, and thus allows the occlusion information to be updated in real-time. To apply this approach to direct volume rendering, the environmental color E_env is derived in a similar way as the occlusion factor O_env:

E_env(~x) = 1/((2/3) · π · R_Ω³) · Σ_{0 ≤ j < 2^b} σ_α(j) · σ_rgb(j) · LH_j(~x),   (12)

where σ_rgb(j) represents the emissive color of voxels having the intensity j. During rendering, the current voxel's diffuse color coefficient k_d is obtained by blending between the transfer function color σ_rgb and E_env, using the occlusion factor O_env as blend factor. Thus, color bleeding is simulated, as the current voxel's color is affected by its environment. To support a simple glow effect, an additional mapping function can be exploited, which is also used to modulate the local histograms stored in the precomputed 2D texture.

Dynamic ambient occlusion is not bound to a specific rendering paradigm, though in the original paper it has been introduced in the context of volume ray-casting [RMSD∗08]. Since it facilitates a gradient-based Blinn-Phong illumination model, which is enriched by ambient occlusion and color bleeding, it supports point, distant and ambient light sources. While an ambient light source is always exclusive, the technique can be used with several point and distant light sources, which are used to compute diffuse and specular contributions. Since the precomputed histograms are constrained to local regions, only local shadows are supported, which have a soft appearance. A basic scattering extension, exploiting one histogram for each halfspace defined by a voxel's gradient, is also supported. The rendering performance is interactive, as in comparison to standard gradient-based shading only two additional texture lookups need to be performed. The illumination update time is also low, since the histogram modulation is performed through texture blending. For the precomputation stage, Ropinski et al. [RMSD∗08] report expensive processing times, which are required to compute the local histograms and to perform the clustering. In a more recent approach, Mess and Ropinski propose a CUDA-based approach which reduces the precomputation times by a factor of up to 10 [MR10].


During precomputation, one additional scalar cluster lookup volume as well as the 2D histogram texture is generated. Since the histogram computation takes a lot of processing time, interactive clipping is not supported, as it would affect the density distribution in the voxel neighborhoods.

5.2. Slice Based Techniques

The slice based techniques covered in this subsection are all bound to the slice based rendering paradigm, originally presented by Cabral et al. [CCF94]. This volume rendering paradigm exploits a stack consisting of a high number of rectangular polygons, which are used as proxy geometry to represent a volumetric data set. Different varieties of this paradigm exist, though today image plane aligned polygons, to which the volumetric data set stored in a 3D texture is mapped, are most frequently used.

Half Angle Slicing was introduced by Kniss et al. [KPHE02] and later extended to support phase functions and spatially varying indirect extinction for the indirect light in [KPH∗03]. It is the first of the slice based approaches which synchronizes the slicing used for rendering with the illumination computation, and the first integration of volumetric illumination models including multiple scattering within interactive applications. The direct lighting is modeled using the equation supporting a single light source, Equation 5. To include multiple scattering, a directed diffusion process is modeled through an angle dependent blur function. Thus, the multiple scattering using one light source at position ~x_l with direction ~ω_l is rewritten as:

L_ms(~x',~ω_o) = L_l · p(~x',~ω_l,~ω_o) · T_ss(~x_l,~x') + L_l · T_ms(~x_l,~x') · Blur(θ),   (13)

where p(~x',~ω_l,~ω_o) is the phase function, T_ss(~x_l,~x') and T_ms(~x_l,~x') denote the extinction terms used for single and multiple scattering, respectively, and θ is a user defined cone angle. Although not entirely obvious from the function name, Blur(θ) is a function that approximates the indirect incoming radiance within the cone angle. In addition to this modified equation, which approximates forward multiple scattering, half angle slicing also exploits a surface shading factor S(~x). S(~x) can either be user defined through a mapping function or derived from the gradient magnitude at ~x. It is used to weight the degree of emission and surface shading, such that the illumination C(~x,~ω_o) at ~x is computed as:

C(~x,~ω_o) = σ_a(~x) · L_e(~x,~ω_o) · (1 − S(~x)) + f_BRDF(~x,~ω_l,~ω_o) · S(~x),   (14)

where f_BRDF(~x,~ω_l,~ω_o) is the BRDF used for the surface. C(~x,~ω_o) is then used in the volume rendering integral as follows:

L(~x,~ω_o) = L_0(~x_0,~ω_o) · T_ss(~x_0,~x) + ∫_{~x_0}^{~x} C(~x',~ω_o) · σ_s(~x') · L_ms(~x',~ω_o) · T_ss(~x',~x) d~x'.   (15)

Figure 12: Half angle slicing performs texture slicing in the orthogonal direction to the half angle between the view direction and the light source.

The directional nature of the indirect lighting in L_ms(~x',~ω_o) supports an implementation where the rendering and the illumination computation are synchronized. Kniss et al. achieve this by exploiting the half angle vector seen in Figure 12. This technique chooses the slicing axis such that it is halfway between the light direction and the view direction, or the light direction and the inverted view direction if the light source is behind the volume. Thus, each slice can be rendered from the point of view of both the observer and the light, and no intermediate storage for the illumination information is required. The presented implementation uses three buffers for accumulating the illumination information and compositing along the view ray.

Due to the described synchronization behavior, half angle slicing is, as all techniques described in this subsection, bound to the slicing rendering paradigm. It supports one directional or point light source located outside the volume, and can produce shadows having hard, well defined borders (see Figure 1 (right)). By integrating a directed diffusion process in the indirect lighting computation, forward directed multiple scattering can be simulated, which allows a realistic representation of translucent materials. Half angle slicing performs two rendering passes for each slice, one from the point of view of the observer and one from that of the light source, and thus allows interactive frame rates to be achieved. Since rendering and illumination computation are performed in a lockstep manner, no intermediate storage besides the three buffers used for compositing is required. Clipping planes can be applied interactively.
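The half angle selection and the per-slice lockstep can be sketched as follows; the `attenuate` and `shade` callbacks are placeholders for the light and eye passes of an actual slicer, and the scalar buffers stand in for the three accumulation buffers of the original implementation:

```python
import numpy as np

def half_angle_axis(view_dir, light_dir):
    # Choose the slicing axis halfway between the light direction and the
    # (possibly inverted) view direction, so each slice can be rendered
    # from both the eye's and the light's point of view (Figure 12).
    v = view_dir / np.linalg.norm(view_dir)
    l = light_dir / np.linalg.norm(light_dir)
    if v @ l < 0.0:                      # light behind the volume: invert view
        v = -v
    h = v + l
    return h / np.linalg.norm(h)

def render_half_angle(slices, attenuate, shade):
    # Two passes per slice, in lockstep: composite the shaded slice into
    # the eye buffer using the light accumulated so far, then update the
    # light buffer as seen from the light source.
    light_buf, image = 1.0, 0.0
    for s in slices:
        image += shade(s, light_buf)         # eye pass uses current lighting
        light_buf = attenuate(s, light_buf)  # light pass attenuates further
    return image
```

Because the slices are traversed in the same order for both points of view, the light buffer never has to be stored per slice.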


Directional Occlusion Shading [SPH∗09] is another slice based technique that exploits front-to-back rendering of view aligned slices. In contrast to the half angle slicing approach discussed above, directional occlusion shading does not adapt the slicing axis with respect to the light position. Instead, it solely uses a phase function s(~x,~ω_i,~ω_o) to propagate occlusion information during the slice traversal from front to back:

s(~x,~ω_i,~ω_o) = 0, if −~ω_i · ~ω_o < cos(θ), and 1/(2π · (1 − cos(θ))) otherwise,   (16)

where ~ω_o is the ray towards the eye and the −~ω_i lie in the cone centered on ~ω_o.

Figure 13: Directional occlusion shading fixes the light source to the location of the camera and uses a backward peaked phase function defined by the angle θ to propagate occlusion information.

Similar to the approach by Kniss et al. [KPH∗03], this is a peaked phase function, in this case backward peaked, where the scattering direction is defined through the cone angle θ ∈ [0, π/2]. This backward peaked behavior is important in order to support synchronized occlusion computation and rendering. Thus, s(~x,~ω_i,~ω_o) can be used during the computation of the direct lighting as follows:

L_ss(~x',~ω_o) = ∫_Ω L_l · s(~x',~ω_i,~ω_o) · T(~x_l,~x') d~ω_i,   (17)

where the background intensity coming from L_l is modulated by s(~x',~ω_i,~ω_o), which is evaluated over the contributions coming from all directions ~ω_i over the sphere. Since s(~x',~ω_i,~ω_o) is defined as a backward peaked phase function, in practice the integral only needs to be evaluated in an area defined by the cone angle θ. To incorporate the occlusion based lighting into the volume rendering integral, Schott et al. propose to set the emission term L_e(~x',~ω_o) = 0 in Equation 3:

L(~x,~ω_o) = L_0(~x_0,~ω_o) · T(~x_0,~x) + ∫_{~x_0}^{~x} σ_s(~x') · L_ss(~x',~ω_o) · T(~x',~x) d~x'.   (18)

The implementation of occlusion based shading is similar to standard front-to-back slice rendering. However, as in half angle slicing [KPHE02], two additional occlusion buffers are used for compositing the lighting contribution.

Due to the fixed compositing order, directional occlusion shading is bound to the slice based rendering paradigm and supports only a single light source, which is located at the camera position. The iterative convolution allows soft shadow effects to be generated by approximating multiple scattering from a cone shaped phase function (see Figure 14 (left) and (middle)). The performance is interactive and comparable to half angle slicing but, since no precomputation is used, illumination updates do not need to be performed. Accordingly, the memory footprint is also low, as only the additional occlusion buffers need to be allocated. Finally, clipping planes can be used interactively.

Patel et al. [PBVG10] apply concepts similar to directional occlusion shading when visualizing seismic volume data sets. By using the incremental blurring operation modeled through the phase function, they are able to visualize seismic data sets without emphasizing inherent noise in the data. A more recent extension by Schott et al. [SMGB12, SMG∗13] allows geometry to be integrated and supports light interactions between the geometry and the volumetric media.

Multidirectional Occlusion Shading, introduced by Šoltészová et al. [vPBV10], extends the directional occlusion technique [SPH∗09] to allow for a more flexible placement of the light source. Directional occlusion shading simplifies computation by restricting the light source direction to be aligned with the viewing direction, and therefore does not allow the light source position to be changed. Multidirectional occlusion shading avoids this assumption by introducing more complex convolution kernels. Instead of performing the convolution of the opacity buffer with a symmetrical disc-shaped kernel, Šoltészová et al. propose the usage of an elliptical kernel derived from a tilted cone. In their paper, they derive the convolution kernel from the light source position and the aperture θ defining the cone. Due to the directional component introduced by the front-to-back slicing, the tilt angle of the cone is limited to lie in [0, π/2 − θ] towards the viewer, which keeps the shape of the filter kernel from degenerating into hyperbolas or parabolas. The underlying illumination model is the same as for directional occlusion shading, whereas the indirect lighting contribution is modified by exploiting ellipse shaped filter kernels.

As standard directional occlusion shading, multidirectional occlusion shading uses front-to-back slice rendering, which tightly binds it to this rendering paradigm. Since the introduced filter kernels allow light sources to be placed at different positions, multiple light sources are supported.
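A sketch of Equation 16 and of the front-to-back occlusion propagation; the box filter below merely stands in for the cone-shaped (or, in the multidirectional variant, elliptical) convolution kernel, and the slice stack is a plain list of 2D opacity arrays:

```python
import numpy as np

def cone_phase(w_i, w_o, theta):
    # Backward peaked phase function of Equation 16: constant within the
    # cone of half-angle theta around -w_o, zero outside, normalized by
    # the subtended solid angle 2*pi*(1 - cos(theta)).
    if -np.dot(w_i, w_o) < np.cos(theta):
        return 0.0
    return 1.0 / (2.0 * np.pi * (1.0 - np.cos(theta)))

def propagate_occlusion(alpha_slices, kernel_width=3):
    # Front-to-back sketch of directional occlusion shading: an occlusion
    # buffer is alternately attenuated by each slice's opacity and blurred,
    # so occluders cast progressively softer shadows onto deeper slices.
    h, w = alpha_slices[0].shape
    light = np.ones((h, w))               # unoccluded light at the front
    received = []
    for alpha in alpha_slices:
        received.append(light.copy())     # light arriving at this slice
        light = light * (1.0 - alpha)     # attenuate ...
        pad = kernel_width // 2
        padded = np.pad(light, pad, mode='edge')
        blurred = np.zeros_like(light)    # ... and blur (cone propagation)
        for dy in range(kernel_width):
            for dx in range(kernel_width):
                blurred += padded[dy:dy + h, dx:dx + w]
        light = blurred / kernel_width ** 2
    return received
```

An opaque voxel in an early slice darkens a widening footprint on the slices behind it, which is exactly the soft shadow behavior described above.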


However, due to the directional nature, light sources must be positioned in the viewer's hemisphere. Light scattering capabilities are the same as with directional occlusion shading, i. e., soft shadows through multiple scattering are supported. However, since different light positions are supported, shadows can be made more prominent in the final image when the light direction differs from the view direction. This effect is shown in Figure 14, where Figure 14 (middle) shows a rendering where the light source is located at the camera position, while the light source has been moved to the left in Figure 14 (right). Due to the more complex filter kernel, the rendering time of this model is slightly higher than when using directional occlusion shading. The memory consumption is equally low.

Figure 14: Directional occlusion shading [SPH∗09] integrates soft shadowing effects (left). Multidirectional occlusion shading [vPBV10] additionally supports different lighting positions (middle) and (right). (Images taken from [SPH∗09] and [vPBV10], courtesy of Mathias Schott and Veronika Šoltészová)

5.3. Light Space Based Techniques

As light space based techniques we consider all those techniques where illumination information is directionally propagated with respect to the current light position. Unlike the slice based techniques, these techniques are independent of the underlying rendering paradigm. However, since illumination information is propagated along a specific direction, these techniques usually support only a single light source.

Deep Shadow Mapping has been developed to enable shadowing of complex, potentially semi-transparent, structures [LV00]. While standard shadow mapping [Wil78] supports basic shadowing by projecting a shadow map as seen from the light source [SKvW∗92], it does not support semi-transparent occluders, which often occur in volume rendering. In order to address semi-transparent structures, opacity shadow maps serve as a stack of shadow maps, which store alpha values instead of depth values in each shadow map [KN01]. Nevertheless, deep shadow maps are a more compact representation for semi-transparent occluders. They also consist of a stack of textures, but in contrast to the opacity shadow maps, an approximation to the shadow function is stored in these textures. Thus, it is possible to approximate shadows by using fewer hardware resources. Deep shadow mapping has first been applied to volume rendering by Hadwiger et al. [HKSB06]. An alternative approach has been presented by Ropinski et al. [RKH08]. While the original deep shadow map approach [LV00] stores the overall light intensity in each layer, in volume rendering it is advantageous to store the absorption given by the accumulated alpha value, in analogy to the volume rendering integral. Thus, for each shadow ray, the alpha function, i. e., the function describing the absorption, is analyzed and approximated by using linear functions.

To have a termination criterion for the approximation of the shadowing function, the depth interval covered by each layer can be restricted. However, when it is determined that the currently analyzed voxels cannot be approximated sufficiently by a linear function, smaller depth intervals are considered. Thus, the approximation works as follows. Initially, the first hit point for each shadow ray is computed, similar to standard shadow mapping. Next, the distance of the first hit point to the light source and the alpha value for this position are stored within the first layer of the deep shadow map. At the first hit position, the alpha value usually equals zero. Starting from this first hit point, each shadow ray is traversed and it is checked iteratively whether the samples encountered so far can be approximated by a linear function. When processing a sample where this approximation would not be sufficient, i. e., a user defined error is exceeded, the distance of the previous sample to the light source as well as the accumulated alpha value at the previous sample are stored in the next layer of the deep shadow map. This is repeated until all layers of the deep shadow map have been created.

The error threshold used when generating the deep shadow map data structure is introduced to determine whether the currently analyzed samples can be approximated by a linear function. In analogy to the original deep shadow mapping technique [LV00], this error value constrains the variance of the approximation. This can be done by adding (resp. subtracting) the error value at each sample's position. When the alpha function does not lie within the range given by the error threshold anymore, a new segment to be approximated by a linear function is started. A small error threshold results in a better approximation, but more layers are needed to represent the shadow function.

A different approach was presented by Jansen and Bavoil [JB10]. They approximate the absorption function using a Fourier series, which works especially well for smooth media such as fog and smoke. The opacity is projected into the Fourier domain and a fixed number of coefficients is used to store the approximation in a Fourier Opacity Map (FOM). This enables them to obtain a more accurate approximation at comparable memory consumption and speed compared to the original deep shadow mapping [LV00] approach. However, as Jansen and Bavoil point out in their paper, high density media may cause ringing artifacts.
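The layer construction described above can be sketched on a single sampled shadow ray; the error test below compares each interior sample against the current linear segment, a simplification of the variance-based criterion of the original technique:

```python
def compress_shadow_ray(depths, alphas, eps=0.05):
    # Sketch of deep-shadow-map layer construction: the accumulated alpha
    # function along one shadow ray is approximated by linear segments.
    # A new layer (breakpoint) is started whenever the linear fit from the
    # segment start misses an interior sample by more than eps.
    layers = [(depths[0], alphas[0])]        # first hit point, alpha ~ 0
    seg_start = 0
    for i in range(1, len(depths)):
        d0, a0 = depths[seg_start], alphas[seg_start]
        slope = (alphas[i] - a0) / (depths[i] - d0)
        ok = all(
            abs(a0 + slope * (depths[j] - d0) - alphas[j]) <= eps
            for j in range(seg_start + 1, i)
        )
        if not ok:                           # error exceeded: emit a layer
            layers.append((depths[i - 1], alphas[i - 1]))
            seg_start = i - 1
    layers.append((depths[-1], alphas[-1]))  # close the last segment
    return layers
```

A linear alpha ramp compresses to its two endpoints, while a sharp opacity step forces additional layers around the discontinuity — which also hints at why very thin occluders, sampled sparsely, can be flattened away.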


Therefore, Delalandre et al. [DGMF11] improved the FOM method and used it to represent transmittance instead of opacity, which enabled them to quickly evaluate single scattering with projective light sources. Furthermore, they proposed a projective sampling scheme, which takes samples uniformly in light space, and also allows geometry to be integrated in the rendering. Gautron et al. [GDM11] further expanded on the work of Delalandre et al. [DGMF11] by encoding the variation of the extinction along a ray and enabling the use of particle-based media.

Once the deep shadow map data structure is obtained, the resulting shadowing information can be used in different ways. A common approach is to use the shadowing information in order to diminish the diffuse reflectance when using gradient-based shading. Nevertheless, it can also be used to directly modulate the emissive contribution L_e(~x,~ω_o) at ~x.

A comparison of deep shadow mapping, which enables semi-transparent occluders, with shadow rays and standard shadow mapping is shown in Figure 15. As can be seen, deep shadow maps introduce artifacts when thin occluder structures are present. The shadows of these structures show transparency effects although the occluders are opaque. This results from the fact that an approximation of the alpha function is exploited, and especially thin structures may diminish when approximating over too long depth intervals.

Figure 15: The visible human head data set rendered without shadows (top left), with shadow rays (top right), with shadow mapping (bottom left) and with deep shadow maps (bottom right). (images taken from [RKH08])

The deep shadow mapping technique is not bound to a specific rendering paradigm. It supports directional and point light sources, which should lie outside the volume, since the shadow propagation is unidirectional. For the same reason, only a single light source is supported. Both rendering and the update of the deep shadow data structure, which needs to be done when lighting or transfer function changes, are interactive. The memory footprint of deep shadow mapping is directly proportional to the number of used layers or coefficients.

Desgranges et al. [DEP05] also propose a gradient-free technique for shading and shadowing that is based on projective texturing. Their approach produces visually pleasing shadowing effects and can be applied interactively.

Image Plane Sweep Volume Illumination [SYR11] allows advanced illumination effects to be integrated into a GPU-based volume ray-caster by exploiting the plane sweep paradigm. By using sweeping, it becomes possible to reduce the illumination computation complexity and achieve interactive frame rates, while supporting scattering as well as shadowing. The approach is based on a reformulation of the optical model that either considers single scattering with a single light direction, or multiple scattering resulting from a rather high albedo [MC10]. Multiple scattering is simulated by assuming the presence of a high albedo, making multiple scattering predominant and possible to approximate using a diffusion approach:

I_ms(~x,~ω_o) = ∫_{~x_b}^{~x} ∇_diff · (1/|~x' − ~x|) · L_e(~x',~ω_o) d~x'.   (19)

Here, ∇_diff represents the diffusion approximation and 1/|~x' − ~x| results in a weighting, such that more distant samples have less influence. ~x_b represents the background position lying outside the volume. In practice, the computation of the scattering contributions can be simplified by assuming the presence of a forward peaked phase function, such that all relevant samples lie in the direction of the light source. Thus, the computation can be performed in a synchronized manner, which is depicted by the following formulation for single scattering:

I_ss(~x,~ω_o) = L_0(~x_l,~ω_l) · T(~x_l,~x) + ∫_{~x''}^{~x} σ_t(~x') · L_e(~x',~ω_o) · T(~x',~x) d~x' + ∫_{~x_l}^{~x''} σ_t(~x') · L_e(~x',~ω_o) · T(~x',~x) d~x',   (20)

and a similar representation for the multiple scattering contribution:

I_ms(~x,~ω_o) = ∫_{~x''}^{~x} ∇_diff · (1/|~x' − ~x|) · L_e(~x',~ω_o) d~x' + ∫_{~x_b}^{~x''} ∇_diff · (1/|~x' − ~x|) · L_e(~x',~ω_o) d~x'.   (21)
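The sweep ordering and the splitting of the light transport at the sweep position ~x'' in Equations 20 and 21 can be sketched as follows (illustrative function names; the actual method evaluates this on the GPU with a 2D illumination cache rather than explicit sums):

```python
import math
import numpy as np

def sweep_order(width, height, light_px):
    # Image plane sweep ordering: view rays closer to the light source in
    # image space are processed first, so illumination accumulated up to
    # the sweep line can be cached and reused by later rays.
    ys, xs = np.mgrid[0:height, 0:width]
    d = np.hypot(xs - light_px[0], ys - light_px[1])
    order = np.argsort(d, axis=None)           # flatten, nearest ray first
    return [(int(i % width), int(i // width)) for i in order]

def split_transmittance(sigmas, dt, k):
    # The split at ~x'': transmittance from the light to a sample is the
    # cached part up to the sweep position (index k) times the part that
    # is freshly integrated during the current sweep step.
    cached = math.exp(-sum(sigmas[:k]) * dt)   # stored in the 2D cache
    fresh = math.exp(-sum(sigmas[k:]) * dt)    # integrated this sweep step
    return cached * fresh
```

Because the exponential transmittance factorizes at ~x'', the cached prefix never has to be re-integrated, which is what removes the quadratic cost of naive shadow rays.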


Image plane sweep volume illumination facilitates this splitting at ~x'', which is based on the current state of the sweep line, by performing a synchronized ray marching. During this process, the view rays are processed in an order such that those being closer to the light source in image space are processed earlier. This is ensured by exploiting a modified plane sweeping approach, which iteratively computes the influence of structures when moving further away from the light source. To store the illumination information accumulated along all light rays between the samples of a specific sweep plane, a 2D illumination cache in form of a texture is used.

Image plane sweep volume illumination has been developed as a ray-based technique and can thus only be used within this rendering paradigm. It supports one point or directional light source, and simulates single as well as forward multiple scattering effects. Since all illumination computations are performed directly within a single rendering pass, it does not require any preprocessing and does not need to store intermediate results within an illumination volume, which results in a low memory footprint. The interactive application of clipping planes is inherently supported.

Shadow Splatting was first proposed in 2001 by Nulkar and Mueller [NM01]. As the name implies, it is bound to the splatting rendering paradigm, which is exploited to attenuate the lighting information as it travels through the volume. Single scattering is supported, and this method is therefore based on Equation 3, but uses a filter kernel to splat light information into a shadow volume. The shadow volume is updated in a first splatting pass, during which light information is attenuated as it travels through the volume using Equation 5 for each light source. In the second splatting pass, which is used for rendering, the shadow volume is used to acquire the incident illumination. The actual shading is performed using Phong shading, whereas the diffuse and the specular light intensity is replaced by the direct light L_i(~x,~ω_i).

Zhang and Crawfis [ZC02, ZC03] present an algorithm which does not require storage of a shadow volume in memory. Their algorithm instead exploits two additional image buffers for handling illumination information (see Figure 16). These buffers are used to attenuate the light during rendering and to add its contribution to the final image. Whenever a light contribution is added to the final image buffer seen from the camera, this contribution is also added to the shadow buffer as seen from the light source.

Figure 16: Shadow splatting uses a filter kernel to splat the light contribution into buffers in a pass sweeping from the light source direction.

Shadow splatting can be used with multiple point light and distant light sources, as well as textured area lights. While the initial algorithm [ZC02] only supports hard shadows, a later extension integrates soft shadows [ZC03]. The rendering time is approximately doubled with respect to regular splatting when computing shadows. The algorithm by Zhang and Crawfis [ZC02, ZC03] updates the 2D image buffers during rendering and thus requires no considerable update times, while the algorithm by Nulkar and Mueller [NM01] needs to update the shadow volume, which requires extra memory and update times. A later extension makes the integration of geometry possible [ZXC05].

Historygram Photon Mapping [JKRY12] enables interactive editing of transfer functions by only recomputing the parts of the light transport solution that actually change. Photon mapping is used as the underlying light transport method, which is based on Equation 3, but with the single scattering term L_ss(~x',~ω_o) exchanged with the multiple scattering term L_ms(~x',~ω_o) from Equation 6. The recursive nature of multiple scattering is avoided by approximating the in-scattered radiance with the n energy particles Φ_i(~x',~ω_p), called photons, within the spherical neighborhood of radius R:

L_ms(~x',~ω_o) ≈ (1/σ_s(~x')) · Σ_{i=1}^{n} s(~x',~ω_i,~ω_o) · ΔΦ_i(~x',~ω_i) / ((4/3) · π · R³).   (22)
In the second splatting pass, set of parameters that have been used during the computa- (x) which is used for rendering, the shadow volume is used to tion, and a mapping function δ is used in order to map acquire the incident illumination. all possible parameters into this representable set. A history- gram is created by applying the union of the previous Histo- The actual shading is performed using Phong shading, rygram, Hi, with the n new mapped parameters x j, j = 1..n:
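To make the bookkeeping concrete, a 64-bit Historygram and its mapping function δ can be sketched with plain bit operations; the quantizing mapping used below is a toy stand-in for illustration, not the actual function of [JKRY12]:

```python
def historygram(values, delta, bits=64):
    """Union of mapped parameters: each value is mapped to a bit index
    by delta() and OR-ed into the 64-bit Historygram."""
    h = 0
    for x in values:
        h |= 1 << (delta(x) % bits)
    return h

def needs_recompute(change, stored):
    """A photon or view-ray segment is recomputed only if its stored
    Historygram overlaps the Historygram encoding the change."""
    return (change & stored) != 0

# Toy delta: quantize a density value in [0, 1] to one of 64 bit positions.
delta = lambda density: int(density * 63)
```

Comparing Historygrams thus reduces to a bitwise AND, which is why hardware instructions can be used for the evaluation.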

© 2013 The Author(s) © 2013 The Eurographics Association and Blackwell Publishing Ltd.

Jönsson et al. / Survey: Interactive Volumetric Illumination


Figure 17: Photon mapping sends light particles into the scene and then collects all photons within a radius R to approximate the in-scattered radiance.

Figure 18: The neck of a human skull rendered with the Historygram Photon Mapping method, which can handle complex light setups and multiple scattering (images taken from [JKRY12]).

A Historygram is created by applying the union of the previous Historygram, H_i, with the n new mapped parameters x_j, j = 1..n:

H_{i+1} = H_i ∪ (⋃_{j=1}^{n} δ(x_j)). (23)

A binary data structure is used for the Historygrams, which enables a low memory overhead and allows hardware instructions to be used when evaluating the mapping function δ(x) and comparing Historygrams. For photon mapping, a Historygram represents the data that is used, and each photon stores a 64-bit Historygram. View rays are divided into segments that also each have a Historygram. When the user changes the transfer function, the affected data is encoded into a change Historygram and then evaluated against each photon's and view ray segment's associated Historygram. A photon or view ray segment is only recomputed if it was affected by the transfer function change. The view ray segments must be recomputed whenever the camera changes, but the photons can be reused as they are not dependent on the camera placement. Jönsson et al. [JKRY12] showed that speedups of up to 15 times can be achieved when applying the Historygram concept compared to a brute force photon mapper, thus enabling interactive transfer function editing.

This method inherits many properties from the original photon mapping method introduced by Jensen [Jen96]. As such, there is no restriction on the shape, number, or placement of light sources. Advanced material effects and mixtures of phase function and BRDF can be used, as well as multiple scattering, which makes it possible to create realistic images such as the one in Figure 18. The amount of memory required is highly dependent on the number of photons and view ray segments used. The hash data structure requires memory in addition to the photon data, which consists of position, direction, power, and Historygram. Each additional view ray segment requires 16 MB of GPU memory for a 1024×1024 viewport resolution. Clip plane editing is supported, although not at interactive frame rates.

5.4. Lattice Based Techniques

We classify all those approaches that compute the illumination directly based on the underlying grid as lattice based techniques. While many of the previous approaches compute the illumination per sample during rendering, the lattice based techniques perform the computations per voxel, usually by exploiting the nature of the lattice. In this survey we focus on techniques which allow interactive data exploration, though it should be mentioned that other techniques working on non-regular grids exist [QXF∗07].

Shadow Volume Propagation is a lattice-based approach, where illumination information is propagated in a preprocessing step through the regular grid defining the volumetric data set. The first algorithm realizing this principle was proposed by Behrens and Ratering in the context of slice based volume rendering [BR98]. Their technique simulates shadows caused by attenuation of one distant light source. These shadows, which are stored within a shadow volume, are computed by attenuating illumination slice by slice through the volume. More recently, the lattice-based propagation concept has been exploited also by others. Ropinski et al. have exploited such a propagation to allow high frequency shadows and low frequency scattering simultaneously [RDRS10b]. Their approach has been proposed as a ray-casting-based technique that approximates the light propagation direction in order to allow efficient rendering. It facilitates a four channel illumination volume to store luminance and scattering information obtained from the volumetric data set with respect to the currently set transfer function. Therefore, similar to Kniss et al. [KPH∗03], indirect lighting is incorporated by blurring the incoming light within a given cone centered about the incoming light direction. However, in contrast to [KPH∗03], shadow volume propagation uses blurring only for the chromaticity and not the intensity of the light (luminance). Although this procedure is not physically correct, it helps to accomplish the goal of obtaining harder shadow borders, which according to Wanger improve the perception of spatial structures [WFG92, Wan92]. Thus, the underlying illumination model for multiple scattering in Equation 6 is modified as follows:

L_ms(~x',~ωo) = t(~x') · l_i(~x',~ωl) · c(~x',~ωo), (24)

where t(~x') is the transport color, as assigned through the transfer function, l_i(~x',~ωl) is the luminance of the attenuated direct light as defined in Equation 5, and c(~x',~ωo) the chromaticity of the indirect in-scattered light. The transport color t(~x') as well as the chromaticity c(~x',~ωo) are wavelength dependent, while the scalar l_i(~x',~ωl) describes the incident (achromatic) luminance. Multiplying all terms together results in the incident radiance. The chromaticity c(~x',~ωo) is computed from the in-scattered light

c(~x',~ωo) = ∫_Ω s(~x',~ωi,~ωo) · L_chrom(~x',~ωi) d~ωi, (25)

where Ω is the unit sphere centered around ~x' and L_chrom(~x,~ωi) is the chromaticity of Equation 3 with multiple scattering. To allow unidirectional illumination propagation, s(~x',~ωi,~ωo) is chosen to be a strongly forward peaked phase function:

s(~x',~ωi,~ωo) = { 0, if ~ωi ·~ωo < cos(θ(~x)); (C ·~ωi ·~ωo)^β, otherwise. (26)

The cone angle θ is used to control the amount of scattering and depends on the intensity at position ~x', and the phase function is a Phong lobe whose extent is controlled by the exponent β, restricted to the cone angle θ. C is a constant, chosen with respect to β, which ensures that the function integrates to one over all angles.

During rendering, the illumination volume is accessed to look up the luminance value and scattering color of the current voxel. The chromaticity c(~x',~ωo) and the luminance l_i(~x',~ωi) are fetched from the illumination volume and used within L_ms(~x',~ωo) as shown above. In cases where a high gradient magnitude is present, specular reflections, and thus surface-based illumination, are also integrated into s(~x',~ωi,~ωo).

Shadow volume propagation can be combined with various rendering paradigms and does not require a noticeable amount of preprocessing, since the light propagation is carried out on the GPU. It supports point and directional light sources, whereby a more recent extension also supports area light sources [RDRS10a]. While the position of the light source is unconstrained, the number of light sources is limited to one, since only one illumination propagation direction is supported. By simulating the diffusion process, shadow volume propagation supports multiple scattering. Rendering times are quite low, since only one additional 3D texture fetch is required to obtain the illumination values. However, when the transfer function is changed, the illumination information needs to be updated and thus the propagation step needs to be recomputed. Nevertheless, this still supports interactive frame rates. Since the illumination volume is stored and processed on the GPU, a sufficient amount of graphics memory is required.

Piecewise Integration [HLY08] focuses on direct light, i.e., Equation 3, with a single light source. The authors propose to discretize Equation 5 by dividing rays into k segments and evaluating the incoming radiance using:

L_i(~x_0,~ωi) = L_l · ∏_{n=0}^{k} T(~x_n,~x_{n+1}). (27)

Short rays are first cast towards the light source for each voxel and the piecewise segments are stored in an intermediate 3D texture using a multiresolution data structure [LLY06]. Then, again for each voxel, global rays are sent towards the light source, now taking steps of a size equal to the short rays and sampling from the piecewise segment 3D texture. In addition to direct illumination, first order in-scattering is approximated by treating scattering in the vicinity as emission. To preserve interactivity, the final radiance is progressively refined by sending one ray at a time and blending the result into a final 3D texture.

The piecewise integration technique can be combined with other rendering paradigms, even though it has been developed for volume ray-casting. It supports a single directional or point light source, which can be dynamically moved, and it includes an approximation of first order in-scattering. Rendering times are low, since only a single extra lookup is required to compute the incoming radiance at the sample position. The radiance must be recomputed as soon as the light source or transfer function changes, which can be performed interactively and progressively. Three additional 3D textures are required for the computations. However, a lower resolution grid with a multiresolution structure is used to alleviate some of the additional memory requirements.

Summed Area Table techniques were first exploited by Diaz et al. [DYV08] for the computation of various illumination effects in interactive direct volume rendering. The 2D summed area table (SAT) is a data structure where each cell (x,y) stores the sum of every cell located between the initial position (0,0) and (x,y), such that the 2D SAT for each pixel p can be computed as:

SAT_image(x,y) = ∑_{i=0,j=0}^{x,y} p_{i,j}. (28)

A 2D SAT can be computed incrementally during a single pass, and the computation cost increases linearly with the number of cells. Once a 2D SAT has been constructed, an efficient evaluation of a region can be performed with only four texture accesses to the corners of the region to be evaluated. Diaz et al. [DYV08] created a method, known as Vicinity Occlusion Mapping (VOM), which effectively utilizes two SATs computed from the depth map, one which stores the accumulated depth for each pixel and one which stores the number of contributed values of the neighborhood of each pixel. The vicinity occlusion map is then used to incorporate view dependent ambient occlusion and halo effects in the volume rendered image. The condition for which a pixel should be occluded is determined by evaluating the average depth around that pixel. If the average depth is larger than the depth of the pixel, then the neighboring structures are further away, which means this structure is not occluded. If the structure in the current pixel is occluded, the amount of occlusion, i.e., how much the pixel should be darkened, is determined by the amount of differences in depths around the corresponding pixel. A halo effect can be achieved by also considering the average depth from the neighborhood. If the structure in the pixel lies outside the object, a halo color is rendered which decays with respect to the distance to the object.

Density Summed Area Table [DVND10] utilizes a 3D SAT for ambient occlusion, calculated from the density values stored in the volume. This procedure is view independent, supports the incorporation of semi-transparent structures, and is more accurate than the 2D image space approach [DYV08]. While only 8 texture accesses are needed to estimate a region in a 3D SAT, the preprocessing is more computationally expensive and it results in a larger memory footprint. However, the 3D SAT does not need to be recomputed if the view changes.

Desgranges and Engel also describe a SAT based inexpensive approximation of ambient light [DE07]. They combine ambient occlusion volumes from different filtered volumes into a composite occlusion volume.

Extinction-based Shading and Illumination is a technique by Schlegel et al. [SMP11], where a 3D SAT is used to model ambient occlusion, color bleeding, directional soft shadows and scattering effects by utilizing the summation of the exponential extinction coefficient in the transmittance equation:

T(~x_i,~x_j) = e^(−∫_{~x_i}^{~x_j} σ_t(~x') d~x') ≈ e^(−∑_{k=i}^{|~x_j−~x_i|/∆x} σ_t_k ∆x). (29)

Traditionally, this equation has been approximated with a Taylor series expansion, resulting in an α-blending operation. However, current GPUs do not need to perform this approximation, which allows the summation to be split and performed in any order. A 3D SAT can then be used to approximate the sum in the exponent for a set of directions, thereby decreasing the computational cost significantly. Ambient occlusion and color bleeding are approximated using a number of shells formed by cuboids around the sample point. The extinction sum, σ_{Sh_{i+1}}, is accumulated for each shell according to

σ_{Sh_{i+1}} = σ_{Sh_i} + (SAT(Sh_{i+1}) − SAT(Sh_i)) · r_{Sh_{i+1}}^(−2), (30)

where Sh_i denotes the shell of index i and r_{Sh_{i+1}} is the radius of the next shell. Schlegel et al. [SMP11] reported that three shells are sufficient for a radius that does not exceed 10% of the data set's extent. Directional soft shadows and scattering effects are incorporated by estimating a cone defined by the light source. The approximation of the cone is performed with a series of cuboids (see Figure 19). Texture accesses in the SAT structure can only be performed axis-aligned, thus the cuboids are placed along the two axis-aligned planes with the largest projection size of the cone towards the light, and the primary axis is defined to be the one with the smallest angle towards the light source. The light sources are not restricted to lie outside of the volume, since extinction, and not light propagation, is computed. A selection of renderings using this method is shown in Figure 20.

Figure 19: Extinction based shading computes the integrated extinction towards the light using summed area tables formed by a series of axis-aligned cuboids (2D for illustration purposes).

Extinction-based Shading and Illumination is not limited to any specific rendering paradigm, although it was implemented for ray-casting. The technique can handle point, directional and area light sources. The extinction must be estimated for each additional light source, which means that rendering time increases with the number of light sources. In terms of memory consumption, an extra 3D texture is required for the SAT, which can be of a lower resolution than the volume data while still producing convincing results. It is possible to interactively change clip planes due to the fast computation and lower resolution of the 3D SAT.
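As an illustration of Equation 28 and the four-access region query, a 2D SAT can be built and evaluated as follows (a plain Python sketch with our own function names; the 3D case adds one dimension and needs eight accesses):

```python
def build_sat(img):
    """2D summed area table: sat[y][x] = sum of img[0..y][0..x] (Eq. 28),
    built incrementally in a single pass over the image."""
    h, w = len(img), len(img[0])
    sat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row = 0.0  # running sum of the current row
        for x in range(w):
            row += img[y][x]
            sat[y][x] = row + (sat[y - 1][x] if y > 0 else 0.0)
    return sat

def region_sum(sat, x0, y0, x1, y1):
    """Sum over the inclusive rectangle [x0,x1] x [y0,y1] using only
    four accesses to the corners of the region."""
    s = sat[y1][x1]
    if x0 > 0:
        s -= sat[y1][x0 - 1]
    if y0 > 0:
        s -= sat[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        s += sat[y0 - 1][x0 - 1]
    return s
```

The same four-corner evaluation is what makes the cuboid queries of the extinction-based approach cheap at render time.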


Figure 20: Extinction-based Shading and Illumination [SMP11] allows interactive integration of local and global illumination properties and advanced light setups. (Images taken from [SMP11], courtesy of Philipp Schlegel)

Light Propagation Volumes injects the illumination into a grid and propagates the light according to a convection-diffusion equation. The convection part of the equation describes how direct light propagates inside the volumetric media, while the diffusion part describes how the indirect light is scattered inside the volumetric media.

Geist et al. [GRWS04] used a lattice-Boltzmann method to solve the light transport, but their method did not reach interactive frame rates. More recently, Kaplanyan and Dachsbacher [KD10] presented a real-time method using spherical harmonics. They introduced a novel propagation method that only requires light to be propagated along the main axial directions, instead of the 26 directions used in other schemes. However, the method is primarily focused on low-frequency lighting and simulates single-bounce indirect light. Zhang and Ma [ZM13] presented a method that simulates both single and multiple scattering and is able to reach interactive frame rates. Zhang and Ma describe the equation for radiance convection-diffusion, in differential form, as:

(d/dt) L(~x) = −c ·~ω_l(~x) · ∇L(~x) − σ_a(~x) · L(~x) + σ_s · ∇²L(~x), (31)

where L(~x) is the radiance at location ~x, (d/dt) L(~x) is thus the rate of change of radiance at location ~x, c is the speed of light, and ~ω_l(~x) is the normalized light direction at ~x. The first two terms describe light leaving and being absorbed at location ~x, respectively, which is referred to as convection. The third term, referred to as diffusion, approximates isotropic scattering and can intuitively be understood as light being dissipated within the volumetric media.

In order to solve this equation, Zhang and Ma discretize the radiance distribution into a uniform grid and divide the computation into two steps, convection and diffusion, both performed on the GPU. By assuming that the direct light propagation is unique for each grid point, the directional part of the radiance can be discarded and only the color channels need to be stored, thus reducing the storage requirements. The convection step computes direct light propagation for each light source. The radiance grid is first initialized using different boundary conditions for directional and point light sources, essentially injecting light at the boundary of the light volume. In addition to this, point light sources inside the volume are approximated using a Gaussian function and injected into the grid. Once initialized, a first-order upwind scheme is used to propagate the light until a steady state solution is obtained. The result of the propagation of each light source is summed up and then used as an initial estimate in the diffusion step. The diffusion step is solved using the Gauss-Seidel method, with the solution at time t = 1 taken as the final illumination volume. Once the light has been propagated, the rendering phase uses standard ray-casting with Phong shading and a simple lookup from the illumination volume to get the radiance at the location along the ray, and thus allows real-time frame rates when changing the viewpoint.

Since the method only applies a lookup during rendering, it can be combined with other rendering paradigms. It supports multiple dynamically moving directional and point light sources that can be placed anywhere in the scene with interactive updates. However, Zhang and Ma [ZM13] report that only soft shadows are supported due to the use of a first-order upwind scheme. Multiple scattering is supported under the assumption of an isotropic phase function. The light propagation needs to be recomputed as soon as the transfer function or a light source changes, but the computation can be solved in an iterative manner in order to improve the interaction. Low rendering times are achieved since only a single lookup is performed to compute the incoming radiance. Two temporary 3D illumination volumes are needed during the computation in addition to the final illumination volume, each consisting of three channels. However, Zhang and Ma report that plausible results can be achieved using half the resolution of the original (see Figure 21), which reduces the memory requirements.
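The convection step described above can be illustrated in one dimension: light enters at the grid boundary and each cell receives the radiance of its upwind neighbor, attenuated by the local absorption. This is a deliberately simplified steady-state sketch; the method of [ZM13] runs a first-order upwind scheme per color channel on a 3D grid:

```python
def convect_light(sigma_a, boundary_radiance, dx=1.0):
    """First-order upwind sweep along the light direction (1D sketch):
    each cell's radiance is the upwind neighbor's radiance attenuated
    by the cell's absorption coefficient over the step size dx."""
    result = []
    radiance = boundary_radiance  # boundary condition: injected light
    for sigma in sigma_a:
        radiance *= max(0.0, 1.0 - sigma * dx)
        result.append(radiance)
    return result
```

In the full method, the result of such sweeps serves as the initial estimate that the diffusion (Gauss-Seidel) step then smooths into indirect scattering.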


5.5. Basis Function Based Techniques

The group of basis function based techniques covers those approaches where the light source radiance and the transparency (visibility) T(~x_i,~x_j), from the position ~x_i to the position ~x_j (usually the border of the volume), are separated and represented using a basis function. Since radiance and visibility are computed independently, light sources may be dynamically changed without an expensive visibility update, which is the case for many other methods. Furthermore, the light source composition may be complex, as the light sources are projected to another basis that is independent of their number. To this end, the only basis functions used so far in the area of volume rendering are spherical harmonics. However, to be able to incorporate future techniques into our classification, we have decided to keep this group more general. It is beyond the scope of this article to provide a detailed introduction to spherical harmonic based lighting; we focus on the volume rendering specific properties only, and refer to the work by Sloan et al. [SKS02] for further details. This being said, we will mention two important properties. The first property is that efficient integral evaluation can be performed, which supports real-time computation of the incoming radiance over the sphere. The second property is that the representation can be rotated on the fly, and thus supports dynamic light sources. While spherical harmonics were first used in volume rendering by Beason et al. [BGB∗06] to enhance isosurface shading, in this section we cover algorithms applying spherical harmonics in direct volume rendering.

Spherical Harmonics were first applied to direct volume rendering by Ritschel [Rit07] to simulate low-frequency shadowing effects. To enable interactive rendering, a GPU-based approach is exploited for the computation of the required spherical harmonic coefficients. Therefore, a multiresolution data structure similar to a volumetric mipmap is computed and stored on the GPU. Whenever the spherical harmonic coefficients need to be recomputed, this data structure is accessed and the ray traversal is performed with increasing step size. The increasing step size is realized by sampling the multiresolution data structure at different levels, whereby the level of detail decreases as the sampling proceeds further away from the voxel for which the coefficients need to be computed.

Figure 21: Rendering of a vortex field from a turbulent flow simulation with an illumination volume of half the size of the original data using Light Propagation Volumes [ZM13], which can handle multiple light sources as well as multiple scattering. (Images taken from [ZM13], courtesy of Yubo Zhang)

Zhou et al. [ZRL∗08] presented a method for rendering smoke under low-frequency illumination by decomposing the smoke into a spherical harmonics basis representation and a sequence of residual fields. The light computation is then performed using the spherical harmonics representation of the density field. The computed illumination is combined with the high-frequency residual fields during ray marching to obtain high detail in the final rendering.

In a more recent paper, spherical harmonics were used to support more advanced material properties [LR10]. Therefore, the authors exploit the approach proposed by Ritschel [Rit07] to compute spherical harmonic coefficients. During ray-casting, the spherical integral over the product of an area light source and voxel occlusion is efficiently computed using the precomputed occlusion information. Furthermore, realistic light material interaction effects are integrated while still achieving interactive frame rates. By using a modified spherical harmonic projection approach, tailored specifically for volume rendering, it becomes possible to realize non cone-shaped phase functions in the area of interactive volume rendering. Thus, more realistic material representations become possible.

One problem with using a basis function such as spherical harmonics is the increased storage requirement. Higher frequencies can be represented when using more coefficients, thus obtaining better results. However, each coefficient needs to be stored to be used during rendering. Kronander et al. [KJL∗12] address this storage requirement issue and also decrease the time it takes to update the visibility. In their method, both the volume data and the spherical harmonic coefficients exploit the multiresolution technique of Ljung et al. [LLY06]. The sparse representation of volume data and visibility allows substantial reductions with respect to memory requirements. To also decrease the visibility update time, the two pass approach introduced by Hernell et al. [HLY08] is used. First, local visibility is computed over a small neighborhood by sending rays uniformly distributed on the sphere. Then, piecewise integration of the local visibility is performed using the spherical harmonics representation of Dirac delta functions in the direction of integration in order to compute global visibility. Since both local and global visibility have been computed, the user can switch between them during rendering and thus reveal structures hidden by global features while still keeping spatial comprehension.

The spherical harmonics based methods discussed here are not bound to any specific rendering paradigm, although only volume ray-casting has been used so far. An arbitrary number of dynamic directional light sources is supported, i.e., environment maps.
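The first property mentioned above, evaluating a spherical integral as a dot product of coefficient vectors, can be sketched with the real spherical harmonics basis up to band 1. This is a generic sketch: the cited methods use more bands and store the coefficients per voxel:

```python
import math

def sh_basis(direction):
    """Real spherical harmonics basis functions for bands 0 and 1,
    evaluated for a unit direction (x, y, z): four values."""
    x, y, z = direction
    k1 = math.sqrt(3.0 / (4.0 * math.pi))
    return [0.5 * math.sqrt(1.0 / math.pi), k1 * y, k1 * z, k1 * x]

def reconstruct(coeffs, direction):
    """Radiance arriving from a direction: dot product of a projected
    function's coefficients with the basis evaluated in that direction."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(direction)))
```

For example, the constant function f = 1 projects to the coefficient vector (2√π, 0, 0, 0) and is reconstructed exactly in every direction.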


Area and point light sources are also supported, each adding an overhead, since the direction towards the sample along the ray changes and the spherical harmonics representation of the light source must be rotated. A comparison to diffuse gradient-based shading can be seen in Figure 22. Storage and computation time limit the accuracy of the spherical harmonics representation, thus restricting the illumination to low frequency effects. In general, light sources are constrained to be outside of the volume. However, when using local visibility, e.g., as in [KJL∗12], they may be positioned inside. The rendering time is interactive, only requiring some (in general 1-4) extra texture lookups and scalar products. The light source representation must be recomputed or rotated when the illumination changes, but this is generally fast and can be performed in real-time. The visibility must be recomputed when the transfer function or a clip plane changes, which is expensive but can be performed interactively. The memory requirements are high, as each coefficient needs to be stored for the visibility in the volume. We refer to the work by Kronander et al. [KJL∗12] for an in-depth analysis of how to trade off memory size and image quality.

Figure 22: A CT scan of a human torso rendered with diffuse gradient-based shading (left) and spherical harmonic lighting (right). (Images taken from [KJL∗12])

5.6. Ray-Tracing Based Techniques

The work discussed above focuses mainly on increasing the lighting realism in interactive volume rendering applications. This is often achieved by making assumptions regarding the lighting setup or the illumination propagation, e.g., assuming that only directional scattering occurs. As ray-tracing has been widely used in polygonal rendering to support physically correct and thus realistic images, a few authors have also investigated this direction in the area of volume rendering. Gradients, as used for the gradient-based shading approaches, can also be exploited to realize a volumetric ray-tracer supporting more physically correct illumination. One such approach, considering only first order rays to be traversed, has been proposed by Stegmaier et al. [SSKE05]. With their approach they are able to simulate mirror-like reflections and refraction, by casting a ray and computing its deflection. To simulate mirror-like reflections, they perform an environment texture lookup. More sophisticated refraction approaches have been presented by others [LM05, RC06].

Monte Carlo ray-tracing methods have also been applied in the area of interactive volume rendering in order to produce realistic images. The benefit of these approaches is that they are directly coupled with physically based light transport. The first Monte Carlo based approach was presented by Rezk-Salama [RS07]. It supports single and multiple scattering, whereby interactivity is achieved by constraining the expensive scattering computations to selected surfaces only. However, due to this restriction it is not possible to render translucent materials such as clouds.

More recently, Kroes et al. presented an interactive Monte Carlo ray-tracer which does not limit the scattering to selected boundary surfaces [KPB12]. Their approach simulates several real-world factors influencing volumetric illumination, such as multiple arbitrarily shaped and textured area light sources, a real-world camera with lens and aperture, as well as complex material properties. This combination allows images showing a high degree of realism to be generated (see Figure 23). To adequately represent both volumetric and surface-like materials, their approach uses stochastic sampling with the gradient magnitude as a threshold to determine whether a BRDF or a phase function should be used for material representation. It is restricted to single scattering and uses a progressive approach to allow interactivity. It is shown that the image converges after roughly half a second to one second when parameters such as the clip plane or transfer function are changed. A minor amount of additional data is required in order to perform filtering and noise removal, but the method is independent of the data set size in terms of memory.

Figure 23: Monte Carlo ray-tracing allows gradient-based material and camera properties to be incorporated, which results in highly realistic images. (Image taken from [KPB12], courtesy of Thomas Kroes)
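The stochastic choice between surface-like and volumetric scattering described for [KPB12] can be sketched as follows; the probability model below is our own simplification of the gradient-magnitude threshold, not the published weighting:

```python
import random

def choose_scattering_model(gradient_magnitude, threshold, rng=random.random):
    """Pick 'brdf' (surface-like interaction) with a probability that
    grows with the gradient magnitude, and 'phase' (volumetric
    interaction) otherwise."""
    p_surface = min(1.0, gradient_magnitude / threshold)
    return "brdf" if rng() < p_surface else "phase"
```

In a progressive renderer, this decision is made per scattering event and the resulting samples are averaged over successive frames until the image converges.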

© 2013 The Author(s) © 2013 The Eurographics Association and Blackwell Publishing Ltd.

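The cost profile reported for the spherical-harmonics-based approach (a few extra lookups and scalar products per sample; a change of illumination only requires re-projecting the light) can be sketched as follows. This is our illustrative sketch with SH bands 0–1 only, not the code of the cited papers:

```python
import numpy as np

def sh_basis(w):
    """Real spherical harmonics basis, bands 0-1 (4 coefficients),
    evaluated for a unit direction w."""
    x, y, z = w
    return np.array([
        0.282095,      # Y_0^0
        0.488603 * y,  # Y_1^-1
        0.488603 * z,  # Y_1^0
        0.488603 * x,  # Y_1^1
    ])

def project_light(light_dirs, intensities):
    """Monte-Carlo projection of the incident light into SH coefficients;
    this is the step redone whenever the illumination changes."""
    coeffs = np.zeros(4)
    for w, intensity in zip(light_dirs, intensities):
        coeffs += intensity * sh_basis(w)
    return coeffs * (4.0 * np.pi / len(light_dirs))

def shade(visibility_coeffs, light_coeffs):
    """Per-sample shading reduces to one scalar product between the
    visibility coefficients stored for the sample and the light
    coefficients."""
    return float(np.dot(visibility_coeffs, light_coeffs))
```

The precomputed per-voxel visibility coefficients are what dominates memory consumption, while `shade` explains why the rendering cost per sample stays at a few scalar products.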
While direct volume rendering involves a transfer function that may extract several structures with different extinction and optical properties, in the simplest form of isosurface rendering only a single isosurface with homogeneous optical properties is visible. This drastically reduces the combinations of materials that interact with each other and produce complex lighting effects. Several approaches have been proposed that exploit this reduced complexity and thus allow advanced illumination with higher rendering speeds and sometimes lower memory footprints. Vicinity Shading [Ste03], which simulates illumination of isosurfaces by taking into account neighboring voxels, was the first approach addressing advanced illumination for isosurfaces extracted from volume data. This method has a precomputation step in which the vicinity of each voxel is analyzed and the resulting value, which represents the occlusion of the voxel, is stored in a shading texture that can be accessed during rendering. Wyman et al. [WPSH06] and Beason et al. [BGB∗06] focus on a spherical harmonics based precomputation, supporting the display of isosurfaces under static illumination conditions. Penner et al. exploit an ambient occlusion approach to render smoothly shaded isosurfaces [PM08]. More recently, Banks and Beason have demonstrated how to decouple illumination sampling from isosurface generation, in order to achieve sophisticated isosurface illumination results [BB09].

6. Perceptual Impact

In addition to the technical insights and the achievable illumination effects described above, the perceptual impact of the current techniques is also of relevance. Although the perceptual qualities of advanced illumination techniques are frequently pointed out, remarkably little research has been done to investigate them in a systematic fashion. One study showing the perceptual benefits of advanced volumetric illumination models has been conducted with the goal to analyze the impact of the shadow volume propagation technique [RDRS10b]. The users participating in this study had to perform several tasks depending on the depth perception of static images, where one set was generated using gradient-based shading and the other one using shadow volume propagation. The results of the study indicate that depth perception is more accurate and faster when using shadow volume propagation.

Šoltészová et al. evaluated the perceptual impact of chromatic soft shadows in the context of interactive volume rendering [vPV11]. Inspired by illustrators, they replace the luminance decrease usually present in shadowed areas by a chromaticity blending. In their user study they showed the perceptual benefit of these chromatic shadows with respect to depth and surface perception. However, their results also indicate that brighter shadows might have a negative impact on the perceptual qualities of an image and should therefore be used with consideration.

Šoltészová et al. also evaluated the perception of surface slant for surfaces with Lambertian shading and found that there is a systematic distortion [vTPV12]. They create a prediction model that is used to alter the normal, or gradient, in the shading of the surface such that the slant is perceived as the ground truth. This improves the perception of the surface slant when the angle is between 40–60°. The difference before and after applying their corrective model can be seen in Figure 24.

Figure 24: A CT scan of a mummy displaying a comparison between Lambertian shading with (left), and without (right) perceptual gradient correction. (Image taken from [vTPV12], courtesy of Veronika Šoltészová)

Lindemann and Ropinski presented the results of a user study in which they investigated the perceptual impact of seven volumetric illumination techniques [LR11]. Within their study, depth and volume related tasks had to be performed. They showed that advanced volumetric illumination models make a difference when it is necessary to assess the depth or size of volumetric objects in images. However, they could not find a relation between the time used to perform a certain task and the used illumination model. The results of this study indicate that the directional occlusion shading model has the best perceptual capabilities.

In terms of light source placement, Langer and Bülthoff [LB00] showed that local shape perception increases if a directional light source is angled slightly to the left or right above the viewing direction (11°) compared to placing it below the viewing direction. They further showed that diffuse light produced as good shape perception as light coming from the upper right or left. Caniard and Fleming [CF07] showed that shape perception is heavily dependent on light source position when using a single point light source under local illumination. Creating more complex light setups can increase shape perception [HM03].

As noted before, Wanger [Wan92] showed that the sharpness of shadows increases the accuracy of matching shapes. Interreflections (multiple scattering) were shown by Madison et al. [MTK∗01] to be an important cue, in addition to shadows, for determining the contact of objects. Interreflections were also used by Weigle and Banks [WB08], who showed that physically-based illumination can in some cases increase the perception of 3D flow-field geometry.

7. Future Challenges

Though the techniques presented in this article allow realistic volume rendered images to be generated at interactive frame rates, several challenges still need to be addressed in future work. When reviewing existing techniques it can be noticed that, although the underlying optical models already contain simplifications, there is sometimes a gap between the model and the actual implementation. In the future it is necessary to narrow this gap by providing physically more accurate illumination models. The work done by Kroes et al. [KPB12] and Jönsson et al. [JKRY12] can be considered as first valuable steps in this direction. One important ingredient necessary to provide further realism is the support of advanced material properties. Current techniques only support simplified phase function models and sometimes BRDFs, while more realistic volumetric material functions are not yet integrated.

Another challenging area which needs to be addressed in the future is the integration of several data sources. Although Nguyen et al. [NEO∗10] have presented an approach for integrating MRI and fMRI data, more general approaches potentially supporting multiple modalities are missing. Along these lines, the integration of volumetric and polygonal information also needs more consideration. While the approaches by Schott et al. [SMGB12, SMG∗13] provide an integration in the context of slice based rendering, other volume rendering paradigms are not yet supported.

Apart from that, large data sets still pose a driving challenge. Here, the illumination computation time increases and it is not possible to store large intermediate data sets due to memory limitations. Although memory capacity on GPUs has increased during the last couple of years, the speed at which data can be transferred for calculations has not increased at the same rate. Multiresolution data structures [WGLS05, LLY06] and explicit caches [JGY∗12] could alleviate these problems, but more work in relation to volumetric illumination is needed.

8. Conclusions

Within this article, we have discussed the current state of the art regarding volumetric illumination models for interactive volume rendering. We have classified and explained the covered techniques and related their visual capabilities using a uniform mathematical notation. Thus, the article can be used as both a reference for the existing techniques and for deciding which advanced volumetric illumination approaches should be used in specific cases. We hope that this helps researchers as well as application developers to make the right decisions early on and identify challenges for future work.

Acknowledgments

This work was supported through grants from the Excellence Center at Linköping and Lund in Information Technology (ELLIIT), the Swedish Research Council through the Linnaeus Center for Control, Autonomy, and Decision-making in Complex Systems (CADICS), the Swedish Research Council (VR, grant 2011-4113), and the Swedish e-Science Research Centre (SeRC). Many of the presented techniques have been integrated into the Voreen volume rendering engine (www.voreen.org), which is an open source visualization framework. The authors would like to thank Florian Lindemann for implementing the half angle slicing used to generate Figure 1 (right).

References

[BB09] BANKS D. C., BEASON K.: Decoupling Illumination from Isosurface Generation Using 4D Light Transport. IEEE Transactions on Visualization and Computer Graphics 15, 6 (2009), 1595–1602. 21
[BGB∗06] BEASON K. M., GRANT J., BANKS D. C., FUTCH B., HUSSAINI M. Y.: Pre-computed Illumination for Isosurfaces. In Conference on Visualization and Data Analysis (2006), pp. 1–11. 19, 21
[Bli77] BLINN J. F.: Models of Light Reflection for Computer Synthesized Pictures. Proceedings of SIGGRAPH 11 (1977), 192–198. 7
[BR98] BEHRENS U., RATERING R.: Adding Shadows to a Texture-Based Volume Renderer. In IEEE Int. Symp. on Volume Visualization (1998), pp. 39–46. 4, 15, 26
[CCF94] CABRAL B., CAM N., FORAN J.: Accelerated Volume Rendering and Tomographic Reconstruction using Texture Mapping Hardware. In Proceedings of the 1994 Symposium on Volume Visualization (1994), pp. 91–98. 4, 10
[CF07] CANIARD F., FLEMING R. W.: Distortion in 3D shape estimation with changes in illumination. In Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization (2007), ACM, pp. 99–105. 21
[DE07] DESGRANGES P., ENGEL K.: US patent application 2007/0013696 A1: Fast ambient occlusion for direct volume rendering, 2007. 17
[DEP05] DESGRANGES P., ENGEL K., PALADINI G.: Gradient-Free Shading: A new Method for Realistic Interactive Volume Rendering. In Int. Fall Workshop on Vision, Modeling, and Visualization (2005), pp. 209–216. 5, 13
[DGMF11] DELALANDRE C., GAUTRON P., MARVIE J.-E., FRANÇOIS G.: Transmittance Function Mapping. In Symposium on Interactive 3D Graphics and Games (2011), pp. 31–38. 13
[DK09] DACHSBACHER C., KAUTZ J.: Real-Time Global Illumination. In ACM SIGGRAPH Courses Program (2009). 1



[Figure 25 comprises two scatter plots. In (a), each numbered method is placed by the illumination complexity it can handle (horizontal axis) within its rendering paradigm (Splatting, Texture Slicing, Ray Casting). In (b), each method is placed by how well it scales with image size and with data size.]

Legend: 1 Gradient-based, 2 Local Ambient Occlusion, 3 Dynamic Ambient Occlusion, 4 Half Angle Slicing, 5 Directional Occlusion, 6 Multidirectional Occlusion, 7 Deep Shadow Mapping, 8 Image Plane Sweep, 9 Shadow Splatting, 10 Historygram Photon Mapping, 11 Shadow Volume Propagation, 12 Piecewise Integration, 13 Summed Area Table 3D, 14 Extinction-based Shading, 15 Light Propagation Volumes, 16 Spherical Harmonics, 17 Monte Carlo ray-tracing.

Figure 25: In (a) the illumination complexity a method can handle is shown in relation to its underlying rendering paradigm. Illumination complexity increases from left to right along the horizontal axis, starting with local shading, advancing to a single static light source, and finally ending with dynamic light sources, multiple scattering, and advanced materials. Plot (b) shows how each method scales in terms of performance with respect to large image and data sizes. Techniques in the upper right quadrant perform well independent of image and data size, while methods in the lower left quadrant degrade substantially for large screen and data sizes. Note that distances between methods are not quantifiable and should rather be seen as rough guidelines.

[DVND10] DÍAZ J., VÁZQUEZ P.-P., NAVAZO I., DUGUET F.: Real-Time Ambient Occlusion and Halos with Summed Area Tables. Computers and Graphics 34 (2010), 337–350. 17, 26
[DYV08] DÍAZ J., YELA H., VÁZQUEZ P.-P.: Vicinity Occlusion Maps - Enhanced depth perception of volumetric models. In Computer Graphics Int. (2008), pp. 56–63. 16, 17
[GDM11] GAUTRON P., DELALANDRE C., MARVIE J.-E.: Extinction Transmittance Maps. In SIGGRAPH Asia 2011 Sketches (2011), pp. 7:1–7:2. 13
[GP06] GRIBBLE C. P., PARKER S. G.: Enhancing interactive particle visualization with advanced shading models. In Proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization (2006), pp. 111–118. 1
[GRWS04] GEIST R., RASCHE K., WESTALL J., SCHALKOFF R.: Lattice-Boltzmann lighting. In Proceedings of the Fifteenth Eurographics Conference on Rendering Techniques (2004), Eurographics Association, pp. 355–362. 18
[HAM11] HOSSAIN Z., ALIM U. R., MÖLLER T.: Toward High-Quality Gradient Estimation on Regular Lattices. IEEE Transactions on Visualization and Computer Graphics 17 (2011), 426–439. 7
[HKSB06] HADWIGER M., KRATZ A., SIGG C., BÜHLER K.: GPU-Accelerated Deep Shadow Maps for Direct Volume Rendering. In ACM SIGGRAPH/EG Conference on Graphics Hardware (2006), pp. 27–28. 12, 26
[HLY07] HERNELL F., LJUNG P., YNNERMAN A.: Efficient Ambient and Emissive Tissue Illumination using Local Occlusion in Multiresolution Volume Rendering. In IEEE/EG Int. Symp. on Volume Graphics (2007), pp. 1–8. 8, 9
[HLY08] HERNELL F., LJUNG P., YNNERMAN A.: Interactive


Global Light Propagation in Direct Volume Rendering using Local Piecewise Integration. In IEEE/EG Int. Symp. on Volume and Point-Based Graphics (2008). 16, 19, 26
[HLY09] HERNELL F., LJUNG P., YNNERMAN A.: Local Ambient Occlusion in Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 15, 2 (2009), 548–559. 7, 8, 9, 26
[HM03] HALLE M., MENG J.: LightKit: A lighting system for effective visualization. In IEEE Visualization (2003), IEEE, pp. 363–370. 21
[JB10] JANSEN J., BAVOIL L.: Fourier Opacity Mapping. In SI3D: Symp. on Interactive 3D Graphics (2010), pp. 165–172. 12
[Jen96] JENSEN H. W.: Global illumination using photon maps. In Rendering Techniques (1996), Springer-Verlag, pp. 21–30. 15
[JGY∗12] JÖNSSON D., GANESTAM P., YNNERMAN A., DOGGETT M., ROPINSKI T.: Explicit Cache Management for Volume Ray-Casting on Parallel Architectures. In EG Symposium on Parallel Graphics and Visualization (EGPGV) (2012). 22
[JKRY12] JÖNSSON D., KRONANDER J., ROPINSKI T., YNNERMAN A.: Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping. IEEE Transactions on Visualization and Computer Graphics 18, 12 (2012), 2364–2371. 3, 14, 15, 22, 26
[KD10] KAPLANYAN A., DACHSBACHER C.: Cascaded Light Propagation Volumes for Real-Time Indirect Illumination. In Symposium on Interactive 3D Graphics and Games (2010), I3D '10, ACM, pp. 99–107. 18
[KJL∗12] KRONANDER J., JÖNSSON D., LÖW J., LJUNG P., YNNERMAN A., UNGER J.: Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 18, 3 (2012), 447–462. 6, 19, 20, 26
[KN01] KIM T.-Y., NEUMANN U.: Opacity Shadow Maps. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques (2001), pp. 177–182. 12
[KPB12] KROES T., POST F. H., BOTHA C. P.: Exposure Render: An Interactive Photo-Realistic Volume Rendering Framework. PLoS ONE 7, 7 (2012), e38586. 20, 22, 26
[KPH∗03] KNISS J., PREMOZE S., HANSEN C., SHIRLEY P., MCPHERSON A.: A Model for Volume Lighting and Modeling. IEEE Transactions on Visualization and Computer Graphics 9, 2 (2003), 150–162. 10, 11, 15, 26
[KPHE02] KNISS J., PREMOZE S., HANSEN C., EBERT D.: Interactive Translucent Volume Rendering and Procedural Modeling. In IEEE Visualization (2002), pp. 109–116. 2, 4, 5, 10, 11
[KW03] KRÜGER J., WESTERMANN R.: Acceleration Techniques for GPU-based Volume Rendering. In Proceedings of IEEE Visualization (2003). 4
[LB00] LANGER M. S., BÜLTHOFF H. H.: Depth discrimination from shading under diffuse lighting. Perception 29, 6 (2000), 649–660. 21
[Lev88] LEVOY M.: Display of Surfaces from Volume Data. IEEE Computer Graphics and Applications 8 (1988), 29–37. 6, 7, 26
[LL94] LACROUTE P., LEVOY M.: Fast volume rendering using a shear-warp factorization of the viewing transformation. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (1994), pp. 451–458. 4
[LLY06] LJUNG P., LUNDSTRÖM C., YNNERMAN A.: Multiresolution Interblock Interpolation in Direct Volume Rendering. In IEEE/EG Symposium on Visualization (2006), pp. 259–266. 8, 16, 19, 22
[LM05] LI S., MUELLER K.: Accelerated, High-Quality Refraction Computations for Volume Graphics. In Int. Workshop on Volume Graphics (2005), pp. 73–229. 20
[LR10] LINDEMANN F., ROPINSKI T.: Advanced Light Material Interaction for Direct Volume Rendering. In IEEE/EG Int. Symp. on Volume Graphics (2010), pp. 101–108. 19, 26
[LR11] LINDEMANN F., ROPINSKI T.: About the Influence of Illumination Models on Image Comprehension in Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 1922–1931. 1, 21
[LV00] LOKOVIC T., VEACH E.: Deep shadow maps. In Proceedings of SIGGRAPH (2000), pp. 385–392. 12
[Max95] MAX N.: Optical Models for Direct Volume Rendering. IEEE Transactions on Visualization and Computer Graphics 1, 2 (1995), 99–108. 1, 2, 3, 5
[MC10] MAX N., CHEN M.: Local and Global Illumination in the Volume Rendering Integral. In Scientific Visualization: Advanced Concepts (2010), pp. 259–274. 2, 13
[MMC99] MUELLER K., MÖLLER T., CRAWFIS R.: Splatting Without The Blur. In IEEE Visualization (1999), p. 61. 4
[MR10] MESS C., ROPINSKI T.: Efficient Acquisition and Clustering of Local Histograms for Representing Voxel Neighborhoods. In IEEE/EG Int. Symp. on Volume Graphics (2010), pp. 117–124. 9
[MTK∗01] MADISON C., THOMPSON W., KERSTEN D., SHIRLEY P., SMITS B.: Use of interreflection and shadow for surface contact. Perception & Psychophysics 63, 2 (2001), 187–194. 22
[NEO∗10] NGUYEN T. K., EKLUND A., OHLSSON H., HERNELL F., LJUNG P., FORSELL C., ANDERSSON M. T., KNUTSSON H., YNNERMAN A.: Concurrent Volume Visualization of Real-Time fMRI. In IEEE/EG Int. Symp. on Volume Graphics (2010), pp. 53–60. 8, 22
[NM01] NULKAR M., MUELLER K.: Splatting with Shadows. In Volume Graphics (2001). 14
[PBVG10] PATEL D., BRUCKNER S., VIOLA I., GRÖLLER M. E.: Seismic Volume Visualization for Horizon Extraction. In Proceedings of the IEEE Pacific Visualization Symposium 2010 (2010), pp. 73–80. 8, 11
[PM08] PENNER E., MITCHELL R.: Isosurface Ambient Occlusion and Soft Shadows with Filterable Occlusion Maps. In IEEE/EG Int. Symp. on Volume and Point-Based Graphics (2008), pp. 57–64. 21
[QXF∗07] QIU F., XU F., FAN Z., NEOPHYTOS N., KAUFMAN A., MUELLER K.: Lattice-Based Volumetric Global Illumination. IEEE Transactions on Visualization and Computer Graphics 13 (2007), 1576–1583. 15
[RBV∗08] RUIZ M., BOADA I., VIOLA I., BRUCKNER S., FEIXAS M., SBERT M.: Obscurance-based Volume Rendering Framework. In IEEE/EG Int. Symp. on Volume and Point-Based Graphics (2008), pp. 113–120. 8
[RC06] RODGMAN D., CHEN M.: Refraction in Volume Graphics. Graph. Models 68 (2006), 432–450. 20
[RDRS10a] ROPINSKI T., DÖRING C., REZK SALAMA C.: Advanced Volume Illumination with Unconstrained Light Source Positioning. IEEE Computer Graphics and Applications 30, 6 (2010), 29–41. 16, 26


[RDRS10b] ROPINSKI T., DÖRING C., REZK-SALAMA C.: Interactive Volumetric Lighting Simulating Scattering and Shadowing. In IEEE Pacific Visualization (2010), pp. 169–176. 5, 6, 15, 21, 26
[Rit07] RITSCHEL T.: Fast GPU-based Visibility Computation for Natural Illumination of Volume Data Sets. In Eurographics Short Paper Proceedings 2007 (2007), pp. 17–20. 19, 26
[RKH08] ROPINSKI T., KASTEN J., HINRICHS K. H.: Efficient Shadows for GPU-based Volume Raycasting. In Int. Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (2008), pp. 17–24. 12, 13
[RMSD∗08] ROPINSKI T., MEYER-SPRADOW J., DIEPENBROCK S., MENSMANN J., HINRICHS K. H.: Interactive Volume Rendering with Dynamic Ambient Occlusion and Color Bleeding. Computer Graphics Forum (Eurographics 2008) 27, 2 (2008), 567–576. 7, 8, 9, 26
[RS07] REZK-SALAMA C.: GPU-Based Monte-Carlo Volume Raycasting. In Pacific Graphics (2007). 20
[RSHRL09] REZK SALAMA C., HADWIGER M., ROPINSKI T., LJUNG P.: Advanced Illumination Techniques for GPU Volume Raycasting. In ACM SIGGRAPH Courses Program (2009). 1
[SHC∗09] SMELYANSKIY M., HOLMES D., CHHUGANI J., LARSON A., CARMEAN D. M., HANSON D., DUBEY P., AUGUSTINE K., KIM D., KYKER A., LEE V. W., NGUYEN A. D., SEILER L., ROBB R.: Mapping High-Fidelity Volume Rendering for Medical Imaging to CPU, GPU and Many-Core Architectures. IEEE Transactions on Visualization and Computer Graphics 15 (2009), 1563–1570. 1, 4
[SKS02] SLOAN P., KAUTZ J., SNYDER J.: Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments. In Proceedings of SIGGRAPH (2002), pp. 527–536. 19
[SKvW∗92] SEGAL M., KOROBKIN C., VAN WIDENFELT R., FORAN J., HAEBERLI P.: Fast shadows and lighting effects using texture mapping. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques (1992), Proceedings of SIGGRAPH, pp. 249–252. 12
[SMG∗13] SCHOTT M., MARTIN T., GROSSET A. P., SMITH S. T., HANSEN C. D.: Ambient Occlusion Effects for Combined Volumes and Tubular Geometry. IEEE Transactions on Visualization and Computer Graphics 19, 6 (2013), 913–926. 11, 22
[SMGB12] SCHOTT M., MARTIN T., GROSSET A., BROWNLEE C.: Combined Surface and Volumetric Occlusion Shading. In Proceedings of Pacific Vis (2012). 11, 22
[SMP11] SCHLEGEL P., MAKHINYA M., PAJAROLA R.: Extinction-based Shading and Illumination in GPU Volume Ray-Casting. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 1795–1802. 5, 17, 18, 26
[SPH∗09] SCHOTT M., PEGORARO V., HANSEN C., BOULANGER K., BOUATOUCH K.: A Directional Occlusion Shading Model for Interactive Direct Volume Rendering. Computer Graphics Forum (Proceedings of Eurographics/IEEE VGTC Symposium on Visualization) 28, 3 (2009), 855–862. 4, 6, 11, 12, 26
[SSKE05] STEGMAIER S., STRENGERT M., KLEIN T., ERTL T.: A Simple and Flexible Volume Rendering Framework for Graphics-Hardware-Based Raycasting. In Int. Workshop on Volume Graphics (2005), 187–241. 20
[Ste03] STEWART A. J.: Vicinity Shading for Enhanced Perception of Volumetric Data. In IEEE Visualization (2003), pp. 355–362. 21
[SYR11] SUNDÉN E., YNNERMAN A., ROPINSKI T.: Image Plane Sweep Volume Illumination. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2125–2134. 4, 5, 6, 13, 26
[vPBV10] ŠOLTÉSZOVÁ V., PATEL D., BRUCKNER S., VIOLA I.: A Multidirectional Occlusion Shading Model for Direct Volume Rendering. Computer Graphics Forum (Eurographics/IEEE VGTC Symp. on Visualization 2010) 29, 3 (2010), 883–891. 4, 11, 12, 26
[vPV11] ŠOLTÉSZOVÁ V., PATEL D., VIOLA I.: Chromatic Shadows for Improved Perception. In Proceedings of Non-Photorealistic Animation and Rendering (NPAR) (2011), pp. 105–115. 21
[vTPV12] ŠOLTÉSZOVÁ V., TURKAY C., PRICE M. C., VIOLA I.: A Perceptual-Statistics Shading Model. IEEE Transactions on Visualization and Computer Graphics 18, 12 (2012), 2265–2274. 21
[Wan92] WANGER L. C.: The effect of shadow quality on the perception of spatial relationships in computer generated imagery. In SI3D: Symp. on Interactive 3D Graphics (1992), pp. 39–42. 1, 16, 22
[WB08] WEIGLE C., BANKS D.: A comparison of the perceptual benefits of linear perspective and physically-based illumination for display of dense 3D streamtubes. IEEE Transactions on Visualization and Computer Graphics 14, 6 (2008), 1723–1730. 22
[WFG92] WANGER L. C., FERWERDA J. A., GREENBERG D. P.: Perceiving Spatial Relationships in Computer-Generated Images. IEEE Comput. Graph. Appl. 12, 3 (1992), 44–51, 54–58. 1, 16
[WGLS05] WANG C., GAO J., LI L., SHEN H.-W.: A Multiresolution Volume Rendering Framework for Large-Scale Time-varying Data Visualization. In Int. Workshop on Volume Graphics (June 2005), pp. 11–223. 22
[Wil78] WILLIAMS L.: Casting curved shadows on curved surfaces. In Proceedings of SIGGRAPH (1978), pp. 270–274. 12
[WPSH06] WYMAN C., PARKER S., SHIRLEY P., HANSEN C.: Interactive Display of Isosurfaces with Global Illumination. IEEE Transactions on Visualization and Computer Graphics 12 (2006), 186–196. 21
[ZC02] ZHANG C., CRAWFIS R.: Volumetric Shadows Using Splatting. In IEEE Visualization (2002), pp. 85–92. 4, 14, 26
[ZC03] ZHANG C., CRAWFIS R.: Shadows and Soft Shadows with Participating Media Using Splatting. IEEE Transactions on Visualization and Computer Graphics 9, 2 (2003), 139–149. 4, 14, 26
[ZIK98] ZHUKOV S., IONES A., KRONIN G.: An Ambient Light Illumination Model. In EGRW: Proceedings of the Eurographics Workshop on Rendering (1998), pp. 45–55. 8
[ZM13] ZHANG Y., MA K.-L.: Fast Global Illumination for Interactive Volume Visualization. In SI3D: Symp. on Interactive 3D Graphics (2013). 18, 19, 26
[ZRL∗08] ZHOU K., REN Z., LIN S., BAO H., GUO B., SHUM H.-Y.: Real-Time Smoke Rendering Using Compensated Ray Marching. ACM Trans. Graph. 27, 3 (Aug. 2008), 36:1–36:12. 19
[ZXC05] ZHANG C., XUE D., CRAWFIS R.: Light Propagation for Mixed Polygonal and Volumetric Data. In CGI: Proceedings of the Computer Graphics International (2005), pp. 249–256. 14, 26


Table 1: Comparison between the different illumination methods.

Method | Light Type(s) | Light Location | # | Scattering
Gradient-based [Lev88] | D, P | None | N | S (local)
Local Ambient Occlusion [HLY09] | D, P, AMB | None | N | S (local)
Dynamic Ambient Occlusion [RMSD∗08] | D, P, AMB | None | N | S
Half Angle Slicing [KPH∗03] | D, P | Outside | 1 | S, M
Directional Occlusion [SPH∗09] | P | Head | 1 | S, M
Multidirectional Occlusion [vPBV10] | P | View Hemisphere | N | S, M
Deep Shadow Mapping [HKSB06] | D, P | Outside | 1 | S, M
Image Plane Sweep [SYR11] | D, P | None | 1 | S, M
Shadow Splatting [ZC02, ZC03, ZXC05] | D, P, A | Outside | 1 | S, M
Historygram Photon Mapping [JKRY12] | D, P, A, T | None | N | S, M
Piecewise Integration [HLY08] | D, P | None | 1 | S
Shadow Volume Propagation [BR98, RDRS10b, RDRS10a] | D, P, A | None | 1 | S, M
Summed Area Table 3D [DVND10] | AMB | - | - | S
Extinction-based Shading [SMP11] | D, P, A, AMB | None | N | S
Light Propagation Volumes [ZM13] | D, P | None | N | S, M
Spherical Harmonics [Rit07, LR10, KJL∗12] | D, P, A, T, AMB | Outside | N | S, M
Monte Carlo ray-tracing [KPB12] | D, P, A, T, AMB | None | N | S

Legend: Light Types: D (Directional), P (Point), A (Area), T (Textured), AMB (Ambient). Light Source #: number of supported light sources, - (not applicable), N (many). Scattering: S (Single), M (Multiple).

[The original table additionally rates Refresh Time (render time and update time, with light-source and transfer-function update triggers), Memory Consumption, and SNR Independence (sensitivity to noisy data) using pictographic symbols that are not reproduced here.]