ADVANCED VOLUME RENDERING

DISSERTATION

Presented in Partial Fulfillment of the Requirements for

The Degree Doctor of Philosophy in the Graduate

School of The Ohio State University

By

Caixia Zhang, M.S.

* * * * *

The Ohio State University 2006

Dissertation Committee:

Professor Roger Crawfis, Adviser

Professor Raghu Machiraju

Professor Han-Wei Shen

Approved by

Adviser
Graduate Program in Computer Science and Engineering

ABSTRACT

Although many advances have been achieved within the community in the last decade, many challenging problems are still open in volume rendering: high-dimensional rendering, time-varying datasets, large datasets, complex flow fields, improvement of rendering accuracy, fidelity and interactivity, interdisciplinary research with other application communities, and so on.

In this thesis, we study three challenging topics in advanced volume rendering: a volumetric shadow and soft shadow algorithm for generating more realistic scenes; interval volumes and time-varying interval volumes for structured and unstructured grids; and implicit flow fields for three-dimensional flow visualization.

A shadow is a region of relative darkness within an illuminated region caused by an object totally or partially occluding a light source. Shadows are essential to realistic and informative scenes. In volume rendering, the shadow calculation is difficult because the light intensity is attenuated as the light traverses the volume. We investigate a new shadow algorithm that properly determines the light attenuation and generates shadows for volumetric datasets, by using a 2D shadow buffer to keep track of the light attenuation through the volumetric participating media. Our shadow algorithm can generate accurate shadows with low storage requirements. The generation of soft shadows is a challenging

topic which requires integrating the contributions of extended light sources to the illumination of objects. We have extended our shadow algorithm to deal with extended light sources and generate volumetric soft shadows with an analytic method using a convolution technique. This shadow and soft shadow algorithm has also been applied to mixed scenes of volumetric and polygonal objects. Multiple light scattering is also incorporated into our model.

The interval volume algorithm is a region-of-interest extraction algorithm for steady and time-varying three-dimensional structured and unstructured grids. A high-dimensional iso-surface algorithm is used to construct interval volumes. The algorithm is independent of the dimension and topology of the polyhedral cells comprising the grid, and thus offers an excellent enhancement for volume rendering of unstructured grids. We present several new rendering operations to provide effective visualizations of the 3D scalar field, and illustrate the use of interval volumes to highlight contour boundaries or material interfaces. This interval volume technique has been extended to four dimensions to extract time-varying interval volumes, using five-dimensional iso-contour construction.

The time-varying interval volumes are rendered directly, from 4-simplices to image space, allowing us to visualize the integrated interval volumes over the time period and see how interval volumes change over time in a single view.

Three-dimensional flow visualization is a challenging topic due to clutter, occlusion, and lack of depth perception cues in three dimensions. We propose an implicit flow field method to visualize 3D flow fields. An implicit flow field is first extracted using an advection operator on the flow, with a set of flow-related attributes stored. Two techniques are then employed to render the implicit flow field: a slice-based three-


dimensional approach and an interval volume approach. In the first technique, the implicit flow representation is loaded as a 3D texture and manipulated using a dynamic texture operation that allows the flow to be investigated interactively. In the second technique, a geometric flow volume is extracted from the implicit flow and rendered using the projected tetrahedron method implemented with the graphics hardware. With the second technique, we can achieve a complete system which can render streamlines, time-lines, stream surfaces, time surfaces and stream volumes together.


Dedicated to my daughter, my husband and my parents


ACKNOWLEDGMENTS

I would like to express my most sincere appreciation and gratitude to my adviser,

Dr. Roger Crawfis, for his guidance, encouragement and solid support, which made this dissertation possible. It is his enthusiasm and guidance that brought me to this new field of visualization. His insights, research motivation and solid professional knowledge guided me through many challenges. Without his guidance and assistance, this dissertation would not have been possible.

I would also like to thank Dr. Raghu Machiraju and Dr. Han-Wei Shen for giving me valuable suggestions during my research, reading my dissertation and serving on my defense committee. Thanks are also due to Dr. Rephael Wenger for his valuable guidance and sample code for the time-varying interval volume extraction.

Special thanks go to my colleague Daqing Xue. We worked together and developed ideas and implementations through numerous discussions during my study and research in the graphics group. I also want to thank him for his help with the graphics hardware implementation.

I wish to thank Ming Jiang, Chaoli Wang, Guangfeng Ji, Liya Li, and all the other students in the graphics group for their open discussions and valuable help. I spent very happy years with them during my graduate study.


VITA

July 10, 1973……………………………………Born – Taiyuan, Shanxi Province, China

1994…………………………………………… B.S. University of Science and Technology Beijing, China

1997…………………………………………….M.S. University of Science and Technology Beijing, China

September 1998 – August 1999………………..University Fellowship, The Ohio State University

September 1999 – March 2001…………………Graduate Research Associate, The Ohio State University

2001…………………………………………….M.S. Industrial and Systems Engineering, The Ohio State University

2002…………………………………………….M.S. Computer and Information Science, The Ohio State University

June – September, 2005………………………..Summer Intern, Siemens Medical Solutions, Princeton, NJ

September 2001 – present……………………...Graduate Research and Teaching Associate, The Ohio State University

PUBLICATIONS

Research Publication

1. Caixia Zhang, Praveen Bhaniramka, Daqing Xue, Roger Crawfis, Rephael Wenger, “Interval Volumes: Scalar Representations for Static and Time-Varying Data”, submitted for journal publication (2006).


2. Roger Crawfis, Leila De Floriani, Michael Lee, Caixia Zhang, “Modeling and Rendering Time-varying Scalar Fields”, submitted to Dagstuhl 2005 Proceedings.

3. Caixia Zhang, Daqing Xue, Roger Crawfis, Rephael Wenger, “Time-Varying Interval Volumes”, International Workshop on Volume Graphics 2005, pp.99-107, 2005.

4. Caixia Zhang, Roger Crawfis, “Light Propagation for Mixed Polygonal and Volumetric Data”, Computer Graphics International 2005, pp.249-256, 2005.

5. Daqing Xue, Caixia Zhang, Roger Crawfis, “iSBVR: Isosurface-aided Hardware Acceleration Techniques for 3D Slice-Based Volume Rendering”, International Workshop on Volume Graphics 2005, pp.207-215, 2005.

6. Praveen Bhaniramka, Caixia Zhang, Daqing Xue, Roger Crawfis, Rephael Wenger, “Volume Interval Segmentation and Rendering”, IEEE Volume Visualization 2004 Symposium, pp.55-62, 2004 (Best Paper).

7. Daqing Xue, Caixia Zhang, Roger Crawfis, “Rendering Implicit Flow Volumes”, IEEE Visualization 2004, pp.99-106, 2004.

8. Roger Crawfis, Daqing Xue, Caixia Zhang, “Volume Rendering Using Splatting, A Tutorial and Survey”, Visualization Handbook, eds. Charles Hansen, Christopher Johnson, Academic Press, 2004.

9. Ming Jiang, Naeem Shareef, Caixia Zhang, Roger Crawfis, Raghu Machiraju, Han-Wei Shen, “Visualization Fusion: Hurricane Isabel Dataset”, Technical Report OSU-CISRC-10/04- TR59, The Ohio State University, October 2004.

10. Caixia Zhang, Roger Crawfis, “Shadows and Soft Shadows with Participating Media Using Splatting”, IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 2, pp.139-149, 2003.

11. Caixia Zhang, Roger Crawfis, “Volumetric Shadows Using Splatting”, IEEE Visualization 2002, pp.85-92, 2002.

FIELDS OF STUDY

Major Field: Computer Science and Engineering

Minor Field: Computer Networking

Minor Field: Computer Architecture


TABLE OF CONTENTS

Page

Abstract……………………………………………………………………………………ii

Dedication…………………………………………………………………………….…...v

Acknowledgments………………………………………………………………………..vi

Vita………………………………………………………………………………………vii

List of Tables…………………………………………………………………………….xii

List of Figures…………………………………………………………………………...xiii

Chapters:

1. Introduction………………………………………………………………………..1

1.1 General introduction…………………………………………………………..1
1.2 Introduction to volume rendering……………………………………………..2
1.3 Challenges and strategies……………………………………………………...8
1.3.1 Volumetric shadows and soft shadows……………………………...8
1.3.2 Interval volumes and time-varying interval volumes……………...10
1.3.3 Three-dimensional flow visualization……………………………...11
1.4 Contributions………………………………………………………………....13
1.5 Overview of dissertation……………………………………………………..17

2. Volumetric lighting models……………………………………………………...18

2.1 Introduction…………………………………………………………………..18
2.2 Previous work………………………………………………………………..19
2.2.1 Shadow algorithms…………………………………………………19
2.2.2 Multiple light scattering……………………………………………28
2.2.3 Rendering of mixed volumetric and polygonal objects……………30
2.3 Summary of volumetric shadow algorithm…………………………………..31
2.3.1 Illumination models………………………………………………..31
2.3.2 Implementation of volumetric shadows……………………………33
2.3.3 Textured projective lights………………………………………….45
2.3.4 Multiple light sources……………………………………………...51
2.4 Volumetric soft shadow algorithm…………………………………………...53
2.4.1 Volumetric soft shadow algorithm………………………………....53
2.4.2 Discussion of volumetric soft shadow algorithm…………………..60
2.4.3 Volumetric soft shadow results…………………………………….64
2.5 Volumetric shadows and soft shadows for mixed volumetric and polygonal scenes……………………………………………………………..68
2.5.1 Shadow algorithm………………………………………………….68
2.5.2 Soft shadow algorithm……………………………………………..77
2.6 Multiple light scattering……………………………………………………..79
2.7 Conclusion…………………………………………………………………...86

3. Time-varying interval volumes…………………………………………………..88

3.1 Introduction…………………………………………………………………..88
3.2 Previous work………………………………………………………………..89
3.2.1 Unstructured volume rendering……………………………………90
3.2.2 Interval volumes……………………………………………………90
3.2.3 High-dimensional visualization……………………………………91
3.3 Review of high-dimensional iso-surfacing algorithm and interval volume computation………………………………………………………………….92
3.4 Visualization techniques of 3D interval volumes……………………………96
3.4.1 Intervals with textured boundary surfaces…………………………99
3.4.2 Multi-attribute datasets…………………………………………....101
3.5 Direct rendering of time-varying interval volumes………………………...102
3.5.1 Projection of 4-simplices to 3D…………………………………..105
3.5.2 Classification of projected 4-simplices…………………………...105
3.5.3 Tetrahedralization of projected 4-simplices………………………108
3.5.4 Projection of 3-simplices to image space…………………………117
3.6 Visualization techniques of time-varying interval volumes………………..119
3.6.1 Temporal color encoding…………………………………………119
3.6.2 Highlighted boundaries…………………………………………...123
3.7 Results and Analysis………………………………………………………..126
3.8 Conclusion………………………………………………………………….131


4. Implicit flow fields……………………………………………………………...133

4.1 Introduction to flow visualization…………………………………………..133
4.2 Related work………………………………………………………………..134
4.2.1 Geometry techniques……………………………………………..134
4.2.2 Texture techniques………………………………………………..135
4.2.3 Volume rendering with embedded textures………………………136
4.2.4 Hybrid techniques………………………………………………...137
4.2.5 Implicit methods………………………………………………….137
4.3 Research motivation and framework……………………………………….138
4.4 Construction of the implicit flow field……………………………………...141
4.5 Rendering of implicit flow fields using the interval volume approach…….145
4.5.1 Surface and textures……………………………………..148
4.5.2 Textured stream surface boundaries……………………………...149
4.5.3 Time surface rendering…………………………………………...153
4.5.4 Time clipping……………………………………………………..155
4.6 Rendering of implicit flow fields using 3D texture mapping approach…….156
4.7 Comparison of rendering techniques……………………………………….161
4.8 Conclusion and future work………………………………………………...164

Bibliography……………………………………………………………………………167


LIST OF TABLES

Table Page

2.1 Shadow determination of three regions………………………………………….72

3.1 Original decomposition of the projected 4-simplices…………………………..109

3.2 Interval volume lookup table statistics…………………………………………127

3.3 3D Interval volume computation and rendering performance………………….129

3.4 4D interval volume computation and rendering performance………………….129

3.5 Comparison of the number of extracted simplices……………………………..130

3.6 Comparison of the number of decomposed 3-simplices………………………..131

4.1 Comparison of the flow volume visualization techniques……………………...161


LIST OF FIGURES

Figure Page

2.1 Soft shadows……………………………………………………………………..23

2.2 Illumination model…………………………………………………………….…32

2.3 The light attenuation model……………………………………………………...34

2.4 Determining the opacity value for the considered ………………………....35

2.5 Sphere with and without shadows……………………………………………….37

2.6 A robot with the shadow………………………………………………………....38

2.7 Rings with shadows……………………………………………………………...39

2.8 A smoky room with a cube inside……………………………………………….39

2.9 A room scene with shadows……………………………………………………..40

2.10 A scene of a HIPIP data set……………………………………………………...41

2.11 Curved shadows on curved objects………………………………………………41

2.12 A hypertextured object with the shadow………………………………………...42

2.13 Teddy Bear……………………………………………………………………….43

2.14 Bonsai Tree………………………………………………………………………43

2.15 uncBrain with shadow……………………………………………………………44

2.16 Room scene with light coming in from the back………………………………...44

2.17 A of projective textured light models………………………………...45


2.18 A scene with the shadow for a light screen with ring pattern…………………....47

2.19 A cube with the shadow for a light screen with stripe pattern…………………...47

2.20 A CVG sphere for a parallel area light with grid texture………………………...48

2.21 HIPIP dataset with grid pattern…………………………………………………..48

2.22 A room scene with “light window”……………………………………………...49

2.23 A room scene for a light screen with an image of OSU logo……………………49

2.24 A scene with three light beams that pass through the cube……………………...50

2.25 A scene with a beam of light that passes through the rectangular parallelepiped.51

2.26 Half-way vector for multiple light sources………………………………………52

2.27 A robot with two light sources…………………………………………………...53

2.28 A schematic of the light source, the occluder and the receiver………………….56

2.29 A schematic of the shadow region with respect to the light source……………..58

2.30 Soft shadow algorithm in slice-based volume rendering………………………...58

2.31 Construction of a virtual light source……………………………………………60

2.32 Computed and exact penumbra regions………………………………………….61

2.33 Soft shadows of rings…………………………………………………………….64

2.34 Robots……………………………………………………………………………65

2.35 Soft shadow passing through the translucent rectangular parallelepiped………..66

2.36 Soft shadows of Bonsai tree……………………………………………………..66

2.37 Soft shadow of a hypertextured object…………………………………………..67

2.38 A scene with a beam of light that passes through a rectangular parallelepiped, with soft shadows implemented………………………………………………….68


2.39 Position relationship between polygons and volume with respect to the slicing direction………………………………………………………………………….70

2.40 Flow of the shadow algorithm combining volumes and polygons………...73

2.41 Bonsai tree and mushrooms……………………………………………………...74

2.42 Bonsai tree and mushrooms……………………………………………………...74

2.43 Shadows of mushrooms………………………………………………………….75

2.44 Shadows of rings and a dart……………………………………………………...75

2.45 A scene of a teapot inside a translucent cube……………………………………76

2.46 A scene of a desk inside a smoky room……………………………………….…76

2.47 The whole region rendered slice by slice………………………………………...77

2.48 Soft shadows of mushrooms………………………………………………….….79

2.49 Soft shadows of Bonsai tree and mushrooms……………………………………79

2.50 Clouds without multiple scattering………………………………………………80

2.51 A schematic of light transport……………………………………………………82

2.52 Clouds with multiple forward scattering only…………………………………...84

2.53 Clouds with both multiple forward scattering and multiple back scattering…….84

2.54 An airplane flying above the clouds……………………………………………..85

2.55 Indirect scattering………………………………………………………………..85

3.1 Two-dimensional of the interval volume algorithm…………………93

3.2 Four-dimensional cell for 3D interval volumes………………………………….94

3.3 Linear colored interval volume of two views……………………………………96


3.4 Multiple constant colored intervals………………………………………………97

3.5 Interval volumes extracted by progressively increasing the mean interval value.98

3.6 Prioritized intervals………………………………………………………………99

3.7 Interval volume with boundary surface highlighted……………………………100

3.8 Boundary surface with 2D texture mapping……………………………………100

3.9 Interval volume computed using density but rendered using energy…………..101

3.10 Intersection of interval volumes for two attributes……………………………..102

3.11 Results of time slicing…………………………………………………………..104

3.12 Classification of projected 4-simplex…………………………………………..106

3.13 Flow chart of the classification of the projected 4-simplex…………………….107

3.14 Incorrect rendering result of a constant plate in four dimensions………………110

3.15 Classification and tetrahedralization of projected 4-simplex…………………...114

3.16 Twenty-four projected 4-simplcies in 3D for a constant colored hypercube…...115

3.17 Projected tetrahedral components and ∆t distribution inside a constant tetrahedron …………………………………………………………………………..116

3.18 Components of a constant prism………………………………………………..117

3.19 A constant plate in four dimensions ……………………………………………119

3.20 Direct rendering result of a time-varying interval volume……………………..120

3.21 Temporal color encoding for two time steps…………………………………...120

3.22 Time-varying interval volume for the delta dataset…………………………….121

3.23 Time-varying interval volumes for vortex dataset (two time steps)……………121


3.24 Time-varying interval volumes for the NASA Tapered Cylinder dataset……...122

3.25 Time-varying interval volumes for vortex dataset (three time steps)…………..122

3.26 Temporal color encoding for three time steps………………………………….123

3.27 Time-varying interval volume with four iso-surfaces highlighted……………..124

3.28 Time-varying ……………………………………………………….124

3.29 Interval volumes………………………………………………………………...125

3.30 Two interval volumes at t1 and t2 for the Tapered Cylinder dataset are rendered using MIP……………………………………………………………………….126

4.1 Visualization framework of the implicit flow field…………………………….141

4.2 A to show the construction of an implicit flow field………………….143

4.3 Visualization for van Wijk’s implicit stream surface and our implicit flow volume…………………………………………………………………………..145

4.4 Two examples of the inflow mapping………………………………………….146

4.5 Interval volume with a boundary isosurface……………………………………148

4.6 A stream surface inside the flow is textured using a 3D LIC texture…………..150

4.7 Textured stream surfaces……………………………………………………….151

4.8 Streamlines using a non-averaged 1D mip-map texture………………………..152

4.9 A coupled-charge dataset rendered using interval volumes with five stream surfaces textured by streamline-like texture……………………………………152

4.10 Isabel Hurricane dataset rendered using interval volumes with four stream surfaces textured by streamline-like texture…………………………………....153

4.11 Textured time surfaces………………………………………………………….154

4.12 Flow volumes with three textured time surfaces……………………………….154


4.13 (a) Truncated time surfaces and stream surface; (b) Three time surfaces and one stream surface…………………………………………………………………..156

4.14 Visualization diagram of the implicit flow field using slice-based 3D texture approach………………………………………………………………………...157

4.15 Hand-painted image as inflow textures to advect through the volume…………158

4.16 Imported image as inflow textures to advect through the volume……………...158

4.17 Inflow texture and flow volume of the Isabel Hurricane dataset……………….159

4.18 Dual inflow textures…………………………………………………………….160


CHAPTER 1

INTRODUCTION

1.1 General Introduction

Visualization is a computational process that converts raw data into graphical representations, in order to understand, analyze, and explore the data. The data can come from different sources: device measurement (e.g. CT, MRI, PET, and ultrasound data), scientific simulations (e.g. finite element method and computational fluid dynamics), or real world applications (e.g. commercial, financial and economic data). Scientific visualization mainly deals with the data from device measurement and scientific simulations. The data can be organized using different structures: regular grids, curvilinear grids or unstructured grids (e.g. tetrahedral grids). Also the data can represent scalar fields or vector fields, and can be static or time-varying data.

Volume rendering is used to display 3-dimensional fields, such as density, pressure, temperature, or velocity fields. There are five popular volume rendering algorithms: raycasting, splatting, shear-warp, cell projection, and hardware-assisted 3D texture mapping. Although a lot of progress has been made in the field of volume rendering in the last decade, many challenging problems are still open in volume rendering: high-


dimensional rendering, time-varying datasets, large datasets, complex flow fields, improvement of rendering accuracy, fidelity and interactivity, and interdisciplinary research with other application communities.

In this dissertation, I study three challenging topics in advanced volume rendering:

(1) volumetric light propagation models, including volumetric shadow and soft shadow algorithms as well as multiple light scattering, in order to generate more realistic scenes;

(2) interactive region-of-interest rendering of unstructured grids and direct rendering of time-varying data in unstructured grids; (3) interactive three-dimensional flow visualization to show the flow details effectively and achieve user-guided flow representations and appearance.

1.2 Introduction to Volume Rendering

Volume rendering considers the scattering of light as it traverses through a participating media. It is most commonly used in the scientific visualization community to represent abstract quantities like pressure, temperature or wind velocity.

There are two ways to perform the volume rendering:

(1) Indirect volume rendering (IVR): the volumetric data is first converted into

polygonal representations and then rendered with polygon rendering techniques.

One example of IVR is the generation of iso-surfaces using the marching cube

algorithm [86].


(2) Direct volume rendering (DVR): the volumetric data is directly rendered without

the intermediate conversion step. In many cases, there is no guarantee that the iso-

surfaces can provide a complete description of the true structure in the 3D field,

due to the complexity in the data set and noise that cannot be completely filtered

out. Also indirect volume rendering cannot deal with amorphous phenomena. In

these cases, direct volume rendering is more efficient.

There are five popular direct volume rendering algorithms: ray-casting, splatting, shear-warp, cell projection and hardware-assisted 3D texture mapping.

Of all volume rendering algorithms, many publications are on raycasting

[135][79][80]. In raycasting, a ray is cast into the data set from each pixel on the image plane. Along the ray, samples are taken and subsequently composited in depth order to get the final color at the pixel. Researchers have used pre-classification [79][80] as well as post-classification [53][5][131][132]. The density and gradient in post-classification, or color and opacity in pre-classification, are generated via point sampling, most commonly by means of a trilinear interpolation. Most implementations space the ray samples at equal distance. Several optimizations have been adopted to accelerate raycasting.

Early ray termination stops the ray traversal when the accumulated opacity has reached full opacity, speeding up the performance of raycasting. Also, space leaping

[34][165] is applied for accelerated traversal of empty regions. For strict isosurface rendering, recent research analytically computes the location of the isosurface, based on the data values in a local neighborhood [111]. Raycasting is also used to render volumetric data defined on unstructured grids [145].
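As a rough illustration of front-to-back compositing with early ray termination, the following simplified sketch (Python with NumPy; the function name and the pre-classified per-sample colors and opacities are illustrative assumptions, not the implementations cited above) accumulates samples along one ray and stops once the accumulated opacity is nearly full:

    import numpy as np

    def composite_ray(colors, opacities, opacity_cutoff=0.99):
        # colors: (n, 3) pre-classified RGB samples along the ray, front to back
        # opacities: (n,) per-sample opacities in [0, 1]
        accum_color = np.zeros(3)
        accum_alpha = 0.0
        for c, a in zip(colors, opacities):
            accum_color += (1.0 - accum_alpha) * a * c   # front-to-back "over" operator
            accum_alpha += (1.0 - accum_alpha) * a
            if accum_alpha >= opacity_cutoff:            # early ray termination
                break
        return accum_color, accum_alpha

    cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    alphas = np.array([0.4, 0.5, 0.9])
    print(composite_ray(cols, alphas))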


Splatting was proposed by Westover [149] and its basic principles are: (1) represent the volume as an array of overlapping basis functions with amplitudes scaled by the voxel values; (2) project these basis functions to the screen to achieve an approximation of the volume integral. A major advantage of splatting is that only relevant voxels are projected and rasterized. This can tremendously reduce the volume data that needs to be processed and stored.

The early splatting approach [149] summed the voxel kernels within volume slices most parallel to the image plane. This results in the popping problem (i.e. severe brightness variation) in animated viewing when the parallel plane changes. Mueller et al.

[101] eliminates this popping drawback by aligning the sheets to be parallel to the image plane. This splatting method is called image-aligned sheet-based splatting. All the voxel kernels that overlap a slab are clipped to the slab and summed into a sheet buffer. The sheet buffers are composited front-to-back to form the final image. While this significantly improves image quality, it requires much more compositing and several footprint sections per voxel to be scan-converted. Using a front-to-back traversal, this method can make use of the culling of occluded voxels by keeping an occlusion map and checking whether the pixels that a voxel projects to have reached full opacity [54].
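The sheet-buffer idea with occlusion-based culling can be sketched as follows (a simplified Python/NumPy illustration: the footprint is reduced to adding a small precomputed kernel at each voxel's projected pixel, and the slab organization is assumed to be given, so this is not the actual splat rasterization):

    import numpy as np

    def splat_sheets(slabs, width, height, kernel, cutoff=0.99):
        # slabs: front-to-back list of voxel lists; each voxel is (px, py, color, alpha)
        # kernel: small 2D footprint array (assumed odd-sized); px, py assumed in bounds
        kh, kw = kernel.shape
        frame_rgb = np.zeros((height, width, 3))
        frame_a = np.zeros((height, width))
        for slab in slabs:
            sheet_rgb = np.zeros((height, width, 3))
            sheet_a = np.zeros((height, width))
            for px, py, color, alpha in slab:
                if frame_a[py, px] >= cutoff:            # cull occluded voxel
                    continue
                y0, x0 = py - kh // 2, px - kw // 2
                ys = slice(max(y0, 0), min(y0 + kh, height))
                xs = slice(max(x0, 0), min(x0 + kw, width))
                w = kernel[ys.start - y0:ys.stop - y0, xs.start - x0:xs.stop - x0]
                sheet_a[ys, xs] += alpha * w             # sum kernels within the sheet
                sheet_rgb[ys, xs] += alpha * w[..., None] * np.asarray(color, dtype=float)
            sheet_a = np.clip(sheet_a, 0.0, 1.0)
            frame_rgb += (1.0 - frame_a)[..., None] * sheet_rgb   # composite sheet front-to-back
            frame_a += (1.0 - frame_a) * sheet_a
        return frame_rgb, frame_a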

Some further research has been done based on the image-aligned sheet-based splatting. Traditional splatting classifies and shades the voxels prior to projection.

Projecting the fuzzy color voxels provides a uniform screen image for homogeneous object regions, but leads to a blurry appearance of object edges. Mueller et al. [103] solves this blur problem by performing the classification and shading process after the


voxels have been projected onto the screen, which is called post classification.

Perspective projection leads to non-uniform sampling, which results in potentially severe aliasing artifacts. Mueller et al. [100] introduces an anti-aliasing extension to the basic splatting algorithm by scaling the splats so that the splats become larger with increasing distance from the view plane. Huang [54] explores a fast software alternative to splat rasterization called FastSplat, to accelerate the footprint rasterization in splatting.

Recently, some researchers are working on the implementation of splatting using modern graphics hardware. Xue et al. [160] presents three techniques: immediate mode rendering, vertex rendering, and point convolution rendering, to implement splatting using a GeForce4 graphics card. Neophytou et al. [104] implemented GPU accelerated image-aligned splatting. They take advantage of the early z-culling hardware support to achieve the early elimination of hidden splats and the skipping of empty buffer-space.

Shear-warp was proposed by Lacroute and Levoy [72] to achieve an extremely fast software renderer. It achieves this by employing a clever volume and image encoding scheme, coupled with a simultaneous traversal of volume and image that skips opaque regions and transparent voxels. In a pre-processing step, voxel runs are encoded using run-length encoding based on pre-classified opacities. This requires the construction of a separate encoded volume for each of the three major viewing directions. The rendering is similar to raycasting, but it is simplified by shearing the appropriate encoded volume such that the rays are perpendicular to the volume slices. The rays obtain their sample values via bilinear interpolation within the traversed volume slices. A final warping step


is to transform the volume-parallel base plane image into the screen image. For shear-warping, the sampling interval distance is view-dependent: 1.0 for axis-aligned views,

1.41 for edge-on views, and 1.73 for corner-on views. Thus the Nyquist theorem is potentially violated. Since larger viewports are achieved by bilinear interpolation, the image quality is very low if the resolution of the viewport is significantly larger than the volume resolution.
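The quoted spacings follow directly from sampling once per unit-spaced slice along the principal viewing axis; a small sketch (Python/NumPy, illustrative only, with a hypothetical function name) reproduces them:

    import numpy as np

    def sample_spacing(view_dir):
        # distance between consecutive samples when stepping one slice at a time
        # along the principal axis of the (normalized) viewing direction
        d = np.asarray(view_dir, dtype=float)
        d /= np.linalg.norm(d)
        return 1.0 / np.max(np.abs(d))

    print(sample_spacing([0, 0, 1]))   # axis-aligned view -> 1.0
    print(sample_spacing([0, 1, 1]))   # edge-on view      -> about 1.41
    print(sample_spacing([1, 1, 1]))   # corner-on view    -> about 1.73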

Many extensions, like stereo rendering [49], parallel algorithm [74], clipping planes

[164], and performance improvements [31] have been added to the shear-warp algorithm for parallel projections. For perspective projections, Chen et al. [21] improve the warping. Schulze et al. [118][119] prove the correctness of the permutation of projection and warping mathematically, and present a parallelized version of a perspective shear-warp algorithm.

Cullip and Neumann [32] first addressed the capability to render a volume on the 3D texture hardware. Cabral [19] described a slice-based volume rendering and used 3D texture mapping for non-shaded volume rendering. The volume is loaded into texture memory, and the hardware rasterizes the slices parallel to the viewplane. The slices are then blended back to front, so there is no need to keep the accumulated opacity. The interpolation is a trilinear function and the slice distance can be chosen freely. Some research has been done to add shading capabilities [33][97][136][146]. Both pre- classification [136] and post-classification [33][96][146] are possible with multiple passes. Recently, with the fast development of the modern graphics hardware, a lot of


progress has been made to accelerate the GPU-based volume rendering using different techniques [71][83][162], including empty space leaping and early ray termination.
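Why back-to-front blending needs no accumulated-opacity buffer can be seen from a small CPU-side sketch (Python/NumPy; this only stands in for the hardware blending stage and is not the texture-mapping implementation itself):

    import numpy as np

    def blend_back_to_front(slices_rgba):
        # slices_rgba: list of (H, W, 4) view-aligned slices ordered back to front
        h, w, _ = slices_rgba[0].shape
        frame = np.zeros((h, w, 3))
        for rgba in slices_rgba:
            rgb, a = rgba[..., :3], rgba[..., 3:4]
            frame = a * rgb + (1.0 - a) * frame   # "over" blend; no alpha buffer kept
        return frame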

The above four algorithms are mainly used to render volumetric data specified on a regular grid, and compared and evaluated by Meissner et al. [98]. For curvilinear or irregular grids, these methods require re-sampling the data. Interactive visualization of curvilinear and unstructured data sets is critical and has been an active area of research for quite some time now. Maximum interactivity has been achieved using massively parallel supercomputers [22][87][88] to render the data in parallel. Alternatively, the unstructured grids can be re-sampled into regular rectilinear grids and then rendered taking advantage of hardware accelerated rendering using 3D textures [78].

Cell projection [95][125][155] is a method which works directly on the curvilinear or irregular grids. The cells are composited onto the image in back to front sorted order.

The projections of the edges of a single cell divide the image plane into polygons, which can be scan converted and composited by standard graphics hardware. Shirley and

Tuchman [125] presented an algorithm of rendering tetrahedral grids by approximating the projection to screen space using a set of triangles. Grids consisting of different cells can first be decomposed into a tetrahedral representation using simplicial decomposition techniques [1][94]. Williams extended Shirley-Tuchman’s approach to implement direct projection of other polyhedral cells in their HIAC rendering system [155]. Max et al. [95] further presented the cell projection of meshes with non-planar faces. Recently, with the advent of programmable graphics hardware, a tremendous amount of work has been done


in implementing the Shirley-Tuchman algorithm on graphics hardware using the programmable vertex and fragment shader pipelines on the GPUs [144][159][70].
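The heart of the Shirley-Tuchman projection is a classification of the projected tetrahedron's silhouette. A simplified sketch (Python/NumPy; degenerate, edge-on configurations are ignored and the names are illustrative) distinguishes the triangular silhouette, where one vertex projects inside the other three, from the quadrilateral one:

    import numpy as np

    def _cross2(u, v):
        return u[0] * v[1] - u[1] * v[0]

    def _inside(p, a, b, c):
        # p lies inside triangle abc if all signed areas have the same sign
        s = [_cross2(b - a, p - a), _cross2(c - b, p - b), _cross2(a - c, p - c)]
        return all(x >= 0 for x in s) or all(x <= 0 for x in s)

    def classify_projected_tet(verts2d):
        # verts2d: (4, 2) screen-space projections of the tetrahedron's vertices
        v = np.asarray(verts2d, dtype=float)
        for i in range(4):
            rest = [v[j] for j in range(4) if j != i]
            if _inside(v[i], *rest):
                return ('triangle silhouette', i)   # decomposes into 3 screen triangles
        return ('quadrilateral silhouette', None)   # decomposes into 4 screen triangles

    print(classify_projected_tet([[0, 0], [4, 0], [0, 4], [1, 1]]))
    print(classify_projected_tet([[0, 0], [4, 0], [4, 4], [0, 4]]))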

1.3 Challenges and Strategies

In this section, we will discuss three challenging topics in advanced volume rendering, and provide introductions to the corresponding strategies.

1.3.1 Volumetric Shadows and Soft Shadows

A shadow is a region of relative darkness within an illuminated region caused by an object totally or partially occluding one or more light sources. Shadows are essential to realistic and informative scenes. They provide strong clues about the shapes and relative positions of objects and they can indicate the approximate location, intensity, shape and size of light sources.

In volume rendering, the shadow calculation is not a binary decision of whether a point is in shadow or not. When the light passes through a volume, the light intensity is attenuated. This kind of shadow doesn’t have uniform darkness. The illumination depends on how much light intensity can arrive at the point of interest.

In my Master's thesis, I investigated a new shadow algorithm that properly determines the light attenuation and generates shadows for volumetric datasets by using a 2D shadow buffer to keep track of the light attenuation through the volumetric participating media.

Since the shadow buffer is a 2D buffer, the memory requirement is low: only a 2D buffer for each light source. At the same time, we can use a high-resolution 2D shadow buffer to keep more accurate information for the light attenuation. Our shadow algorithm can thus


generate accurate shadows with low storage requirements. I continue and extend this

work as part of my dissertation.
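A much simplified sketch of the idea (Python/NumPy; it assumes the light direction coincides with the slicing direction, which is not required by the actual algorithm described in Chapter 2) shades each slice with the transmittance accumulated so far in a single 2D buffer:

    import numpy as np

    def shade_with_shadow_buffer(slices_alpha, light_intensity=1.0):
        # slices_alpha: list of (H, W) opacity slices ordered from the light outward
        h, w = slices_alpha[0].shape
        shadow_buffer = np.zeros((h, w))          # opacity accumulated toward the light
        illumination = []
        for alpha in slices_alpha:
            transmittance = 1.0 - shadow_buffer   # fraction of light reaching this slice
            illumination.append(light_intensity * transmittance)
            shadow_buffer = shadow_buffer + (1.0 - shadow_buffer) * alpha
        return illumination

    slices = [np.full((2, 2), 0.3), np.full((2, 2), 0.5)]
    print(shade_with_shadow_buffer(slices)[1])    # second slice sees only 70% of the light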

The generation of soft shadows is a challenging topic in computer graphics. Soft shadows include an umbra region, areas for which no part of the extended light source is visible, and a penumbra region, areas in which part of the extended light source is visible and part is hidden or occluded. The generation of soft shadows requires integrating the contributions of extended light sources on the illumination of objects. It is very computationally expensive. In this dissertation, we investigate an approximate analytic method to generate soft shadows for volumetric data using a convolution technique, which can take advantage of the graphics hardware support and generate volumetric soft shadows in an efficient way.
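The convolution step itself can be illustrated with a short sketch (Python/NumPy; the kernel standing in for the projected extent of the area light and the edge padding are assumptions of this illustration, not the exact formulation derived in Chapter 2):

    import numpy as np

    def soften_shadow_buffer(shadow_buffer, light_kernel):
        # blur the accumulated-opacity buffer with a normalized kernel modeling the
        # area light's projected extent; values between 0 and 1 give the penumbra
        kernel = light_kernel / light_kernel.sum()
        kh, kw = kernel.shape                     # assumed odd-sized
        padded = np.pad(shadow_buffer, ((kh // 2,) * 2, (kw // 2,) * 2), mode='edge')
        h, w = shadow_buffer.shape
        out = np.zeros_like(shadow_buffer)
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
        return out

    hard = np.zeros((5, 5)); hard[:, 2:] = 1.0          # a hard shadow edge
    print(soften_shadow_buffer(hard, np.ones((1, 3))))  # edge becomes a gradual penumbra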

Some applications require both volumetric and geometrical objects together in a single image. For example, geometrically defined objects may be surrounded by clouds, smoke, fog, or other gaseous phenomena. In this dissertation, we will generate shadows and soft shadows for a scene including both volumetric datasets and polygonal geometries.

Another volumetric lighting topic is multiple light scattering. For a high-albedo participating media, such as clouds, multiple scattering cannot be ignored. Here, we implement multiple forward scattering and back scattering, by taking advantage of the strong forward scattering characteristics of some high-albedo media and using the convolution technique.


1.3.2 Interval Volumes and Time-Varying Interval Volumes

With the widespread use of high performance computing systems, some application simulations (e.g. Finite Element Method and Computational Fluid Dynamics) are capable of producing large datasets. This data is usually defined on curvilinear grids or tetrahedral grids, and tends to be time-varying, adding another dimension to the problem.

Additionally, these simulations produce multiple attributes like density, momentum and energy at each of the sample points.

Interactive visualization of curvilinear and unstructured data sets is critical and has been an active area of research. Along with the need for interactivity, there is a need for better tools, which allow navigating the data set in a more intuitive manner, as well as allow for correlation between the multiple attributes.

In this dissertation, we try to address some of these issues by using interval volumes as a region-of-interest extraction algorithm and by using several fast volume visualization techniques. We use interval volumes to segment the volume, highlight contour surfaces, and render the data effectively. The advantage of our interval volume algorithm is that it is independent of dimension and topology of the data set. It can be applied to both structured and unstructured grids, as well as multi-attribute and time-varying data.

How to render time-varying unstructured datasets is a challenging topic. A traditional method to render time-varying data is to take a snapshot of the data for each particular time step and generate an animation from the time-series data. This method is useful, but it relies on human memory and cognitive abilities to tie together spatio-temporal relationships. An alternative method is to display the movement of the time


series data in a single image using direct rendering of high dimensional data. Some researchers have worked on hypervolume visualizations [6] and high dimensional direct rendering of time-varying volumetric data [158], but their algorithms do not apply for unstructured grids.

Under the framework of the high-dimensional iso-contouring algorithm [12], we can compute a 4-dimensional volume representing a time-varying interval volume. This can be accomplished by applying the isosurfacing algorithm directly on a 5-dimensional grid to generate a surface comprised of 4-simplices. In this dissertation, we are more interested in the direct rendering of the 4D time-varying interval volumes onto 2D image space. This allows the time-varying interval volumes to be displayed in a single image. In this way, we can get the distribution and relationship of the interval volumes across time steps and understand the time-varying structured and unstructured volumetric fields well.

Here, how to render a 4D time-varying interval volume (i.e. 4-simplices) to a 2D image is a technically difficult problem. We solve this problem by first projecting the 4-simplices to 3D, then classifying and decomposing the projected 4-simplices to tetrahedra

(i.e. 3-simplices). After that, we render the tetrahedral mesh. The key point here is that both ∆t (the projection along the time axis) and ∆z (the projection along the viewing direction) contribute to the final opacity during the rendering.
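As a hedged illustration of how the two extents might combine, assume an exponential extinction model in which the optical depth of a projected tetrahedron is proportional to both its ray thickness ∆z and its temporal extent ∆t (this particular form is an assumption made for illustration, not the exact optical model analyzed in Chapter 3):

    import numpy as np

    def tet_opacity(kappa, dz, dt):
        # hypothetical per-tetrahedron opacity with optical depth kappa * dz * dt
        return 1.0 - np.exp(-kappa * dz * dt)

    print(tet_opacity(kappa=2.0, dz=0.5, dt=0.25))   # about 0.22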

1.3.3 Three-Dimensional Flow Visualization

Flow is a natural phenomenon around us: the rivers flow across the earth, air flows in the sky, and blood flows throughout the human body. Each point in the flow domain has a velocity value which forms the flow. Flow fields play an important role in


scientific, engineering and medical communities. For example, the study of the air flow around the jet plane helps to improve the design of the plane. Computational fluid dynamics is capable of generating large amounts of simulation data which include vector- valued variables in three spatial dimensions.

While there exist many flow visualization techniques to effectively represent two- dimensional flow fields, extending these techniques to three-dimensional flow fields encounters problems, due to clutter, occlusion, and lack of depth perception cues in three dimensions. Also, the expensive calculation of the advection in 3D flow visualization hinders the user when attempting to navigate through the flow field interactively, as well as limits the user’s ability to control the flow representation and appearance. Visualizing and understanding complex three-dimensional flow fields is a challenging topic. New flow visualization techniques for complex three-dimensional flow fields are needed for better comprehension of these complex processes.

In order to solve some of the above problems, I proposed an implicit method to represent the flow field. An implicit flow field is a multivariate scalar field, which is constructed by pre-advecting the flow field and storing the flow information into multiple attributes. The implicit flow fields are constructed in a pre-processing stage, which avoids the advection at run time. Now the task is how to render the implicit flow fields. I, along with my colleague Daqing Xue, studied two visualization techniques to render the implicit flow field: slice-based 3D texture mapping, and interval volume rendering.
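The construction step can be sketched as follows (Python/NumPy, illustrative only: the forward Euler integration and this particular attribute set, consisting of seed, exit position, advection time and arc length, are assumptions made to keep the sketch short rather than the exact attributes stored in Chapter 4):

    import numpy as np

    def build_implicit_flow(seed_points, velocity, dt=0.05, max_steps=500, bounds=(0.0, 1.0)):
        # velocity(p) -> 3-vector of a steady flow field; domain is a unit cube
        lo, hi = bounds
        records = []
        for p0 in seed_points:
            p = np.array(p0, dtype=float)
            t, length = 0.0, 0.0
            for _ in range(max_steps):
                if np.any(p < lo) or np.any(p > hi):
                    break                              # particle left the domain
                step = dt * np.asarray(velocity(p))
                length += np.linalg.norm(step)
                p, t = p + step, t + dt
            records.append({'seed': np.array(p0), 'exit': p, 'time': t, 'length': length})
        return records

    swirl = lambda p: np.array([-(p[1] - 0.5), p[0] - 0.5, 0.05])   # toy rotational field
    print(build_implicit_flow([(0.6, 0.5, 0.1)], swirl)[0])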

The slice-based 3D texture mapping technique renders the implicit 4-tuple flow field directly by loading the implicit flow field as the 3D texture and taking advantage of the modern graphics hardware. The advantages of this rendering method are high


interactivity and fine texture details rendered throughout the 3D flow volume, i.e., texture mapped flow volumes.

Another motivation of this work is to implement a complete system, which incorporates streamlines, time-lines, stream surfaces, time surfaces and flow volumes.

The second rendering technique, the interval volume rendering, is used to achieve such a complete system. The inflow mapping is necessary to obtain a scalar field on which the interval volume segmentation is applied. The flow volume is the extracted interval volume enclosed between two iso-surfaces, and the stream surfaces and time surfaces are iso-surfaces with respect to the scalar value and to the advection time. The 4-tuple attributes can be used as texture coordinates to map textures onto the stream and time surfaces to illustrate the flow details. For instance, the streamlines can be mapped onto the stream surfaces using one-dimensional Mip-map texture.
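A tiny sketch of the inflow mapping idea follows (Python/NumPy; the particular map, the distance from a user-chosen disk on the inflow face, and the mask-based selection are simplifications of this illustration; the actual system extracts a tetrahedral interval volume rather than a mask):

    import numpy as np

    def inflow_scalar(inflow_hits, center=(0.5, 0.5), radius=0.2):
        # map each stored inflow-face hit point to a scalar (distance from a disk center)
        xy = np.asarray(inflow_hits, dtype=float)[:, :2]
        return np.linalg.norm(xy - np.asarray(center), axis=1) / radius

    def in_flow_volume(scalar, lo=0.0, hi=1.0):
        # points whose mapped scalar lies inside the interval belong to the flow volume
        return (scalar >= lo) & (scalar <= hi)

    hits = np.array([[0.55, 0.50, 0.0], [0.90, 0.90, 0.0]])
    s = inflow_scalar(hits)
    print(s, in_flow_volume(s))    # first point lies inside the flow volume, second does not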

1.4 Contributions

In this section, I will summarize our contributions to volume rendering and scientific visualization. The details of the algorithms and implementations will be explained in later chapters.

For the volumetric lighting models, our main contribution is to achieve realistic rendering by modeling volumetric shadows and multiple scattering. Our research allows generating accurate volumetric shadows for different light sources: point lights, parallel lights, projective textured lights and extended lights. Also, we incorporate multiple scattering into our light attenuation model. Our detailed contributions are as follows:


• We investigated a new volumetric shadow algorithm which properly determines

the light attenuation in the volume and generates more accurate volumetric

shadows with low memory requirement: only a two-dimensional buffer for each

light source.

• We proposed an analytic method to generate volumetric soft shadows using a

convolution technique. In this way, our above volumetric shadow algorithm has

been applied to extended light sources, and we can generate volumetric soft

shadows in an efficient way.

• We extended our shadow and soft shadow algorithm to fused volumetric and

polygonal scenes for some applications which require both volumetric and

geometrical objects to appear together in a single image.

• We investigated multiple forward and backward scattering using a convolution

technique to improve the rendering reality of high-albedo participating media.

One limitation of our method is that the scattering along the perpendicular

direction is not modeled.

For the interval volumes and time-varying interval volumes, our main contribution is the investigation of interactive rendering and visualization techniques for curvilinear and unstructured grids as well as regular grids, including both static and time-varying datasets. Compared to other volume rendering techniques, interval volume rendering makes it possible to embed the boundary surfaces into interval volumes seamlessly, and these surfaces provide a better segmentation of the volume and highlight internal features. And more importantly, our research allows time-varying interval volumes to be


directly rendered to a single image, so that we can get the distribution and movement of interval volumes over time and understand the time-varying datasets well. The detailed contributions are as follows:

• We employed a high-dimensional isosurfacing algorithm to extract interval

volumes and time-varying interval volumes. The algorithm is independent of

dimension and topology of the dataset.

• We investigated different visualization techniques for interval volumes to

visualize 3D scalar field effectively and interactively.

• We proposed and investigated a direct rendering algorithm for time-varying

interval volumes, which is a projection from four-dimensional simplices to two-

dimensional image space. In this way, we can obtain the continuous integral of

the interval volume across time steps.

• We provided a proof of the correctness of the optical model of the direct

rendering for time-varying interval volumes.

• We implemented a modified hardware-based PT algorithm using both vertex

program and fragment program, by not only considering the ∆z along the viewing

direction, but also considering the ∆t for the projection along the time axis.

For the implicit flow fields, our main contribution is to propose the implicit flow fields and study the rendering techniques of the implicit flow fields. Our research allows displaying flow details, providing the user-controlled flexible representation and appearance of flow volumes and interactive runtime rendering performance. All these


features will be very helpful to explore 3D flow fields. Our detailed contributions are as follows:

• We proposed the concept of implicit flow fields to investigate the visualization of

three-dimensional flow fields. The construction of an implicit flow field is

actually a mapping from a vector field to a multiple-attribute scalar field. The

advantages of the implicit flow field include: it avoids the expensive advection at

run time and achieves interactive rendering, and the stored information will be

used for user-guided flow representation and appearance as well as displaying

flow details during the rendering process.

• We employed the interval volume rendering method to visualize the implicit flow

fields. We extracted stream surfaces and time surfaces during the construction

process of the flow volumes without extra computational cost, and applied a

texture mapping on these surfaces to show flow details and internal features.

• We achieved a complete representation which incorporates streamlines, time-

lines, stream surfaces, time surfaces and flow volumes.

• We proposed a framework to visualize the implicit flow fields using a 3D texture-

mapping technique and achieve the texture-mapped flow volumes, by taking

advantage of dependent textures. The advantage of this technique is its

interactivity and the flexibility for the user to control the representation and

appearance of the flow volume.


1.5 Overview of Dissertation

In the rest of this dissertation, we present the details of each algorithm. Chapter 2 first summarizes our volumetric shadow algorithm. After that, we explain the volumetric soft shadow algorithm for extended light sources. This work was done early in my academic career as part of my Master's thesis and is included here for completeness. This volumetric shadow and soft shadow algorithm also applies to mixed scenes of volumetric and polygonal objects. Multiple light scattering is also covered in this chapter. Chapter 3 presents our work in the rendering of interval volumes and time-varying interval volumes. We first review the high-dimensional iso-surfacing algorithm.

We then describe visualization techniques for 3D interval volumes. The focus of this chapter is the direct rendering of time-varying interval volumes. Also, a complete analysis is conducted in this area. Chapter 4 deals with three-dimensional flow visualization. An implicit flow field method is proposed to store the flow advection information. After constructing the implicit flow field at the pre-processing stage, we study two visualization approaches to render an implicit flow field: 3D texture mapping technique and interval volume rendering technique. We then conclude and discuss future directions for this work.


CHAPTER 2

VOLUMETRIC LIGHTING MODELS

2.1 Introduction

A shadow is a region of relative darkness within an illuminated region caused by an object totally or partially occluding one or more light sources. Shadows are essential to realistic and informative scenes. They provide strong clues about the shapes and relative positions of objects and they can indicate the approximate location, intensity, shape and size of light sources.

Earlier shadow algorithms focused on hard shadows in surface graphics. Calculation of hard shadows determines whether a point in the scene is in the shadow of an opaque object. It is a binary decision-making problem. Volume rendering is used to display 3- dimensional scalar fields. In volume rendering, the shadow calculation is not a binary decision of whether a point is in shadow or not. When the light passes through a volume, the light intensity is attenuated. This kind of shadow doesn’t have uniform darkness. The illumination depends on how much light intensity can arrive at the point of interest.


We investigate a new shadow algorithm that properly determines the light attenuation and generates more accurate shadows for volumetric datasets, with low storage requirements: a 2D buffer for each light source is required.

The generation of soft shadows is a difficult topic in computer graphics. Soft shadows include an umbra region, areas for which no part of the extended light source is visible, and a penumbra region, areas in which part of the extended light source is visible and part is hidden or occluded. The generation of soft shadows requires integrating the contributions of extended light sources on the illumination of objects. In this thesis, we investigate an analytic method to generate soft shadows using the convolution technique.

Some applications require both volumetric and geometrical objects together in a single image. For example, geometrically defined objects may be surrounded by clouds, smoke, fog, or other gaseous phenomena. In this thesis, we will generate shadows and soft shadows for a scene including both volumetric datasets and polygonal geometries.

Another volumetric lighting topic is multiple light scattering. For a high-albedo participating media, such as clouds, multiple scattering cannot be ignored. Here, we implement multiple forward scattering and back scattering, and incorporate the multiple scattering into our shadow algorithm.

2.2 Previous Work

2.2.1 Shadow Algorithms

Shadows can be classified into the following types: hard shadows caused by opaque objects, soft shadows caused by opaque objects, and shadows caused by translucent objects. Hard shadows result from a binary decision at each point. When opaque objects


are lit by an extended light source, soft shadows including both umbra and penumbra regions are generated. The umbra region is due to full occlusion from the light, and the penumbra region is due to only partial occlusion from the light. A penumbra surrounds an umbra area and there is always a gradual change in intensity from a penumbra to an umbra. The relative size of the umbra/penumbra is a function of the size and the shape of the light source and its distance from the object. When the light passes through a translucent object, the light intensity is attenuated as in a shadow, but not completely.

Shadows resulting from translucent objects may not have a uniform shading. How dark a region is depends on how much light intensity reaches the point of interest. Shadows cast from clouds or smoke exhibit this type of behavior.

2.2.1.1 Hard Shadow Generation

Earlier implementations of shadows focused on hard shadows, in which a value of 0 or 1 is multiplied with the light intensity. Calculation of hard shadows involves only the determination of whether or not a point in the scene is in the shadow of an opaque object.

This is a binary decision problem.

In general, shadow determination is similar to the visibility determination from the eye; i.e. shadow determination is just a visibility determination with respect to the light source.

Shadows were added to a scan conversion algorithm by Appel [3] and Bouknight and Kelley [15]. This algorithm requires a pre-processing stage that generates a secondary data structure which links all polygons that may shadow a given polygon.

During the scan conversion process, the secondary data structure is used to determine if


any shadows fall on the polygon that generated the visible scan line segment under consideration. If no shadow polygon(s) exists, then the scan line algorithm works in the normal way. If a shadow polygon exists, then the shadow is generated by projecting the shadow polygon onto the plane that contains the current polygon with respect to the light.

Normal scan conversion then proceeds simultaneously with a process that determines whether a pixel is in shadow or not. This shadow determination approach is only suitable for polygons.

Crow [30] introduces the concept of shadow volumes. A shadow volume is the polygonalized solid that models the volume of the shadow cast into space by an occluder.

During the rendering, a visible point is tested to determine whether it falls inside any shadow volumes before it is illuminated by the light source. Brotman and Badler [17] used this shadow volume idea as a basis for generating soft shadows produced by an extended light source (detailed explanation in section 2.2.1.2). The most serious restriction of the original algorithm is that the objects must be convex polyhedra.

Bergeron [11] developed a general version of Crow’s algorithm that overcomes this restriction and allows concave objects and penetrating polygons to cast shadows.

In the 2-pass hidden surface algorithm by Nishita and Nakamae [106] and Atherton et. al. [4], the first pass transforms the image to the view of the light source, and separates shadowed and unshadowed portions of the polygons by hidden surface removal and a polygon clipping algorithm. Then, a new set of polygons is created, each marked as either completely in shadow or not. In the second pass, visible determination from the eye is done, and the polygons are shaded taking into account their shadow flag. This algorithm takes advantage of the fact that shadow polygons are view point independent and can be


used to generate real-time shadows if the relative position of the object and the light source remains unchanged. However, it is algorithmically difficult to come up with a numerically robust polygon clipper, as well as a clipper that deals with modeling primitives other than polygons.

Williams [152] uses a z-buffer algorithm to generate shadows. The algorithm is a two-step process. In the first step, a scene is “rendered” and depth information is stored in the shadow z-buffer using the light source as a viewpoint. The second step is to render the scene using a z-buffer algorithm. During the rendering, the shadow z-buffer is used to determine if an object point visible from the eye is also visible from the light source. The advantage of this algorithm is that it can support primitives other than just polygons. However, this algorithm has aliasing problems due to the discretized depth map. Recent advances in computer graphics hardware, for example,

NVIDIA GeForce4 video cards, allow for shadow calculations using a z-shadow algorithm.
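The second pass of this two-step algorithm can be sketched as follows (Python/NumPy; the light-space projection function and the depth bias are illustrative assumptions):

    import numpy as np

    def in_hard_shadow(point_world, light_view, shadow_z, bias=1e-3):
        # light_view(p) -> (ix, iy, depth) of p in the light's image; shadow_z is pass one
        ix, iy, depth = light_view(point_world)
        return depth > shadow_z[iy, ix] + bias    # farther than the first occluder -> shadowed

    zmap = np.array([[0.4, 0.4], [0.9, 0.9]])     # toy depth map rendered from the light
    proj = lambda p: (0, 0, p[2])                 # hypothetical light projection
    print(in_hard_shadow((0.0, 0.0, 0.7), proj, zmap))   # True: behind the occluder at 0.4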

Ray tracing can also be used to implement hard shadows. A shadow ray is shot from the ray-surface intersection point to the light source. If the ray intersects any object between the intersection point and the light source, then the point is in shadow; otherwise it’s not in shadow. The basic ray-tracing algorithm requires no additional storage and preprocessing for shadow determination; shadow determination is evaluated when needed. However, the shadow determination is very expensive.


2.2.1.2 Soft shadow generation

Real objects may cast shadows containing both a penumbra and an umbra region (as shown in Figure 2.1) in the case of extended light sources (linear or area light sources). The umbra region is due to full occlusion from the light, and the penumbra region is due to only partial occlusion from the light. The degree of partial occlusion from the light results in different intensities within the penumbra region. The determination of soft shadows requires calculating the fraction of occlusion, not just making a binary decision as was the case for hard shadows. A fraction in the range (0,1) is multiplied with the light intensity in the illumination calculation, where 0 indicates umbra, 1 indicates no shadow, and all other values between 0 and 1 indicate penumbra. The shape of the resulting soft shadow depends on the occluding object and the light source, as well as the distance each of these are from the object being shadowed.

Figure 2.1: Soft shadows (an area light source and an opaque occluder produce umbra and penumbra regions)

The implementation of soft shadows requires integrating the contribution of the extended light source on the illumination of the object. In general, there are two main techniques to treat the extended light source: sampling techniques and analytic techniques.

In the frame buffer algorithm by Brotman and Badler [17], some sample points are stochastically chosen to model area light sources. Shadow umbra polygons for each such point source are generated using Crow's algorithm [30], and this shadow polygon generation is done during preprocessing. A 2D depth buffer for visible surface determination is extended to store cell counters. The cell count is calculated as follows: if the shadow polygons for a particular point source enclose the whole cell in which the intersected point resides, the associated counter is incremented by 1. If the corresponding cell count is equal to the number of chosen point light sources, then the point is in the umbra region. If the cell count is less than the number of chosen point light sources but greater than zero, then the point is in the penumbra region. The preprocessing complexity and the shadow quality depend on the number of sample point sources. Artifacts such as aliasing can appear in the shadow areas if too few points are chosen.

Distributed ray tracing, as proposed by Cook et al. [25], allows for soft shadow generation. A collection of shadow rays is shot from the intersected point to randomly selected locations on the light source. The intensity of the penumbra region depends on the number of rays intersected by occluding objects. Similar to traditional ray tracing, no additional storage and preprocessing are necessary. However, it is a point sampling approach, which may not always provide a good approximation to the correct solution.
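A sketch of that sampling scheme, assuming a hypothetical occluder interface and a set of precomputed random sample points on the area light:

#include <vector>

struct Vec3 { double x, y, z; };

struct Occluder {
    virtual bool blocksSegment(const Vec3& a, const Vec3& b) const = 0;
    virtual ~Occluder() = default;
};

// Distributed ray tracing for soft shadows: shoot one shadow ray per random
// point on the area light and return the unoccluded fraction in [0,1],
// which scales the light intensity (0 = umbra, 1 = fully lit).
double softShadowFraction(const Vec3& point,
                          const std::vector<Vec3>& lightSamples,       // random points on the light
                          const std::vector<const Occluder*>& scene) {
    int unblocked = 0;
    for (const Vec3& s : lightSamples) {
        bool blocked = false;
        for (const Occluder* o : scene)
            if (o->blocksSegment(point, s)) { blocked = true; break; }
        if (!blocked) ++unblocked;
    }
    return lightSamples.empty() ? 1.0
                                : static_cast<double>(unblocked) / lightSamples.size();
}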

To avoid aliasing, Amanatides [2] extends a ray to a cone. Instead of point sampling, cone tracing does area sampling. Exactly one conic ray needs to be shot per pixel to achieve anti-aliasing. By broadening the cone to the size of a circular (spherical) light source, soft shadows can be generated: a partial intersection with an object not covering the entire cone indicates penumbra. The advantage of this algorithm is that the area sampling should provide a good enough approximation to the penumbra intensity. However, the approximation is only physically valid for circular light sources. So, cone tracing is less suitable for light sources that cannot be closely approximated by one or more spheres.

If all modeling primitives are polygons, soft shadows can be generated analytically. Given the light source and the point P to be shaded, a candidate set of objects lying between P and the light can be easily acquired through any intersection culler. Then the candidate objects are projected onto the light source as viewed from P, and clipped using the algorithm of Atherton et al. [4]. After clipping, the region of the light source that is exactly visible from P is identified. This region is then passed to an intensity integral solver.

A new method to calculate soft shadows using a convolution technique is proposed by Soler and Sillion [127]. They avoid both sampling artifacts and the building of expensive data structures to represent visibility. In this method, an approximation is to separate the visibility from the irradiance formula at a point y on the receiver:

H(y) = E \int_S \frac{\cos\theta \cos\theta'}{\pi d^2} \, dx \int_S v(x, y) \, dx    (2.1)

The first term is the unoccluded point-to-polygon form factor from y to the source, and it can be computed using integration formulae [117]. The second term, V(y) = \int_S v(x, y) \, dx, is the visible area of the source as seen from y. It is calculated using convolution [127]. For the special case where the light source, the receiver and the occluder are all planar and lie in parallel planes, accurate shadows are computed using the convolution technique. This algorithm is extended to the general configuration of non-parallel planes by using convolution for the virtual geometry and transforming the shadow results to fit the actual geometry of the scene. The advantage of the convolution method over an explicit sampling method is that penumbra regions are always continuous.

2.2.1.3 Shadows of Translucent Objects or Participating Media

Determining shadows caused by translucent objects is more complex. The shadow calculation is not a binary decision of whether a point is in shadow or not. When the light passes through a translucent object, the light intensity is attenuated. This kind of shadow does not have uniform darkness. The darkness depends on how much light intensity arrives at the point of interest. Shadows resulting from clouds or smoke are common examples of this varying intensity due to volumetric data.

The shadow volume algorithm, 2-pass hidden surface algorithm and z-buffer depth map algorithm can only determine if an object point is in shadow or not, resulting in only binary values for the light intensity. These algorithms are not suitable for volume rendering. In volume rendering, as the light traverses the volume, the light intensity is continuously attenuated by the volumetric densities.

Ray tracing offers the flexibility to deal with the attenuation of the light intensity. A shadow ray is shot to the light source. All the translucent objects that intersect the shadow ray contribute to the final shadow color through color filtering by the occluding translucent surfaces. Ray tracing has been used to generate soft shadows with translucent effects for volumetric datasets [126].

Behrens [10] uses texture mapping hardware to add shadows to a texture-based volume renderer. A shadowed volume which contains the light attenuation information is first produced by the hardware using the original unshadowed volume and the light vector. Then the shadowed volume is rendered using texture-based volume rendering.

This algorithm takes advantage of fast frame-buffer operations modern graphics hardware offers. The resulting image has diffusely illuminated effects and the performance decreases by less than 50% when shadows are added. However, high performance is limited to the case of parallel light sources.

Lokovic and Veach [85] proposed the concept of deep shadow maps to deal with light attenuation. Unlike traditional shadow maps, which store a single depth at each pixel, deep shadow maps store the fractional visibility through a pixel at all possible depths. A deep shadow map is a rectangular array of pixels in which every pixel stores a visibility function. The function value at a given depth is the fraction of the light beam's initial power that penetrates to that depth. The deep shadow map is equivalent to computing the approximate value of (1.0 - opacity) at all depths. They implemented deep shadow maps in a highly optimized scanline renderer.
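As a minimal illustration of this idea (our renderer does not use deep shadow maps; the structure and names below are hypothetical), a deep-shadow pixel can store (depth, visibility) samples and answer queries by linear interpolation:

#include <cstddef>
#include <utility>
#include <vector>

// Sketch of one deep-shadow-map pixel: a piecewise-linear visibility
// function sampled as (depth, visibility) pairs, where visibility is the
// fraction of the light that penetrates to that depth (1 - accumulated opacity).
struct DeepShadowPixel {
    std::vector<std::pair<double, double>> samples;  // sorted by increasing depth

    double visibilityAt(double depth) const {
        if (samples.empty()) return 1.0;
        if (depth <= samples.front().first) return samples.front().second;
        if (depth >= samples.back().first)  return samples.back().second;
        // find the segment containing `depth` and interpolate linearly
        for (std::size_t i = 1; i < samples.size(); ++i) {
            if (depth <= samples[i].first) {
                double t = (depth - samples[i - 1].first) /
                           (samples[i].first - samples[i - 1].first);
                return samples[i - 1].second +
                       t * (samples[i].second - samples[i - 1].second);
            }
        }
        return samples.back().second;
    }
};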

Nulkar and Mueller [109] have also implemented an algorithm to add shadows to volumetric scenes using splatting. They use a two-stage splatting approach, where they first render the volume with respect to the light source using the image-aligned splatting algorithm, and then render the volume with respect to the eye. Due to the inconsistency between the two splattings, they construct a light volume to store the intensity values after the first-stage splatting. Shadows are generated using the intensity values stored in the light volume. This approach pre-computes the intensity calculation, and the light volume is view-independent. However, since the algorithm needs a 3D buffer to store the light volume, it requires substantial storage and memory costs. Accurate shadows are difficult to implement using this method due to the high resolution required for the light volume. Also, this method does not work for post-classification in volume rendering.

Here we investigate a new shadow algorithm that properly determines the light attenuation and generates the shadows for volumetric datasets, with low memory requirement: only a 2D buffer for each light source.

2.2.2 Multiple Light Scattering

For low albedo participating media, the scattering is unimportant compared to the light attenuation. So, the two-pass algorithm proposed by Kajiya and Von Herzen [63] is used to calculate the illumination. The first pass computes the light intensity reaching each voxel, and in the second pass, the light is reflected or scattered to the viewpoint.

This two-pass method is a single scattering model and it is only valid for low albedo media.

Multiple scattering is important for realistic rendering of high albedo participating media, for example, clouds or water vapor. Multiple scattering must account for scattering in all directions. It is more physically accurate, but much more complicated and expensive to evaluate. Max [92] gives an excellent survey of optical models, including multiple scattering.


The calculation of multiple scattering can be divided into four methods [92]: the zonal method, the Monte Carlo method, the P-N method and the discrete ordinates method. The zonal method [115] extends the diffuse radiosity method for interreflecting surfaces to volumes. The volume is divided into a number of finite elements which are assumed to have constant radiosity. This method is valid only for isotropic scattering. In the Monte Carlo method [116], a random collection of photons or flux packets is traced through the volume, undergoing random scattering and absorption. The resulting images tend to appear noisy and/or take a long time to compute.

The P-N method [4][63] uses spherical harmonics to expand the light intensity at each point as a function of direction, resulting in a coupled system of partial differential equations for the spherical harmonic expansion coefficients. The discrete ordinates method uses a collection of M discrete directions, chosen to give optimal Gaussian quadrature in the integrals over a solid angle. Lathrop [76] points out that this process produces ray effects and presents modifications to avoid these ray effects. Max [93] describes an approximation to the discrete ordinates method, which reduces the ray effects by shooting radiosity into the whole solid angle bin, instead of in a discrete representative direction.

Recently, approximate methods for multiple scattering have been examined to achieve real-time rendering. Harris and Lastra [48] provide a cloud shading algorithm that approximates multiple forward scattering along the light direction. They use impostors to accelerate cloud rendering. Kniss et al. [69] use an empirical volume shading model and add a blurred indirect light contribution at each sample. They approximate the diffusion by convolving several random sampling points and use graphics hardware to do the volume rendering. They model only forward multiple scattering. Our motivation is to implement both forward multiple scattering and backward scattering, and to incorporate multiple scattering into our shadow algorithm so that we can deal with high albedo participating media, like clouds.

2.2.3 Rendering of Mixed Volumetric and Polygonal Objects

Since some visualization applications require volumetric and geometrical objects to appear together in a single image, a volume rendering technique which incorporates objects described by surface geometries is necessary to render both surface geometries and volume modeled objects.

The most common solution is to convert polygonal and volumetric data into a common representation: either construct surface polygons from volume data [86] or change polygon data to volume data using 3D scan-conversion [64]. This conversion introduces artifacts and is generally expensive and inefficient.

An alternative approach is to directly render both data types. Levoy has developed a hybrid ray tracer for rendering polygon and volume data [81]. Rays are simultaneously cast through a set of polygons and a volume data array, samples of each are drawn at equally spaced intervals along the rays, and the resulting colors and opacities are composited together in depth-sorted order. In Levoy's method, both volume and polygon objects are rendered using ray tracing.

Ebert and Parent use another method which combines volume rendering and scanline a-buffer techniques [38]. The scanline a-buffer technique is used to render objects described by surface geometries, while volume modeled objects are volume rendered. The algorithm first creates the a-buffer for a scanline, which contains a list of all the fragments of polygons for each pixel that partially or fully cover that pixel. Based on the scanline-rendered a-buffer fragments, the volume-modeled objects are broken into sections and combined with the surface-defined a-buffer fragments. In their paper, the volumes are defined by procedural functions to model the gaseous phenomena.

The motivation of our work is to generate shadows and soft shadows for scenes with both volumes and polygons.

2.3 Summary of Volumetric Shadow Algorithm

In this section, we summarize our volumetric shadow algorithm by first giving the illumination model and then describing the implementation of the shadow algorithm. This shadow algorithm has been extended to different light types. The basic algorithm was covered in my Master's thesis; more results are presented in this dissertation.

2.3.1 Illumination Models

The general illumination model is:

C = C_{obj} (k_a I_a + k_d I_L (N \cdot L)) + k_s I_L (E \cdot R)^{k_n}    (2.2)

where k_a is the material's ambient reflection coefficient, k_d is the diffuse reflection coefficient, k_s is the specular reflection coefficient, k_n is the Phong exponent, C_{obj} is the diffuse color of the object at the location to be illuminated, I_a is the intensity and color of the ambient light, I_L is the intensity and color of the light source, N is the normal vector, L is the light vector, E is the eye vector, and R is the reflection vector. These vectors are shown in Figure 2.2.

Figure 2.2: Illumination model (the vectors N, L, R and E at the illuminated point)

Here, k_a, k_d, k_s, k_n and I_a are independent of the point to be illuminated, while N, L, E and R depend on the point. We use N(x), L(x), E(x) and R(x) to represent these vectors at the point of interest x.

As the light traverses the volume, the intensity of the light source is attenuated. So in the per-pixel illumination calculation, I_L is not constant. I_L is the intensity of the light, which is the fraction of the original light intensity that penetrates to the location x from the light source. We use I(x) to represent it.

Also, in post-classification, C_{obj}(x) and N(x) are determined based on the value at the pixel and the gradient at the pixel, respectively. C_{obj}(x) is the diffuse color of the object at the location corresponding to the pixel at the sheet, and is determined from the transfer function using the value at the pixel. N(x) is calculated by estimating the gradient at each pixel using central differences.

For the per-pixel illumination at each sheet, the illumination model we use is:

C(x) = C_{obj}(x) (k_a I_a + k_d I(x) (N(x) \cdot L(x))) + k_s I(x) (E(x) \cdot R(x))^{k_n}    (2.3)

Here, C_{obj}(x), I(x), N(x), L(x), E(x) and R(x) are all functions of the location x.
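As a small illustration, equation (2.3) can be evaluated per pixel as below; the Vec3/Color types are placeholders, and clamping the dot products to zero is an added assumption for robustness rather than something stated in the model:

#include <algorithm>
#include <cmath>

struct Vec3  { double x, y, z; };
struct Color { double r, g, b; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Per-pixel illumination following equation (2.3):
// C(x) = Cobj(x) * (ka*Ia + kd*I(x)*(N.L)) + ks*I(x)*(E.R)^kn
Color shadePixel(const Color& Cobj, double Ia, double Ix,
                 const Vec3& N, const Vec3& L, const Vec3& E, const Vec3& R,
                 double ka, double kd, double ks, double kn) {
    double diffuse  = ka * Ia + kd * Ix * std::max(0.0, dot(N, L));
    double specular = ks * Ix * std::pow(std::max(0.0, dot(E, R)), kn);
    return { Cobj.r * diffuse + specular,
             Cobj.g * diffuse + specular,
             Cobj.b * diffuse + specular };
}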

2.3.2 Implementation of Volumetric Shadows

Visibility algorithms and shadow algorithms are essentially the same: the former determine the visibility from the eye, and the latter determine the visibility from the light source. However, it is difficult to implement shadows, especially accurate shadows, in volume rendering, because the light intensity is continuously attenuated as the light traverses the volume. We need to determine the light intensity arriving at the point being illuminated.

In our shadow algorithm, we implement shadows by rendering the volume only once to generate per-pixel accurate shadows. The same rendering is applied for both the viewer and the light source [166]. For each pixel, while adding its contribution to the sheet buffer as seen from the eye, we also add its contribution to a sheet shadow buffer as seen from the light source. The accumulated light attenuation is stored in a 2D shadow buffer for each light source. In the per-pixel illumination, the light intensity arriving at the pixel can be obtained from the shadow buffer by mapping the pixel to the shadow buffer. In this way, we can track the light attenuation and generate accurate shadows.


In the slice-based volume rendering, the light passing through the front slices will be attenuated and cause shadows on the back slices along the light rays. This effect of front slices on back slices is shown in Figure 2.3.

In order to keep accurate light attenuation on the shadow buffer, even for the cases where the lighting direction is nearly perpendicular to the viewing direction, we use a non-image-aligned slice-based volume rendering to generate accurate shadows [166]. We first calculate the half way vector between the eye vector and the light vector. Rather than slicing the reconstruction kernels via planes parallel to the image plane, we chop the volume by slices perpendicular to the direction of the half way vector. We keep the image buffer aligned with the eye and the shadow buffer aligned with the light source (as shown in Figure 2.4) to avoid sampling and resolution problems. A consistent ray integration is generated with accurately reconstructed slices.

Figure 2.3: The light attenuation model (front pixels cause shadows on the back pixels along the light ray)


Figure 2.4: Determining the opacity value for the considered pixel (a pixel (i, j) on the image plane is mapped to the corresponding pixel (i', j') on the shadow buffer plane; slices are perpendicular to the half-way vector)

For the light source, we keep a shadow buffer. Here, we take the orientation of the shadow buffer to be light aligned. When a pixel’s contribution is added to a sheet buffer as seen from the eye, its contribution is also added to the sheet shadow buffer as seen from the light source.

During the rendering, when we calculate the illumination for a pixel at the current slice, we determine the accumulated light attenuation for the pixel from the shadow buffer by mapping the pixel to the shadow buffer. The pixel at the current slice is first transferred back to eye space, it is then re-projected to the shadow buffer as seen from the light source (as shown in Figure 2.4).

The pixel (i, j) on the current sheet buffer can be mapped to the pixel (i ' , j ' ) on the shadow buffer using the following transformation:

\begin{pmatrix} i' \\ j' \end{pmatrix} = M_2 M_1 \begin{pmatrix} i \\ j \end{pmatrix}    (2.4)

where M_1 is the matrix that transfers the pixel (i, j) on the current sheet buffer to the point x in eye space, and M_2 is the matrix that transfers the point x in eye space to the pixel (i', j') on the shadow buffer.

After the accumulated opacity α(x) is obtained from the shadow buffer, the intensity of the light arriving at the pixel is:

I(x) = (1.0 - \alpha(x)) \cdot I_{light}    (2.5)

where \alpha(x) is the accumulated opacity at the location x in the shadow buffer, in the range [0,1], and I_{light} is the original intensity of the light source.

Considering the light attenuation, now the illumination model becomes:

C(x) = C_{obj}(x) (k_a I_a + k_d I_{light} (1.0 - \alpha(x)) (N(x) \cdot L(x))) + k_s I_{light} (1.0 - \alpha(x)) (E(x) \cdot R(x))^{k_n}    (2.6)

For a given point x, we get its α(x) by choosing its nearest pixel’s opacity value in the shadow buffer. For better shadow quality, we can also calculate its α(x) by interpolating the opacity values of its nearby pixels.

If α(x) is 0.0, there is no light attenuation and the pixel is not in shadow. If α(x) is 1.0, no light can arrive at the pixel and the pixel is in shadow. If α(x) is between 0.0 and 1.0, then the light is attenuated and only part of the light source's intensity arrives at the pixel.
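A minimal sketch of this lookup, assuming the combined transformation M_2 M_1 of equation (2.4) has been flattened into a 2x3 affine matrix and the shadow buffer is stored as a simple opacity array (both assumptions for illustration):

#include <algorithm>
#include <cmath>
#include <vector>

// Accumulated 2D shadow buffer aligned with the light source.
struct ShadowBuffer {
    int width = 0, height = 0;
    std::vector<double> alpha;                    // accumulated opacity per pixel, in [0,1]
    double at(int i, int j) const {
        i = std::clamp(i, 0, width - 1);
        j = std::clamp(j, 0, height - 1);
        return alpha[j * width + i];
    }
};

// Map a sheet-buffer pixel (i,j) to the shadow buffer (equation 2.4),
// fetch the accumulated opacity, and attenuate the light (equation 2.5).
// `M` is the combined 2x3 affine transform standing in for M2*M1.
double lightArrivingAt(int i, int j, const double M[2][3],
                       const ShadowBuffer& sb, double Ilight) {
    int si = static_cast<int>(std::lround(M[0][0]*i + M[0][1]*j + M[0][2]));
    int sj = static_cast<int>(std::lround(M[1][0]*i + M[1][1]*j + M[1][2]));
    double a = sb.at(si, sj);                     // nearest-pixel lookup; bilinear is also possible
    return (1.0 - a) * Ilight;                    // I(x) = (1 - alpha(x)) * Ilight
}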

Compared to volume rendering without shadows, two more buffers are needed: a 2D shadow buffer to store the accumulated opacity from the light to the current slice, and a sheet shadow buffer to store the opacity caused by the current slice, obtained from the transfer function with respect to the light source. The sheet shadow buffer is composited into the shadow buffer and used for the next slice.
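A sketch of the per-slice buffer update, assuming the accumulation uses the standard over-compositing of opacities (the buffer layout is illustrative):

#include <cassert>
#include <cstddef>
#include <vector>

// Composite the opacity contributed by the current slice (sheet shadow
// buffer) into the accumulated shadow buffer, so the next slice sees the
// attenuation of everything in front of it along the light direction.
void compositeSheetIntoShadowBuffer(std::vector<double>& accumulated,
                                    const std::vector<double>& sheet) {
    assert(accumulated.size() == sheet.size());
    for (std::size_t p = 0; p < accumulated.size(); ++p) {
        // standard "over" accumulation of opacity values in [0,1]
        accumulated[p] = accumulated[p] + (1.0 - accumulated[p]) * sheet[p];
    }
}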

Using the above algorithm, we have implemented shadows for two types of basic light sources: parallel lights and point lights.

Figure 2.5 shows the soft shadow of a fuzzy semi-transparent sphere for a point light.

The density of the sphere is defined by the following density function:

D(x) = \begin{cases} 1.0, & r_x \le r_1 \\ 0.0, & r_x \ge r_0 \\ (r_0^2 - r_x^2)/(r_0^2 - r_1^2), & \text{otherwise} \end{cases}    (2.7)

where r_0 is the outer radius, r_1 is the inner radius, and r_x is the distance from the point x to the center of the sphere.
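Equation (2.7) translates directly into a small routine; the parameter names follow the text:

// Density of the fuzzy semi-transparent sphere, equation (2.7):
// rx is the distance from the sample point to the sphere center,
// r1 the inner radius (fully dense), r0 the outer radius (empty).
double sphereDensity(double rx, double r0, double r1) {
    if (rx <= r1) return 1.0;
    if (rx >= r0) return 0.0;
    return (r0 * r0 - rx * rx) / (r0 * r0 - r1 * r1);
}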

In this sphere example, the running time with shadows (the right image in Figure 2.5) is about 60% longer than the running time of the scene without shadows (the left image of Figure 2.5).

Figure 2.5: Sphere with and without shadows


The constructive volume geometry (CVG) technique [23] can be used to model complex spatial objects from simple primitives using combinational operations: union, intersection, difference, blend, cap and trim. Figure 2.6 shows the shadow of a robot, a CVG model composed of two cube primitives and two rectangular parallelepiped primitives using union operations. The shadow of the rings, composed of seven torus primitives, is shown in Figure 2.7. Notice that per-pixel post-classification produces shadows that are sharp at the pixel level.

Figure 2.6: A robot with the shadow


Figure 2.7: Rings with shadows

Figure 2.8 is a scene of a smoky room with a cube inside. The densities of the smoke are given low values and decrease from the bottom to the top, so that the smoke is translucent and looks realistic: high density at the bottom and low density at the top. The density distribution is perturbed with the turbulence function:

\sum_i \frac{1}{2^i} \left| \mathrm{noise}(2^i x) \right|

Here we use Perlin's noise [112].
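A sketch of this turbulence sum; perlinNoise stands in for Perlin's noise function [112], and the number of octaves is an assumption:

#include <cmath>

struct Vec3 { double x, y, z; };

double perlinNoise(const Vec3& p);   // assumed external: Perlin's 3D noise

// Turbulence used to perturb the smoke density:
// sum_i (1/2^i) * |noise(2^i * x)|, truncated after a few octaves.
double turbulence(const Vec3& x, int octaves = 5) {
    double sum = 0.0, freq = 1.0, amp = 1.0;
    for (int i = 0; i < octaves; ++i) {
        Vec3 p { x.x * freq, x.y * freq, x.z * freq };
        sum += amp * std::fabs(perlinNoise(p));
        freq *= 2.0;
        amp  *= 0.5;
    }
    return sum;
}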

Figure 2.8: A smoky room with a cube inside


Figure 2.9 shows a room scene including the robot, the rings and a smoke-like object constructed using the above turbulence function. Both the hard shadow caused by the robot and the rings, and the soft shadow caused by the smoke-like object, appear in this image.

Figure 2.9: A room scene with shadows

Figure 2.10 is the scene for a HIPIP (HIgh Potential Iron Protein) data set. The data set describes a one-electron orbital of a four-iron and eight-sulfur cluster found in many natural proteins. The data is the scalar value of the wave function “psi” at each point. We render the data set using the absolute value of the data. By comparing the scenes with and without shadows, it’s obvious that the scene with shadows gives us more spatial relationship information.


Generating curved shadows on curved objects is a difficult topic. Figure 2.11 shows curved shadows on curved objects. The object is also a constructive volume geometry (CVG) model composed of two spheres using a union operation.

Figure 2.10: A scene of a HIPIP data set (left: without shadow; right: with shadow)

Figure 2.11: Curved shadows on curved objects (left: without shadow; right: with shadow)


Figure 2.12 shows the shadow of a volumetric hypertextured object, which is constructed using Perlin's turbulence function [112].

Figure 2.12: A hypertextured object with the shadow

Figures 2.13 and 2.14 provide more results from our algorithm. Figure 2.13 shows the result of rendering the CT-scanned dataset of a Teddy bear, which was provided by the University of Erlangen-Nuremberg in Germany. The left image is without shadows, and the right is with shadows. Figure 2.14 is the result for a Bonsai tree dataset. By comparing the two images of the Bonsai tree without shadows and with shadows, we can conclude that shadows make the scene more realistic.


Figure 2.13: Teddy Bear (left: without shadows, right: with shadows)

Figure 2.14: Bonsai Tree (left: without shadows, right: with shadows)

Figure 2.15 is the uncBrain with and without shadows. The insets are close-up renderings and precise curved shadows are generated. Again, notice that the shadows are calculated per-pixel rather than per-voxel.


Figure 2.15: uncBrain: (b) without shadow, (c) with shadow, (a) and (d) close-up renderings of the specified patch

The above images are generated using a front-to-back rendering. The room scene in Figure 2.16 is an example of back-to-front rendering: light comes into the room through the window from the back. A desk and a chair reside in the room, which is filled with a light haze.

Figure 2.16: Room scene with light coming in from the back

When light attenuation is taken into account, the running time is longer than without shadows, because footprint evaluation and shadow buffer compositing need to be done with respect to the light source. The algorithm with shadows takes less than twice the time of the algorithm without shadows. For the Bonsai tree (256*256*128) rendered to a 512*512 image, the running time with shadows is only about 56% longer, making the algorithm attractive for high-quality volume rendering.

2.3.3 Textured Projective Lights

The basic shadow algorithm described in the above section has been extended to generate shadows for projective textured lights. Using projective textured lights, some images with special effects can be generated. We use a light screen to get the effect of the "light window" or slide projector and map the light pattern to the scene.

Figure 2.17: A schematic of projective textured light models (left: point light, right: parallel light); a transparent light screen with a texture on it defines the light region

The projective textured lights are modeled as in Figure 2.17. The range of the shadow buffer is determined by projecting the light screen to the shadow buffer plane.

The energy distribution of the light is defined and stored in a buffer. Now, the light intensity at point x not only depends on the light attenuation, but also on the light color.

I(x) = I_{light} \cdot light\_color(x) \cdot (1.0 - \alpha(x))    (2.8)

So in the illumination formula, the intensity of the light I_{light} should be treated as a vector (the color of the light), and we calculate the R, G and B components of C(x) separately:

R(x) = R_{obj}(x) (k_a I_a + k_d R_{light} (1.0 - \alpha(x)) (N(x) \cdot L(x))) + k_s R_{light} (1.0 - \alpha(x)) (E(x) \cdot R(x))^{k_n}
G(x) = G_{obj}(x) (k_a I_a + k_d G_{light} (1.0 - \alpha(x)) (N(x) \cdot L(x))) + k_s G_{light} (1.0 - \alpha(x)) (E(x) \cdot R(x))^{k_n}
B(x) = B_{obj}(x) (k_a I_a + k_d B_{light} (1.0 - \alpha(x)) (N(x) \cdot L(x))) + k_s B_{light} (1.0 - \alpha(x)) (E(x) \cdot R(x))^{k_n}    (2.9)

We warp the light pattern to a buffer aligned with the shadow buffer plane, defining the initial distribution of the light intensity in the buffer. During the rendering, the corresponding intensity values can be obtained from this buffer to calculate the illumination at a pixel.
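A sketch of the per-channel evaluation of equations (2.8) and (2.9); the Color type and the clamping of the dot products are illustrative assumptions, and lightColor denotes the value fetched from the warped light-pattern buffer:

#include <algorithm>
#include <cmath>

struct Color { double r, g, b; };

// Per-channel illumination for a projective textured light, equations (2.8)-(2.9).
Color shadeWithTexturedLight(const Color& Cobj, const Color& lightColor,
                             double Ilight, double alpha,   // accumulated opacity alpha(x)
                             double NdotL, double EdotR,
                             double ka, double Ia,
                             double kd, double ks, double kn) {
    double atten = Ilight * (1.0 - alpha);                   // scalar part of equation (2.8)
    double spec  = ks * std::pow(std::max(0.0, EdotR), kn);
    auto channel = [&](double cObj, double cLight) {
        // cLight * atten is the per-channel I(x) of equation (2.8)
        return cObj * (ka * Ia + kd * cLight * atten * std::max(0.0, NdotL))
             + spec * cLight * atten;
    };
    return { channel(Cobj.r, lightColor.r),
             channel(Cobj.g, lightColor.g),
             channel(Cobj.b, lightColor.b) };
}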

Figure 2.18 shows a scene where the light screen has a pattern of rings. From the figure, we can see the light window and the smooth transmission of the light intensity. The light pattern is cast onto the scene, and the transition from inside the lighting region to outside the lighting region is smooth.


Figure 2.18: A scene with the shadow for a light screen with ring pattern

Figure 2.19 shows the shadow on a cube, from which we can see how the shadow changes with the shape of the object. Here, the light has a stripe pattern.

Figure 2.19: A cube with the shadow for a light screen with stripe pattern


A parallel area light with a grid texture casts the grid pattern on a CVG (constructive volume geometry) sphere composed of two spheres (Figure 2.20), and on a HIPIP dataset (Figure 2.21). It gives us some dimensional information about the object in 3D space and can be used for taking measurements in 3D.

Figure 2.20: A CVG sphere for a parallel area light with grid texture

Figure 2.21: HIPIP dataset with grid pattern

The room scene shown in Figure 2.22 is the same as the room scene in Figure 2.9, except that a spot light pattern is used for the light. In Figure 2.23, a room scene is lit by a light screen with an image of the logo of The Ohio State University. Shadows are generated by the robot and the rings which reside in the room.

Figure 2.22: A room scene with “light window”

Figure 2.23: A room scene for a light screen with an image of OSU logo


Figure 2.24 compares images with light beams passing through a semi-transparent cube. Three light beams with red, green and blue colors enter the cube at the right top, traverse the cube and come out from the left bottom. The image in Figure 2.24(a) is without consideration of light attenuation, while the image in Figure 2.24(b) is with light attenuation. The light intensity exiting the cube is the same as the original intensity entering the cube in the image in Figure 2.24(a), while the resulting light intensity exiting the cube is diminished in the image in Figure 2.24(b). Within the cube, the beam colors are partially blocked by the front participating media of the cube.

Figure 2.24: A scene with three light beams that pass through the cube (left: without light attenuation; right: with light attenuation)

In Figure 2.25, a light beam perpendicular to the eye vector passes through a translucent rectangular parallelepiped, which is rotated by 35°. The image in Figure 2.25(a) is without attenuation, while the image in Figure 2.25(b) considers the light attenuation. In the right image, most of the energy is attenuated, and only a little energy escapes from the rectangular parallelepiped.

Figure 2.25: A scene with a beam of light that passes through the rectangular parallelepiped. (a) Without shadow. (b) With shadow

2.3.4 Multiple Light Sources

In the case of multiple light sources, we need to keep track of light attenuation for each of them. The shadow algorithm can be easily extended to multiple light sources by using multiple shadow buffers. Each light source has its own shadow buffer. For each pixel, when we add its contribution to the sheet buffer as seen from the eye, we also add its contribution to all shadow buffers as seen from multiple light sources.

For multiple light sources, the average light vector of all the light sources is first calculated. Then the half-way vector can be calculated using the average light vector and the eye vector, as shown in Figure 2.26.

During the rendering, when we calculate the illumination of a pixel at the current slice, we map the pixel to the multiple shadow buffers and get the opacity values for each light source. The intensity of the i-th light arriving at the pixel is:

I_i(x) = (1.0 - \alpha_i(x)) \cdot (I_{light})_i    (2.10)

where \alpha_i(x) is the accumulated opacity at the location x in the shadow buffer with respect to the i-th light source, and (I_{light})_i is the original intensity of the i-th light source.

Figure 2.26: Half-way vector for multiple light sources (computed from the eye vector and the average of the light vectors)

For multiple light sources, the illumination at a point is affected by all the light sources. The illumination model becomes:

C(x) = C_{obj}(x) k_a I_a + \sum_i \left( C_{obj}(x) k_d (I_{light})_i (1.0 - \alpha_i(x)) (N(x) \cdot L_i(x)) \right) + \sum_i \left( k_s (I_{light})_i (1.0 - \alpha_i(x)) (E(x) \cdot R_i(x))^{k_n} \right)    (2.11)

This extension has one limitation: all the lights need to lie either behind the viewer or behind the volume, with respect to the viewer, so that we can render the scene from front to back or from back to front.
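A minimal single-channel sketch of evaluating equations (2.10) and (2.11) by looping over the light sources (the data structures are placeholders, and the clamping of the dot products is an added assumption):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct LightSample {
    double intensity;   // (Ilight)_i
    double alpha;       // accumulated opacity alpha_i(x) from this light's shadow buffer
    Vec3   L, R;        // light and reflection vectors for this light at x
};

// Illumination with several light sources, following equation (2.11):
// one ambient term plus one diffuse and one specular term per light,
// each attenuated by that light's own shadow buffer.
double shadeMultiLight(double Cobj, double ka, double Ia,
                       double kd, double ks, double kn,
                       const Vec3& N, const Vec3& E,
                       const std::vector<LightSample>& lights) {
    double c = Cobj * ka * Ia;
    for (const LightSample& li : lights) {
        double Ii = (1.0 - li.alpha) * li.intensity;          // equation (2.10)
        c += Cobj * kd * Ii * std::max(0.0, dot(N, li.L));
        c += ks * Ii * std::pow(std::max(0.0, dot(E, li.R)), kn);
    }
    return c;
}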


Figure 2.27 shows the shadow of a robot with two light sources. The intersection region of the two shadows with respect to the two lights is darker than the region that is only in the shadow of one light.

Figure 2.27: A robot with two light sources (left); A robot with one light source (right)

2.4 Volumetric Soft Shadow Algorithm

2.4.1 Volumetric Soft Shadow Algorithm

The generation of soft shadows is a difficult topic in computer graphics. Soft shadows include an umbra region, areas for which no part of the extended light source is visible, and a penumbra region, areas in which part of the extended light source is visible and part is hidden or occluded. The generation of soft shadows requires integrating the contributions of extended light sources on the illumination of objects.

In general, there are two main techniques to treat the extended light source: sampling techniques [17][25] and analytical techniques [156]. The first technique is to sample the light source, and add the contributions of all the samples together to form a soft shadow. The sampling techniques are prone to image artifacts unless they are pushed to a stage where they become too expensive. In the second technique, the contribution of the extended light source is integrated using some form of numerical quadrature. These techniques typically require expensive data structures.

Soler and Sillion [127] use a convolution technique to calculate soft shadows that avoids both sampling artifacts and the building of expensive data structures to represent visibility. For the special case where the light source, the receiver and the occluder are all planar, and lie in parallel planes, they express the shadow as a convolution operation. For a general configuration, they construct a virtual light source, a virtual occluder and a virtual receiver, which are all planar and parallel to each other. They then compute the shadow for the virtual receiver using the constructed virtual geometry. Finally, they project the resulting shadow back to the actual receiver.

We investigate an analytic method to generate soft shadows using the convolution technique. The original idea and implementation were presented in my Master's thesis. In this dissertation, we present more details, perform an analysis and generate more results.

This soft shadow algorithm is based on the basic shadow algorithm discussed in section 2.3.2. Since we proceed in the volume rendering slice by slice, where all slices are parallel to each other, we can avoid some constraints and artifacts present in Soler’s virtual occluders.

For an extended light source, we integrate over the light source to determine the contribution at a given point x.


C(x) = C_{obj} k_a I_a + \int_A C_{obj} k_d I_{light}(y) (1.0 - \alpha(x, y)) (N(x) \cdot L(x, y)) \, dy + \int_A k_s I_{light}(y) (1.0 - \alpha(x, y)) (E(x) \cdot R(x, y))^{k_n} \, dy    (2.12)

where y is a point on the light source and A is the area of the extended light source.

At a given point x, I_{light}(y), \alpha(x, y), L(x, y) and R(x, y) depend on the extended light source. We assume the light intensity is uniform across the extended light source. We also denote the light vector from the center of the light to the point x as L(x), and approximate N(x) \cdot L(x, y) by N(x) \cdot L(x). Here, N(x) \cdot L(x) can be considered to approximate the average of N(x) \cdot L(x, y) across the extended light source. This approximation is reasonable in cases where the light source is not very close to the objects. Similarly, we use E(x) \cdot R(x) to approximate E(x) \cdot R(x, y).

This leads to the following illumination model:

C(x) = C_{obj} k_a I_a + C_{obj} k_d I_{light} (N(x) \cdot L(x)) \int_A (1.0 - \alpha(x, y)) \, dy + k_s I_{light} (E(x) \cdot R(x))^{k_n} \int_A (1.0 - \alpha(x, y)) \, dy    (2.13)

The term \int_A (1.0 - \alpha(x, y)) \, dy in the above equation is the integral of the light fraction arriving at the point x over the extended light source. We can also express the integral as \int_A v(x, y) \, dy, where v(x, y) is the fraction of the light intensity at y on the light source that arrives at point x on the receiver.

We calculate the term \int_A (1.0 - \alpha(x, y)) \, dy using convolutions. We use a box kernel, with a width determined by the penumbra region for the current slice. If L is the size of the extended light source, Z is the distance from the light to the occluder, and \Delta Z is the distance from the occluder to the receiver (Figure 2.28), then the width of the penumbra region is calculated by the formula:

\Delta x = \frac{L \cdot \Delta Z}{Z}    (2.14)

Figure 2.28: A schematic of the light source (size L), the occluder and the receiver, with the penumbra width \Delta x determined by the distances Z and \Delta Z

We notice that \Delta x is constant across the receiver if the light source, the occluder, and the receiver are parallel, due to the geometrical properties of similar triangles. To achieve soft shadows, we can easily apply this mathematical formulation to analytically determine the penumbra region.

Using the shadow algorithm in section 2.3.2, we generate the shadow region with respect to the center of the extended light source (Figure 2.29). The soft shadow, including both an umbra region and a penumbra region, is generated by convolving the above shadow region (as shown in Figure 2.29) with a kernel size of ∆x obtained from the above formulation. Referring to Figure 2.29, the boundary of the shadow region with respect to the center of the virtual light is exactly in the middle of the penumbra region.


We can derive it from the geometrical properties of similar triangles. The shadow region is convolved using a box convolution kernel of size \Delta x. Thus, we get the exact penumbra region for the configuration in Figure 2.29. The penumbra region depends on the size of the extended light, the distance from the light to the occluder, and the distance from the occluder to the receiver, as illustrated by equation (2.14). At a given point, the average shadow value of its neighborhood within the kernel is taken as its convolved shadow value.

In slice-based volume rendering, we implement the rendering slice by slice. At the current slice, all slices in front of it are occluders, and the current slice itself is the receiver. The contribution of the current slice should be composited into the accumulated shadow buffer to prepare for the next slice. Here, Z is the distance from the extended light source to the current slice, and \Delta Z is the distance between two adjacent slices (Figure 2.30). The penumbra region \Delta x is calculated for each slice using equation (2.14) and transformed to screen space. The contribution of the current slice is obtained by projecting the occluder on the current slice onto the shadow buffer with respect to the center of the extended light source. The accumulated shadow image, including the contribution of the current slice, is taken to do the convolution, and the convolved shadow values are stored in the accumulated shadow buffer to be used for the next slice.
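The per-slice step can be sketched as below: compute the penumbra width from equation (2.14) and blur the accumulated shadow buffer with a box kernel of that width (a separable 1D box blur is shown; the buffer layout and pixel units are assumptions):

#include <algorithm>
#include <vector>

// Penumbra width for the current slice, equation (2.14): dx = L * dZ / Z.
double penumbraWidth(double lightSize, double distLightToSlice, double sliceSpacing) {
    return lightSize * sliceSpacing / distLightToSlice;
}

// 1D box blur along one row of the shadow buffer with a kernel of
// `widthPixels` pixels; applying it to rows and then columns gives a 2D box kernel.
void boxBlurRow(std::vector<double>& row, int widthPixels) {
    if (widthPixels <= 1) return;
    int half = widthPixels / 2;
    std::vector<double> src = row;
    int n = static_cast<int>(row.size());
    for (int i = 0; i < n; ++i) {
        int lo = std::max(0, i - half), hi = std::min(n - 1, i + half);
        double sum = 0.0;
        for (int k = lo; k <= hi; ++k) sum += src[k];
        row[i] = sum / (hi - lo + 1);   // average shadow value within the kernel
    }
}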


Figure 2.29: A schematic of the shadow region with respect to the center of the light source (light, occluder and receiver)

Figure 2.30: Soft shadow algorithm in slice-based volume rendering (slices i, i+1, i+2, the extended light, and the shadow buffer plane)

At slice i, the shadow value obtained from the accumulated shadow buffer is the convolved shadow value, which has taken the contribution of the extended light source into account. The sheet shadow buffer contributed by the current slice is composited into the accumulated shadow buffer, which is then convolved to prepare for the illumination at the next slice. We repeat the above convolution slice by slice (as shown in Figure 2.30). At a pixel to be illuminated, we transfer it back to eye space, then project it to the accumulated shadow buffer and obtain the light attenuation for it (Figure 2.4). The obtained light intensity includes the contribution of the extended light source on the pixel.

Since we convolve the accumulated shadow buffer slice by slice, the contribution of a front slice on the subsequent slices is updated with the convolutions. For example, consider the contribution of the slice i on the slice i+50. The contribution of the slice i is composited to the accumulated shadow buffer, and the shadow buffer has been convolved many times when the rendering proceeds to the slice i+50. This satisfies equation (2.14), where the penumbra region caused by the slice i on the slice i+50 depends on the distance ∆Z .

As discussed in section 2.3.2, we still slice the volume along the half-way vector. However, we keep the normal of the shadow buffer aligned with the half-way vector, instead of the light vector, so that the shadow buffer is parallel to the slices (Figure 2.31). This is required by equation (2.14).

To accomplish soft shadows, we add one extra step to the shadow algorithm given in Section 2.3.2. At each slice, after compositing the sheet shadow buffer into the accumulated shadow buffer, we calculate \Delta x. The accumulated shadow buffer is convolved using a kernel size of \Delta x to prepare for the next slice.


Figure 2.31: Construction of a virtual light source (the original extended light source is orthogonally projected so that the virtual light is parallel to the slices and the shadow buffer plane)

2.4.2 Discussion of Volumetric Soft Shadow Algorithm

In this section, we will discuss several factors which may affect the accuracy of the soft shadows.

(1) Constructing a virtual light

In our soft shadow algorithm, the occluder and the receiver are rendered slice by slice. They have arbitrary geometry, but they are treated as parallel slices during the volume rendering. So, there is no need to approximate the occluder and the receiver in our soft shadow algorithm as in [127]. However, the above analytic soft shadow algorithm requires the extended light source to be parallel to the slices so that the penumbra and umbra regions can be calculated using equation (2.14). If the extended light source is not parallel to the slices, a virtual light source is created by using an orthogonal projection of the original light source (as shown in Figure 2.31 and Figure 2.32).

Figure 2.32: Computed and exact penumbra regions (virtual light versus original light)

If the angle between the normal of the extended light source and the volume slicing direction is small, the virtual light source generated by the above orthogonal projection will not introduce artifacts. If the angle is large (since the slicing direction is the half-way vector between the eye vector and the light vector, the maximum angle is 45°), the virtual light will change the distribution of the penumbra region (as shown in Figure 2.32). Here, we use a parallel planar occluder and receiver to analyze the approximation error. In Figure 2.32, the penumbra region is smaller on the left side and bigger on the right side for the original extended light source, while the virtual light generates a penumbra region of the same size on both sides. In the cases where the light is small or not close to the occluder, and/or the receiver is not far from the occluder, the difference in the penumbra region will be small. A variable convolution kernel can be used to adjust the distribution of the penumbra region.

(2) Dealing with discretized shadow buffers

The above description in section 2.4.1 dealt with continuous convolution functions. The soft shadow algorithm is accurate mathematically. However, since we convolve the shadow buffer in screen space, we need to handle discrete pixels. This implementation can introduce some artifacts.

From equation (2.14), we know the penumbra region \Delta x is calculated as L \cdot \Delta Z / Z. In slice-based volume rendering, \Delta Z is the distance between two adjacent slices. Since \Delta Z is very small, \Delta x may also be very small. When \Delta x is transformed to the screen plane, it may be smaller than two pixels. In screen space, we accumulate \Delta x until it is greater than two pixels. Then, we do the convolution using a kernel size equal to the integer part of the accumulated \Delta x, and the remainder is carried into the next accumulation. Therefore, we need to keep the position of the most recently convolved slice, and use it to calculate the accumulated \Delta x. Here, Z is the distance from the light source to the most recently convolved slice, and \Delta Z is the distance from that slice to the slice following the current slice.
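A sketch of this bookkeeping; the names are illustrative and the two-pixel threshold follows the text:

// Accumulate the per-slice penumbra width (in screen pixels) until it
// covers at least two pixels, then convolve with the integer part and
// carry the remainder forward, as described above.
struct PenumbraAccumulator {
    double pending = 0.0;            // accumulated dx not yet convolved

    // Returns the box-kernel size (in pixels) to convolve with now,
    // or 0 if the accumulated width is still below two pixels.
    int update(double dxThisSlice) {
        pending += dxThisSlice;
        if (pending < 2.0) return 0;
        int kernel = static_cast<int>(pending);   // integer part
        pending -= kernel;                        // remainder carried to the next slices
        return kernel;
    }
};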

The above convolution calculation can cause some inaccuracy. Since we keep only one accumulated shadow buffer, when the accumulated \Delta x is greater than two pixels, the contributions of all the slices between the most recently convolved slice and the current slice are convolved together. This is an implementation problem in dealing with discrete pixels, not a problem with the mathematics. Using a high-resolution shadow buffer can improve the accuracy, as can obtaining the shadow value with bilinear interpolation.

(3) Calculating average light size

In our soft shadow algorithm, box kernels are used to do the convolution. Given a light source with an irregular shape, we first calculate its center and average size, then use these as the light center to which the occluder is projected and the light size L in equation (2.14) to calculate ∆x .

The penumbra region depends on the average light size; the light shape is not considered. The effect of the light shape on the penumbra could be incorporated by extending the convolution method of Soler and Sillion [127] to handle occluders with values in [0,1], as required by our algorithm.

Compared with the convolution technique [127] by Soler and Sillion, our method has some advantages. First, we do not need to approximate the occluder and the receiver: they are rendered slice by slice, so the occluders and the receivers are parallel slices during the volume rendering, and we just need to create the virtual light. Second, we use a slice-based volume rendering method, so our soft shadow algorithm deals with visibility values in the range [0,1], not just 0 or 1. Also, we model the light attenuation slice by slice, so we can generate self-shadows.

Similar to the soft shadow algorithm in [127], the disadvantage of our soft shadow algorithm is that it is an approximate method. The orthogonal projection of light sources and the convolution of the shadow buffer using the accumulated ∆x introduce some approximation.


2.4.3 Volumetric Soft Shadow Results

The soft shadows, including umbra and penumbra, for extended light sources are shown in Figures 2.33 – 2.38, where the extended light source is a round area light. In the soft shadows of the rings (Figure 2.33) and the robots (Figure 2.34(b)), there is a penumbra region due to the extended light source. Compared to the hard shadows (Figure 2.7 and Figure 2.34(a)), the soft shadows have penumbra regions. The farther the receiver is from the occluder, the more blurred the shadow. For example, in Figure 2.34(b), the shadows near the feet and the legs are hard, and the shadows of the body and the head become soft.

Figure 2.33: Soft shadows of rings


Figure 2.34: Robots: (a) hard shadows, (b) soft shadows

Figure 2.35 shows the shadow caused by the blue object passing through the translucent rectangular parallelepiped. The image in Figure 2.35(a) is the hard shadow, while the image in Figure 2.35(b) is the soft shadow. At the top entrance, the penumbra region is quite small, so there is nearly no difference between the two images. As the shadow traverses the rectangular parallelepiped and comes out, the penumbra region becomes obvious for the soft shadow in Figure 2.35(b), compared to the hard shadow in Figure 2.35(a).


(a) (b) Figure 2.35: Soft shadow passing through the translucent rectangular parallelepiped. (a) Hard shadow. (b) Soft shadow

The soft shadow of the Bonsai tree is shown in Figure 2.36. Compared with the hard shadow in Figure 2.14(b), the Bonsai tree with soft shadows is more realistic. Also, the soft shadow of the hypertextured object is shown in Figure 2.37. The corresponding hard shadow of the hypertexture is in Figure 2.12.

Figure 2.36: Soft shadows of Bonsai tree


Figure 2.37: Soft shadow of a hypertextured object

In Figure 2.38, a beam of light passes through a hole in an opaque planar occluder (modeled in the light attenuation, but not displayed in the image), then traverses the translucent rectangular parallelepiped. In this image, soft shadows are implemented, so the light beam expands into the penumbra region. Also, due to the light attenuation as the light beam traverses the rectangular parallelepiped, the resulting light intensity is lower than the original light intensity.


Figure 2.38: A scene with a beam of light that passes through a rectangular parallelepiped, with soft shadows implemented

Our method is a fast soft shadow calculation method. It keeps only one accumulated shadow buffer which stores the convolved shadow values. The contribution of the extended light source is integrated slice by slice using a convolution technique.

2.5 Volumetric Shadows and Soft Shadows for Mixed Volumetric and Polygonal Scenes

2.5.1 Shadow Algorithm

Some visualization applications require volumetric and geometrical objects to appear together in a single image. Based on the shadow algorithm for volumetric data in section 2.3.2, we can also generate shadows for scenes of mixed polygonal and volumetric data. We assume the polygons are opaque in this section to aid in our description of the algorithm.


Our shadow algorithm for mixed polygonal and volumetric data has two stages.

First, the polygons are partially rendered with respect to both the viewer and the light source. For the second stage, slice-based rendering is used to render the volumes, and any relevant polygon information is composited into the scene slice by slice with proper shadow attenuations. The depth information from the rendering of opaque polygons is combined with the shadow buffer which stores the accumulated light attenuation to determine the shadow value. Illumination of both polygons and volumes is thus calculated during this second stage. Next we will explain our algorithm in detail.

The first stage is to render the polygons. Since our purpose is to generate shadows, the polygons are rendered with respect to both the viewer and the light source. At this stage, the light attenuation is not determined for the polygons, so no illumination is calculated and the intermediate rendering results are stored for final rendering. For the image generation, we store the z value with respect to the viewer into the z-buffer, and store the object color and normal into two textures. For the shadow generation, we save the z value with respect to the light into another texture.

The second stage is to render the volumes and composite the polygon information slice by slice. Similar to the shadow algorithm for volumes only in section 2.3.2, the volume is sliced along the half-way vector between the eye vector and the light vector, in order to keep track of accurate light attenuation information. The image buffer is aligned with the eye, and the shadow buffer is aligned with the light source.


Polygons in a volumetric scene can be surrounded by transparent air, or be inside a semi-transparent volumetric participating medium. If there are several volumetric datasets in a scene, we treat them as a single large volume. The positioning relationship between the polygons and the non-empty volume is shown in Figure 2.39, with respect to the slicing direction. Region 1 can cast shadows through the volume, as well as on objects within region 1. In region 2, volumetric shadows are needed, as well as volumetric shadowing of interior polygons. For region 3, shadows are cast from the prior regions.

Figure 2.39: Position relationship between polygons and volume with respect to the slicing direction (region 1: polygons in front of the volume; region 2: polygons inside the volume; region 3: polygons behind the volume)

The polygons at different regions are processed in different ways. All the polygons in region 1 are rendered in one step. Whether a point is in shadow or not is determined by the depth value with respect to the light stored in the texture. After rendering the polygons in this region, the shadow of the polygons is used for the first volumetric slice.

Region 2 is rendered slice by slice. The contribution of the volume data at the current slice is composited into the frame buffer with the z-test enabled so that only the volume in front of the polygons is rendered. The polygons in region 2 are rendered slice by slice, because the light attenuation by the volumetric data changes slice by slice.


Whether there are polygons in the current slice is determined by the depth information of the polygons. If there are polygons at the current slice, the part of the polygons in this slice is rendered. The contribution of the polygons in the current slice is composited into the frame buffer with the opacity set to 1.0 at the corresponding pixels. Similarly, the contributions of the volumes and the polygons in this slice on the light attenuation are composited to the accumulated shadow buffer aligned to the light source for the illumination of the next slice. In region 2, the shadow value of a pixel is determined by a corresponding pixel on the accumulated shadow buffer (as shown in Figure 2.4).

The polygons in region 3 are rendered in one step in a similar way to the polygons in region 1, but they are different in how they are shadowed. We cannot just use the shadow z-buffer to determine the shadow values for the polygons in region 3, since we need to consider the shadows cast by the volume in region 2. So, we use both the accumulated shadow buffer and the z-buffer to determine the shadow values for the polygons in this region. The opacity from the z-buffer is set to 1 or 0 depending on whether there is a polygon occluder. We then take the maximum opacity value as the shadow value for the pixels corresponding to the polygons in region 3.
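A sketch of the region 3 decision, assuming a hypothetical depth comparison against the shadow z-buffer (the bias value is illustrative, not a value from the text):

#include <algorithm>

// Shadow value for a region-3 polygon pixel: combine the accumulated
// volumetric attenuation with the binary polygon occlusion from the
// shadow z-buffer by taking the larger opacity.
double region3ShadowOpacity(double accumulatedAlpha,   // from the shadow buffer, in [0,1]
                            double depthFromLight,     // depth of this point w.r.t. the light
                            double shadowMapDepth) {   // closest occluder depth stored for the light
    const double bias = 1e-3;                          // illustrative depth bias
    double polygonAlpha = (depthFromLight > shadowMapDepth + bias) ? 1.0 : 0.0;
    return std::max(accumulatedAlpha, polygonAlpha);
}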

The above shadow algorithm for mixed volumes and polygons applies to all the possible configurations of volumes and polygons, without any restriction on the geometric positioning and overlap of the volumes and polygons. For special cases, pure volumes are rendered in region 2, and pure polygons are rendered in region 1.

The region division aids in the efficiency of the algorithm. Polygons surrounded only by air can use a slice-skipping technique and are rendered faster. We summarize the shadow calculation of the three regions in Table 2.1.


Region 1: Shadow z-buffer
Region 2: Projective shadows + Volume slices
Region 3: Shadow z-buffer + Accumulated shadow buffer

Table 2.1: Shadow determination of the three regions

The shadow algorithm combining volumes and polygons using texture-based rendering is demonstrated with the flow chart in Figure 2.40.

In cases of semi-transparent polygonal objects, we cannot render the polygons in one step, because the light attenuation is accumulated slice by slice. Also, we cannot set the opacity to 1.0 and use the depth stored in the z-buffer to do the z-test for the rendering of the volumetric data. So, the first stage, which renders the polygons first and stores the depth information with respect to the light into a texture and the depth with respect to the eye into the z-buffer, is unnecessary. Both the polygonal and volumetric objects are rendered slice by slice, with respect to both the eye and the light source. This is actually the case of pure region 2.


Stage 1: render the polygons with respect to the eye, storing their color and normal into two textures and their depth into the z-buffer; then render the polygons with respect to the light, storing their depth into another texture.

Stage 2: render and illuminate the polygons in region 1, composite them into the frame buffer, and add their contribution to the accumulated shadow buffer; then, slice by slice, render and illuminate the volume and the polygons in region 2, composite them into the frame buffer, and add their contribution to the accumulated shadow buffer; finally, render and illuminate the polygons in region 3 and composite them into the frame buffer.

Figure 2.40: Flow chart of the shadow algorithm combining volumes and polygons

Using the above shadow algorithm, we have implemented shadows for scenes including both volumes and polygons. Figure 2.41 and Figure 2.42 show some polygonal mushrooms under the volumetric Bonsai tree. Figure 2.41(a) is the image without shadows and Figure 2.41(b) is with shadows. The tree casts shadows on the bottom plate and the container as well as on the mushrooms. Compared with the image without shadows in Figure 2.41(a), the shadows in Figure 2.41(b) provide the spatial relationships and make the scene more informative and realistic. In Figure 2.42, some mushrooms cast shadows on the bottom plate, and the other three mushrooms are in the shadows of the Bonsai tree.

Figure 2.43 shows the shadows cast from three mushrooms. The mushrooms are represented as polygons, while the bottom plate is represented as volumetric data. We can see the polygonal mushrooms cast shadows on themselves and on the plate. Figure 2.44 shows a scene in which a dart is pointing at the rings. Here, the dart is a polygon model, and the rings and the plate are volumetric data.

(a) (b) (c) Figure 2.41: Bonsai tree and mushrooms (a) without shadows, (b) with shadows, (c) with soft shadows

Figure 2.42: Bonsai tree and mushrooms


Figure 2.43: Shadows of mushrooms

Figure 2.44: Shadows of rings and a dart

Figure 2.45 and Figure 2.46 are two scenes in which polygons are inside semi-transparent volumetric data. Figure 2.45 shows a polygonal teapot inside a volumetric translucent medium. We can see the shadows cast by the teapot on itself and through the translucent medium. In Figure 2.46, light enters the room from the back, and a polygon-defined desk resides in the smoky room. Here, the smoke is modeled using Perlin's turbulence function [112].

Figure 2.45: A scene of a teapot inside a translucent cube

Figure 2.46: A scene of a desk inside a smoky room


2.5.2 Soft Shadow Algorithm

The soft shadows are generated by convolving the shadows with respect to the center of the extended light source, which is discussed in section 2.4.1. When there are polygons in the scene, we will render the polygons slice by slice in the same way as the volume, no matter where the polygons are (as shown in Figure 2.47). This is because the accumulated shadow buffer is convolved slice by slice, even for the pure polygons. So, we treat both the volumes and the polygons as a whole volume.


Figure 2.47: The whole region rendered slice by slice

Similar to the shadow algorithm in section 2.5.1, there are two stages for the soft shadow algorithm. The first stage is mainly to render the polygons with respect to the viewer and store the depth information of the polygons into the z-buffer.

At the second stage, texture-based rendering is used to render the volumes and polygons and to generate soft shadows slice by slice. In order to generate soft shadows for polygons, the polygons are not rendered in one step as in section 2.5.1. Instead, they are rendered slice by slice using two clipping planes, regardless of whether they are inside the volumetric material or not. Therefore, the range of the volume to be rendered slice by slice is determined by both the volumes and the polygons.

At each slice, the volumes in the current slice are rendered with the z-test enabled, and the part of the polygons belonging to the current slice is then rendered. Their contribution to the light attenuation is used to update the shadow buffer. The updated accumulated shadow buffer is then convolved to prepare for the next slice. Since the polygonal objects are modeled only by the boundary polygons of solid objects, the contribution of the solid interior of the polygonal objects must also be added to the shadow buffer used for the convolution. In this way, we obtain realistic soft shadows.
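A rough per-slice sketch of this update is shown below; it assumes SciPy's uniform_filter as the blur and a fixed kernel size, whereas the thesis derives the convolution footprint analytically from the extended light source, so treat it only as an illustration of the data flow.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def soft_shadow_slice_step(shadow_buffer, slice_opacity, kernel_size=5):
    """One slice of the soft-shadow pass (illustrative sketch only).

    shadow_buffer : light attenuation accumulated so far, seen from the light
    slice_opacity : opacity contributed by the volume samples and the polygon
                    pieces that fall inside the current slice
    kernel_size   : blur footprint (a stand-in for the analytically derived
                    convolution kernel of the extended light source)
    """
    # Composite the current slice's occlusion into the accumulated buffer.
    shadow_buffer = shadow_buffer + (1.0 - shadow_buffer) * slice_opacity
    # Convolve the buffer so the next slice sees a blurred (soft) occluder.
    return uniform_filter(shadow_buffer, size=kernel_size)
```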

For semi-transparent polygonal objects, the first stage is unnecessary, because its main function is to store the depth information with respect to the eye in the z-buffer for the z-test during the rendering of the volumetric data at the second stage. With semi-transparent polygons, the rendering of the volumetric data cannot use the z-test for z-culling.

We have generated soft shadows for scenes with both volumes and polygons, using the above soft shadow algorithm.

Figure 2.48 shows the soft shadows of the mushrooms. We can see that the soft shadows have penumbra regions. Figure 2.41(c) and Figure 2.49 show the soft shadows of the Bonsai tree dataset and the mushrooms. In Figure 2.49, we can notice a higher degree of blur in the shadows of the regions farther from the plane, such as the shadows of the top part of the tree, while the shadows of the two small components in front of the tree remain fairly sharp, since they are close to the plane. These soft shadows look more realistic than the hard shadows in Figure 2.43, Figure 2.41(b) and Figure 2.42.


Figure 2.48: Soft shadows of mushrooms

Figure 2.49: Soft shadows of Bonsai tree and mushrooms

2.6 Multiple Light Scattering

In order to simulate light transport in participating media with a high albedo, multiple scattering cannot be ignored. In this section, we explain how we implement multiple scattering using a slice-based rendering method and incorporate it into the shadow algorithm by displaying clouds, a high-albedo participating medium. The clouds are modeled as a collection of ellipsoids, and Perlin's fractal function [112] is used to perturb the density distribution. Figure 2.50 shows the clouds with the light coming from behind them. The clouds look unnaturally dark, because only light attenuation is modeled. Below, we explain our implementation of multiple scattering and its effects on the cloud appearance.

Figure 2.50: Clouds without multiple scattering

For multiple scattering, the light intensity at a point P is the sum of the direct energy from the light source that is not absorbed by intervening particles and the energy scattered to P from all other particles. Figure 2.51 is a schematic showing forward scattering and back scattering among particles. The calculation of multiple scattering requires accounting for the scattering from all directions. We let I(P,ω) be the intensity at each point P and each light flow direction ω , which can be expressed as:

I(P,ω) = I_0(ω) · e^{−∫_0^D τ(P−tω) dt} + ∫_0^D g(s,ω) · e^{−∫_0^s τ(P−tω) dt} ds        (2.15)

where I_0(ω) is the original light intensity in direction ω, τ is the extinction coefficient of the participating media, D is the depth of P in the media along the light direction, and

g(s,ω) = ∫_{4π} r(P−sω, ω, ω′) · I(P−sω, ω′) dω′        (2.16)

represents the light from all directions ω′ scattered into direction ω at the point P. If we denote P − sω as x, then r(x,ω,ω′) is the bi-directional scattering distribution function (BSDF), which determines the percentage of light incident on x from direction ω′ that is scattered in direction ω. We can treat r(x,ω,ω′) = a(x) · τ(x) · p(ω,ω′) [48][92], where a(x) is the albedo of the media at x, τ(x) is the extinction coefficient of the media at x, and p(ω,ω′) is the phase function.

A full multiple scattering algorithm must compute this quantity for all light flow directions, which would be very expensive. Nishita et al. [108] take advantage of the strong forward scattering characteristics of many volumetric materials and limit the sampled light flow directions to sub-spaces of high contribution. Harris et al. [48] further approximate multiple forward scattering only in the light direction, producing good cloud images; however, they do not model back scattering. As pointed out by Max [92], without back scattering the edges of very dense clouds are not illuminated properly. In this thesis, we model both multiple forward scattering and back scattering, and for efficiency we approximate the multiple scattering along the light direction based on the strong forward scattering characteristic of clouds.

We approximate the integration of equation (2.16) over a solid angle γ around the light direction. Here, ω' is within the solid angle γ along the light direction l.

g_k^f = ∫_{γ_A} a_k(x) · τ_k(x) · p(l, l_{γ_A}) · I(x, l_{γ_A}) dl_{γ_A}        (2.17)

g_k^b = ∫_{γ_B} a_k(x) · τ_k(x) · p(l, l_{γ_B}) · I(x, l_{γ_B}) dl_{γ_B}        (2.18)

Figure 2.51: A schematic of light transport (a: forward scattering; b: forward scattering and back scattering)

The above g_k^f represents the forward scattering and g_k^b the back scattering. γ_A and γ_B are the solid angles for forward scattering and back scattering, respectively, and l_{γ_A} and l_{γ_B} are directions within γ_A and γ_B. The subscript k indicates that the values are taken from slice k.

Now, assume we have accurately calculated the flux for each point in a plane perpendicular to the light direction. Letting I(x,l) be the light distribution in a slice, the above formulas for g_k^f and g_k^b can be calculated using a convolution over the light intensity I(x,l). The convolution calculation is well supported and easy to implement in slice-based rendering. This shows that multiple scattering can be modeled as light diffusion along the light direction. Kniss et al. [68][69] also use a convolution technique to model the light diffusion, but they only model multiple forward scattering.

If only multiple forward scattering is considered, we have the recurrence relation:

I_k = g_{k−1} + T_{k−1} · I_{k−1}  for 2 ≤ k ≤ N,   and   I_1 = I_0        (2.19)

where T_{k−1} = e^{−τ_{k−1}} is the transparency of slice k−1. The recurrence relation says that the light incident on slice k is equal to the intensity scattered to slice k from slice k−1 plus the direct light transmitted to slice k. Here, g_{k−1} is calculated using a convolution operation based on (2.17).

For multiple forward scattering, we only need to keep the intensity of the previous slice, calculate the multiple forward scattering g_{k−1} using a convolution over I_{k−1}, and add it to I_k. Figure 2.52 shows the clouds with multiple forward scattering only. The clouds look brighter than the clouds in Figure 2.50 without multiple scattering.
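The recurrence can be written as a short loop; the sketch below uses a Gaussian blur as the convolution kernel and per-slice arrays of extinction values, both of which are assumptions for illustration rather than the thesis implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def march_forward_scattering(slice_tau, I0, albedo, sigma=2.0):
    """March the light front slice by slice using I_k = g_{k-1} + T_{k-1} I_{k-1}
    (equation 2.19), with g_{k-1} approximated by a convolution over the previous
    slice's intensity in the spirit of equation (2.17)."""
    intensity = np.full_like(slice_tau[0], I0, dtype=float)   # I_1 = I_0
    for tau in slice_tau:                                      # slices along the light
        transparency = np.exp(-tau)                            # T_{k-1} = e^{-tau}
        # Forward-scattered energy: blurred, albedo- and tau-weighted intensity.
        g = gaussian_filter(albedo * tau * intensity, sigma=sigma)
        intensity = g + transparency * intensity               # equation (2.19)
    return intensity
```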

From the physical viewpoint, when light is scattered forward, some light also scatters backwards. In Figure 2.51(a), P1 receives energy from P2 by forward scattering, while in Figure 2.51(b), P1 also gets energy from P3 by back scattering. In order to implement back scattering, we keep the light intensity of the slices which contribute to the current slice k. The light intensity stored is the sum of the direct light and the multiple forward scattering. The back scattering is calculated using a convolution operation based on (2.18). Once a slice obtains new energy from back scattering, it adds this to the stored light intensity and then bounces energy back to its upper slice. This process continues until the slice k to be illuminated is reached. After the illumination, the total energy at slice k is scattered forward to slice k+1. For the practical implementation, we maintain a limited number of slices, for example three slices, and calculate their back scattering contribution to the current slice.

Figure 2.52: Clouds with multiple forward scattering only

Figure 2.53: Clouds with both multiple forward scattering and multiple back scattering

If we add back scattering to the clouds in Figure 2.52, we get the clouds in Figure 2.53. As pointed out by Max [92], as energy bounces around, the edges become brighter, as does the interior region of the clouds.

Our shadow algorithm for combining volumes and polygons also works for high- albedo media. In Figure 2.54, a polygonal airplane flies above clouds. The clouds are modeled with light attenuation and multiple scattering.


Figure 2.54: An airplane flying above the clouds

Our multiple scattering method is a forward and backward scattering method using a convolution technique. It is an approximate and efficient method that works well for participating media with strong scattering along the light direction, for example clouds. Our method also models indirect scattering, as shown in Figure 2.55. One limitation of our method is that scattering along directions perpendicular to the lighting direction is not modeled, so it may not work well for applications such as scattering in skin. In these situations, the diffusion model [129][96] can be applied to simulate the multiple scattering.


Figure 2.55: Indirect scattering


2.7 Conclusions

In this chapter, we have described an algorithm to model the light attenuation in slice-based volume rendering. This algorithm models the light attenuation with respect to the light source and generates shadows, using a 2D buffer to store the accumulated opacity. In terms of running time, rendering with shadows takes less than twice the time of rendering without shadows. The algorithm therefore has the advantage of saving both storage and running time.

The basic shadow algorithm has been extended to projective textured light sources. Projective textured lights are used to create images with special effects or for quantitative analysis. In some images lit by projective textured lights, the light attenuation can be seen visually. The algorithm also works for multiple light sources.

We propose an analytic volumetric soft shadow algorithm to deal with extended light sources and generate soft shadows with penumbra and umbra for volumetric scenes. Our soft shadow algorithm is a fast analytic method using a convolution technique. We discuss several factors which may affect the accuracy of the soft shadows, and we compare the generated soft shadows with the shadows produced by the basic shadow algorithm.

Based on the shadow and soft shadow algorithm for volumetric data, we have extended the algorithm to generate shadows and soft shadows for scenes including both volumes and polygons. The polygons are first rendered with respect to both the eye and the light source, and the depth information is retrieved. A slice-based volume rendering is then used to render the volumes. During the volume rendering, polygons are composited into the volumes slice by slice in a depth-sorted order. We have also implemented soft shadows for combined volumes and polygons. This shadow algorithm handles all combinations of volumes and polygons, without any restriction on the geometric positioning and overlap of the volumes and polygons.

We also describe how to implement multiple scattering for high-albedo participating media, and incorporate multiple scattering with our light attenuation model. We use a convolution technique to approximate the multiple forward scattering and back scattering for clouds, a high albedo participating medium.

Our shadow algorithm can now generate shadows or soft shadows for point lights, parallel lights, projective textured lights and extended light sources. It can deal with both volumes (including volumetric datasets and hypertextured objects) and polygons, and it combines multiple scattering with the light attenuation model. Our shadow algorithm is thus a complete system for shadow generation.


CHAPTER 3

TIME-VARYING INTERVAL VOLUMES

3.1 Introduction

With the widespread use of high performance computing systems, application simulations such as Computational Fluid Dynamics (CFD) and the Finite Element Method (FEM) are capable of producing large datasets on curvilinear or unstructured grids. These simulations also tend to be time-varying, adding another dimension to the problem. Additionally, they produce multiple attributes, such as density, momentum and energy, at each of the sample points (nodes). It is valuable to visualize the relationship between the time steps and the relationship between these attributes. Along with the need for interactive visualization of curvilinear and unstructured data sets, there is a need for better data visualization tools that allow navigating the data set in a more intuitive manner.

Recently, Bhaniramka et al. [12] have shown that interval volumes can be computed interactively and automatically for arbitrary polyhedral cells using a fast isosurfacing algorithm, and they prove that their algorithm correctly produces a triangulation of a (d−1)-manifold for an input of d-dimensional structured or unstructured grids [13]. During our cooperative work with Bhaniramka et al. [14], we addressed some of the above issues by using interval volumes as a region-of-interest extraction algorithm. We further built upon this work to develop new visualization techniques for effective visualization of interval volumes. Compared with other direct volume rendering methods, this geometry-based volume rendering can segment the volume data and highlight the boundary surfaces between the interval volumes.

For time-varying datasets, a traditional method is to take a snapshot of the data for each time step and generate an animation from the time series. This method is useful, but it relies on human memory and cognitive abilities to tie together spatio-temporal relationships. An alternative method is to display the movement of the time series data in a single image using direct rendering of high-dimensional data. In this dissertation, we are more interested in the second approach.

We have extended the interval volume algorithm to time-varying three-dimensional volumetric datasets, and rendered the time-varying interval volumes directly [167]. Our motivation is to visualize the integrated interval volumes over the time period in a single image and show the relationship of the interval volumes between different time steps. In this way, we can see how interval volumes change with time in one view.

3.2 Previous Work

Previous work that relates to our research primarily focuses on unstructured volume rendering, interval volumes and high-dimensional scientific visualization.


3.2.1 Unstructured Volume Rendering

Shirley and Tuchman [125] presented an algorithm for hardware accelerated rendering of unstructured tetrahedral grids by approximating the projection to screen space using a set of triangles. Grids consisting of different cell types are first decomposed into a tetrahedral representation using simplicial decomposition techniques [1][94]. Williams extended Shirley and Tuchman's approach to implement direct projection of other polyhedral cells in the HIAC rendering system [155]. Recently, with the advent of programmable graphics hardware, a tremendous amount of work has been done on implementing the Shirley-Tuchman algorithm on graphics hardware using the programmable vertex and fragment shader pipelines on GPUs [144][159][70]. As an alternative to projection, polyhedral cells can also be rendered using the method of [145].

3.2.2 Interval Volumes

Fujishiro [41] introduced interval volumes as a solid fitting algorithm. A few applications of interval volumes were presented in [44][42]. Fujishiro computed a tetrahedralization of the interval volume by computing, within each cell, the intersection of the two convex polyhedra enclosed by the isosurfaces given by the algorithm of [86]. Nielson [105] computed the tetrahedralization by first decomposing each cube of the grid into five tetrahedra, and then used an efficient lookup table to compute the interval volume within each simplex and decompose it into tetrahedra. The tetrahedralization was constructed manually by analyzing all the possible intersections of a tetrahedron with an interval enclosed by two isosurfaces. Bhaniramka et al. have shown that interval volumes can be computed interactively and automatically for arbitrary polyhedral cells using a fast isosurfacing algorithm [12][13]. Banks [8][9] counted the cases for a family of visualization techniques, including iso-contours and interval volumes. For interval volumes of a time-varying dataset, Ji et al. [60] tracked the interval volumes using higher dimensional isosurfacing. However, they rendered iso-contour surfaces of the interval volumes and did not directly render the time-varying interval volumes.

3.2.3 High-Dimensional Visualization

Hanson et al. [45][46][47] introduced a general technique, as well as an interactive system, for visualizing surfaces and volumes embedded in four dimensions. In their method, 3D scalar fields are treated as elevation maps in four dimensions, in the same way 2D scalar fields can be viewed as 3D terrains. Bajaj et al. [6] developed an interface that provides "global views" of scalar fields independent of the dimension of their embedding space and generalized the object-space projection technique into a hyper-volume projection method. Lee et al. [77] study a recursive decomposition of a four-dimensional hypercube into a hierarchy of nested 4-dimensional simplices and discuss an application of this representation to multi-resolution representation of four-dimensional scalar fields. Woodring et al. [157][158] treated the time-varying data as four-dimensional data and applied high-dimensional slicing and projection techniques to generate an image hyperplane. Their technique generates a volume that is the projection of hyperplanes along a 4D projection vector, which can be rendered using traditional volume rendering techniques.


3.3 Review of High-dimensional Iso-surfacing Algorithm and Interval Volume Computation

Bhaniramka et al. presented a new algorithm in [12] for computing iso-surfaces in arbitrary dimensional data sets. The algorithm calculates iso-surface patches within each polyhedral cell of the d-dimensional grid and generates a set of (d-1)-dimensional simplices forming a piecewise linear approximation to the iso-surface. They presented a proof of correctness in [13] for the d-dimensional iso-surface construction and show that it correctly produces a triangulation of a (d-1)-manifold with boundary.

For a function f : R^d → R in any dimension, the interval volume is defined by I_f(α,β) = {(x_1,…,x_d) : α ≤ f(x_1,…,x_d) ≤ β}. Intuitively, the interval volume is the set of points enclosed between the two iso-surfaces corresponding to the iso-values α and β. For a d-dimensional grid, the interval volume is a d-dimensional subset of the grid and can be represented by a collection of d-simplices.

We can construct an interval volume using the high-dimensional iso-surface algorithm. The scalar field is lifted into one higher dimension, an iso-surface is constructed in that higher dimension, and the iso-surface is projected back down into the original dimension.

The interval volume algorithm proceeds as follows:

1. Let f(x_1,…,x_d) define a d-dimensional function.
2. Let the scalar values α, β (α < β) be the desired iso-values bounding the interval.
3. Let F(x_1,…,x_d,w) be the (d+1)-dimensional function given by F(x_1,…,x_d,w) = f(x_1,…,x_d) − (α(1−w) + βw), such that

   F(x_1,…,x_d,w) = f(x_1,…,x_d) − α  for w = 0,
   F(x_1,…,x_d,w) = f(x_1,…,x_d) − β  for w = 1.

4. Compute the zero-valued iso-surface S given by F(x_1,…,x_d,w) = 0 for 0 ≤ w ≤ 1.
5. Let π be the projection function mapping R^{d+1} to R^d, given by π(x_1,…,x_d,x_{d+1}) = (x_1,…,x_d). The desired interval volume I_f(α,β) is then given by π(S).
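As a small illustration of step 3 (a sketch under the assumption that f is available as a sampled array), the lifted function and the equivalent direct membership test can be written as:

```python
import numpy as np

def lifted_F(f_values, alpha, beta, w):
    """F(x, w) = f(x) - (alpha (1 - w) + beta w), so F(x, 0) = f(x) - alpha
    and F(x, 1) = f(x) - beta; its zero set for 0 <= w <= 1, projected along
    w, is the interval volume."""
    return f_values - (alpha * (1.0 - w) + beta * w)

def in_interval_volume(f_values, alpha, beta):
    """A point x lies in the projected interval volume exactly when
    F(x, w) = 0 has a solution with w in [0, 1], i.e. alpha <= f(x) <= beta."""
    return (f_values >= alpha) & (f_values <= beta)
```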

Figure 3.1 provides a two-dimensional analogy for the interval volume algorithm.

Here, the 2D function is f(x,y)=|x|+|y|, and we are interested in the region α ≤ f(x,y) ≤ β.

We construct a 3D volume according to step 3 above. The zero-valued iso-surface in this volume is triangulated using an iso-surfacing algorithm, such as [11]. The final interval surface enclosed by the α- and β-valued iso-contours is given by the 2D projection of the iso-surface triangulation. Since two iso-contours for different iso-values can never intersect, flipped triangles are avoided in the resulting mesh.

(Panels, left to right: 2D interval surfaces f(x,y)=α and f(x,y)=β; the lifted 3D volume F with F(x,y,0)=f(x,y)−α and F(x,y,1)=f(x,y)−β; the interval surface triangulation.)

Figure 3.1: Two-dimensional illustration of the interval volume algorithm

For a regular three-dimensional scalar grid with hexahedral cells, the interval volume is constructed by lifting the hexahedral cells to four-dimensional hypercubes (as shown in Figure 3.2(a)) and building the iso-surface piecewise within each hypercube. The interval volume is then given by projecting the iso-surfaces to R^3. In our implementation, we pre-compute a lookup table for all possible intersections of the 4-dimensional iso-surface with a 4-cell. See [110] for the lookup table generation code. We lift the 3-cell to a 4-cell with 16 vertices. The iso-surface lookup table for a 4-cell with 16 vertices has 2^16 = 4^8 cases. However, some of these cases are never used in interval volume generation. Each vertex of a 3-cell can have an iso-value below both α and β, above α and below β, or above both α and β. Thus, there are only 3^8 = 6561 cases which apply to interval volumes in the iso-surface lookup table [60]. More generally, there are 3^{2^d} interval volume cases in an iso-surface lookup table for a d-cell in R^d. Once the table has been pre-computed, we use this lookup table to compute the iso-surface triangulations, piecewise, within each 4-cell. This approach provides considerable speedup to the iso-surface construction process.
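The three-states-per-vertex counting can be made concrete with the following sketch, which packs the vertex states of a 3-cell into a base-3 case index (the thesis builds the actual table from the lifted 4-cell, so this is only illustrative):

```python
def interval_volume_case(vertex_values, alpha, beta):
    """Pack the state of each cell vertex (below alpha / between alpha and
    beta / above beta) into a base-3 index; a hexahedral 3-cell with 8
    vertices therefore has 3^8 = 6561 possible interval volume cases."""
    index = 0
    for value in vertex_values:          # one ternary digit per vertex
        if value < alpha:
            state = 0
        elif value <= beta:
            state = 1
        else:
            state = 2
        index = index * 3 + state
    return index

# Example: all eight corners inside the interval map to the same case index.
print(interval_volume_case([0.5] * 8, alpha=0.2, beta=0.8))
```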


(a) 4D Hypercube (b) 4-prism with a tetrahedral base

Figure 3.2: Four-dimensional cell for 3D interval volumes


Since the high-dimensional iso-surface algorithm proposed in [12] is independent of the topology of the polyhedral cells in the grid, the same lookup table generation and interval volume computation algorithm can be applied to unstructured grids composed of other cell types, including tetrahedra, pyramids and prisms [1]. For a grid with tetrahedral cells, the 4-cells generated by this dimension elevation technique are not simplicial (4-tetrahedra). The new 4-dimensional grid consists of 4-prisms with tetrahedral faces, as shown in Figure 3.2(b). Each cell has 8 vertices and 16 edges. The iso-surface lookup tables have 3^4 = 81 interval volume entries in this case.

Since the iso-surface triangulation is consistent, the interval volume triangulation will also be consistent. This high-dimensional iso-surfacing algorithm guarantees the consistency in the table generation stage by using a lexicographical ordering of the iso- surface vertices and then building the convex hull incrementally, adding one vertex at a time in the specified order. This is similar to the scheme used by [104] and [94], which ensures canonical triangulations across cell boundaries and generates consistent meshes.

Interval volumes keep only the relevant cells, providing a more compact representation of the regions-of-interest in both structured and unstructured datasets. Interval volumes also provide a segmentation of the data into easily discernible regions. We discuss the visualization techniques for interval volumes in the next section.


3.4 Visualization Techniques of 3D Interval Volumes

For a three-dimensional static scalar field, 3D interval volumes are extracted using the four-dimensional iso-surface extraction algorithm and represented as 3-simplices (tetrahedra). They are directly rendered using the projected tetrahedron algorithm of Shirley and Tuchman [125]. We implement the Projected Tetrahedron algorithm using the vertex program of programmable graphics hardware [159]. For the visibility sorting, we use the MPVONC algorithm by Williams [153], which provides an O(n log n) algorithm for approximate visibility ordering.

The basic idea when rendering an interval volume is to encode the color using the data values. Figure 3.3 shows two different views of such an interval volume from a curvilinear grid of a turbine blade dataset. In this figure, the order of the colors (red, yellow, and pink) represents the value range from high values to low values. This method is useful for rendering interval volumes with a wide value range.

Figure 3.3: Linear colored interval volume of two views


In the following sections, we develop several new rendering techniques for 3D interval volumes that provide a more effective visualization of volumetric data sets.

The first technique is constant colored intervals. We compute small intervals and use a constant color and opacity to render the complete interval. This has the advantage of accurate integration and the applicability of simplified and more efficient rendering algorithms. For multiple intervals, we assign each interval a distinct color value. Since these intervals are used to generate a single tetrahedral mesh for the cumulative intervals, the grid needs to be sorted in a visibility order before rendering. For best results, we keep the number of intervals small and the adjacent colors visibly distinct. Figure 3.4 shows the Tapered Cylinder data set with four intervals colored red, green, blue and yellow with constant opacities.

Figure 3.4: Multiple constant colored intervals

The second technique is cycle-displayed intervals. In this technique, we build a separate display list or vertex array for each interval and loop through the intervals, displaying each using a constant color. Since the cells have a constant color, the volume integration can be calculated in any order and no sorting is required. Also, real-time rates can be achieved by limiting the interval size. This is similar to the Data Slicer technique [26]. Figure 3.5 shows three adjacent intervals rendered using this technique. The intervals contain 46K, 57K and 61K tetrahedra, respectively, which can be rendered at approximately 20 frames per second.

Figure 3.5: Interval volumes extracted by progressively increasing the mean interval value

The third technique is prioritized intervals. Maximum Intensity Projection (MIP) [52] is used to prioritize features of interest and prevent an important feature from being occluded by a less important feature, by bringing the important feature to the forefront. We use a painter's-like algorithm and let the user prioritize the intervals to ensure that the highest priority interval is the most visible. This is accomplished by sorting the intervals, not according to the viewing rays, but according to their priorities. Thus, we paint the higher priority intervals on top of the lower priority intervals. Here, we treat the distinct intervals as separate tetrahedral meshes and render them independently of the other intervals. Hence, no sorting of the tetrahedra is needed. Figure 3.6 shows two snapshots of this technique applied to the Tapered Cylinder data set. The priorities are reversed in the two figures to show different features of the flow.


Figure 3.6: Prioritized intervals – first figure (Y, B, G, R) shows external surface of the flow, second figure (R, G, B, Y) shows the internals of the flow

3.4.1 Intervals with Textured Boundary Surfaces

Direct volume rendering of interval volumes might generate fuzzy-looking images for certain datasets. Highlighting the boundary surfaces between the interval volumes would allow us to provide a better mental segmentation of the volume and prevent important internal features from being occluded. Our interval volume computation algorithm offers the ability to compute the boundary surfaces between interval volumes without extra computational overhead. The surfaces are extracted during the interval volume construction, simply by checking if all vertices of a face are on the boundary or not. This information is easily encoded into the isotables. The surfaces are rendered as semi-transparent polygons to prevent occlusion of internal features. Coupled with texture mapping and surface shading, these surfaces give better depth cues and make the visualization more informative for certain applications. Figure 3.7 shows an interval volume with the inner boundary surface rendered in yellow with specular shading. Notice how the internal surface of the interval, which would otherwise be indistinguishable, is easily highlighted using our technique.


Figure 3.7: Interval volume with boundary surface highlighted

We also apply this technique to flow visualization. For a flow field, we first generate an implicit flow field by pre-advecting the field and storing the advection information. We then use interval volumes to render the regions-of-interest along with the boundary surfaces between these intervals. The surfaces are textured to provide a better segmentation of the volume and, more importantly, to provide detailed flow information. Figure 3.8 shows a textured boundary surface using 2D texture mapping. In this figure, the surface is a stream surface, and the texture shows the streamlines and time-lines. A more detailed description of using interval volumes to extract flow volumes is presented in [161].

Figure 3.8: Boundary surface with 2D texture mapping


3.4.2 Multi-Attribute Datasets

Computing the interval volume using one attribute and then rendering this volume using another set of attributes allows a better spatial correlation between these attributes. Figure 3.9 shows an interval volume computed using density values and then rendered using the corresponding energy values. Animating this over time shows how the energy distribution changes in high-density regions of the grid.

Figure 3.9: Interval volume computed using density but rendered using energy

The interval volume algorithm can also be extended to implement constructive solid geometry operations on multi-attribute data sets using multi-pass algorithms. For example, to implement an intersection between ranges of two scalars defined over the field, we first extract an interval volume from the original grid using the range for the first scalar value. During the interval volume construction, we interpolate and store the second scalar values in the resulting grid as well. The output of the first pass is then used in a second interval volume construction pass using the range provided for the second scalar attribute. The resulting mesh would correspond to the geometric intersection of the two scalar ranges. The same algorithm can be applied in any dimension.
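The two-pass intersection can be sketched as below; extract_interval_volume stands for the high-dimensional iso-surfacing routine of Section 3.3 and is passed in as a callable, since the concrete interface here is an assumption rather than the actual implementation.

```python
def intersect_scalar_ranges(grid, extract_interval_volume, range_a, range_b):
    """Two-pass constructive-solid-geometry style intersection (sketch).

    extract_interval_volume(grid, attribute, lo, hi) is assumed to return a
    tetrahedral mesh restricted to lo <= attribute <= hi, with all vertex
    attributes interpolated onto the output mesh.
    """
    # Pass 1: restrict the grid to the range of the first scalar; the second
    # scalar is interpolated and carried along on the resulting grid.
    mesh_a = extract_interval_volume(grid, "scalar_a", *range_a)
    # Pass 2: restrict the pass-1 mesh to the range of the second scalar;
    # the result is the geometric intersection of the two scalar ranges.
    return extract_interval_volume(mesh_a, "scalar_b", *range_b)
```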


Figure 3.10 shows an example from a flow visualization application. Here, we consider two attributes: implicit value and advection time. Figure 3.10(a) and 3.10(b) are the interval volumes, computed using the implicit value and advection time in the flow, respectively. The stream volume is first generated using the range of implicit values as iso-values. The stream volume is then truncated to a range of the advection time steps by applying the interval volume algorithm to the volume in 3.10(a). Figure 3.10(c) is the intersection of Figure 3.10(a) and 3.10(b).


(a) (b) (c)

Figure 3.10: Intersection of interval volumes for two attributes

3.5 Direct Rendering of Time-Varying Interval Volumes

Time-varying interval volumes are extracted from time-varying scalar fields using a five-dimensional iso-surface algorithm. To compute the time-varying interval volume, we first create a five-dimensional scalar field F(x,y,z,t,w), such that F(x,y,z,t,0) = f(x,y,z,t) − α and F(x,y,z,t,1) = f(x,y,z,t) − β. Then, the interval volume α ≤ f(x,y,z,t) ≤ β can be extracted by first computing the zero iso-surface of the five-dimensional function F(x,y,z,t,w), and then projecting the resulting iso-surface along the w axis to four-dimensional space. For a time-varying scalar field with hexahedral cells, we should note that the interval volume entries of the iso-surface lookup table for a 5D hypercube are too numerous to be processed and stored in main memory. Since a 5D hypercube contains 32 vertices, the table would contain 2^32 = 4G entries. As pointed out in [60], not all four billion cases are possible: only 3^16 ≈ 43M entries are possible for interval volumes. However, this size may still be too large to be processed in core. Our solution is to compute the entries of the lookup table at runtime and cache them in a hash table which is small enough to fit into main memory. In this dissertation, we use this caching method to store the 5D isosurface lookup table.
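A minimal sketch of this runtime caching is given below; the entry-construction routine is passed in as a callable, since its exact form is part of the table generator and is not reproduced here.

```python
class CachedIsoTable:
    """Memoize 5D iso-surface table entries on demand instead of holding
    all 2^32 (or even the 3^16 interval volume) cases in memory."""

    def __init__(self, compute_entry):
        self.compute_entry = compute_entry   # builds the triangulation for a case
        self.cache = {}                      # case index -> list of simplices

    def lookup(self, case_index):
        # Compute the entry the first time this case is seen, then reuse it.
        entry = self.cache.get(case_index)
        if entry is None:
            entry = self.compute_entry(case_index)
            self.cache[case_index] = entry
        return entry
```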

For a time-varying grid with tetrahedral cells, each 5-cell after the dimension elevation has 16 vertices and 40 edges, with the 4-prisms shown in Figure 3.2(b) as its faces. The iso-surface lookup tables have 3^8 = 6561 interval volume entries in this case.

As discussed in Section 3.3, the interval volume of a d-dimensional grid is a d-dimensional subset of the grid and can be represented by a collection of d-simplices. Here, the time-varying interval volumes are composed of 4-simplices.

There are two possible schemes to render these d-simplices to the image space. One possibility is to repeatedly slice the d-simplices parallel to different coordinate axes until 3-simplices (tetrahedra) or 2-simplices (triangles) are obtained and rendered. This scheme is analogous to rendering time-varying isosurfaces [12], but allows slicing at non-integral time steps to compute interpolated interval volumes between consecutive time steps. Figure 3.11 shows one example of this time slicing. The left and right images are the interval volumes with respect to the two time steps t1 and t2. The middle image is the corresponding interval volume at the time value t = (t1 + t2)/2.

Figure 3.11: Results of time slicing

An alternative scheme is the direct rendering of the d-simplices into the image space by integrating along all the remaining axes. In our work, we are more interested in the direct rendering of the 4-simplices for time-varying interval volumes, in order to visualize the integrated interval volumes over time and to understand the distribution and relationship of these volumes across time steps. The direct rendering of time-varying interval volumes makes it easy to understand time-varying structured and unstructured volumetric fields in one image. However, this direct rendering is technically challenging, and there is no previous work on how to render 4-simplices directly to the 2D image space.

Direct volume rendering of time-varying interval volumes involves projecting the 4-simplices to 3D space, decomposing the projected 4-simplices into 3-simplices, and finally projecting the 3-simplices to the image space. We explain these steps in the following subsections.


3.5.1 Projection of 4-simplices to 3D

As we know, each 4-simplex extracted from the 5D iso-surface lookup table has five vertices with coordinates (x, y, z, t). Since our motivation is to show the movement of interval volumes across time, the 4-simplex is projected to 3D along the time t axis.

During the projection, each vertex obtains a ∆t value. The ∆t value is calculated similarly to the ∆z in the projected tetrahedron algorithm, but along the time dimension: a ray is cast along the time direction, and ∆t is the length of the ray segment that passes through the 4-simplex. Since the projection is from 4D to 3D, a vertex in 3D has a non-zero ∆t value if the ray in the t dimension has two intersections with the 4-simplex, and ∆t is the difference of the two t values at those intersections.

3.5.2 Classification of Projected 4-simplices

The five projected vertices generally enclose a volume in three dimensions, except in some degenerate cases where they form a triangle, a line, or a point. In order to calculate ∆t and tetrahedralize the projected 4-simplices, the projected 4-simplices are classified into two general cases and four degenerate cases, based on the spatial relationship of the five projected vertices in 3D space. Figure 3.12 illustrates the six cases of the 4-simplex projection. For the general cases, one case has one vertex inside the tetrahedron formed by the other four vertices (class 2), and the other has no vertex inside a tetrahedron composed of the other four vertices (class 1). Some degenerate cases still enclose a volume in three dimensions: four vertices are coplanar (class 3), three vertices are collinear (class 4), or two vertices are coincident (class 5). Class 3 has two sub-cases: four vertices (P1, P2, P3, P4) are coplanar in class 3(a), and P5 is inside the triangle P1P2P3 in class 3(b).


Figure 3.12: Classification of projected 4-simplex

A projected 4-simplex with 5 vertices is classified step by step using the flow chart in Figure 3.13.

The classification proceeds as follows: if any two vertices are coincident, the projection is class 5; otherwise, if any three vertices are collinear, it is class 4; otherwise, if any four vertices are coplanar, it is class 3(b) when one vertex lies inside the triangle of three others and class 3(a) otherwise; otherwise, if one vertex lies inside the tetrahedron of the other four, it is class 2; otherwise, it is class 1.

Figure 3.13: Flow chart of the classification of the projected 4-simplex
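The decision sequence of Figure 3.13 can be expressed as a short classification routine; the geometric predicates below are simplified, tolerance-based sketches (EPS is an assumed threshold), and the point-in-tetrahedron test that separates class 1 from class 2 is omitted.

```python
import numpy as np
from itertools import combinations

EPS = 1e-9  # assumed tolerance for the degeneracy tests

def classify_projected_4simplex(p):
    """Classify five projected 3D vertices p[0..4] following Figure 3.13."""
    def coincident(a, b):
        return np.linalg.norm(a - b) < EPS
    def collinear(a, b, c):
        return np.linalg.norm(np.cross(b - a, c - a)) < EPS
    def coplanar(a, b, c, d):
        return abs(np.dot(np.cross(b - a, c - a), d - a)) < EPS

    if any(coincident(p[i], p[j]) for i, j in combinations(range(5), 2)):
        return "class 5"
    if any(collinear(p[i], p[j], p[k]) for i, j, k in combinations(range(5), 3)):
        return "class 4"
    if any(coplanar(p[i], p[j], p[k], p[l]) for i, j, k, l in combinations(range(5), 4)):
        return "class 3"   # distinguishing 3(a)/3(b) needs a point-in-triangle test
    # General position: class 2 if one vertex is inside the tetrahedron of the
    # other four, otherwise class 1 (inside test omitted in this sketch).
    return "class 1 or class 2"
```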

Here, we classify the projected 4-simplices into two general cases and four degenerate cases. The degenerate cases generate fewer decomposed tetrahedra in the next section and improve the rendering performance. We could instead consider only the general cases and fold the degenerate cases into them. For example, classes 3(b), 4 and 5 can be combined into class 2, with the vertex P5 moved from the face P1P2P3, the line P1P3, or the vertex P4 to the inside of the tetrahedron P1P2P3P4. Similarly, class 3(a) can be combined into class 1, with the vertex P4 moved from the position coplanar with P1P2P3 to the position on the opposite side of P5 with respect to the face P1P2P3. This generalization of the cases would generate more tetrahedra (many of them with nearly zero volume) and/or would require extra checks to distinguish them at the tetrahedralization stage.

In the application examples of this dissertation, we have encountered all the cases except class 2 (as shown in Table 3.6). The reason class 2 does not occur is that the main purpose here is to directly render time-varying interval volumes, so we choose the t axis as the projection direction when projecting the 4-simplices into 3D. Since the vertices are located on the edges of the hypercube, a projection along one axis will not generate class 2, where one vertex is inside the tetrahedron composed of the other four vertices. If we projected the 4-simplices into 3D along an arbitrary direction, such as (1, 1, 1, 1), class 2 would occur.

3.5.3 Tetrahedralization of Projected 4-simplices

The projected 4-simplices can be decomposed into tetrahedra in 3D space based on the above classification in Figure 3.12. Table 3.1 was constructed by hand and shows the possible decomposition of each class.

Class 1: P1P2P3P4 and P1P2P3P5; or P1P2P4P6, P1P3P4P6, P2P3P4P6, P1P2P5P6, P1P3P5P6 and P2P3P5P6
Class 2: P1P2P3P4; or P1P2P3P5, P1P2P4P5, P1P3P4P5 and P2P3P4P5
Class 3(a): P1P2P3P5 and P1P3P4P5; or P1P2P4P5 and P2P3P4P5; or P1P2P5P6, P2P3P5P6, P3P4P5P6 and P1P4P5P6
Class 3(b): P1P2P3P4; or P1P2P4P5, P1P3P4P5 and P2P3P4P5
Class 4: P1P2P3P5; or P1P2P4P5 and P2P3P4P5
Class 5: P1P2P3P4

Table 3.1: Original decomposition of the projected 4-simplices

We use a constant colored hypercube (actually a constant colored plate, which is a 2D array of constant colored hypercubes) to test our direct rendering algorithm. Our first attempt renders the decomposed tetrahedra without considering ∆t, and we get an image with obvious patterns on the plate (shown in Figure 3.14). This result is not correct; what we expect is a uniform plate. We carefully track and analyze the distribution of the tetrahedra inside a cube in three dimensions to identify the problem.

As we know, many 4-simplices are extracted from the lookup table for each hypercube cell, and they are then projected to 3D and decomposed into tetrahedra. By keeping track of the tetrahedral components inside each cell, we find that each tetrahedron is in the right place and that the tetrahedra as a whole fill the cell. However, we also find that the projections of a set of 4-simplices overlap in three dimensions. For example, for a cube cell from our constant plate example as shown in Figure 3.14, some space is shared five times by the projections of 4-simplices, while other space is shared only four times. This uneven overlapping distribution of the projected 4-simplices causes a non-constant opacity throughout the cell.

Figure 3.14: Incorrect rendering result of a constant plate in four dimensions

Then, what did we miss? The key observation behind the incorrect opacity is that the length of the projection through time cannot be ignored during the projection of the 4-simplices along the time axis. During the projection, each vertex obtains a ∆t value. The vertices with a non-zero ∆t value are illustrated by the red points in Figure 3.15 for each case of the projected 4-simplices. In class 1, the new vertex P6 is the intersection point of the triangle P1P2P3 and the line P4P5; the ∆t value at P6 is the difference between the t value interpolated inside P1P2P3 and the t value interpolated along the line P4P5. For class 2, the ∆t value at P5 is the difference between P5.t and the t value interpolated inside the tetrahedron P1P2P3P4. In class 3(a), the new vertex P6, which is the intersection point of the lines P1P3 and P2P4, has a non-zero ∆t value equal to the difference of the two t values interpolated along P1P3 and P2P4. For class 3(b), the ∆t at P5 is non-zero and equal to the difference between P5.t and the t value interpolated inside the triangle P1P2P3. In class 4, P5 has a non-zero ∆t value, which is the difference between P5.t and the t value interpolated between P1 and P3. In class 5, P4 and P5 are coincident after the projection, and P4 has a non-zero ∆t, which is the difference between P4.t and P5.t.

After determining the vertex with a non-zero ∆t for each class, the decomposition of the projected 4-simplices into tetrahedra must ensure that the vertex with a non-zero ∆t is a vertex of each decomposed tetrahedron. The decomposition then becomes a unique process. The unique decomposition is listed in the right column of Figure 3.15 for each case of projected 4-simplices. For each decomposed tetrahedron, one vertex has a non-zero ∆t value, and each point inside the tetrahedron has an interpolated ∆t value. The ∆t distribution inside the tetrahedron accounts for the overlapping distribution of the projected 4-simplices in three dimensions and can be understood as a density distribution which occludes the light. According to the definition of the extinction coefficient by Max [92], the extinction coefficient τ = ρA, where ρ is the number of particles per unit volume and A is the projected area of each particle. Here, the ∆t distribution inside the tetrahedron changes the density ρ to ρ·∆t. Therefore, the ∆t distribution contributes to the extinction coefficient and to the opacity. The length of the projection through time cannot be ignored during the rendering; the ∆t distribution inside the tetrahedron contributes to the final opacity of the rendered tetrahedron.

As we know, the optical model for direct volume rendering in three dimensions is:

I_λ(x,r) = ∫_0^L C_λ(s) · τ(s) · e^{−∫_0^s τ(t) dt} ds        (3.1)

where I_λ(x,r) is the amount of light of wavelength λ coming from the ray direction r that is received at location x on the image plane, L is the length of the ray r, τ is the extinction coefficient, and C_λ is the light of wavelength λ reflected and/or emitted at location s in the direction of r.

This optical model still applies to the rendering of time-varying interval volumes; the only difference is that the extinction coefficient τ in 3D is replaced by τ·∆t for the 4D rendering of time-varying interval volumes.

The transparency along a ray is then expressed as follows for the rendering of the 4-simplices:

T = exp(−∫_z ∫_t τ dt dz) = exp(−∫_z (τ·∆t) dz) = exp(−τ·∆t·∆z) = e^{−τ·∆t·∆z}        (3.2)

Here, τ is the extinction coefficient. The opacity along a ray is represented as

α = 1 − e^{−τ·∆t·∆z}        (3.3)

We tested the above constant colored hypercube again, considering both the ∆t along the time axis and the ∆z along the viewing direction, and below we prove the correctness of our direct volume rendering using this example. From the 5-dimensional isosurface algorithm, twenty-four 4-simplices are extracted for the constant colored hypercube, and the 3D projections of these 4-simplices are shown in Figure 3.16. In Figure 3.16, the first 12 projected 4-simplices compose a prism which is one half of the cube, and the last 12 projected 4-simplices compose a second prism which is the other half of the cube. Next we prove that these overlapped projected 4-simplices form a constant cube in three dimensions. Due to symmetry, we only prove the correctness for one prism, i.e. one half of the cube.

We first prove the case of a constant tetrahedron. Figure 3.17(a) shows a constant tetrahedron P1P2P3P4, which is composed of four projected 4-simplices, each with a non-zero ∆t value at one vertex (represented by the red points). By adding the interpolated ∆t values from the four tetrahedra, every point P inside the tetrahedron has a constant ∆t, as shown in equation (3.4). Figure 3.17(b) shows the distribution of ∆t inside the tetrahedron. This is what we expect for a constant tetrahedron composed of four projected 4-simplices extracted from a 5D isosurface lookup table.

∆t_P = ∆t_1 + ∆t_2 + ∆t_3 + ∆t_4
     = (vol_{P2P3P4P}/vol_{P1P2P3P4})·∆t_{P1} + (vol_{P1P3P4P}/vol_{P1P2P3P4})·∆t_{P2} + (vol_{P1P2P4P}/vol_{P1P2P3P4})·∆t_{P3} + (vol_{P1P2P3P}/vol_{P1P2P3P4})·∆t_{P4}        (3.4)
     = ∆t   (if ∆t_{P1} = ∆t_{P2} = ∆t_{P3} = ∆t_{P4} = ∆t)

The transparency along a ray passing through any point P inside the constant tetrahedron is:

T = T_1 · T_2 · T_3 · T_4 = e^{−τ·∆t_1·∆z} · e^{−τ·∆t_2·∆z} · e^{−τ·∆t_3·∆z} · e^{−τ·∆t_4·∆z} = e^{−τ·(∆t_1+∆t_2+∆t_3+∆t_4)·∆z} = e^{−τ·∆t·∆z}        (3.5)

The opacity along the ray is α = 1 − e^{−τ·∆t·∆z}. This shows that the opacity along a ray passing through any point P inside the constant tetrahedron depends only on the constant ∆t inside the tetrahedron and on the ∆z from the projection along the z-axis.
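The constancy argument of equations (3.4) and (3.5) is easy to check numerically; the sketch below interpolates per-vertex ∆t values barycentrically (the vertex coordinates and ∆t values are made-up test data):

```python
import numpy as np

def tet_volume(a, b, c, d):
    """Unsigned volume of the tetrahedron abcd."""
    return abs(np.dot(np.cross(b - a, c - a), d - a)) / 6.0

def interpolated_dt(point, verts, dt):
    """Sum of the barycentrically weighted per-vertex ∆t values (equation 3.4)."""
    p1, p2, p3, p4 = verts
    total = tet_volume(p1, p2, p3, p4)
    weights = [tet_volume(point, p2, p3, p4), tet_volume(p1, point, p3, p4),
               tet_volume(p1, p2, point, p4), tet_volume(p1, p2, p3, point)]
    return sum(w / total * d for w, d in zip(weights, dt))

verts = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
# Equal per-vertex ∆t values give the same interpolated ∆t at every interior
# point, so the opacity 1 - exp(-tau * dt * dz) is constant as well.
print(interpolated_dt(np.array([0.25, 0.25, 0.25]), verts, [0.5, 0.5, 0.5, 0.5]))
```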

Class 1: P1P2P4P6, P1P3P4P6, P2P3P4P6, P1P2P5P6, P1P3P5P6, P2P3P5P6
Class 2: P1P2P3P5, P1P2P4P5, P1P3P4P5, P2P3P4P5
Class 3(a): P1P2P5P6, P2P3P5P6, P3P4P5P6, P1P4P5P6
Class 3(b): P1P2P4P5, P1P3P4P5, P2P3P4P5
Class 4: P1P2P4P5, P2P3P4P5
Class 5: P1P2P3P4

Figure 3.15: Classification and tetrahedralization of the projected 4-simplex (the unique decomposition for each class)

Figure 3.16: Twenty-four projected 4-simplices in 3D for a constant colored hypercube (all the red points have a non-zero ∆t value, except that the red points in components (11), (12), (23) and (24) have a value of ∆t/2)


Figure 3.17: Projected tetrahedral components and ∆t distribution inside a constant tetrahedron

Based on the proof for the constant tetrahedron, the first four components in Figure 3.16 compose a constant tetrahedron (as shown in Figure 3.18(a)). The next eight components (5–12) compose the rest of the prism, with P1P3P5P7 as a face and the vertex P2 as the fifth vertex (case 3(a) in Figure 3.15). We divide this part into four tetrahedra (shown in Figure 3.18(b) to (e)) according to the right column of Figure 3.15. For each tetrahedron, we now get a constant distribution of ∆t inside it. For example, for the tetrahedron P2P3P7P9 in Figure 3.18(b), the non-zero ∆t at P3 comes from component (9) in Figure 3.16, the ∆t at P7 from component (7), the ∆t at P2 from component (8), and the non-zero ∆t at P9 is the sum of the ∆t/2 values at P9 in the two components (11) and (12). The five components presented in Figure 3.18(a) to (e) form a complete constant prism. Similarly, the components (13–24) in Figure 3.16 form a second constant prism, and the two prisms compose a constant cube in three dimensions.


Figure 3.18: Components of a constant prism

3.5.4 Projection of 3-simplices to Image Space

We use an implementation of the Projected Tetrahedron (PT) algorithm of Shirley and Tuchman [125] to render the tetrahedra projected from the 4-simplices. The algorithm approximates a tetrahedron using one to four triangles, depending on the screen projection of the tetrahedron's vertices. We implement the PT algorithm using a vertex program in programmable graphics hardware [159].

Compared to the projection of normal tetrahedra, there is one difference in the rendering of time-varying interval volumes: the tetrahedra here have a non-constant ∆t distribution from the projection along the time axis. Therefore, when we calculate the opacity of the projected triangles, we must consider both the contribution of ∆t from the projection along the time axis and the contribution of ∆z from the projection along the z-axis. The opacity along the ray is α = 1 − e^{−τ·∆t·∆z} for the rendering of a 4-simplex. Since the zero-thickness vertices in the PT algorithm do not necessarily have zero ∆t thickness, and the vertex with non-zero thickness in the PT algorithm may have zero ∆t thickness, we cannot simply multiply ∆t and ∆z at each vertex and then interpolate the product inside the projected triangles. Instead, the bivariate function must be evaluated at each pixel; that is, we must multiply the interpolated ∆t and the interpolated ∆z for each pixel inside the projected triangles.

We develop a modified implementation of the Shirley and Tuchman algorithm using both vertex and fragment programs to account for the contributions of the ∆t and ∆z values. In the vertex program, we calculate ∆t and ∆z for each vertex of the projected triangles. Their contributions to the opacity are then multiplied in the fragment program for each fragment. For the above example of a constant colored plate, which is a 2D array of constant colored hypercubes, we now obtain the correct rendering of a constant plate, as shown in Figure 3.19.
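The essential point of the modified PT rasterization can be illustrated with a small per-fragment sketch (a CPU stand-in for the fragment program, with hypothetical array names):

```python
import numpy as np

def fragment_opacity(dt_interp, dz_interp, tau):
    """Opacity of one fragment of a projected 4-simplex: the thickness along
    time (∆t) and along the view direction (∆z) are interpolated separately
    across the triangle and multiplied only here, per fragment, because a
    vertex with zero ∆z may still carry a non-zero ∆t and vice versa."""
    return 1.0 - np.exp(-tau * dt_interp * dz_interp)

# Multiplying ∆t and ∆z at the vertices and interpolating the product would
# not reproduce the bivariate function and would give a wrong opacity.
```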


Figure 3.19: A constant plate in four dimensions

3.6 Visualization Techniques of Time-Varying Interval Volumes

In this section, we build upon our work on the computation and projection of time-varying interval volumes to develop visualization techniques for effective visualization of time-varying volumetric data sets. As discussed in the previous sections, the tetrahedra for time-varying interval volumes carry a ∆t distribution from the projection along the time axis and overlap one another in 3D space. This causes occlusion and compositing problems. In this section, we identify suitable visualization techniques for the time-varying interval volumes.

3.6.1 Temporal Color Coding

We can render the time-varying interval volumes directly from the extracted 4-simplices, using the projection methods discussed in section 3.5. Since our motivation is to visualize the relationship of interval volumes across time, the color of each vertex is encoded using its time value. For the overlapped vertices (shown as red points in Figure 3.15), the average t value of the two overlapped vertices is used for color encoding. In this section, an additive compositing operator is used to blend the projected 4-simplices into the image, so that we can easily interpret the movement of the interval volumes between different time steps.

Figure 3.20 is an example of the direct volume rendering result for a test function comprised of a simple wave over time. The color is encoded with the time t (as shown in Figure 3.21): green at t=t1, red at t=t2, and yellow for regions overlapped between t=t1 and t=t2. In Figure 3.20, we can see the transition from green, to yellow, to red as the field moves over time.

Figure 3.20: Direct rendering result of a time-varying interval volume


Figure 3.21: Temporal color encoding for two time steps
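A sketch of this two-time-step encoding is given below; the exact ramp shape is an assumption reconstructed from Figure 3.21, with the red channel ramping up to the midpoint and the green channel ramping down after it:

```python
def temporal_color(t, t1, t2):
    """Return (R, G) for a vertex time t: green at t = t1, red at t = t2,
    yellow near the midpoint, so regions covered at both times blend toward
    yellow under additive compositing (blue is unused)."""
    mid = 0.5 * (t1 + t2)
    if t <= t1:
        return 0.0, 1.0                       # pure green
    if t >= t2:
        return 1.0, 0.0                       # pure red
    if t <= mid:
        return (t - t1) / (mid - t1), 1.0     # red ramps up, green stays at 1
    return 1.0, 1.0 - (t - mid) / (t2 - mid)  # green ramps down, red stays at 1
```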

Figures 3.22 and 3.23 show time-varying interval volumes of a delta dataset and a vortex dataset, respectively, using the above color encoding. From these figures, we can see how the interval volumes move over time. We can also notice that some new components are generated over time, such as the purely red one in Figure 3.23. Figure 3.24 shows time-varying interval volumes for the NASA Tapered Cylinder dataset, which is a curvilinear grid. Similarly, this figure shows the movement of the interval volumes over time.

Figure 3.22: Time-varying interval volume for the delta dataset

Figure 3.23: Time-varying interval volumes for vortex dataset (two time steps)


Figure 3.24: Time-varying interval volumes for the NASA Tapered Cylinder dataset

We also extend the technique to multiple time steps. Figure 3.25 shows interval volumes of the vortex dataset for three time steps. The color mapping with time t is as follows (as shown in Figure 3.26): blue at t=t1, green at t=t2, red at t=t3, cyan for overlapped regions between t=t1 and t=t2, and yellow for overlapped regions between t=t2 and t=t3. Consequently, for the region overlapped among t=t1, t=t2 and t=t3, the color is white under the additive compositing operator. In this figure, areas where contours are appearing over time are predominantly red, while areas that fade over time are predominantly blue. Areas which maintain a high isovalue over time appear white.

Figure 3.25: Time-varying interval volumes for vortex dataset (three time steps)



Figure 3.26: Temporal color encoding for three time steps

3.6.2 Highlighted Boundaries

Similar to the 3D interval volumes with embedded boundary surfaces in Section 3.4.1, we can extract the boundary iso-surfaces of time-varying interval volumes to highlight interior features. The boundary iso-surfaces are extracted during the construction of the 4D interval volumes without extra computational cost.

Given a time-varying interval volume defined by two iso-values α and β, and two time steps t1 and t2, there are four boundary surfaces at: (a) t=t1 and f(x,y,z,t)=α, (b) t=t1 and f(x,y,z,t)=β, (c) t=t2 and f(x,y,z,t)=α, (d) t=t2 and f(x,y,z,t)=β. The four surface boundaries, together with the time-varying interval volume, are illustrated in Figure 3.27 in the above order (a) to (d) from left to right. From the figure, we can see how the iso-surfaces change with time and with value.


Figure 3.27: Time-varying interval volume with four iso-surfaces highlighted (from left to right: (a) t=t1 and f(x,y,z,t)=α, (b) t=t1 and f(x,y,z,t)=β, (c) t=t2 and f(x,y,z,t)=α, (d) t=t2 and f(x,y,z,t)=β)

We can also extract time-varying iso-surfaces at f(x,y,z,t)=α or f(x,y,z,t)=β while t1 ≤ t ≤ t2 (as shown in Figure 3.28), by simply checking if the vertices are on a boundary or not. Similarly, we can obtain interval volumes at t=t1 or t=t2 while α ≤ f(x,y,z,t) ≤ β (as shown in Figure 3.29).
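The boundary check itself amounts to testing whether all vertices of a face lie on the same bounding constraint. The following sketch illustrates one way to classify a face; the type names and the floating-point tolerance are illustrative assumptions, not the dissertation's actual data structures:

```cpp
#include <array>
#include <cmath>

// Hedged sketch: a tetrahedron face is tagged as lying on one of the four
// boundary surfaces of the time-varying interval volume when all three of its
// vertices sit on the same boundary (t = t1, t = t2, f = alpha, or f = beta).
struct IVVertex { float f; float t; };

enum class Boundary { None, TimeT1, TimeT2, IsoAlpha, IsoBeta };

Boundary classifyFace(const std::array<IVVertex, 3>& face,
                      float t1, float t2, float alpha, float beta,
                      float eps = 1e-6f) {
    auto all = [&](auto pred) { return pred(face[0]) && pred(face[1]) && pred(face[2]); };
    if (all([&](const IVVertex& v) { return std::fabs(v.t - t1) < eps; })) return Boundary::TimeT1;
    if (all([&](const IVVertex& v) { return std::fabs(v.t - t2) < eps; })) return Boundary::TimeT2;
    if (all([&](const IVVertex& v) { return std::fabs(v.f - alpha) < eps; })) return Boundary::IsoAlpha;
    if (all([&](const IVVertex& v) { return std::fabs(v.f - beta) < eps; })) return Boundary::IsoBeta;
    return Boundary::None;
}
```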

Figure 3.28: Time-varying isosurfaces at: (a) f(x,y,z,t)=α and t1≤ t ≤ t2 , and (b) f(x,y,z,t)=β and t1 ≤ t ≤ t2


Figure 3.29: Interval volumes at: (a) t=t1 and α≤ f(x,y,z,t) ≤ β, and (b) t=t2 and α≤ f(x,y,z,t) ≤ β

The volumetric boundaries (time-varying iso-surfaces at a specific value, and interval volumes at a specific time) are rendered using the normal projected tetrahedron algorithm, without needing to consider the contribution of ∆t to the opacity. Also, a constant color is assigned to each volumetric boundary. For example, for several interval volumes at different time steps, a Maximum Intensity Projection (MIP) style technique can be employed to sort the interval volumes, not according to the viewing rays, but according to their priorities (here, the time), bringing an important interval volume at a specific time step to the forefront. Using this technique, we can render several interval volumes into one image and see how the interval volumes move over the time steps.

Figure 3.30 shows two interval volumes at t1 and t2 for the Tapered Cylinder dataset in one view using the MIP technique. Here, yellow represents the interval volume at t1, and red the interval volume at t2. In the first image, the interval volume at t2 is given higher priority, while the interval volume at t1 is given higher priority in the second image.

We can compare the two MIP images (for example, Figure 3.30(a) and Figure 3.30(b)) with the corresponding direct rendering result of the time-varying interval volumes (for example, Figure 3.24) to verify the correctness of the rendering.


Figure 3.30: Two interval volumes at t1 and t2 for the Tapered Cylinder dataset are rendered using MIP

3.7 Results and Analysis

All the results presented in this chapter have been generated using a PC with a QuadroFX 3000 graphics card and a Pentium IV 3.4 GHz processor.

We use our lookup table generation algorithm to compute the lookup table for different polyhedral cells and different dimensions. Table 3.2 presents interval volume lookup table statistics for our test cases. The table gives the maximum and average number of simplices over all possible combinations of vertex values. For example, the iso-surface lookup table for the 4-prism shown in Figure 3.2(b) has 256 (= 2^8) entries, but only 81 (= 3^4) entries are used for interval volumes. The maximum number of tetrahedra in the interval volume is 6 for any tetrahedron, which matches the maximum number of tetrahedra produced in [105]. The average number over the interval volume cases is approximately 3.9 tetrahedra.

For the hexahedral cells, entries in the four-dimensional hypercube iso-surface lookup table can have as many as 26 simplices. However, as previously noted, not every four-dimensional case can be realized by interval volumes. Our algorithm produces at most 22 simplices in the interval volume for any three-dimensional cube. We contrast this with [155], which divides the cube into five tetrahedra and constructs the interval volume in each tetrahedron. Within each tetrahedron the interval volume can require up to six simplices, giving a total of thirty tetrahedra for the cube.

Polyhedron                   Dimension   Table Entries   Average Simplices   Maximum Simplices
Tetrahedron                  3           3^4             3.96                6
4-simplex                    4           3^5             8.35                14
5-simplex                    5           3^6             17.27               30
Hexahedron                   3           3^8             12.09               22
5-prism with 4-prism face    5           3^8             19.71               34
4D hypercube                 4           3^16            41.25               128

Table 3.2: Interval volume lookup table statistics

For the Tapered Cylinder data set, the current implementation of our algorithm takes approximately 172 milliseconds per time-step to compute the interval volume. This number was computed using the average over 20 time-steps (13000 to 13190) for a constant interval size (0.9934 - 0.9944) using the density attribute. The average number of tetrahedra generated in this case was approximately 55.9K per time-step. Our iso-surfacing algorithm does a naive linear search through the grid cells for iso-surface intersection. Preprocessing schemes [122] can be used to speed up the interval volume computation considerably by skipping empty cells. In the above case, the interval volumes intersect approximately 9150 cells on average (the total number of tetrahedra is 55.9K and the average is 6 tetrahedra per cell), which is ~7.4% of the number of cells in the grid. A histogram of the data set indicates that a large portion of the values have rather low and insignificant density values. In addition to saving valuable rendering cycles, interval volumes allow skipping these irrelevant regions, which could otherwise occlude other interesting features in the data set.

The interval volume extraction time and the volume rendering time for the steady-state datasets are listed in Table 3.3. In this table, we select the iso-values corresponding to the images shown in this chapter.

The 4D interval volume computation time and the volume rendering time for the time-varying datasets are listed in Table 3.4. In this table, the 4D interval volume construction and decomposition time includes the time to calculate the entries of the iso-surface lookup table, the time to construct 4-simplices, and the time to decompose 4-simplices into tetrahedra. Due to the decomposition of 4-simplices into 3-simplices and the overlapping copies of the 3-simplices in 3D space, there are more tetrahedra for time-varying interval volumes.


Data set                                         Interval volume      Number of     Rendering time (frames per second)
                                                 construction time    tetrahedra    Constant color    Linear color
Turbine blade dataset (curvilinear, 51x8x121)    117 ms               252.0K        6.2 fps           3.64 fps
Tapered Cylinder (curvilinear, 64x64x32)         172 ms               55.9K         27.2 fps          15.8 fps
Implicit flow dataset (64x64x64)                 377 ms               601.8K        2.53 fps          1.52 fps
Vortex dataset (128x128x128)                     440 ms               62.6K         24.5 fps          14.2 fps
Torus distance field dataset (256x256x256)       6,127 ms             1,868.7K      0.81 fps          0.48 fps

Table 3.3: 3D Interval volume computation and rendering performance

Data set                                      4D interval volume construction    Number of      Number of tetrahedra    Rendering time
                                              and decomposition time             4-simplices    (with volume)           (linear color)
Test function (2x40x20x20)                    430 ms                             82,840         234,726                 3.85 fps
Vortex (2x128x128x128)                        12.2 s                             319,304        882,044                 1.03 fps
Tapered Cylinder (curvilinear, 2x64x64x32)    18.5 s                             349,624        941,098                 0.97 fps
Vortex (3x128x128x128)                        22.5 s                             654,846        1,807,460               0.51 fps
Delta (2x111x126x51)                          46.8 s                             1,003,290      2,963,812               0.31 fps

Table 3.4: 4D interval volume computation and rendering performance

For a hexahedral grid, we extract the interval volume directly without pre-decomposing the hexahedra into tetrahedra. As shown in [12], the hexahedral algorithm generates a smaller number of simplices compared to previous algorithms, which do a tetrahedral decomposition of the input grid as a first step. Here, we compare the hexahedral grid and the tetrahedral grid using the vortex dataset for the same isovalue range. To generate the tetrahedral grid, we decompose each hexahedral cell into five tetrahedra [1]. Table 3.5 shows the number of extracted simplices for each case of the vortex dataset.

Cases                                     Number of 3- or 4-simplices
3D interval volume    Hexahedral grid     62,636
                      Tetrahedral grid    78,678
4D interval volume    Hexahedral grid     319,304
                      Tetrahedral grid    344,853

Table 3.5: Comparison of the number of extracted simplices

From Table 3.5, we can see the hexahedral grid generates about 10-20% fewer simplices. Since 3D interval volumes are rendered directly using the extracted 3-simplices, we can conclude that using the hexahedral cells directly makes the subsequent visualization process more efficient and interactive for 3D interval volumes. For 4D interval volumes, fewer 4-simplices take less time to project and decompose into 3-simplices. However, the direct rendering of the 4D interval volumes decomposes the projected 4-simplices into 3-simplices based on the classification in Figure 3.15. Table 3.6 shows the number of decomposed 3-simplices of each class for the test function and the vortex dataset. From this table, we can observe that the tetrahedral grid may well generate fewer decomposed 3D tetrahedra than the hexahedral grid, because the projected 4-simplices from the tetrahedral mesh are more likely to be Class 3b, Class 4 or Class 5, which generate fewer 3-simplices, while the projected 4-simplices from the hexahedral mesh are more likely to be Class 1 or Class 3a. So, for 4D interval volumes, there is a tradeoff between the time to project and decompose the 4-simplices into 3-simplices and the time to render the 3-simplices.

Data set                             Test function                Vortex dataset
Grid type                            Hexahedral    Tetrahedral    Hexahedral    Tetrahedral
Number of 4-simplices                82,840        87,970         319,304       344,853
Number of decomposed 3-simplices     234,726       128,320        882,044       508,809
  Class 1                            61,104        17,934         237,606       82,542
  Class 2                            0             0              0             0
  Class 3a                           128,744       16,896         455,712       56,432
  Class 3b                           0             4,794          6             16,371
  Class 4                            8,816         19,992         45,990        85,890
  Class 5                            36,062        68,704         142,730       267,574

Table 3.6: Comparison of the number of decomposed 3-simplices

3.8 Conclusions

We have shown how the high-dimensional iso-surfacing algorithm can be used to extract interval volumes and how interval volumes can be used for interactive and more informative volume visualizations by providing distinct and discernible layers of volumetric material. This interval volume rendering can segment the volume data and highlight the boundary surfaces between the interval volumes. Different rendering techniques have been demonstrated for interactive visualization of the data set.

We have also extended interval volumes to four dimensions by extracting 4-simplices from a five-dimensional grid. We have explained the direct rendering method for 4-simplices by projecting and decomposing the 4-simplices into 3-simplices, and using a modified hardware-implemented projected tetrahedron method. This technique allows us to render time-varying interval volumes directly and integrate multiple time steps into a single view, so that the movement of the interval volumes over time can be seen in one view. Different visualization techniques are employed to effectively visualize time-varying structured and unstructured data sets.


CHAPTER 4

IMPLICIT FLOW FIELDS

4.1 Introduction to Flow Visualization

Flow fields play an important role in the scientific, engineering and medical communities. Computational fluid dynamics is now capable of generating large simulation data sets composed of vector variables in three spatial dimensions. To analyze and understand these flow fields, scientific visualization, a computational process that converts numerical values to graphical images, has become an indispensable tool.

Much work has been devoted to visualizing flow fields, using advection techniques, such as streamlines, stream surfaces, and stream volumes, and texture generation techniques, such as spot noise and line integral convolution (LIC). While many flow visualization techniques effectively represent two-dimensional flow fields, extending these techniques to three-dimensional flow fields encounters problems, due to clutter, occlusion, and the lack of depth perception cues in three dimensions. Also, the expensive calculation of the advection in 3D flow visualization hinders the user when attempting to navigate through the flow field interactively, and limits the user's ability to control the flow representation and appearance. Effective and efficient three-dimensional flow visualization thus remains a challenging topic. Several problems exist for 3D flow visualization: how to represent the flow information effectively, how to avoid occlusion, how to implement the flow visualization efficiently, and so on.

4.2 Related Work

4.2.1 Geometry Techniques

The geometry techniques are based on the advection operator. The original representation is the streamline. Given some seed particles, the streamlines are constructed using a high-quality integration scheme, such as the fourth-order Runge-Kutta method.
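For reference, a single fourth-order Runge-Kutta advection step can be sketched as follows; the velocity sampler (interpolation of the grid, boundary handling, and so on) is assumed to be supplied elsewhere:

```cpp
#include <functional>

// Minimal sketch of one RK4 advection step, the kind of integrator used to
// trace streamlines from seed particles.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
};

using VelocityField = std::function<Vec3(const Vec3&)>;  // assumed sampler

Vec3 rk4Step(const Vec3& p, const VelocityField& v, float h) {
    Vec3 k1 = v(p);
    Vec3 k2 = v(p + k1 * (h * 0.5f));
    Vec3 k3 = v(p + k2 * (h * 0.5f));
    Vec3 k4 = v(p + k3 * h);
    return p + (k1 + k2 * 2.0f + k3 * 2.0f + k4) * (h / 6.0f);
}
```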

The main property of the streamlines is their ability to separate steady-state flows.

In order to make the classical streamlines more useful for three-dimensional flows, Zöckler, Stalling, and Hege applied the curve illumination model of Banks [7] to produce illuminated streamlines [168]. The illumination greatly improves depth perception in three-dimensional space.

To show the flow orientation in three-dimensional flow fields, stream ribbons, stream tubes and stream surfaces were introduced. Hultquist [56] stitched together several adjacent streamlines to build a stream surface. For steady-state flows, the stream surfaces can locally separate the three-dimensional flow. One disadvantage is that the flow direction tangential to the surface is lost.

Max, Becker and Crawfis [91] extended the idea to flow volumes. Instead of starting with a single curve from which to emanate the streamlines, they used a small surface patch to start the flow and constructed the flow volumes represented with tetrahedra. The projected tetrahedron method [125] was then used to render the flow volumes.

4.2.2 Texture Techniques

This category of techniques covers the entire domain of the flow field and provides good visual cues about the direction of the flow. These techniques are very effective for 2D flow fields, on a 2D surface within a 3D flow, or on slice planes through a 3D flow, but they are less useful for visualizing a whole 3D flow.

Spot noise, introduced by van Wijk [137], was amongst the first texture-based techniques for vector field visualization. Spot noise generates a texture by distributing a set of spots over the vector field. Each spot moves in the direction of the vector field at its position, and a pattern gradually develops that represents the flow field.

One limitation of the original spot noise algorithm was the lack of velocity magnitude information in the resulting texture. de Leeuw and van Wijk introduced the enhanced spot noise [35] to address this problem.

Line Integral Convolution (LIC) [18] is the most widely-used texture technique. The algorithm takes a white-noise scalar field and the vector field under study as input, and outputs another scalar field which correlates the noise along the direction of the input vector field. LIC is very effective in visualizing 2D vector fields. However, the computation of LIC is very expensive, even for 2D vector fields. Stalling and Hege [128] proposed a fast LIC to speed up the process.
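The core of LIC can be sketched as follows for a single pixel of a 2D field; this minimal version uses small fixed Euler steps and a box kernel purely for brevity, whereas practical implementations (and fast LIC in particular) use better integrators and reuse computations along streamlines:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hedged sketch: at each pixel, a white-noise texture is averaged along the
// local streamline, traced a few steps forward and backward. Field and noise
// are stored as row-major images of size w x h.
struct V2 { float x, y; };

float licPixel(int px, int py, int w, int h,
               const std::vector<V2>& field, const std::vector<float>& noise,
               int halfLen = 10, float step = 0.5f) {
    auto sample = [&](float x, float y, auto& img) {
        int ix = std::min(std::max(int(x), 0), w - 1);
        int iy = std::min(std::max(int(y), 0), h - 1);
        return img[iy * w + ix];
    };
    float sum = 0.0f; int count = 0;
    for (int dir = -1; dir <= 1; dir += 2) {                 // backward, then forward
        float x = px + 0.5f, y = py + 0.5f;
        for (int i = 0; i < halfLen; ++i) {
            V2 v = sample(x, y, field);
            float len = std::sqrt(v.x * v.x + v.y * v.y);
            if (len < 1e-6f) break;                          // stop at critical points
            x += dir * step * v.x / len;
            y += dir * step * v.y / len;
            sum += sample(x, y, noise); ++count;
        }
    }
    sum += sample(px + 0.5f, py + 0.5f, noise); ++count;     // center sample
    return sum / count;
}
```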


Some research in flow visualization extends LIC in different ways. Shen et al. [120] added directional cues into LIC by combining animation and introducing dye advection into the computation. Kiu and Banks [65] used a multi-frequency noise for LIC to show velocity magnitude. Shen and Kao [121][123] presented unsteady flow LIC (UFLIC) to incorporate time into the convolution.

Occlusion and interactive performance are challenges in extending LIC to 3D. Rezk-Salama et al. [113] used a 3D-texture mapping approach combined with an interactive clipping plane to address the problems of occlusion and interaction. Interrante and Grosch [57] also discussed some strategies to represent 3D flow with volume LIC.

Recent research on unsteady 2D flow fields has made considerable progress. Jobard et al. [61][62] introduced a Lagrangian-Eulerian texture advection technique for 2D vector fields at interactive frame rates. The algorithm produces animations with high spatio-temporal correlation. Van Wijk [139] proposed image based flow visualization (IBFV) for dense, 2D, unsteady vector field representation. Each frame of the flow animation is defined as a blend between a warped version of the previous image according to the flow direction and a number of background images. The basic IBFV has been extended to IBFV on curved surfaces [140][75] and to 3D vector fields [130].

4.2.3 Volume Rendering with Embedded Textures

Another dense global visualization of the 3D vector field is volume rendering with embedded textures, proposed by Crawfis and Max [27]. They embedded vector icons into the splat footprint used for volume rendering. The small billboard images are overlapped and composited together to build up the final images. In effect, they incorporated the vector direction into the direct volume rendering of the flow.

4.2.4 Hybrid Techniques

The above texture-related techniques for 3D flow visualization (such as volume rendering with embedded textures, volume LIC, and 3D IBFV) are mainly texture synthesis methods aimed at a global perception of the vector field. Avoiding excessive clutter and rendering only the regions of interest is a problem for these methods.

Li and Shen [82][124] proposed a hybrid method. They first constructed the geometry (streamlines in their case) and voxelized the geometry into a volumetric form. Each voxel stores a set of attributes. In the second phase, the attributes stored in the volume are used as texture coordinates to look up an appearance texture and generate texture-mapped streamlines. The advantage of the method is that the user can change the input textures and instantaneously visualize the rendering results. The limitation is that the geometry is fixed during the rendering and cannot be changed interactively.

4.2.5 Implicit Methods

Van Wijk [138] introduced the implicit stream surface. He associated a scalar field with the inflow boundary of the computational grid, and then, for each remaining grid point, traced a streamline backwards in the flow until it reached the boundary. The scalar field was evaluated at this location, and the grid point was assigned this scalar value. This amounts to a mapping of the 3D vector field to a 3D scalar field, R³→R. An iso-contour surface was then extracted from this resulting scalar field to provide the stream surface. The advantage of the implicit method is that a family of stream surfaces can be generated efficiently by varying the iso-value after the initial calculation of the scalar field.

Westermann et al. [147] also used an implicit method to convert the vector field to a scalar field by storing the advection time. They then rendered time surfaces using a level-set method, taking advantage of 3D texture mapping hardware. Their method is quite similar to van Wijk's implicit method, but without the need for the inflow mapping.

The streamball technique [16] is another implicit method in flow visualization. Streamballs are based on implicit surface generation techniques adopted from the well-known metaballs. In this method, a potential field is defined by a finite set of advected particles in the flow, or by streamlines, and the implicit surface is constructed as an isosurface of the potential function. The advantage of the streamball technique is that the streamballs split automatically in areas where divergence occurs, and merge automatically in areas of convergence, which conveys valuable information about the flow.

4.3 Research Motivation and Framework

The advection techniques can segment the flow field, but provide little information about the flow details and the global representation. The texture techniques provide a clear perception of the vector field for two-dimensional fields or flows across a surface in three dimensions, but do not work as effectively in three-dimensional fields, due to the loss of information when the three-dimensional data is projected onto a two-dimensional image plane and due to occlusion. Also, the expensive computational cost of 3D texture-based algorithms makes it impossible to achieve interactivity. Not much work has been done to combine the two techniques. In particular, very little work addresses mapping effective textures onto stream surfaces and time surfaces, and no work has addressed texture-mapped flow volumes.

For the generation of flow volumes, the traditional method constructs an explicit geometrical representation (tetrahedra) by advecting the vector field and renders the flow volumes using an unstructured volume rendering technique. This is not flexible when the flow volume changes: every time the boundary polygon is changed, the advection must be repeated to construct a new representation. For a large flow field, the advection and the construction at run time are expensive, so it is difficult to achieve interactivity. Also, unless a detailed refinement of the flow volume is specified for the interior, information inside the underlying flow volume could be lost in the linear interpolation.

Until now, previous work has not incorporated streamlines, time lines, stream surfaces, time surfaces and flow volumes together. Researchers either work on a single representation, or combine only streamlines and stream surfaces. A complete system including all the above representation types has not been published yet.

In order to solve some of the above problems, I proposed an implicit method to construct an implicit flow field. The implicit flow field is constructed in a pre-processing stage, which avoids the advection at run time and provides the texture coordinates for later texture mapping. The remaining task is how to render the implicit flow field. One motivation of my work is to achieve effective flow visualization by mapping textures onto stream surfaces, time surfaces and flow volumes to display the flow details. These textures are very helpful for understanding the flow properties. I, along with my colleague Daqing Xue, studied two visualization techniques to render the implicit flow field: an interval volume rendering technique, and a slice-based 3D texture mapping technique.

The slice-based 3D texture mapping technique renders the implicit 4-tuple flow field directly, without the inflow mapping to a scalar field, taking advantage of modern graphics hardware. The advantages of this rendering method are high interactivity and fine texture details rendered throughout the 3D flow volume, i.e., texture-mapped flow volumes.

Another motivation of this work is to implement a complete system, which incorporates streamlines, time-lines, stream surfaces, time surfaces and flow volumes.

The second rendering technique, interval volume rendering, is used to achieve such a complete system. The inflow mapping is necessary to obtain a scalar field on which the interval volume segmentation is applied. The flow volume is the extracted interval volume enclosed between two iso-surfaces, and the stream surfaces and time surfaces are iso-surfaces with respect to the scalar value and to the advection time. The 4-tuple attributes can be used as texture coordinates to map textures onto the stream and time surfaces to illustrate the flow details. For instance, streamlines can be mapped onto the stream surfaces using a one-dimensional mip-map texture.

As discussed before, occlusion is one problem for three-dimensional flow visualization. These two rendering techniques address the problem in different ways. In 3D texture mapping, we can control the opacity values through the 2D inflow textures to display the regions of interest. In the interval volume rendering, iso-values can be selected to show the regions of interest. Also, different opacity values are assigned to the volumes and to the surfaces to emphasize the surface details, and semi-transparent surfaces are used to show many segmentation surfaces.

The visualization framework of the implicit flow field is illustrated in Figure 4.1.


Figure 4.1: Visualization framework of the implicit flow field

4.4 Construction of the Implicit Flow Field

Given a flow field, we first pre-advect the field and store the flow information, represented as multiple attributes, at each sample point for later rendering. This multi-variate field which keeps the flow information is called an implicit flow field. This section describes how to construct such an implicit flow field from a given flow field.

van Wijk [138] proposed an implicit technique to generate stream surfaces, the surfaces constructed by advection path lines of a set of particles starting from a curve. He associates a scalar field with the inflow boundary of the computational grid, and then for each remaining grid point, he traces a streamline backwards in the flow until it reaches the boundary. The scalar field is evaluated at this location, and the grid point is assigned this scalar value. This amounts to a mapping of the 3D vector field to a 3D scalar field, R³→R. An iso-contour surface is then extracted from this resulting scalar field to provide the stream surface.

I have extended the implicit technique for volumetric flow visualization and proposed the concept of implicit flow field. Here we define a sample point as any location in three-dimensional space. In practice, we will generally associate a sample point with each vertex in either the underlying computation grid or a superimposed voxel grid. There are many attributes that can be derived or mapped onto each sample point.

Local operations, such as velocity magnitude, vorticity, etc., provide simple filters. For implicit flows, we associate, at a minimum, a termination surface ID indicating which surface the backward streamline intersected first, the coordinates on the termination surface in a local coordinate frame of that surface, and the advection time required for the flow to reach the termination surface (backwards, or conversely, the time required for a point on the boundary to reach the sample point). Figure 4.2 shows the information stored at a sample point in the implicit flow field. Currently, we maintain the four attributes mentioned above: termination surface ID, parametric position (u, v) on the surface, and the advection time. Hence, for each sample point, we store a 4-tuple, (f, u, v, t), containing these values. The construction of the implicit flow field from a 3D flow field is thus a function Φ: R³→R⁴, where Φ is a projected advection operator applied at each point in the field. This 4-tuple representation will be the basis for all of our renderings.


Figure 4.2: A diagram to show the construction of an implicit flow field
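A minimal sketch of this construction pass is shown below; traceBackward() stands in for the actual adaptive integrator and termination-surface intersection test, and all names are illustrative assumptions rather than the dissertation's actual code:

```cpp
#include <functional>
#include <vector>

// Hedged sketch: for every voxel, a streamline is traced backwards until it
// hits a termination surface, and the 4-tuple (f, u, v, t) is stored.
struct FlowTuple { int f; float u, v, t; };        // surface ID, (u, v), advection time

struct Hit { int surfaceId; float u, v, time; };   // result of the backward trace
using BackwardTracer = std::function<Hit(float x, float y, float z)>;

std::vector<FlowTuple> buildImplicitFlowField(int nx, int ny, int nz,
                                              const BackwardTracer& traceBackward) {
    std::vector<FlowTuple> field(static_cast<size_t>(nx) * ny * nz);
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                Hit h = traceBackward(i + 0.5f, j + 0.5f, k + 0.5f);  // voxel center
                field[(static_cast<size_t>(k) * ny + j) * nx + i] =
                    { h.surfaceId, h.u, h.v, h.time };
            }
    return field;
}
```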

Additional attributes, such as the maximum velocity magnitude along the streamline, average density along the streamline, etc., can also be calculated and stored in this preprocessing stage. Thus, in general, we have an operation computing a mapping from R³→Rⁿ.

The implicit technique has the following advantages for calculating and visualizing flow volumes and stream/time surfaces. First, it provides the texture coordinates for texture mapping in the later rendering process. With these texture coordinates, 2D textures can be applied to the flow volumes, and the extracted stream surfaces and time surfaces are no longer arbitrary triangulated surfaces; they are parameterized surfaces which are easy to texture map. Secondly, it provides the flexibility for the user to change the representation and appearance during the rendering. The representation and appearance of flow volumes are guided by the user-specified input texture, using the dependent texture support of modern graphics cards.

Thirdly, the advection is conducted in the pre-processing stage. This avoids the costly advection operations at run time and contributes to achieving interactivity. Finally, this technique improves the accuracy of the flow volume and eliminates the loss of interior flow details.

The visualization diagrams for van Wijk's implicit stream surface and our implicit flow volume are illustrated in Figure 4.3. In addition to the obvious volume versus surface difference¹, there are other differences between our technique and that of van Wijk. First, we either delay the specification of the scalar field on the inflow boundary (for the interval volume rendering technique), or eliminate the mapping onto a scalar field entirely (for the 3D texture mapping technique). This allows us to develop new flow volumes without having to re-compute the costly advection operations. Secondly, we associate several additional attributes with each sample point which allow for better user interaction and enhanced surface representations. Finally, we allow for the user specification of many arbitrary boundary surfaces, which we call termination surfaces, indicating the termination of the backwards advection process. These can be used to place a termination surface around each critical point, to allow for the specification of inflow and outflow boundaries [89], or as a user-controlled segmentation of the flow.

1 Van Wijk actually points out the extension of implicit stream surfaces to stream volumes, as well as a flow of ink metaphor.


Figure 4.3: Visualization diagrams for van Wijk’s implicit stream surface (top), and our implicit flow volume (bottom)

4.5 Rendering of Implicit Flow Fields Using the Interval Volume Approach

After constructing the implicit flow field, the next task is to render it. We studied two different techniques to model and render the implicit flow volumes: interval volume rendering, and slice-based 3D texture mapping. I explain the first technique in this section.

In order to illustrate the flow details on stream surfaces and time surfaces, the interval volume technique is used to incorporate semi-transparent (zero-thickness) surfaces with the flow volume.


Similar to van Wijk’s implicit stream surface, an inflow mapping from our 4-tuple attributes onto a single scalar value is needed so that we can apply an interval volume segmentation [14] on the scalar field. The inflow mapping is a mapping on the inflow boundary surface for the user to control the representation of the 3D flow. The inflow mapping is either a function distribution on the inflow surface (for example, Gaussian function), or some hand painting gray scale mapping. Figure 4.4 gives two examples of the inflow mapping.

Figure 4.4: Two examples of the inflow mapping: (a) a function; (b) hand painting

Now, the flow volume is the extracted interval volume enclosed between two iso-surfaces, and the stream surfaces and time surfaces are iso-surfaces with respect to the scalar value and to the advection time. The extracted flow volume can be rendered using any tetrahedron rendering technique. In my work, I chose the projection-based tetrahedron rendering implemented with modern graphics hardware [159][144]. The 4-tuple attributes can be used as texture coordinates to map textures onto the stream and time surfaces to illustrate the flow details. This interval volume rendering technique has the advantages of a flexible inflow mapping and of incorporating textured stream and time surfaces with the flow volumes. This semi-transparent surface texturing is not possible in 3D texture mapping, even with the most advanced volume shaders.

The following equation describes an implicit flow volume using this technique:

I_p = { p | α ≤ Φ( f(p), u(p), v(p), t(p) ) ≤ β }                    (4.1)

This equation determines the continuous set of points, p, which lie on or inside the implicit flow volume, I_p, specified as an interval of the scalar field mapping. Here, f is the termination surface ID of the advection point, u and v are the coordinates of the point on the termination surface, and t is the advection time with which the backwards advection from the sample point reaches this surface. The inflow function, Φ, provides a mapping of the implicit flow field to a single scalar distribution over the field, and α and β are two iso-values specifying the boundaries of the flow volume. We maintain the association between the sample points and their 4-tuple, (f, u, v, t), from Section 4.4. This information will be used in later subsections to enhance the flow volume appearance.
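In code, the membership test of Equation 4.1 is a simple range test on the inflow-mapped scalar; the following sketch assumes the inflow mapping Φ is supplied as a user callable, with illustrative type names:

```cpp
#include <functional>

// Hedged sketch of Equation 4.1: a sample point belongs to the implicit flow
// volume when Phi of its stored 4-tuple falls inside [alpha, beta].
struct Tuple4 { int f; float u, v, t; };
using InflowMapping = std::function<float(const Tuple4&)>;   // Phi: R^4 -> R

bool insideFlowVolume(const Tuple4& sample, const InflowMapping& phi,
                      float alpha, float beta) {
    float g = phi(sample);
    return alpha <= g && g <= beta;
}
```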

The scalar field distribution is used to extract the flow geometry using a high-dimensional iso-contouring routine [12][13], as discussed in Section 4.4. The resulting tetrahedra are then rendered using a hardware-accelerated projected tetrahedron renderer [159][144]. One of the advantages of this technique is that actual geometry is extracted, efficiently enough to allow the surface to be interactively changed. Furthermore, individual stream surface and/or time surface renderings can be seamlessly integrated into the volume rendering of the flow volume. Since these surfaces are embedded in the 3D interval volume (as shown in Figure 4.5), typical problems in integrating polygonal geometry with volume rendering are avoided. Moreover, these surfaces can be semi-transparent, with many nested stream surfaces or time surfaces composited together in the final rendering. The surfaces can also be easily texture-mapped, including opacity textures, to present additional depth cues, animation, surface reflections, and general clarity [43].

Figure 4.5: Interval volume with a boundary isosurface

4.5.1 Surface Shading and Textures

The interval volume algorithm, when applied to this implicit flow field, produces a geometric sub-volume whose boundaries correspond to stream surfaces and time surfaces. In this section we examine techniques to incorporate surface shading to aid in the volume visualization.

The stream surfaces and time surfaces are actually the boundary surfaces of the interval volumes. Our interval volume computation algorithm offers the ability to compute the boundary surfaces between interval volumes without extra computational overhead. The surfaces are extracted during the interval volume construction, simply by checking whether the vertices are on the boundary. This occurs when all vertices of a face have a value equal to one of the iso-values. Because each inner surface is shared by two tetrahedra, in order to avoid duplicating a face, we extract only the boundary surfaces corresponding to the lower iso-value in each interval volume, plus the boundary surface with the higher iso-value for the last interval volume.

These faces are tagged as boundary surfaces on the corresponding tetrahedra. The surfaces are rendered as semi-transparent polygons to prevent occlusion of internal features. During rendering, we render each tetrahedron. If a face of the tetrahedron belongs to a boundary for which the user has chosen surface shading, we then render that face. This is our normal rendering operation.

4.5.2 Textured Stream Surface Boundaries

Explicit stream surfaces or flow volumes are easily parameterized. One can think of these surfaces as swept surfaces resulting from some initial curve. The linear approximation of the curve has a specific ordering of the vertices in order to construct the triangulated surface (see [56]). Texture parameterization or mapping for implicit surfaces is a difficult problem [134]. The simplest texture parameterization is to use three-dimensional textures and the vertex locations of the stream surface for the texture coordinates. We use a 3D LIC to demonstrate this non-parametric texture mapping. A 3D LIC texture for the underlying flow field is first generated, and the coordinates of the surfaces' vertices are used as indices into the 3D LIC texture. These stream and time surfaces function similarly to the clip planes in [113]. Figure 4.6 shows a stream surface mapped with an underlying 3D LIC texture.

Figure 4.6: A stream surface inside the flow is textured using a 3D LIC texture

To support 2D texture mapping for our implicit stream surfaces, we have a slightly easier problem than generic implicit surface mapping. We desire a parameterization where the curve length of the initial shape or iso-contour on the termination surface is mapped to one texture coordinate, and the advection time is mapped to the other texture coordinate. The latter mapping is embedded in our implicit flow volume representation. The former is more difficult, but extracting curves from 2D images or scalar fields is well-studied. We use a contour search algorithm, which finds a point on the contour and then visits neighboring cells on the contour [58]. Figure 4.7(a) is a textured stream surface, where the texture simulates streamlines and timelines. Figure 4.7(b) shows two stream surfaces mapped with different textures. Notice the nested stream surfaces in this figure.

Figure 4.7: Textured stream surfaces

An interesting application of textures adds streamlines to the stream surface. Constant-colored streamlines are achieved using a one-dimensional texture, whose coordinates are mapped to the 2D iso-contour curve length as described above. A simple texture, which modulates the surface color, is used to encode black stripes (a square wave). We use a non-averaged mip-map technique that adds more and more streamlines as we zoom into the scene. This is constructed by halving the square wave's frequency for each higher mip-map level. With interpolation between the mip-map levels enabled, the streamlines gracefully fade in and out as the local projected stream surface area changes. Several snapshots, taken as we zoom in on the stream surface, are shown in Figure 4.8. Figure 4.9 shows the streamlines from a source to a sink inside the flow. Five stream surfaces are embedded within the flow volume. Figure 4.10 shows four stream surfaces with texture-mapped streamlines for the Isabel Hurricane dataset.
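The construction of such a non-averaged mip-map can be sketched as follows; the texture sizes and stripe counts are illustrative assumptions, and the point is that each coarser level halves the stripe frequency instead of averaging the finer level:

```cpp
#include <vector>

// Hedged sketch: each level is a square wave (black stripes on white); coarser
// levels halve the stripe count, so streamlines fade in as the viewer zooms in.
std::vector<std::vector<float>> buildStreamlineMipmap(int baseSize, int baseStripes) {
    std::vector<std::vector<float>> levels;
    int size = baseSize, stripes = baseStripes;
    while (size >= 1) {
        std::vector<float> level(size);
        for (int i = 0; i < size; ++i) {
            int phase = (i * 2 * stripes) / size;        // 'stripes' black bands per texture
            level[i] = (phase % 2 == 0) ? 0.0f : 1.0f;   // 0 = black stripe, 1 = white
        }
        levels.push_back(level);
        size /= 2;
        if (stripes > 1) stripes /= 2;                   // halve the frequency per coarser level
    }
    return levels;
}
```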


Figure 4.8: Streamlines using a non-averaged 1D mip-map texture. As we zoom in, more streamlines are automatically added.

Figure 4.9: A coupled-charge dataset rendered using interval volumes with five stream surfaces textured by streamline-like texture


Figure 4.10: Isabel Hurricane dataset rendered using interval volumes with four stream surfaces textured by streamline-like texture

4.5.3 Time Surface Rendering

A different scalar mapping can be used to map from the 4-tuple (f, u, v, t) of the implicit flow field to illustrate time surfaces or volumetric time puffs, such as smoke or dye released and then paused. The advection time, t, can be used directly as the scalar field input to the interval volume routine. The resulting interval volume then encloses two time surfaces, which are encoded as boundary faces of the tetrahedral mesh. These surfaces can also be shaded and texture-mapped. We use the (u,v) location on the termination surface and provide an inflow texture, which is easily mapped to the time surface. A deformation of the texture on subsequent time surfaces depicts the flow convergence and divergence. Figure 4.11 is a flow volume bounded between two time surfaces mapped with a grid pattern, which shows how the flow changes with time. Figure 4.12 is a flow diverging around an inner obstacle. The image shows three textured time surfaces inside the stream volume.

Figure 4.11: Textured time surfaces

Figure 4.12: Flow volumes with three textured time surfaces


4.5.4 Time Clipping

It would be advantageous to represent both stream surfaces and time surfaces within the same flow volume rendering. This is not as easy as it would seem. Applying the interval volume algorithm to construct a flow volume only produces boundaries at the iso-values of the inflow texture, not at the advection time, which is a separate attribute. Our solution to this problem is a two-pass algorithm through the interval volume procedure. The interval volume algorithm can be applied to any convex polytope. In the first pass, we specify the scalar function according to the inflow contours. This produces a tetrahedral mesh of the resulting flow volume. Next, we apply the interval volume algorithm using the advection time on this tetrahedral mesh. This provides the intersection of the flow volume with the specified time interval. We now have boundary elements for both the stream surfaces and the time surfaces, which can be shaded and textured. The same front-to-back rendering algorithm provides a correct compositing of the surfaces and the volume.
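Schematically, the two passes look as follows; the interval-volume extractor is passed in as a callable standing in for the high-dimensional iso-contouring routine, and the type and attribute names are illustrative assumptions:

```cpp
#include <functional>

// Hedged sketch of the two-pass time-clipping procedure described above.
struct TetMesh { /* tetrahedra whose vertices carry the (f, u, v, t) attributes */ };

enum class Attribute { InflowScalar, AdvectionTime };

using IntervalExtractor =
    std::function<TetMesh(const TetMesh&, Attribute, float lo, float hi)>;

TetMesh timeClippedFlowVolume(const TetMesh& inputCells,
                              const IntervalExtractor& intervalVolume,
                              float alpha, float beta,      // inflow iso-values
                              float tStart, float tEnd) {   // advection time window
    // Pass 1: flow volume bounded by stream surfaces (inflow scalar in [alpha, beta]).
    TetMesh flowVolume = intervalVolume(inputCells, Attribute::InflowScalar, alpha, beta);
    // Pass 2: intersect with the time interval; the new boundaries are time surfaces.
    return intervalVolume(flowVolume, Attribute::AdvectionTime, tStart, tEnd);
}
```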

Figure 4.13(a) shows a stream volume truncated to the range of two advection time values. This is the intersection of the interval volumes with respect to the implicit value and with respect to the advection time. Using this advection-time-truncated stream volume, we can show a stream volume growing with the advection time. By specifying multiple iso-values for the advection time, we can embed boundary elements for many time surfaces into the flow volume. In Figure 4.13(b), one textured stream surface and three textured time surfaces are rendered together with the stream volume.

Figure 4.13: (a) Truncated time surfaces and stream surface; (b) Three time surfaces and one stream surface.

4.6 Rendering of Implicit Flow Fields Using 3D Texture Mapping Approach

The second technique to render the implicit flow field is the 3D texture mapping technique. The 3D texture mapping approach renders the implicit 4-tuple flow field directly, without the inflow mapping to a scalar field, taking advantage of modern graphics hardware. With the support of dependent textures, we can change the appearance and representation of the 3D flow volume using advanced volume shaders. The advantages of this rendering method are high interactivity and fine texture details rendered throughout the 3D flow volume. In this section, I mainly explain the basic idea and features; more implementation details are given in [161].


Since the implicit flow field provides a 2D mapping on the termination surface, our approach allows the user to paint the dependent texture colors and opacities directly on this surface, which gives the user the flexibility to control the representation and appearance of the flow volume. We call this dependent texture the inflow texture, as it dictates the paint that is carried from the termination surface into the flow. Figure 4.14 is the visualization diagram of the implicit flow field using the slice-based 3D texture approach.


Figure 4.14: Visualization diagram of the implicit flow field using slice-based 3D texture approach

The simplest such surface would be a single face of the bounding box for the flow field. A dependent texture mapped to this face is used as our lookup table. The (u, v) in the 4-tuple, (f, u, v, t), is employed to index into the dependent texture, producing the current fragment's color and opacity. Figure 4.15 shows an image in which the user hand-painted "vis 2004" on the inflow texture on one face of the bounding box. In addition to hand painting, the user can import any image for use as the inflow texture. Figure 4.16 has the IEEE Visualization 2004 conference logo used as an opacity and color texture.


Figure 4.15: Hand-painted image as inflow textures to advect through the volume


Figure 4.16: Imported image as inflow textures to advect through the volume
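A CPU-side sketch of the per-fragment dependent-texture lookup is given below; the actual work is done in a fragment shader, and the nearest-neighbor sampling and names here are only illustrative assumptions:

```cpp
#include <algorithm>
#include <vector>

// Hedged sketch: the (u, v) stored in the implicit flow field selects a color
// and opacity from the user-painted inflow texture of termination surface f.
struct RGBA { float r, g, b, a; };

struct InflowTexture {
    int width = 0, height = 0;
    std::vector<RGBA> texels;              // row-major, user-painted or imported image

    RGBA sample(float u, float v) const {  // u, v in [0, 1], nearest-neighbor lookup
        int x = std::clamp(static_cast<int>(u * (width - 1) + 0.5f), 0, width - 1);
        int y = std::clamp(static_cast<int>(v * (height - 1) + 0.5f), 0, height - 1);
        return texels[static_cast<size_t>(y) * width + x];
    }
};

// For a fragment whose implicit-flow sample is (f, u, v, t), the inflow texture
// of termination surface f dictates the fragment's color and opacity.
RGBA shadeFragment(int f, float u, float v,
                   const std::vector<InflowTexture>& inflowPerSurface) {
    return inflowPerSurface[static_cast<size_t>(f)].sample(u, v);
}
```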

More complex termination surfaces can easily be supported, provided a simple parameterization exists. In addition to flat planes, we currently support spherical and cylindrical termination surfaces (useful for bounding a neighborhood of a source), and a rectangular box termination surface. Figure 4.17 shows the inflow texture and the flow volume for the Isabel Hurricane dataset using the 3D texture mapping technique. The inflow texture is designed to highlight flow details within the context of the flow volume. We first extract the eye of the hurricane by finding the path of the zero-velocity points. Here, the termination surface is the cylindrical surface around the eye of the hurricane. A land-map image is mapped onto the surface to give more geographical information: green represents the land and dark blue represents the ocean.

Figure 4.17: Inflow texture and flow volume of the Isabel Hurricane dataset

Periodic dye injection can help in understanding the interior structure and highlighting local features of the flow. In addition to the user-defined inflow texture, we can use a second and a third texture for this purpose and show more flow information with a multi-texture technique. In Figure 4.18, an initial inflow texture, shown in Figure 4.18(a), and a second inflow texture, shown in Figure 4.18(b), were used to generate the image in Figure 4.18(d). Here, since the advection time is stored in the implicit flow field, a third, one-dimensional texture along the time axis is used to modulate the flow of the high-frequency dye in the second texture. The image in Figure 4.18(c) shows the flow volume without a high-frequency detail texture.

Figure 4.18: Dual inflow textures: (a) The inflow texture specified by the user. (b) A particle distribution. (c) The result from the inflow texture (a) only. (d) The result obtained by combining the inflow texture (a) and the texture (b)


4.7 Comparison of Rendering Techniques

In the above two sections, we have described two techniques for implicit flow volume rendering. In this section, we compare the advantages and disadvantages of these two techniques with respect to explicit flow volumes [91]. Table 4.1 summarizes our comparison between these three techniques.

Criterion                          Traditional Flow Volume             Implicit Stream Volume:              Implicit Stream Volume:
                                                                       3D texture mapping                   Interval volume rendering
Requires pre-processing            No                                  Yes                                  Yes
Advection                          During the volume construction      Pre-advection                        Pre-advection
Flow volume construction           Through advection                   Using dependent textures             Using high-dimensional iso-contouring routine
Representation                     Explicit                            Implicit                             Explicit (reconstructed from the implicit flow volume)
Initial starting location          Anywhere                            User-defined termination             User-defined termination
                                                                       surfaces (pre-computed)              surfaces (pre-computed)
Stream surface / time surface      Easily added                        No                                   Yes
Rasterization / rendering range    Render only the flow area           Rasterize the entire volume          Iso-contour the entire volume, render only the flow area
Requires recomputation             Yes                                 No                                   Yes
Non-regular grids                  Easily supported                    Requires voxelization                Easily supported
Cross section specification        Polygon                             Per-pixel mask function              Mask function on the termination surface(s)
Cross section quality              Limited by user specification,      Resolution of the dependent          Dependent on voxelization
                                   typically poor                      texture
Boundary quality                   Dependent on polygon                Dependent on dependent texture       Dependent on voxelization
Details / correctness              Without mesh refinement,            More accurate                        More accurate
                                   misses details in flow
Rendering performance              Dependent on the number of          Dependent on voxel grid size         Dependent on the number of tetrahedra
                                   tetrahedra
Volume size                        Arbitrarily large volume            Limited by the texture memory        Arbitrarily large volume, limited by system memory
                                                                       of the display card
Source, sink critical points       Fine                                Requires critical point detection    Requires critical point detection

Table 4.1: Comparison of the flow volume visualization techniques

The implicit flow field is generated by pre-advecting the flow field and storing the advection information for each voxel in the implicit flow. When we subsequently construct a stream volume, no integrator is required to compute streamlines through the flow field. Since this is a pre-computation, care can be taken to ensure accurate streamline advection; we use an adaptive fourth-order Runge-Kutta algorithm. The 3D texture mapping method renders stream volumes using a dependent texture, while the interval volume technique extracts a stream volume using a high-dimensional iso-contouring routine. Explicit flow volumes are constructed using an advection algorithm during run-time. The 3D texture mapping has an advantage when the inflow boundary is changed, as it does not require any re-computation. The other two techniques need to re-compute their flow volumes, one through advection, the other through iso-contouring, both of which can be costly operations. It should be pointed out, though, that all of these techniques run at fairly interactive rates for the datasets we have tested. In our experiments, the 3D texture mapping can achieve roughly 10 FPS for a 128³ implicit flow dataset. The interval volume rendering offers about 3.5 FPS for an interval volume with 346.5K tetrahedra. More performance results on interval volume rendering can be found in [14]. All experiments are performed on a PC with a QuadroFX 3000 graphics card and a Pentium IV 3.2 GHz processor. A true performance comparison is not provided, due to the many parameters each technique requires for its specification. For any given technique, we can find a case where it would be the fastest, or the slowest. Nevertheless, there are three key differentiating factors that we wish to highlight among these flow volume techniques.

(1) Volumetric Texture or Details

In the traditional method, a cross section is specified by a low-resolution polygon. The quality of the cross section and the flow boundary is limited by the user specification, and is typically poor. Furthermore, the distribution of any optical properties across the cross section is ill-specified. In order to allow for changes of the optical properties across the initial smoke generator, a subdivision of the cross section (and hence the resulting explicit flow volume) is required. Most explicit flow volume renderings utilize a constant color and extinction coefficient.

For the implicit methods, the cross section is specified using a general inflow texture. The complexity of the cross section and the resulting flow boundary is thus determined by the resolution of the dependent texture for the 3D texture mapping technique, and by the underlying voxel grid of the implicit volume for the interval volume technique, respectively. An extremely high virtual resolution is possible with the dependent textures. No assumptions about the underlying volume rendering model are made in our system. In fact, an arbitrary fragment program can be used to compute the volume rendering.

(2) Stream Surface Texture or Details

One advantage of the interval volume method is that the stream surfaces and the time surfaces are modeled during the interval volume extraction process without extra computational cost. Texture mapping and surface shading are then applied to these surfaces to highlight internal features and provide a pleasing and more informative flow visualization. Although the traditional flow volume method did not add these surfaces, they are easily incorporated, since the representation is a true parametric representation.

While algorithms exist to display contour surfaces using 3D texture-mapping based volume renderers [39], applying parametric textures to these surfaces is an unsolved problem. Initial experiments are very prone to aliasing and blur. The 3D texture mapping technique cannot generate these textured surfaces.


(3) Rendering Complexity

For the rasterization and rendering, the traditional flow volume method renders only the flow area, and the rendering performance depends on the number of tetrahedra. This is also true for the interval volume rendering technique, but the interval volume method requires iso-contouring over the entire volume for the interval volume extraction. Furthermore, there may be many internal tetrahedra within the implicit flow volume. This is advantageous for multi-colored flow volumes, but is extra computation and rasterization for constant-colored flow volumes. Both of these techniques, of course, require sorting of the tetrahedra and a special rendering algorithm. The 3D texture mapping method is different from the above two methods in these aspects. It rasterizes the entire volume, and the rendering performance is strictly dependent on the number of voxels (and the complexity of the volume shader). Volume rendering the unstructured grids generated from the explicit flow volume or interval volume techniques can be much more expensive than volume rendering using 3D texture mapping. However, if the flow volume area is kept small, the reduced rasterization operations can be an advantage.

4.8 Conclusion and Future Work

We proposed the concept of implicit flow fields to investigate the visualization of three-dimensional flow fields. Given a flow field, we first construct an implicit flow field by pre-advecting the flow field and storing the flow information at each sample point.

Since the advection is performed in the pre-processing stage, the computationally expensive advection in 3D flow fields is avoided at run time. Also, the information stored in the implicit flow field is used for user-guided control of the flow representation and appearance, and for texture mapping to show the flow details in the later rendering process.

We studied two techniques to render the implicit flow field. One technique is the interval volume approach, which extracts the geometry of the flow volume as an interval volume and renders it using any tetrahedral rendering technique. Its advantage is that the stream surfaces and time surfaces can be extracted easily during the interval volume extraction and embedded with the flow volume as boundary surfaces of the tetrahedral mesh. In this way, we can achieve a complete representation of the streamlines, time lines, stream surfaces, time surfaces and flow volume. The second technique is the 3D texture mapping approach, which directly loads the implicit flow field into 3D texture memory for navigation of the flow field. The advantages of this technique are its interactivity at run time and the flexibility for the user to control the representation and appearance of the flow volume. We also conducted a comparison of different techniques for flow volumes, and we have applied our implicit flow field method to some real application datasets.

In this dissertation, we extract the 4-tuple attributes (f, u, v, t) during the construction of the implicit flow field and render the flow volumes based on this 4-tuple implicit flow. The approach can be generalized to n-tuple implicit flow fields by extracting more attributes. Additional attributes, such as the velocity and vorticity magnitudes at the voxel, the maximum velocity magnitude along the streamline, the average density along the streamline, and so on, can also be calculated and stored in this preprocessing stage. Thus, in general, the construction computes a mapping from R3 → Rn.
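
The n-tuple generalization amounts to appending further per-voxel or per-streamline quantities to the stored record. The short sketch below (again Python, with velocity, vorticity, density, and trace as hypothetical callables, where trace returns the sampled positions of the streamline seeded at the voxel) gathers exactly the attributes listed above; concatenating its result with (f, u, v, t) yields one sample of the R3 → Rn map.

    import numpy as np

    def extended_attributes(p, velocity, vorticity, density, trace):
        """Collect additional per-voxel and per-streamline attributes for the
        n-tuple implicit flow field (illustrative sketch only)."""
        pts = trace(p)                                   # streamline samples, seed included
        speeds = [np.linalg.norm(velocity(q)) for q in pts]
        return np.array([
            np.linalg.norm(velocity(p)),                 # velocity magnitude at the voxel
            np.linalg.norm(vorticity(p)),                # vorticity magnitude at the voxel
            max(speeds),                                 # max velocity magnitude along the streamline
            float(np.mean([density(q) for q in pts])),   # average density along the streamline
        ], dtype=np.float32)

Which attributes are worth the storage depends on the transfer functions and appearance controls planned for the later rendering stage.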

Although the user can specify any surface as the termination surface during the construction of the implicit flow field, determining termination surfaces that effectively reveal the features of a given vector field remains an open research topic. The inflow-outflow analysis [89] of the vector field can help us understand the features of the flow field and assist users in setting up termination surfaces that capture those features and make effective use of the implicit flow field method.


BIBLIOGRAPHY

[1] G. Albertelli and R. A. Crawfis, “Efficient subdivision of finite-element datasets into consistent tetrahedra”, in Proceedings of IEEE Visualization '97, pp. 213-219, October 18-24, 1997, Phoenix, Arizona.

[2] J. Amanatides, “Ray tracing with cones”, Computer Graphics, 18(3), pp. 129-135, 1984.

[3] A. Appel, “Some techniques for shading machine rendering of solids”, Proc. AFIPS JSCC, Vol. 32, pp. 37-45, 1968.

[4] P. Atherton, K. Weiler, D. Greenberg, “Polygon Shadow Generation”, Proc. SIGGRAPH’78, pp. 275-281, 1978.

[5] R. Avila, T. He, L. Hong, A. Kaufman, H. Pfister, C. Silva, L. Sobierajski, “VolVis: a diversified volume visualization system”, in Visualization’94, pp. 31-38, 1994.

[6] C. Bajaj, V. Pascucci, G. Rabbiolo, and D. Schikore, “Hypervolume Visualization: A Challenge in Simplicity”, in IEEE Volume Visualization 1998 Symposium, pp. 95-102.

[7] D. C. Banks, “Illumination in diverse codimensions”, in Proceedings of SIGGRAPH’94, pp. 327-334, 1994.

[8] D. Banks, and S. Linton, “Counting Cases in Marching Cubes: Toward a Generic Algorithm for Producing Substitopes”, In Proceedings of IEEE Visualization 2003, pp. 51-58.

[9] D. Banks, S. Linton, and P. Stockmeyer, “Counting Cases in Substitope Algorithms”, IEEE Transactions on Visualization and Computer Graphics, July/August, 2004, Vol. 10, No. 4, pp. 371-384.

[10] U. Behrens and R. Ratering, “Adding shadows to a texture-based volume renderer”, 1998 Symposium on Volume Visualization, pp. 39-46, 1998.

[11] P. Bergeron, “A general version of Crow’s shadow volumes”, IEEE CG&A, 6(9), pp. 17-28, 1986.


[12] P. Bhaniramka, R. Wenger and R. Crawfis, “Isosurfacing in higher dimensions”, in Proceedings of IEEE Visualization '00, pp. 15-22, 2000.

[13] P. Bhaniramka, R. Wenger, and R. Crawfis, “Isosurface Construction in any dimension using convex hulls”, IEEE Transactions on Visualization and Computer Graphics, March/April 2004, Vol. 10, No. 2, pp. 130-141.

[14] P. Bhaniramka, C. Zhang, D. Xue, R. Crawfis and R. Wenger, “Volume Interval Segmentation and Rendering”, in Symposium on Volume Visualization 2004, Austin, TX, pp. 55-62.

[15] W. Bouknight, K. Kelly, “An algorithm for producing half-tone computer graphics presentations with shadows and movable light sources”, AFIPS Conf. Proc., Vol. 36, pp. 1-10, 1970.

[16] M. Brill, H. Hagen, H.-C. Rodrian, W. Djatschin and S. V. Klimenko, “Streamball techniques for flow visualization”, in Proceedings of IEEE Visualization '94, pp. 225-231, 1994.

[17] B. Brotman, N. Badler, “Generating soft shadows with a depth buffer algorithm”, IEEE CG&A, 4(10), pp. 71-81, 1984.

[18] B. Cabral and C. Leedom, “Imaging vector fields using line integral convolution”, in Proceedings of SIGGRAPH’93, pp. 263-272, 1993.

[19] B. Cabral, N. Cam, J. Foran, “Accelerated volume rendering and tomographic reconstruction using texture mapping hardware”, 1994 Symposium on Vol. Vis., pp.91-98, 1994.

[20] S. Chandrasekhar, Radiative Transfer, Oxford University Press, 1950.

[21] B. Chen, F. Dachille, A. Kaufman, “Forward image warping”, IEEE Visualization’99, pp. 89-96, 1999.

[22] L. Chen, I. Fujishiro, K. Nakajima, “Parallel Performance Optimization of Large-Scale Unstructured Data Visualization for the Earth Simulator”, in Eurographics Workshop on Parallel Graphics and Visualization, 2002.

[23] Min Chen and John Tucker, “Constructive Volume Geometry”, Computer Graphics Forum, Vol.19, No.4, 281-293, 2000.

[24] M. Cohen, D. Greenberg, “The hemi-cube: a radiosity solution for complex environments”, Computer Graphics, 19(3), pp. 31-40, 1985.


[25] R. Cook, T. Porter, L. Carpenter, “Distributed ray tracing”, Computer Graphics, 18(3), pp. 137-145, 1984.

[26] R. Crawfis, “Real-time Slicing of Data Space”, In Proceedings of IEEE Visualization 1996, pp. 271-277, 1996.

[27] R. Crawfis, N. Max, “Texture Splats for 3D Scalar and Vector Field Visualization”, Proc. Visualization’93 , pp. 261-266, 1993.

[28] R. Crawfis, J. Huang, “High quality splatting and volume synthesis”, in Data Visualization: The State of the Art, eds. Frits H. Post, Gregory M. Nielson, Georges-Pierre Bonneau, Kluwer Academic Publishers, pp. 127-140, 2003.

[29] R. Crawfis, D. Xue, C. Zhang, “Volume Rendering Using Splatting, A Tutorial and Survey”, Visualization Handbook, eds. Charles Hansen, Christopher Johnson, Academic Press, 2004.

[30] F. Crow, “Shadow Algorithm for Computer Graphics”, Proc. SIGGRAPH’77, pp. 242-248, 1977.

[31] B. Csebfalvi, “Fast volume rotation using binary shear-warp factorization”, Eurographics Data Visualization’99, pp. 145-154, 1999.

[32] T. Cullip and U. Neumann, “Accelerating volume reconstruction with 3D texture hardware”, Tech. Rep. TR93-027, University of North Carolina, Chapel Hill, N.C.

[33] F. Dachille, K. Kreeger, B. Chen, I. Bitter, A. Kaufman, “High quality volume rendering using texture mapping hardware”, Proc. 1998 SIGGRAPH/Eurographics Workshop on Graphics Hardware, pp. 69-76, 1998.

[34] J. Danskin, P. Hanrahan, “Fast algorithms for volume rendering”, 1992 Workshop on Volume Visualization, pp. 91-98, 1992.

[35] W. de Leeuw and J. van Wijk, “Enhanced spot noise for vector field visualization”, in Proceedings of IEEE Visualization '95, pp. 233-239, 1995.

[36] Y. Dobashi, T. Nishita, T. Yamamoto, “Interactive Rendering of Atmospheric Scattering Effects Using Graphics Hardware”, Proc. Graphics Hardware 2002, pp.99-108, 2002.

[37] Y. Dobashi, T. Yamamoto, T. Nishita, “Interactive Rendering Method for Displaying Shafts of Light,” Proc. Pacific Graphics 2000, pp.31-37, 2000.

[38] D. S. Ebert, R. E. Parent, “Rendering and Animation of Gaseous Phenomena by Combining Fast Volume and Scanline A-buffer Techniques”, Proc. SIGGRAPH’90, pp. 357-366, 1990.


[39] K. Engel, M. Kraus and T. Ertl, “High-quality pre-integrated volume rendering using hardware-accelerated pixel shading”, in Proceedings of the ACM SIGGRAPH/ EUROGRAPHICS Workshop on Graphics Hardware, pp. 9-16, 2001.

[40] J. Foley, A. van Dam, S. Feiner, J. Hughes, Computer Graphics: Principles and Practice, Addison Wesley, 1996.

[41] I. Fujishiro, Y. Maeda, and H. Sato, “Interval volume: a solid fitting technique for volumetric data display and analysis”, in IEEE Visualization ‘95, Atlanta, GA, 1995.

[42] I. Fujishiro, Y. Maeda, H. Sato, and Y. Takeshima, “Volumetric data exploration using interval volume”, in IEEE Transactions on Visualization and Computer Graphics, Vol. 2, June 1996.

[43] G. Gorla, V. Interrante and G. Sapiro, “Texture synthesis for 3D shape representation”, in IEEE Transactions on Visualization and Computer Graphics, Vol. 9, No. 4, pp. 217-242, 2003.

[44] B. Guo, “Interval Set: A Volume Rendering Technique Generalizing Isosurface Extraction”, in Proceedings of IEEE Visualization ’95, Atlanta, GA.

[45] A. Hanson, and P. Heng, “Four-Dimensional Views of 3D Scalar Fields”, in Proceedings of IEEE Visualization 1992, pp. 84-91.

[46] A. Hanson and P. Heng, “Illuminating the Fourth Dimension”, IEEE Computer Graphics and Applications, Vol. 12, No. 4, pp. 54-62, 1992.

[47] A. Hanson and R. Cross, “Interactive Visualization Methods for Four Dimensions”, in Proceedings of IEEE Visualization 1993, pp. 196-203.

[48] M. Harris, A. Lastra, “Real-Time Cloud Rendering”, Proc. Eurographics’2001, vol. 20, no. 3, pp. 76-84, 2001.

[49] T. He, A. Kaufman, “Fast stereo volume rendering”, IEEE Visualization’96, 1996.

[50] P. Heckbert, “Discontinuity meshing for radiosity”, Third Eurographics Workshop on Rendering, pp. 203-226, May 1992.

[51] P. Heckbert and M. Herf, “Simulating soft shadows with graphics hardware”, Technical report TR CMU-CS-97-104, Carnegie Mellon University, 1997.

[52] W. Heidrich, M. Mccool, and J. Stevens, “Interactive Maximum Projection Volume Rendering”, In Proceedings of IEEE Visualization 1995, pp. 11-18.


[53] K. H. Hoehne, B. Pflesser, A. Pommert, M. Riemer, T. Schiemann, R. Schubert, U. Tiede, “A virtual body model for surgical education and rehearsal”, IEEE Computer, Vol. 29, No. 1, 1996.

[54] J. Huang, K. Mueller, N. Shareef, R. Crawfis, “FastSplats: optimized splatting on rectilinear grids”, Visualization’2000, pp. 219-227, 2000.

[55] J. P. M. Hultquist, “Interactive numerical flow visualization using stream surface”, Computing Systems in engineering, Vol. 1, No. 2-4, pp. 349-353, 1990.

[56] J. Hultquist, “Constructing stream surfaces in steady 3D vector fields”, in Proceedings of IEEE Visualization '92, pp. 171-178, 1992.

[57] V. Interrante and C. Grosch, “Strategies for effectively visualizing 3D flow with volume LIC”, in Proceedings of IEEE Visualization '97, pp. 421-424, 1997.

[58] T. Itoh and K. Koyamada, “Automatic isosurface propagation using an extrema graph and sorted boundary cells”, in IEEE Transactions on Visualization and Computer Graphics, Vol. 1, No. 4, pp. 319-327, 1995.

[59] H.W. Jensen, S.R. Marschner, M. Levoy, P. Hanrahan, “A Practical Model for Subsurface Light Transport”, Proc. SIGGRAPH’01, pp. 511-518, 2001.

[60] G. Ji, H.-W. Shen, and R. Wenger, “Volume Tracking using Higher Dimensional Isosurfacing”, In Proceedings of IEEE Visualization 2003, pp. 209-216.

[61] B. Jobard, G. Erlebacher and M. Y. Hussaini, “Lagrangian-Eulerian advection for unsteady flow visualization”, in Proceedings of IEEE Visualization '01, pp. 53-60, 2001.

[62] B. Jobard, G. Erlebacher and M. Y. Hussaini, “Lagrangian-Eulerian advection of noise and dye textures for unsteady flow visualization”, in IEEE Transactions on Visualization and Computer Graphics, Vol. 8, No. 3, pp. 211-222, 2002.

[63] J. T. Kajiya, B. P. Von Herzen, “Ray Tracing Volume Densities”, Proc. SIGGRAPH’84, pp. 165-174, 1984.

[64] A. Kaufman, “An Algorithm for 3D Scan-Conversion of Polygons”, Proc. Eurographics’87, pp. 197-208, 1987.

[65] M.-H. Kiu and D. C. Banks, “Multi-frequency noise for LIC”, in Proceedings of IEEE Visualization '96, pp. 121-126, 1996.


[66] J. Kniss, G. Kindlmann and C. Hansen, “Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets”, in Proceedings of IEEE Visualization '01, pp. 241-248, 2001.

[67] J. Kniss, G. Kindlmann, C. Hansen, “Multi-Dimensional Transfer Functions for Interactive Volume Rendering”, IEEE Transactions on Visualization and Computer Graphics, 2002.

[68] J. Kniss, S. Premoze, C. Hansen, D. Ebert, “Interactive Translucent Volume Rendering and Procedural Modeling”, IEEE Visualization 2002.

[69] J. Kniss, S. Premoze, C. Hansen, P. Shirley, A. McPherson, “A Model for Volume Lighting and Modeling”, IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 2, pp. 150-162, 2003.

[70] M. Kraus, W. Qiao, and D. Ebert, “Projecting Tetrahedra without Rendering Artifacts”, in Proceedings of IEEE Visualization 2004, pp. 27-34.

[71] J. Krueger, R. Westermann, “Acceleration techniques for GPU-based volume rendering”, IEEE Visualization 2003, pp. 287-292, 2003.

[72] P. Lacroute, M. Levoy, “Fast volume rendering using a shear-warp factorization of the viewing transformation”, Proc. SIGGRAPH’94, pp.451-458, 1994.

[73] P. Lacroute, “Fast volume rendering using a shear-warp factorization of the viewing transformation”, Doctoral Dissertation, Stanford University, 1995.

[74] P. Lacroute, “Real-time volume rendering on shared memory multiprocessors using the shear-warp factorization”, IEEE Parallel Rendering Symposium’95, pp. 15-22, 1995.

[75] R. Laramee, B. Jobard and H. Hauser, “Image space based visualization of unsteady flow on surfaces”, in Proceedings of IEEE Visualization '03, pp. 131-138, 2003.

[76] K.D. Lathrop, “Ray Effects in Discrete Ordinates Equations”, Nuclear Science and Engineering, vol. 32, pp. 357-369, 1968.

[77] M. Lee, L. D. Floriani, and H. Samet, “Constant-Time Navigation in Four-Dimensional Nested Simplicial Meshes”, in International Conference on Shape Modeling and Applications 2004 (SMI’04), pp. 221-230, 2004.

[78] J. Leven, J. Corso, J. Cohen, S. Kumar, “Interactive visualization of unstructured grids using hierarchical 3D textures”, in Symposium on Volume Visualization ’02, Boston, MA.

[79] M. Levoy, “Display of surfaces from volume data”, IEEE Computer Graphics and Applications, vol. 8, no. 5, pp. 29-37, 1988.


[80] M. Levoy, “Efficient ray tracing of volume data”, ACM Transactions on Graphics, vol. 9, no. 3, pp. 245-261, 1990.

[81] M. Levoy, “A Hybrid Ray Tracer for Rendering Polygon and Volume Data”, IEEE Computer Graphics and Applications, vol. 10, no. 2, pp. 33-40, 1990.

[82] W. Li, K. Mueller, A. Kaufman, “Empty space skipping and occlusion clipping for texture-based volume rendering”, IEEE Visualization’2003, pp. 317-324, 2003.

[83] B. Lichtenbelt, R. Crane, S. Naqvi, “Introduction to volume rendering”, Prentice-Hall, 1998.

[84] G.-S. Li, U. Bordoloi and H.-W. Shen, “Chameleon: an interactive texture based rendering framework for visualizing three-dimensional vector fields”, in Proceedings of IEEE Visualization '03, pp. 241-248, 2003.

[85] T. Lokovic, E. Veach, “Deep shadow maps”, Proc. SIGGRAPH’2000, 2000.

[86] W. E. Lorensen and H. E. Cline, “Marching cubes: A high resolution 3D surface construction algorithm”, in M. C. Stone, ed., Computer Graphics, Anaheim, California, July 1987, pp. 163-169.

[87] K. L. Ma, T. W. Crockett, “A scalable parallel cell-projection volume rendering algorithm for three-dimensional unstructured data”, in IEEE Symposium on Parallel Rendering, ’97, Phoenix, Arizona.

[88] K. L. Ma, T. W. Crockett, “Parallel Visualization of Large Scale Aerodynamics Calculations: A Case study on Cray T3E”, in IEEE Parallel Visualization and Graphics, 1999, San Francisco, CA.

[89] K. Mahrous, J. Bennett, G. Scheuermann, B. Hamann and K. Joy, “Topological segmentation in three-dimensional vector fields”, in IEEE Transactions on Visualization and Computer Graphics, Vol. 10, No. 2, pp. 198-205, 2004.

[90] N. Max, P. Hanrahan and R. Crawfis, “Area and volume coherence for efficient visualization of 3d scalar functions”, in Computer graphics, November 1990, pp. 27-33.

[91] N. Max, B. Becker and R. Crawfis, “Flow volumes for interactive vector field visualization”, in Proceedings of IEEE Visualization '93, pp. 19-24, 1993.

[92] N. Max, “Optical Models for Direct Volume Rendering”, IEEE Transactions on Visualization and Computer Graphics, vol. 1, no. 2, pp. 99-108, 1995.


[93] N. Max, “Efficient Light Propagation for Multiple Anisotropic Volume Scattering”, Photorealistic Rendering Techniques, G. Sakas, P. Shirley, and S. Mueller, eds. Heidelberg: Springer Verlag, pp.87-104, 1995.

[94] N. Max, “Consistent Subdivision of Convex Polyhedra into Tetrahedra”, in Journal of Graphics Tools, 6 (3), 29-36, 2002.

[95] N. Max, P. Williams, C. Silva, “Cell Projection of Meshes with Non-Planar Faces”, Institute of Data Analysis and Visualization, Kluwer Academic Publishers, pp. 157-169, Dagstuhl, Germany, May 2003.

[96] N. Max, G. Schussman, R. Miyazaki, K. Iwasaki, and T. Nishita, “Diffusion and Multiple Anisotropic Scattering for Light Propagation in Clouds”, WSCG 2004, pp. 277, 2004.

[97] M. Meissner, U. Hoffman, W. Strasser, “Enabling classification and shading for 3D texture mapping based volume rendering”, in IEEE Visualization’99, pp. 207-214, 1999.

[98] M. Meissner, J. Huang, D. Bartz, K. Mueller, R. Crawfis, “A practical evaluation of popular volume rendering algorithms”, 2000 Symposium on Volume Rendering, pp. 81-90, Salt Lake City, October 2000.

[99] R. Miyazaki, Y. Dobashi, T. Nishita, “A Fast Rendering Method of Clouds Using Shadow-View Slices”, Proc. CGIM 2004, pp. 93-98, 2004.

[100] K. Mueller, T. Moeller, J.E. Swan, R. Crawfis, N. Shareef, R. Yagel, “Splatting errors and antialiasing”, IEEE Transactions on Visualization and Computer Graphics, Vol. 4, No. 2, pp. 178-191, 1998.

[101] K. Mueller, R. Crawfis, “Eliminating popping artifacts in sheet buffer-based splatting”, Proc. Visualization’98, pp.239-245, 1998.

[102] K. Mueller, N. Shareef, J. Huang, R. Crawfis, “High-quality splatting on rectilinear grids with efficient culling of occluded voxels”, IEEE Transactions on Visualization and Computer Graphics, Vol. 5, No. 2, pp. 116-134, 1999.

[103] K. Mueller, T. Moeller and R. Crawfis, “Splatting without the blur”, in Proceedings of IEEE Visualization '99, pp. 363-370, 1999.

[104] N. Neophytou, K. Mueller, “GPU accelerated image aligned splatting”, International Workshop on Volume Graphics 2005, pp. 197-205, 2005.

[105] G. M. Nielson and J. Sung, “Interval volume tetrahedrization”, in R. Yagel and H. Hagen, eds., IEEE Visualization '97, IEEE, November 1997, pp. 221-228.


[106] T. Nishita, E. Nakamae, “An Algorithm for Half-Tone Representation of Three-Dimensional Objects”, Information Processing in Japan, Vol. 14, pp. 93-99, 1974.

[107] T. Nishita, E. Nakamae, “Half-Tone Representation of 3-D Objects Illuminated by Area Sources or Polyhedron Sources,” IEEE Computer Society’s 7th International Computer Software & Applications Conference (COMPSAC), pp.237-242, 1983.

[108] T. Nishita, Y. Dobashi, E. Nakamae, “Display of Clouds Taking into Account Multiple Anisotropic Scattering and Sky Light”, Proc. SIGGRAPH’96, pp. 313-322, 1996.

[109] M. Nulkar, K. Mueller, “Splatting With Shadows”, Volume Graphics 2001.

[110] The Ohio State University. Isotable generation software. http://www.cse.ohio-state.edu/graphics/isotable.

[111] S. Parker, P. Shirley, Y. Livnat, C. Hansen, P. Sloan, “Interactive ray tracing for isosurface rendering”, in Visualization’98, pp. 233-238, 1998.

[112] K. Perlin, E. M. Hoffert, “Hypertexture”, Proc. SIGGRAPH’89, pp. 253-262, 1989.

[113] C. Rezk-Salama, P. Hastreiter, C. Teitzel and T. Ertl, “Interactive exploration of volume line integral convolution based on 3D-texture mapping”, in Proceedings of IEEE Visualization '99, pp. 233-240, 1999.

[114] S. Röttger and T. Ertl, “A two-step approach for interactive pre-integrated volume rendering of unstructured grids”, in IEEE Volume Visualization ’02, Boston, MA, pp. 23-28.

[115] H. Rushmeier, K. Torrance, “The Zonal Method for Calculating Light Intensities in the Presence of a Participating Medium”, Computer Graphics, vol. 21, no. 4, pp. 293-303, 1987.

[116] H. Rushmeier, “Realistic Image Synthesis for Scenes with Radiatively Participating Media”, PhD Thesis, Cornell University, May 1988.

[117] P. Schroder, P. Hanrahan, “On the form factor between two polygons”, Proc. SIGGRAPH’93, pp. 163-164, 1993.

[118] J. P. Schulze, R. Niemeier, U. Lang, “The perspective shear-warp algorithm in a virtual environment”, in Visualization’2001, pp. 207-213, 2001.

[119] J. P. Schulze, U. Lang, “The parallelization of the perspective shear-warp volume rendering algorithm”, Fourth Eurographics Workshop on Parallel Graphics and Visualization 2002.


[120] H.-W. Shen, C. R. Johnson, and K.-L. Ma, “Visualizing vector fields using line integral convolution and dye advection”, in 1996 Volume visualization Symposium, pp. 63-70, 1996.

[121] H.-W. Shen and D. L. Kao, “UFLIC: A line integral convolution algorithm for visualizing unsteady flows”, in Proceedings of IEEE Visualization '97, pp. 317-323, 1997.

[122] H.-W. Shen, “Isosurface extraction in time-varying fields using a temporal hierarchical index tree”, in IEEE visualization '98, IEEE, October 1998, pp. 159-166.

[123] H.-W. Shen and D. L. Kao, “A new line integral convolution algorithm for visualizing time-varying flow fields”, in IEEE Transactions on Visualization and Computer Graphics, Vol. 4, No. 2, 1998.

[124] H.-W. Shen, G.-S. Li and U. Bordoloi, “Interactive visualization of three-dimensional vector fields with flexible appearance control”, in IEEE Transactions on Visualization and Computer Graphics, Vol. 10, No. 4, 2004.

[125] P. Shirley and A. Tuchman, “A polygonal approach to direct volume rendering”, Computer Graphics, Vol. 24, No. 5, pp. 63-70, 1990.

[126] L. Sobierajski, A. Kaufman, “Volumetric Raytracing”, 1994 Symposium on Volume Visualization, pp. 11-18, 1994.

[127] C. Soler and F.X. Sillion, “Fast calculation of soft shadow textures using convolution”, Proc. SIGGRAPH’98, pp. 321-332, 1998.

[128] D. Stalling and H.-C. Hege, “Fast and resolution independent line integral convolution”, in Proceedings of SIGGRAPH’95, pp. 249-256, 1995.

[129] J. Stam, “Multiple Scattering as a Diffusion Process”, in Proceedings of the 6th Eurographics Workshop on Rendering, pp. 51-58, 1995.

[130] A. Telea and J. van Wijk, “3D IBFV: hardware-accelerated 3D flow visualization”, in Proceedings of IEEE Visualization '03, pp. 233-240, 2003.

[131] U. Tiede, K.H. Hoehne, M. Bomans, A. Pommert, M. Riemer, G. Wiebecke, “Investigation of medical 3D-rendering algorithms”, IEEE Computer Graphics and Applications, vol. 10, no. 2, pp. 41-53, 1990.

[132] U. Tiede, T. Schiemann, K.H. Hoehne, “High quality rendering of attributed volume data”, in Visualization’98, pp. 255-262, 1998.


[133] J. Todd, F. Norman, J. Koenderink and A. Kappers, “Effects of texture, illumination, and surface reflectance on stereoscopic shape perception”, in Perception, Vol. 26, pp. 807-822, 1997.

[134] G. Turk, “Texture synthesis on surfaces”, in Proceedings of SIGGRAPH’2001, pp. 347- 354, 2001.

[135] H. Tuy, L. Tuy, “Direct 2D display of 3D objects”, IEEE Computer Graphics and Applications, vol. 4, no. 10, pp. 29-33, 1984.

[136] A. van Gelder, K. Kim, “Direct volume rendering with shading via three-dimensional textures”, 1996 Symposium on Volume Visualization, pp. 23-30, 1996.

[137] J. van Wijk, “Spot noise – texture synthesis for data visualization”, in Proceedings of SIGGRAPH’91, pp. 309-318, 1991.

[138] J. van Wijk, “Implicit stream surfaces”, in Proceedings of IEEE Visualization '93, pp. 245- 252, 1993.

[139] J. van Wijk, “Image based flow visualization”, in Proceedings of SIGGRAPH’2002, pp. 745-754, 2002.

[140] J. van Wijk, “Image based flow visualization for curved surfaces”, in Proceedings of IEEE Visualization '03, pp. 123-130, 2003.

[141] A. Watt, 3D Computer Graphics, Third Edition, Addison Wesley.

[142] C. Weigle, and D. Banks, “Complex-valued contour meshing”, IEEE Visualization '96, IEEE, October 1996, pp. 173-180.

[143] C. Weigle, and D. Banks, “Extracting iso-valued features in 4-dimensional scalar fields”, 1998 Volume Visualization Symposium, IEEE, October 1998, pp. 103-110.

[144] M. Weiler, M. Kraus, and T. Ertl, “Hardware Based View-independent Cell Projection”, in Symposium on Volume Visualization, 2002, Boston, MA, pp. 13-22.

[145] M. Weiler, M. Kraus, M. Merz, and T. Ertl, “Hardware-Based Ray Casting for Tetrahedral Meshes”, in Proceedings of IEEE Visualization 2003, pp. 333-340.

[146] R. Westermann and T. Ertl, “Efficiently using graphics hardware in volume rendering applications”, in Proceedings of SIGGRAPH’98, pp. 169-177, 1998.

[147] R. Westermann, C. Johnson and T. Ertl, “A level set method for flow visualization”, in Proceedings of IEEE Visualization '00, pp. 147-154, 2000.


[148] L. Westover, “Interactive volume rendering”, Proceedings of Volume Visualization Workshop (Chapel Hill, N.C., May 18-19), Department of Computer Science, University of North Carolina, Chapel Hill, N.C., 1989, pp. 9-16.

[149] L. Westover, “Footprint evaluation for volume rendering”, Proc. SIGGRAPH’90, pp. 367- 376, 1990.

[150] L. Westover, “SPLATTING: a parallel, feed-forward volume rendering algorithm”, Ph.D. Dissertation, Department of Computer Science, The University of North Carolina at Chapel Hill, 1990.

[151] T. Whitted, “An Improved Illumination Model for Shaded Display”, Communications of the ACM, Vol. 23, No. 6, pp. 343-349, 1980.

[152] L. Williams, “Casting Curved Shadows on Curved Surfaces”, Proc. SIGGRAPH’78, pp. 270-274, 1978.

[153] P. Williams, “Visibility Ordering of Meshed Polyhedra”, in ACM Transactions on Graphics, 11 (4), 103-126, April 1992.

[154] P. Williams, “A Volume Density Optical Model”, in IEEE Volume Visualization Symposium ’92, pp. 61-68.

[155] P. Williams, N. Max, C. M. Stein, “A High Accuracy Volume Renderer for Unstructured Data”, in IEEE Transactions on Visualization and Computer Graphics, 4(1), pp. 37-54, 1998.

[156] A. Woo, P. Poulin, A. Fournier, “A survey of shadow algorithms”, IEEE Computer Graphics and Applications, Vol. 10, No. 6, 1990.

[157] J. Woodring and H.-W. Shen, “Chronovolumes: A Direct Rendering Technique for Visualizing Time-Varying Data”, in Proceedings of 2003 International Workshop on Volume Graphics.

[158] J. Woodring, C. Wang, and H.-W. Shen, “High Dimensional Direct-Rendering of Time- Varying Volumetric Data”, In Proceedings of IEEE Visualization 2003, pp. 417-424.

[159] B. Wylie, K. Moreland, L. A. Fisk, and P. Crossno, “Tetrahedral Projection using Vertex Shaders”, in Symposium on Volume Visualization 2002, Boston, MA, pp. 7-12.

[160] D. Xue, R. Crawfis, “Efficient Splatting Using Modern Graphics Hardware”, Journal of Graphics Tools, Vol. 8, No. 3, pp. 1-21, 2003.


[161] D. Xue, C. Zhang, and R. Crawfis, “Rendering Implicit Flow Volumes”, in Proceedings of IEEE Visualization 2004, pp. 99-106.

[162] D. Xue, C. Zhang, R. Crawfis, “iSBVR: isosurface-aided hardware acceleration techniques for slice-based volume rendering”, International Workshop on Volume Graphics 2005, pp. 207-215, 2005.

[163] R. Yagel, Z. Shi, “Accelerating volume animation by space-leaping”, in Visualization’93, pp. 63-69, 1993.

[164] S.Y. Yen, S. Napel, G.D. Rubin, “Fast sliding thin slab volume visualization”, Symposium on Volume Visualization’96, pp. 79-86, 1996.

[165] C. Zhang, R. Crawfis, “Volumetric Shadows Using Splatting”, Proc. Visualization 2002, pp. 85-92, 2002.

[166] C. Zhang, R. Crawfis, “Shadows and Soft Shadows with Participating Media Using Splatting”, IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 2, pp. 139-149, 2003.

[167] C. Zhang, D. Xue, R. Crawfis, and R. Wenger, “Time-Varying Interval Volumes”, in International Workshop on Volume Graphics 2005, pp. 99-107.

[168] M. Zoeller, D. Stalling, and H.-C. Hege, “Interactive visualization of 3D-vector fields using illuminated streamlines”, in Proceedings of IEEE Visualization '96, pp. 107-113, 1996.
