ADVANCED FLOW

DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the

Graduate School of The Ohio State University

By

Liya Li, B.E., M.S.

*****

The Ohio State University

2007

Dissertation Committee:

Professor Han-Wei Shen, Adviser
Professor Roger Crawfis
Professor Yusu Wang

Approved by

Adviser
Graduate Program in Computer Science and Engineering

© Copyright by

Liya Li

2007

ABSTRACT

Flow visualization has played a substantial role in many engineering and scientific applications, such as the automotive industry, computational fluid dynamics, chemical processing, and weather simulation and climate modelling. Many methods have been proposed in the past decade to visualize steady and time-varying flow fields, among which texture-based and geometry-based visualization are widely used to explore the underlying fluid dynamics. This dissertation presents a view-dependent flow texture algorithm, an illustrative streamline placement algorithm for two-dimensional vector fields, and an image-based streamline placement algorithm for three-dimensional vector fields.

Flow texture, generated through convolution and filtering of texture values according to the local flow vectors, is a dense representation of the vector field that provides global information about the flow structure. A view-dependent algorithm for multi-resolution flow texture advection on two-dimensional structured rectilinear and curvilinear grids is presented. By using an intermediate representation of the underlying flow fields, the algorithm can adjust the resolution of the output texture on the fly as the user zooms in and out of the field, avoiding aliasing while ensuring enough detail.

Geometry-based methods use geometries, such as lines, tubes, or balls, to represent the motion paths advected from the vector fields. They provide a sparse representation and an intuitive visualization of flow trajectories. For two-dimensional vector fields, a streamline placement strategy is presented to generate representative and illustrative streamlines, which effectively prevents visual overload by emphasizing the essential and deemphasizing the trivial or repetitive flow patterns. A user study is performed to quantify the effectiveness of this visualization algorithm, and the results are provided. For three-dimensional vector fields, an image-based streamline seeding algorithm is introduced to better display the streamlines and reduce visual cluttering in the output images. Various effects can be achieved to enhance the visual understanding of three-dimensional flow lines.

To Tao, and my parents.

ACKNOWLEDGMENTS

I am grateful to my advisor Dr. Han-Wei Shen, who guided me expertly through my PhD studies and helped me through many research difficulties. I would like to express my sincere gratitude to Dr. Roger Crawfis, Dr. Yusu Wang, Dr. Garry McKenzie, and my other committee members. Thank you very much for your valuable time and effort, insightful criticism, and advice.

A heartfelt thanks to my colleagues Dr. Jinzhu Gao, Dr. Antonio Garcia, Teng-Yok Lee, Dr. Naeem Shareef, Dr. Chaoli Wang, and Jonathan Woodring, whose hard work and passion encouraged me. I enjoyed working with them and learning from them. I would like to extend my thanks to the other members of the group with whom I shared memorable experiences over the past five years. I wish you all the best in your respective research and careers.

My deepest gratitude is to my husband Tao Li, for everything. For love and for life. I am very grateful to my loyal friends for their encouragement.

VITA

1978 ...... Born - Hubei, China

1999 ...... B.E. Computer Science Beijing Institute of Technology, China

2002 ...... M.S. Computer Science Beijing Institute of Technology, China

2006 ...... M.S. Computer Science The Ohio State University

September 2002 - August 2003 ...... University Fellow The Ohio State University

September 2003 - March 2004 ...... Graduate Teaching Associate The Ohio State University

April 2004 - August 2007 ...... Graduate Research Associate The Ohio State University

June - September, 2005 ...... Research Intern The National Center for Atmospheric Research

September - November, 2007 ...... Intern NVIDIA

PUBLICATIONS

Refereed Papers

Liya Li, Hsien-Hsi Hsieh, and Han-Wei Shen, “Illustrative Streamline Placement and Visualization”. IEEE Pacific Visualization Symposium, March 2008.

Liya Li and Han-Wei Shen, “Image-Based Streamline Generation and Rendering”. IEEE Transactions on Visualization and Computer Graphics, 13(3):630-640, May 2007.

Liya Li and Han-Wei Shen, “View-dependent Multi-resolutional Flow Texture Advection”. Visualization and Data Analysis, 2006.

Chaoli Wang, Jinzhu Gao, Liya Li, and Han-Wei Shen, “A Multiresolution Framework for Large-Scale Time-Varying ”. In Proceedings of International Workshop on Volume Graphics 2005, Stony Brook, New York, pages 11-19, June 2005.

Jinzhu Gao, Chaoli Wang, Liya Li, and Han-Wei Shen, “A Parallel Multiresolution Volume Rendering Algorithm for Large Data Visualization”. Parallel Computing (Special Issue on Parallel Graphics and Visualization), 31(2):185-204, February 2005.

Unrefereed Papers

Liya Li and Han-Wei Shen, “Image-Based Streamline Generation and Rendering”. Technical Report OSU-CISRC8/06-TR71, Department of Computer Science and Engineering, The Ohio State University, 2005.

FIELDS OF STUDY

Major Field: Computer Science and Engineering

Studies in:

  Computer Graphics        Professor Han-Wei Shen
  Computer Architecture    Professor Gagan Agrawal
  Computer Networking      Professor Dong Xuan

TABLE OF CONTENTS

Abstract

Dedication

Acknowledgments

Vita

List of Tables

List of Figures

Chapters:

1. Introduction

2. Background
   2.1 Vector Fields
       2.1.1 Grids
       2.1.2 Integral Curves
   2.2 Critical Points and Flow Topology

3. Related Work
   3.1 Flow Texture
   3.2 Two-dimensional Streamline Placement
   3.3 Simplification of Vector Fields
   3.4 Streamline Clustering
   3.5 Three-dimensional Streamline Placement
   3.6 Visualization Enhancement

4. View-dependent Multi-resolutional Flow Texture Advection
   4.1 Algorithm Overview
   4.2 Flow Field Representation
   4.3 Texture Advection
   4.4 Spatial Coherence
   4.5 Multi-resolutional Texture Advection
       4.5.1 Adjustment of Advection Step Size
   4.6 Results

5. Illustrative Streamline Placement
   5.1 Algorithm Overview
       5.1.1 Distance Field
       5.1.2 Computation of Local Dissimilarity
       5.1.3 Influence from Multiple Streamlines
       5.1.4 Computation of Global Dissimilarity
       5.1.5 Selection of Candidate Seeds
   5.2 Topology-Based Enhancement
   5.3 Quality Analysis
       5.3.1 Quantitative Comparison
       5.3.2 User Study
   5.4 Results

6. Image Based Streamline Generation and Rendering
   6.1 Algorithm Overview
   6.2 Image Space Streamline Placement
       6.2.1 Evenly-spaced Streamlines in Image Space
       6.2.2 Streamline Placement Strategies
       6.2.3 Additional Run Time Control
   6.3 Results

7. Conclusions

Bibliography

LIST OF TABLES

4.1 Datasets used in the experiments. Note that the size for the vortex data includes all 31 time steps. The sizes are in KBytes.

4.2 The time for trace slice preprocessing and texture creation and loading (in seconds).

4.3 The size of trace slices (in MBytes) including all time steps. Note that the Vortex dataset is time-varying.

5.1 The percentages of user rankings for each image based on the ease of following the underlying flow paths.

5.2 The percentages of user rankings for each image based on the ease of locating the critical points by observing the streamlines.

5.3 The percentages of user rankings for each image based on the overall effectiveness of visualization considering the flow paths and critical points.

5.4 Information on four different datasets, and the number of streamlines generated by the algorithm.

5.5 Timings (in seconds) measured for generating streamlines. Each row corresponds to a data set listed in the same row of Table 5.4.

LIST OF FIGURES

1.1 Hand-drawn streamlines for a flow field around a cylinder. Image courtesy of Greg Turk [63].

2.1 Different types of grids: (a) regular Cartesian grid (b) irregular Cartesian grid (c) structured grid (d) unstructured grid.

2.2 Classification of critical points of two-dimensional vector fields. R1 and R2 denote the real parts of the eigenvalues of the Jacobian matrix, while I1 and I2 denote the imaginary parts. Image courtesy of Helman and Hesselink [24].

4.1 The creation of trace slices by backward advection.

4.2 Texture advection using two-stage texture lookups.

4.3 Comparison of particle position errors for travelling 1 to 10 time steps using the down-sampled trace slices and the down-sampled vortex dataset reduced from 100x100 to 50x50. The X axis indicates the time steps that the particles have travelled, and the Y axis indicates particle position errors compared to the accurate traces using the Euclidean distance in the field.

4.4 Comparison of particle position errors for travelling 1 to 10 time steps using the down-sampled trace slices and the down-sampled vortex dataset reduced from 100x100 to 25x25. The X axis indicates the time steps that the particles have travelled, and the Y axis indicates particle position errors compared to the accurate traces using the Euclidean distance in the field.

4.5 Rendering of the post dataset (a) with (b) without the multi-resolution level of detail control.

4.6 (a) With the correction of noise distribution, no stretched pattern can be seen (b) rendering using LIC with the original resolution of 52x62.

4.7 The image on the left was generated when zoomed in. As the user zoomed out from the image on the left, the algorithm was able to produce a clearer pattern by switching to a lower resolution of trace slices and noise texture (upper right), while the algorithm with no LOD control produced an aliased result (lower right).

4.8 A similar test as Figure 4.7 using the time-varying vortex dataset. It can be seen that this algorithm produced a better image (upper right) compared with no level of detail adjustment (lower right).

5.1 Streamlines generated by the algorithm.

5.2 Assume the flow field is linear and streamlines are straight lines. The circle in the images denotes the region where a critical point is located. Black lines represent the exact streamlines seeded around the critical point. The orange lines represent the approximate vectors obtained by considering the influence of only the one closest streamline (left), and the blended influence of the two closest streamlines (right).

5.3 Streamlines generated by the algorithm on the Oceanfield data.

5.4 Streamlines generated when the flow topology is considered. There are three saddle and two attracting focus critical points in this data.

5.5 (a) Representative streamlines generated by the algorithm (b) gray scale image colored by one minus a normalized value of the cosine of the angle between vectors from the original field and the reconstructed field. Dark color means the two vectors are almost aligned with each other, while brighter color means more error. The maximal difference between the vector directions in this image is about 26 degrees, and the minimal difference is 0 degrees.

5.6 (a) Gray scale image colored by the distance errors (in units of cells) between two streamlines integrated from each grid point in the original vector field and the reconstructed one. Dark color means low error, while brighter color means higher error (b) histogram of the streamline errors collected from all grid points in the field. The X axis is the error, while the Y axis is the frequency of the corresponding error value. The maximal difference is 23.1 and the minimal is 0.0. The dimensions of the field are 100x100.

5.7 A group of images used in the first task of the user study.

5.8 Streamlines generated by Mebarki et al.'s algorithm (left), Liu et al.'s algorithm (middle), and my algorithm (right).

5.9 Interface for predicting particle advection paths. Blue arrows on red streamlines show the flow directions. The red point is the particle to be advected from.

5.10 Mean errors for the advection task on the four different datasets. The X axis stands for the radius of circles around the selected points, and the Y axis depicts the mean error plus or minus the standard deviation. A larger value along the Y axis means higher error. The Y axis starts from -1 to make the graphs easier to read. Dimensions of the datasets: (a) 64x64 (b) 64x64 (c) 64x64 (d) 100x100.

6.1 Visualization pipeline of the image-based streamline generation scheme.

6.2 Streamlines generated on two different stream surfaces.

6.3 Seeding templates for different types of critical points - left: repelling or attracting node; middle: attracting or repelling saddle and spiral saddle; right: attracting or repelling spiral (critical point classification image courtesy of Alex Pang).

6.4 Streamlines generated from critical point templates. Three sphere templates stand for sinks, while the two-cone template stands for a saddle.

6.5 (a) An isosurface of velocity magnitude colored by using the velocity (u,v,w) as (r,g,b) (b) streamlines generated from the isosurface.

6.6 (a) A slicing plane colored by using the velocity (u,v,w) as (r,g,b) (b) streamlines generated from the slicing plane.

6.7 (a) The cylinder as an external object (b) streamlines generated from a cylinder.

6.8 Level of detail streamlines generated at three different scales. It can be seen that as the field is projected to a larger area, more streamlines that can better reveal the flow features are generated.

6.9 Streamlines computed using different offsets from a depth map generated by a sphere. (a) no offset from the original depth map (b) by increasing a value from the original depth map (c) by further increasing a value from the original depth map (d) by decreasing a value from the original depth map.

6.10 An example of peeling away one layer of streamlines by not allowing them to integrate beyond a fixed distance from the input depth map.

6.11 First row: rendered images of a stream surface from different viewpoints. Second row: streamlines generated at the corresponding viewpoints. Third row: the combined images of streamlines rendered from four different views.

6.12 Streamlines generated from three different cylinder locations (left three images) are combined together and rendered to the image on the right.

6.13 Streamline densities are controlled by velocity magnitude on a slice. (a) larger velocity magnitudes are displayed in brighter colors (b) the streamlines generated from the slice.

6.14 Streamlines generated and rendered in three different styles by the image-based algorithm.

6.15 The percentage of total time each main step used.

6.16 The pink curve (left axis as scale) shows the number of streamlines, while the blue one (right axis as scale) shows the number of line segments generated.

6.17 The time (in seconds) to generate streamlines from an isosurface for different separating distances (pixels) using the Plume data set.

CHAPTER 1

INTRODUCTION

Simulations play an important role in scientific and engineering fields: they can be used to model a phenomenon, analyze what has happened, and predict what will happen. By applying graphics techniques to the data output from simulations, visualization [17] provides an intuitive way to interpret the data and further explore the phenomena the data describe. This visual information bridges the gap between the explicit and implicit information inherent in the data and the end users.

Visualization of vector fields has long been of major interest for exploring fluid dynamics [1, 66], having evolved from experimental flow visualization to computer-simulated flow visualization. Taking some of the experimental flow visualization techniques [16] as examples, we can see how helpful they are in solving different problems.

To study the flow field on a surface, color pigments are mixed with oil and painted on the surface of a model in a wind tunnel [48]. The air flowing over the surface carries the oil with it, and a streaky deposit of the paint remains to mark the directions of the flow. To visualize the flow dynamics of a liquid, color dyes are injected into it. This was a popular way to visualize how the flow converges, diverges, and mixes downstream as the dyes flow through; when different colors are used, we can see how streams from different regions mix together. For experimental techniques, both the setup of the equipment and the materials used can introduce errors during the experiments. In addition, it is not easy to reproduce results generated from previous experiments with special setups. With faster computing power and higher-complexity simulations, more and more data have been analyzed using computational models in recent decades.

Among the existing computerized techniques for visualizing flow fields, texture-based and geometry-based methods are the two most popular. The goal of my research is to design new algorithms to effectively visualize vector fields, addressing the following issues.

Texture-based Flow Visualization: Existing texture advection techniques for structured recti- and curvi-linear grid data can be classified into object space and image space methods. In the object space methods such as [12, 42], the computation of textures is first performed in the domain that defines the flow field, using techniques such as Line Integral Convolution (LIC). The resulting texture is then mapped to the proxy geometry representing the underlying surface and displayed to the screen. The image space methods such as IBFVS [70] or ISA [34], on the other hand, perform the calculation directly on the screen. In those methods, the computation is done at the per-fragment level through successive advection and blending of textures.
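The idea behind LIC can be sketched in a few lines of code. The following is a minimal, unoptimized CPU sketch (fixed-step Euler tracing, nearest-neighbor sampling, box filter), not the GPU-based advection this dissertation develops; the function name `lic` and its parameters are illustrative.

```python
import numpy as np

def lic(vx, vy, noise, L=10, h=0.5):
    """Minimal Line Integral Convolution: average a noise texture
    along short streamlines of the (vx, vy) field (box filter)."""
    ny, nx = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for j in range(ny):
        for i in range(nx):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):          # trace downstream and upstream
                x, y = float(i), float(j)
                for _ in range(L):
                    ii, jj = int(round(x)), int(round(y))
                    if not (0 <= ii < nx and 0 <= jj < ny):
                        break                 # streamline left the texture
                    total += noise[jj, ii]
                    count += 1
                    u, v = vx[jj, ii], vy[jj, ii]
                    n = np.hypot(u, v)
                    if n == 0:
                        break                 # critical point: stop tracing
                    x += sign * h * u / n     # Euler step along the flow
                    y += sign * h * v / n
            out[j, i] = total / max(count, 1)
    return out
```

Because each output pixel averages noise values along its streamline, the result is smeared along the flow direction and remains noisy across it, which is what makes the flow pattern visible.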

In general, object space methods do not consider the viewing parameters related to the display when the texture advection is performed. As the resulting flow texture is mapped to the surface mesh, aliasing or distortion can occur if there is a large discrepancy between the density of the mesh and the resolution of the screen. When multiple grid cells are projected to a single pixel, for example, a straightforward mapping of the flow texture to the geometry will produce an aliased result because the texture is under-sampled. When the density of the grid after projection is much lower than the screen resolution, on the other hand, the output texture does not possess enough granularity to depict the flow directions clearly because the texels in the flow texture will only get enlarged or interpolated. In fact, both aliasing and a lack of clear depiction of flow directions can exist simultaneously when the grid density varies substantially across the field domain. This is particularly common for data defined on curvilinear grids.

For the image space methods, when the underlying flow field is defined on a surface mesh existing in three-dimensional space, the mesh is first projected onto the image plane before the texture advection and blending are performed. IBFVS [70] does the projection after the vertices are advected in the flow field while ISA [34] projects both the vertices and the vector field before the advection is performed. In those methods, since the input texture is defined and advected in screen space, the texture aliasing problem is alleviated. Performing the texture advection after projection, however, may encounter several problems. First, since the input noise is defined in screen space and thus has the same size and frequency everywhere regardless of the distances, forms, and sizes of the objects, important cues for depth and shape reasoning in the three-dimensional space are lost. Also, when multiple cells are projected onto the same pixel, since the advection and blending are performed in the image space, the path of the texture advection can be incorrect. Finally, the restriction of only using input textures defined on the image plane makes it more difficult to control the appearance of the output, especially for the case when it is more desirable to advect textures that adhere to the object surfaces.

A view-dependent flow texture advection algorithm is presented based on a hybrid image and object space approach. The algorithm can be applied to two-dimensional steady and time-varying vector fields defined on structured rectilinear and curvilinear surface meshes. It is an image space method in the sense that the flow texture is computed at each fragment at the rasterization stage, when the screen projection of the mesh has already been determined by the given viewing parameters. It is an object space method because the input texture to be advected and the flow line advection paths are all defined in the original domain where the flow field is defined. This preserves important depth cues that allow better depiction of shapes.

The proposed algorithm is based on a novel intermediate representation of the flow field, called a Trace Slice, which can generate the flow texture at a desired resolution interactively based on the run-time viewing parameters. The algorithm can generate flow patterns with appropriate granularity in the output texture even at places where the mesh is sparse. As the user zooms in and out of the field, a flow texture of an appropriate resolution is computed at an interactive rate. When the view is constantly changing, it does not produce a blurred result as in the image based methods [70, 34], where the texture only clears up over the course of several frames.

Two-dimensional Streamline Placement: Generally speaking, the main challenge for streamline-based methods is the placement of seeds. On the one hand, placing too many streamlines can make the final images cluttered, and hence the data become more difficult to understand. On the other hand, placing too few streamlines can miss important flow features.

Hand-drawn streamlines are frequently shown in the scientific literature to provide concise and illustrative descriptions of the underlying physics. Fig. 1.1 shows such an example. The abstract information provided by the streamlines in the image clearly shows the primary features of the flow field. Even though the streamlines do not cover the entire field, we are able to create a mental model that reconstructs the flow field when looking at this concise illustration. That is, to depict a flow field, it is unnecessary to draw streamlines at a very high density. Abstraction can effectively prevent visual overload, emphasizing the essential while deemphasizing the trivial or repetitive flow patterns. In the visualization research literature, several streamline seeding algorithms have been proposed [63, 28, 72, 45, 39]. Most of these methods, however, are based on evenly-spaced distribution criteria: streamlines are spaced evenly apart at a pre-set distance threshold across the entire field. While those methods can reduce visual cluttering by terminating the advection of streamlines when they come too close to each other, more streamlines than necessary are often generated as a result. In addition, no visual focus is provided to help viewers quickly identify the overall structure of the flow field.

Figure 1.1: Hand-drawn streamlines for a flow field around a cylinder. Image courtesy of Greg Turk [63].

Spatial coherence often exists in a flow field, meaning neighboring regions have similar vector directions and nearby streamlines resemble each other. To create a concise and illustrative visualization of streamlines, a seeding strategy is presented that utilizes the spatial coherence of streamlines in two-dimensional vector fields. The goal is to succinctly and effectively illustrate vector fields, rather than uniformly laying out streamlines with equal distances among them, as in most existing methods. The density of streamlines in the final image is varied to reflect the coherence of the underlying flow patterns and provide visual focus. In the algorithm, two-dimensional distance fields representing the distances from each grid point in the field to the nearby streamlines are computed. From the distance fields, a local metric is derived to measure the dissimilarity between the vectors from the original field and an approximate field computed from the nearby streamlines. A global metric is defined to measure the dissimilarity between streamlines. A greedy method chooses a point as the next seed if both its local and global dissimilarity satisfy the requirements.
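The overall shape of such a greedy loop can be sketched as follows. This is only a rough illustration: it uses the distance to existing streamlines as a crude stand-in for the local and global dissimilarity metrics defined in Chapter 5, and the names `greedy_seeds` and `trace` are hypothetical.

```python
import numpy as np

def greedy_seeds(field, trace, n_max, d_min):
    """Greedy seed-selection sketch: repeatedly seed where the current
    distance field to the existing streamlines is largest (a stand-in
    for the dissimilarity tests), then trace a streamline and update
    the distance field. `field` is an (ny, nx, 2) vector array; `trace`
    integrates one streamline and returns its points as an (m, 2) array."""
    ny, nx = field.shape[:2]
    ys, xs = np.mgrid[0:ny, 0:nx]
    dist = np.full((ny, nx), np.inf)      # distance to nearest streamline
    seeds, lines = [], []
    for _ in range(n_max):
        j, i = np.unravel_index(np.argmax(dist), dist.shape)
        if dist[j, i] < d_min:            # nothing dissimilar enough remains
            break
        seeds.append((i, j))
        pts = trace(field, (i, j))
        lines.append(pts)
        for (px, py) in pts:              # brute-force distance-field update
            dist = np.minimum(dist, np.hypot(xs - px, ys - py))
    return seeds, lines
```

A real implementation would replace the distance test with the local/global dissimilarity measures and use a faster distance-field update, but the structure (score, pick, trace, update) stays the same.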

Three-dimensional Streamline Placement: Streamline placement becomes more difficult for three-dimensional vector fields. For line primitives, after being projected to the screen, the relative depth relationship between neighboring line segments is lost. Thus, even though two lines are far away from each other in three-dimensional space, they might give the impression of overlapping or intersecting with each other in two-dimensional space. For visualizing streamlines in three-dimensional vector fields, spatial perception must be considered as well. An ideal streamline seed placement algorithm should be able to generate visually pleasing and technically illustrative images. It should also allow the user to focus on important local features in the flow field.

To better display three-dimensional streamlines and reduce visual cluttering in the output images, an image based approach is presented. Visual cluttering happens because streamlines can arbitrarily intersect or overlap with each other after being projected to the screen, which makes it difficult for the user to perceive the underlying flow structures. In the algorithm, instead of placing streamline seeds in three-dimensional space, seeds are placed on the image plane and then unprojected back to object space before streamline integration takes place. The three-dimensional position of a seed is uniquely determined by the selected image position and its depth value obtained from an input depth map. By carefully spacing out the streamlines in image space as they are integrated, visual cluttering can be effectively reduced, which minimizes depth ambiguity caused by overlapping streamlines in the image. It is feasible to achieve a variety of effects, such as level of detail, depth peeling, and stylized rendering, to enhance the perception of three-dimensional flow lines. Another advantage is that seed placement and streamline visualization become more tightly coupled with other visualization techniques. As the user explores other flow related variables and spots interesting features on the screen, seeds can be placed directly on the image without a separate process to find seed positions surrounding the features generated by the visualization technique in use.
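The unprojection step is standard graphics math and can be made concrete with a small sketch. This assumes GL-style normalized device coordinates and a caller-supplied inverse view-projection matrix; neither the function name nor the conventions are prescribed by the algorithm itself.

```python
import numpy as np

def unproject(px, py, depth, width, height, inv_viewproj):
    """Map an image-plane seed at pixel (px, py), with a depth-map value
    in [0, 1], back through the inverse view-projection matrix to a 3D
    object-space point (GL-style NDC conventions assumed)."""
    ndc = np.array([2.0 * px / width - 1.0,   # x: [0, w] -> [-1, 1]
                    2.0 * py / height - 1.0,  # y: [0, h] -> [-1, 1]
                    2.0 * depth - 1.0,        # z: [0, 1] -> [-1, 1]
                    1.0])
    p = inv_viewproj @ ndc
    return p[:3] / p[3]                       # perspective divide
```

Each seed chosen in image space is pushed through this mapping once, and streamline integration then proceeds in object space from the returned point.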

The remainder of this dissertation is organized as follows: Chapter 2 presents definitions and background on vector fields that will be useful in subsequent chapters. Chapter 3 reviews the related work. New algorithms are described in Chapters 4, 5, and 6, respectively. The dissertation is concluded in Chapter 7.

Portions of this research have been published in [37] and [38].

CHAPTER 2

BACKGROUND

This chapter introduces vector fields and related concepts that will be useful in subsequent chapters.

2.1 Vector Fields

A vector field f defined on an n-dimensional domain S ⊆ Rⁿ can be represented by a mapping f(x): S → Rⁿ, where x is given in standard Euclidean coordinates (x₁, x₂, ..., xₙ). In the case of n = 3, this represents a static three-dimensional vector field. Often the flow data are given with respect to time; such a time-varying flow field can be defined as f(x, t): S × I → Rⁿ, where I ⊆ R and t ∈ I denotes time.

2.1.1 Grids

In real applications, flow simulations are performed on a domain with respect to a certain type of grid. For example, if the simulation is carried out along the surface of a plane, the grid is constructed to wrap the shape of the plane. Therefore, flow fields can be defined on various types of grids. Generally speaking, there are regular and irregular types. For the regular types, for example Fig. 2.1 (a), (b), and (c), there is a mathematical relationship among the composing points and cells, so the grid can be represented implicitly, which saves memory storage and computation in operations such as data interpolation and point location. For the irregular type, for example Fig. 2.1 (d), which is the most general form, both the topology and the geometry are completely unstructured.

(a) (b) (c) (d)

Figure 2.1: Different types of grids (a) regular Cartesian grid (b) irregular Cartesian grid (c) structured grid (d) unstructured grid.

Regular Cartesian Grid

The regularity of the topology and geometry of this type suggests a natural mapping to the x-y-z coordinate system. A particular point or cell can be uniquely indexed by three indices (i, j, k), which simplifies both data interpolation and point location in the grid.
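This implicit i-j-k addressing can be made concrete with a small sketch; the helper names below are illustrative, but they show why neither coordinates nor connectivity need to be stored for a regular Cartesian grid.

```python
def flat_index(i, j, k, nx, ny):
    """Implicit addressing on a regular Cartesian grid: the (i, j, k)
    point maps to a single array offset, so no explicit connectivity
    needs to be stored."""
    return i + nx * (j + ny * k)

def grid_point(i, j, k, origin, spacing):
    """The geometry is implicit too: position = origin + index * spacing."""
    ox, oy, oz = origin
    dx, dy, dz = spacing
    return (ox + i * dx, oy + j * dy, oz + k * dz)
```

Point location is the inverse of `grid_point`: truncating (position - origin) / spacing gives the containing cell directly, with no search.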

Irregular Cartesian Grid

The difference between the regular and the irregular Cartesian grid is the irregularity of the geometry. Geometry information, such as cell size, needs to be considered in some operations, such as interpolation.

Structured Curvilinear Grid

A structured curvilinear grid is a type of grid with regular topology but irregular geometry. That is to say, the topology of the grid can be represented implicitly by specifying the dimensions; the geometry, however, needs to be represented explicitly by an array of point coordinates. A typical application is Computational Fluid Dynamics (CFD), where a grid is generated to wrap the surface of objects in the flow field. The density of the grid can vary, depending on the structure of the object and the required accuracy: the higher the accuracy needed, or the higher the gradient, the denser the grid. Usually the visualization of vector fields defined on curvilinear grids can be performed in two different spaces:

• Physical space: the space in which the parameter surfaces are described and the motion is defined. Properties of the vector field, such as velocity, density, pressure, or temperature, are generated and stored at each grid point. The coordinates of a grid point in this space are denoted ~x = (x, y, z).

• Computational space: this space lies on a regular Cartesian grid, obtained from physical space by a transformation using the inverse Jacobian matrix. The coordinates of a grid point in this space are denoted ~ξ = (ξ, η, ζ).

Even though it is feasible to directly apply visualization techniques in physical space on a curvilinear grid, the vector field is usually transformed to computational space to perform numerical operations. This is because the regularity of the topology in computational space simplifies interpolation and point location. In computational space, the vector at any location can be reconstructed by linear interpolation of the vectors at neighboring grid points. In physical space, however, the geometry is irregular and every cell has its own shape and size, so the reconstruction becomes much more complex.
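The linear reconstruction in computational space can be sketched as follows; this is a minimal bilinear interpolation on a regular two-dimensional grid, with assumed names and array layout (not from the dissertation):

```python
import numpy as np

def interpolate_vector(field, xi, eta):
    """Bilinearly interpolate a 2D vector field sampled on a regular
    (computational-space) grid at the fractional location (xi, eta).

    field: array of shape (nx, ny, 2) holding one vector per grid point.
    """
    i, j = int(np.floor(xi)), int(np.floor(eta))
    # Clamp so that the four surrounding grid points stay inside the array.
    i = min(max(i, 0), field.shape[0] - 2)
    j = min(max(j, 0), field.shape[1] - 2)
    s, t = xi - i, eta - j  # fractional offsets within the cell
    return ((1 - s) * (1 - t) * field[i, j]
            + s * (1 - t) * field[i + 1, j]
            + (1 - s) * t * field[i, j + 1]
            + s * t * field[i + 1, j + 1])
```

In physical space the same reconstruction would require locating the (arbitrarily shaped) cell containing the query point first, which is the complexity the text refers to.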

The velocity in physical space and in computational space can be denoted as equations 2.1 and 2.2, respectively.

 ∂x  ∂t ∂~x  ∂y  ∂t = ∂t (2.1) ∂z ∂t

 ∂ξ  ∂t ∂ξ~  ∂η  (2.2) ∂t = ∂t ∂ζ ∂t The transformation of the vector field from physical space to computational space is then specified by

 −1  ∂ξ  ∂x ∂x ∂x  ∂x  ∂t ∂ξ ∂η ∂ζ ∂t  ∂η   ∂y ∂y ∂y   ∂y  ∂t =  ∂ξ ∂η ∂ζ  · ∂t (2.3) ∂ζ ∂z ∂z ∂z ∂z ∂t ∂ξ ∂η ∂ζ ∂t

Unstructured Grid

For unstructured grids, both the topology and the geometry are irregular. Unlike structured grids, the connectivity relationships between vertices must be stored explicitly. Ueng et al. [64] proposed an efficient method to construct streamlines on unstructured grids.

2.1.2 Integral Curves

One of the main tasks in applying visualization techniques to vector fields is to explore the dynamical evolution of a fluid system, which gives rise to a set of integral curves. These curves are defined by an ordinary differential equation with different initial conditions,

\[
\frac{\partial \vec{x}(t)}{\partial t} = f(\vec{x}, t) \tag{2.4}
\]

where ~x(t) represents the particle position at time t, and t is the integration time.

• Streamline: A streamline is a curve that is everywhere tangent to the instantaneous local vector field. In an unsteady flow field, the instantaneous vector at a fixed time is considered. At an instantaneous time τ, a streamline is the solution to

\[
\frac{\partial \vec{x}(t)}{\partial t} = f(\vec{x}, \tau), \quad \text{where } \vec{x}(t_0) = \vec{x}_0 \tag{2.5}
\]

• Pathline: A pathline is the actual path traveled by an individual fluid particle over some time period. Starting from time t0, a particle path is the solution to

\[
\frac{\partial \vec{x}(t)}{\partial t} = f(\vec{x}, t), \quad \text{where } \vec{x}(t_0) = \vec{x}_0 \tag{2.6}
\]

• Streakline: A streakline is the line joining the positions of all particles that have been released previously from the same point. To get a streakline at time t, a set of particles is released from a position x0 at times s ∈ [t1, t], and the position of each particle at time t is the solution to equation 2.6 with its corresponding initial condition (x0, s).

To solve these ODEs, numerical methods [6] are usually applied. Depending on the required accuracy, performance, and complexity, different approaches can be used. In this dissertation, I mainly use the fourth-order Runge-Kutta (RK4) numerical integration method [49] to better approximate the local behavior of the integral curves. For a static vector field, RK4 is defined by:

\[
\begin{aligned}
k_1 &= h \cdot v(x_n) \\
k_2 &= h \cdot v(x_n + \tfrac{1}{2} k_1) \\
k_3 &= h \cdot v(x_n + \tfrac{1}{2} k_2) \\
k_4 &= h \cdot v(x_n + k_3) \\
x_{n+1} &= x_n + \frac{k_1}{6} + \frac{k_2}{3} + \frac{k_3}{3} + \frac{k_4}{6} + O(h^5)
\end{aligned}
\tag{2.7}
\]

where h is the step size, which can be adjusted adaptively: when the flow becomes turbulent, a smaller step size is used to capture the changes, and when the flow is stable, a larger step size is used to save integration time.
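The RK4 scheme above can be sketched in Python; `rk4_step` and `integrate_streamline` are illustrative names, and the adaptive step-size control mentioned in the text is omitted for brevity:

```python
def rk4_step(v, x, h):
    """One fourth-order Runge-Kutta step (equation 2.7) for a static
    vector field. `v` maps a position to a velocity; `x` is the current
    position (scalar or numpy array); `h` is the step size."""
    k1 = h * v(x)
    k2 = h * v(x + 0.5 * k1)
    k3 = h * v(x + 0.5 * k2)
    k4 = h * v(x + k3)
    return x + k1 / 6 + k2 / 3 + k3 / 3 + k4 / 6

def integrate_streamline(v, seed, h=0.05, steps=200):
    """Trace an integral curve by repeated RK4 steps from `seed`."""
    curve = [seed]
    for _ in range(steps):
        curve.append(rk4_step(v, curve[-1], h))
    return curve
```

For a rotational field such as v(x, y) = (-y, x), the traced curve stays very close to a circle, illustrating the O(h^5) local accuracy of the scheme.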

2.2 Critical Points and Flow Topology

Critical points are points at which the magnitude of the vector vanishes. In mathematics, a critical point is a point in the domain of a function where the derivative is zero or the function is not differentiable; it is also called a stationary point. The critical points in a vector field determine the topology of that field [22, 23, 2], which is very important for analyzing the underlying flow dynamics. Critical points can be characterized according to the behavior of nearby tangent curves (two-dimensional) or tangent surfaces (three-dimensional).

For simplicity, a two-dimensional vector field is used as an example to discuss critical points and their classification. A vector (u, v) in the vicinity of a critical point (x0, y0) can be expressed by the first-order Taylor series expansion

\[
\begin{aligned}
u(dx_1, dy_1) &\approx \frac{\partial u}{\partial x_1} dx_1 + \frac{\partial u}{\partial y_1} dy_1 \\
v(dx_1, dy_1) &\approx \frac{\partial v}{\partial x_1} dx_1 + \frac{\partial v}{\partial y_1} dy_1
\end{aligned}
\tag{2.8}
\]

where dx1 and dy1 are small distance increments from the position of the critical point. The critical point can be classified according to the eigenvalues of the Jacobian matrix, as defined in equation 2.3, of the vector (u, v) with respect to (x0, y0).

Fig. 2.2 shows the classification of critical points according to the eigenvalues for two-dimensional vector fields. A real eigenvector of the matrix defines a direction such that, moving slightly off the critical point in that direction, the field is parallel to the direction of movement. Thus, at the critical point, the real eigenvectors are tangent to the trajectories that end on the point. The sign of the real part of an eigenvalue indicates the attracting (incoming) or repelling (outgoing) nature of a critical point: when the real part is greater than zero, the critical point is repelling; otherwise, it is attracting. The imaginary part denotes whether the point exhibits a circulation pattern. When the imaginary part is nonzero, the corresponding critical point is a repelling or attracting focus rather than a node. A saddle point is distinct from the other types because only four tangent curves end at the point. These curves are tangent to the two eigenvectors of the Jacobian matrix, which are the separatrices of the saddle point. A saddle has one positive and one negative eigenvalue. Near a saddle, the vector field approaches the critical point along the negative eigendirections and recedes along the positive eigendirections.
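The eigenvalue criteria above can be sketched as a small classifier; this is a minimal illustration whose function name is an assumption, with the center case (purely imaginary eigenvalues) included for completeness:

```python
import numpy as np

def classify_critical_point(J, eps=1e-9):
    """Classify a 2D critical point from the Jacobian of (u, v),
    following the eigenvalue criteria summarized in Fig. 2.2."""
    ev = np.linalg.eigvals(np.asarray(J, dtype=float))
    re, im = ev.real, ev.imag
    if np.all(np.abs(re) < eps) and np.any(np.abs(im) > eps):
        return "center"          # purely imaginary eigenvalues
    if re[0] * re[1] < -eps:
        return "saddle"          # one positive, one negative real part
    kind = "repelling" if re.max() > 0 else "attracting"
    shape = "focus" if np.any(np.abs(im) > eps) else "node"
    return kind + " " + shape
```

For example, the Jacobian [[1, -1], [1, 1]] has eigenvalues 1 ± i (positive real part, nonzero imaginary part) and is therefore classified as a repelling focus.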

The same principle and method for classifying critical points can be applied to three-dimensional vector fields, except that there are more types of critical points.

Figure 2.2: Classification of critical points of two-dimensional vector fields. R1 and R2 denote the real parts of the eigenvalues of the Jacobian matrix, while I1 and I2 denote the imaginary parts. Image courtesy of Helman and Hesselink [24].

CHAPTER 3

RELATED WORK

In this chapter, I discuss related work in the areas of flow texture, streamline placement, vector field simplification, streamline clustering, and perception enhancement.

3.1 Flow Texture

Texture advection has been widely used for visualizing flow fields [44, 51, 33].

It provides full spatial coverage of the field, which better shows the global characteristics. Many techniques have been proposed to visualize textures on two-dimensional or three-dimensional vector fields.

Among the existing texture advection methods, Line Integral Convolution (LIC) [5] and Spot Noise [67] are generally considered classic. In LIC, convolution is performed along the streamline path originating from each pixel in a two-dimensional grid to create coherent flow patterns. In Spot Noise, random spots are warped along the local flow directions and blended together to create the final image. Both algorithms have inspired a substantial amount of follow-up research in the past decade [56, 9, 12, 53, 31, 74, 78, 73, 41, 54, 19, 8, 71, 58, 65], to name a few.

To speed up the computation of the original LIC algorithm, Stalling and Hege [56] introduced a new line integral algorithm called FastLIC. Unlike the original LIC algorithm, which integrates a streamline from every pixel in the output image and performs convolution, FastLIC efficiently reuses the intensities obtained along the convolution path of each streamline by spreading the values to the many pixels covered by that streamline. It not only saves a lot of computing overhead, but also makes it feasible to compute images at arbitrary resolution. For time-varying fields,

Shen and Kao [54] presented UFLIC, an Unsteady Flow Line Integral Convolution algorithm. Their algorithm uses a time-accurate value scattering scheme to model the texture advection process. To further enhance the coherence of the flow animation, they successively update the convolution results over time by using the output from the previous step as input to the next step. Jobard et al. [27] proposed a Lagrangian-Eulerian Advection (LEA) algorithm for unsteady flows, which performs texture advection at each fragment at interactive speed. van Wijk [69] proposed Image Based Flow Visualization (IBFV), which advects the underlying mesh by the flow field. Through successive updates of texture coordinates at the mesh vertices, an input texture is continuously advected and blended. Recently, Li et al. [36, 55] proposed the Chameleon algorithm, which utilizes GPU-based dependent texture hardware for more flexible control of texture appearance to visualize three-dimensional steady and unsteady flows. Xue et al. [76] proposed two techniques to render implicit flow fields, which are constructed to record information about flow advection. The algorithm provides a way to visualize information inside the flow volume.

For non-parametric surfaces, van Wijk extended IBFV [69] to IBFVS [70], which first advects the mesh vertices in three-dimensional space, and then projects the vertices to screen space to advect a screen-aligned input texture. Laramee et al. [34] proposed another image-space method, called ISA, which projects the mesh vertices as well as the vector field to the two-dimensional screen before texture advection is performed. Weiskopf et al. [75] proposed a unified framework for two-dimensional time-varying fields that can generate animated flow textures to highlight both instantaneous and time-dependent flow features.

Although techniques for generating flow textures on surfaces can be applied, some algorithms have been proposed specifically to visualize flow textures on curvilinear grids. Forssell and Cohen [12] extended the original LIC to visualize flow on curvilinear grids. First, the vectors in physical space, which defines the warped structure of the curvilinear grid, are converted to computational space, which defines its logical organization as a regular grid, by multiplying the vector at each grid point by the inverse Jacobian matrix. The conventional LIC algorithm is then performed in computational space to generate the two-dimensional flow texture, which is finally mapped back onto the three-dimensional surface in physical space. Because the sizes of cells in a curvilinear grid can differ dramatically, when the texture is mapped back to physical space the distortion of the texture can vary from cell to cell, which might give users a wrong impression of the underlying field. This effect can become more severe if animation is used to emphasize the flow motion. To address the aliasing caused by the uniform convolution length used in the LIC algorithm, they proposed to vary the convolution length based on the grid density in the direction of the flow. In [42], Mao et al. pointed out that the solution proposed by Forssell and Cohen does not completely solve the problem, because the noise granularity is equally important for generating flow textures on curvilinear grids. They proposed to use a multi-granularity noise texture based on a stochastic sampling technique called Poisson ellipse sampling. The computational space is re-sampled into a set of randomly distributed points, and the sizes of the ellipses are adjusted according to the local cells in physical space. The final noise image, reflecting the density of the grid, is reconstructed from these points and ellipses and used as input to LIC. As for multi-frequency noise images for LIC, Kiu and Banks [31] presented an explicit method using the velocity magnitude. The vector field is divided into intervals, with each interval corresponding to vector magnitudes in some range. The noise frequency assigned to each interval is inversely proportional to the vector magnitude, and the final noise image is composed of a sequence of images with different frequencies. Although the problem of aliasing in regions of high grid density is alleviated, their method is not interactive and cannot adapt to arbitrary viewing conditions as the user zooms in and out of the field.

3.2 Two-dimensional Streamline Placement

For two-dimensional static vector fields, there exist several streamline seeding strategies.

One general strategy is to place streamlines evenly according to the distance between them, so that the image space is uniformly divided by the streamlines. Turk and Banks first proposed the image-guided streamline placement algorithm [63], which uses an energy function to measure the difference between a low-pass filtered streamline image and an image of the desired visual density. The motivation is that the energy of evenly placed streamlines should also be even: a high energy value means streamlines are close to each other, while a low energy value means the region is devoid of streamlines. With this energy function, a random optimization process is performed iteratively to reduce the energy through some pre-defined operations on the streamlines. This method produces high-quality images; however, the convergence is slow and the computation is expensive. Jobard and Lefer [28] proposed a simple and fast method to generate evenly spaced streamlines. A new seed is chosen at a minimal distance away from existing streamlines, and the streamline from this seed is integrated until it is too close to the existing streamlines or leaves the domain. The process terminates when there is no more void region in the field. It explicitly controls the distance between adjacent streamlines to achieve the desired density. The most time-consuming step is the computation of distances between streamlines, unlike the energy-based algorithm, which spends its time on the trial operations. Because new seeds are placed near existing streamlines, this approach can conflict with the preference for long streamlines: the new streamline tends to approach an existing streamline quickly, so its integration is terminated before it can travel far. To address this issue, Mebarki et al. [45] proposed a two-dimensional streamline seeding algorithm that places a new streamline at the farthest point away from all existing streamlines. The purpose of their algorithm is to generate long and evenly spaced streamlines; seeding a streamline in the largest empty region indeed favors longer streamlines. Delaunay triangulation is used to tessellate the regions between streamlines, which finds the largest void region and controls the distance between streamlines. The flow coherence is improved, although discontinuities still appear in the results as the density increases. Liu et al. [39] proposed an advanced evenly-spaced streamline placement strategy that prioritizes topological seeding and long streamlines to minimize discontinuities. Adaptive distance control based on local flow variance is used to address the cavity problem.
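Jobard and Lefer's evenly-spaced placement described above can be sketched as follows. This is a minimal illustration with forward-Euler tracing, brute-force distance queries (the original algorithm uses a uniform grid to accelerate them), and candidate seeds placed slightly beyond the separation distance; all names and parameter values are assumptions:

```python
import numpy as np

def evenly_spaced_streamlines(v, width, height, d_sep=0.1, d_test=0.05,
                              step=0.02, max_len=200):
    """Sketch of Jobard and Lefer's evenly-spaced streamline placement.
    `v(p)` returns the 2D flow vector at point p in [0,width]x[0,height];
    d_sep is the seeding distance and d_test (< d_sep) the termination
    distance against existing streamlines."""
    placed = []  # all sample points of accepted streamlines

    def too_close(p, d):
        if not placed:
            return False
        pts = np.asarray(placed)
        return np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1]).min() < d

    def trace(seed, sign):
        pts, p = [], np.array(seed, float)
        for _ in range(max_len):
            vel = np.asarray(v(p), float)
            n = np.hypot(vel[0], vel[1])
            if n < 1e-9:                      # stop near critical points
                break
            p = p + sign * step * vel / n     # unit-speed Euler step
            if not (0.0 <= p[0] <= width and 0.0 <= p[1] <= height):
                break
            if too_close(p, d_test):          # too close to another line
                break
            pts.append(p.copy())
        return pts

    queue, lines = [np.array([width / 2.0, height / 2.0])], []
    while queue:
        seed = queue.pop(0)
        if not (0.0 <= seed[0] <= width and 0.0 <= seed[1] <= height):
            continue
        if too_close(seed, d_sep):
            continue
        line = trace(seed, -1)[::-1] + [np.array(seed, float)] + trace(seed, +1)
        if len(line) < 3:
            continue
        placed.extend(line)
        lines.append(line)
        # Candidate seeds on both sides of the new line, just beyond d_sep.
        for p in line[::10]:
            vel = np.asarray(v(p), float)
            n = np.hypot(vel[0], vel[1])
            if n < 1e-9:
                continue
            normal = np.array([-vel[1], vel[0]]) / n
            queue.append(p + 1.1 * d_sep * normal)
            queue.append(p - 1.1 * d_sep * normal)
    return lines
```

On a uniform horizontal field this produces a stack of roughly parallel horizontal streamlines separated by about d_sep, which is the behavior the text describes.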

Even though evenly-spaced streamline placement strategies are popular, they have some potential issues. First, due to their simplicity, they do not consider the underlying flow structure. On the one hand, if the distance threshold is large, important flow features are easily missed; on the other hand, if the distance is small, the streamlines become very dense, which can cause visual aliasing. Second, the termination of streamline integration is decided according to the distance between streamlines, so it can be unclear whether a streamline terminated because of the flow field itself, for example by hitting a critical point, or because of the distance constraints with its neighbors in the final image. Some algorithms intentionally favor longer streamlines, but as their authors note, it is not easy to generate streamlines satisfying both the distance criterion and the length preference. This effect becomes worse for vector fields with convergent flow structures, because the flow in those regions tends to squeeze together. Streamlines can easily be terminated in those regions before they get close to each other, which can leave void regions in the final images.

Another strategy places streamlines according to the flow topology. The flow topology is determined by the types of critical points, which, together with the tangent curves (two-dimensional) or tangent surfaces (three-dimensional), divide the flow field into stable regions. The topology skeleton is important for analyzing flow fields, so it is natural to place streamlines based on this information. Verma et al. [72] proposed a seed placement strategy based on the flow topology characterized by critical points in two-dimensional vector fields. The algorithm was designed to capture the important flow patterns while also covering the region sufficiently with streamlines. Critical points in the field are first located and their types identified. The field is divided into regions, each containing one critical point, and a pre-defined template is applied according to the type of each critical point. The shape and size of the templates are determined by the influence region of the critical points. To obtain sufficient coverage, additional seed points are randomly distributed in empty regions using a Poisson disk distribution. In this way, the important features will not be missed, no matter how dense or sparse the final set of streamlines is.

3.3 Simplification of Vector Fields

In past years, many techniques have been proposed to simplify two-dimensional vector fields. One general class of simplification, known as clustering, works on the vector field itself, for example by constructing a hierarchy of vector fields.

Heckel et al. [18] proposed to generate a top-down segmentation of the discrete field by splitting clusters of points. At the beginning, all points of the vector field are placed in a single cluster, which is represented by a single point and an associated vector; both are computed by averaging the coordinates and vector values of the original vector field. To decide how to split this initial cluster, two streamlines are integrated for each point in the cluster, one based on the simplified vector field and the other based on the original vector field. The accumulated distance between corresponding sample points on the two streamlines is taken as the error value at that point, and the error value of the whole cluster is the maximum error value over all its points. Each cluster is split using a bisection plane. The construction of the hierarchy is an iterative process that always picks the cluster with the maximal error as the next cluster to split, until the maximal error of each cluster is less than a threshold value. Since the algorithm uses the visual difference between streamlines as the error metric to guide the splitting process, it implicitly incorporates some information about the topology of the underlying flow field. Telea and van Wijk [60] presented a method to hierarchically construct the clusters bottom-up from the input flow field. Starting with each node as a cluster, the algorithm repeatedly selects the two most similar neighboring clusters and merges them to form a larger cluster, until a single cluster covering the whole field is generated. The metric used to evaluate the similarity between vectors is based on direction, magnitude, and position comparisons. Du and Wang [10] proposed to use Centroidal Voronoi tessellations (CVTs) to simplify and visualize vector fields. Given a definition of the distance between two points in the vector field, which involves both the angle between the two vectors and the Euclidean distance, every point in a Voronoi region is closer to its own generator than to the generators of the other Voronoi regions. The resulting tessellation is the simplified vector field, with the centroid of each cell as the representative vector. The properties of CVT ensure that the results come from a global optimization rather than a locally greedy one. This algorithm is fast and easy to implement.

3.4 Streamline Clustering

Fiber tracking, also known as streamline tracing, is widely used to visualize the results of Diffusion Tensor Imaging (DTI) [35]. Bundles constructed from the clustering of fibers convey anatomically meaningful information, defining the connections between different grey-matter regions. Fibers in DTI differ from general streamlines in conventional vector fields because there is an inherent clustering structure in the fields, meaning the spatial coherence in local regions is more pronounced than in general vector fields. The methods used to cluster streamlines [46] can inspire methods for comparing streamlines in conventional vector fields. Corouge et al. [7] proposed to use the position and shape similarity between pairs of fibers and tested several distance metrics. For a pair of fibers, the distance between them can be evaluated as the closest point distance, the mean of closest-point distances, or the Hausdorff distance between corresponding points on the fibers. The shape-based distance is computed by extracting geometric features from the fibers, such as length, center of mass, and second-order moments. Brun et al. [4] presented a clustering method using normalized cuts. Representative information, such as the mean vector and covariance matrix of points on the traces, is first extracted and mapped to a Euclidean feature space. In the feature space, fiber traces are compared pairwise, and a weighted, undirected graph is created. This graph is partitioned into coherent sets using the normalized cut criterion.
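The pairwise distance metrics mentioned above can be sketched directly, treating each fiber as a sampled polyline (a point set); the function names are assumptions for illustration:

```python
import numpy as np

def _pairwise(A, B):
    """All pairwise Euclidean distances between points of polylines A and B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def closest_point_distance(A, B):
    """Minimum distance between any point of A and any point of B."""
    return _pairwise(A, B).min()

def mean_closest_distance(A, B):
    """Mean of the closest-point distances from each point of A to B."""
    return _pairwise(A, B).min(axis=1).mean()

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between the two sampled fibers."""
    d = _pairwise(A, B)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Note that `closest_point_distance` is small whenever the fibers touch anywhere, while the Hausdorff distance penalizes the worst mismatch, which is why the choice of metric strongly affects the resulting bundles.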

3.5 Three-dimensional Streamline Placement

For three-dimensional vector fields, the flow topology becomes more complex than in two-dimensional fields. When three-dimensional streamlines are projected to the two-dimensional screen for rendering, they can intersect or overlap with each other, and the depth information is lost. Thus, existing methods for two-dimensional fields cannot be effectively extended to three-dimensional fields. To generate aesthetically pleasing streamlines in three-dimensional flow fields, spacing control in object space alone is not enough; spacing control in image space is just as important. Compared with streamline seeding for two-dimensional vector fields, much less work has addressed these issues in three-dimensional fields.

After the topology-based seed placement [72] for two-dimensional vector fields was proposed, Ye et al. [77] extended this strategy to three-dimensional fields. Critical points are first located and classified, then appropriate templates are applied in the vicinity of the critical points. Finally, Poisson seeding is used to populate the remaining empty regions. Placing the more important streamlines first can reduce the visual cluttering effect to a certain degree. Even though this method effectively places streamlines in three-dimensional vector fields, analyzing the topology is not easy, because the detection and classification of critical points are numerical and approximate rather than exact. Some three-dimensional fields contain no critical points at all, in which case the seeds are simply placed with a Poisson distribution. The work presented by Mattausch et al. [43] was an extension of Jobard and Lefer's evenly-spaced algorithm [28] and multi-resolution strategy [29]. Spatial perception of the three-dimensional flow was improved by using depth cueing and halos. They also applied focus+context methods, ROI-driven streamline placement, and spotlights to reduce the occlusion problem. This method only controls the spacing between streamlines in object space; there is no guarantee for the spacing in image space or for the completeness of the flow pattern, where completeness means the topology presented by the final set of streamlines. In this case, both visual cluttering and loss of information are unavoidable.

3.6 Visualization Enhancement

Some research has addressed issues related to visual cluttering, occlusion, and perception in the context of flow visualization. Taking three-dimensional streamlines as an example, when they are projected to image space their spatial information is lost, which might hinder users' understanding and exploration of the underlying flow patterns. Lighting is one element that improves spatial perception, especially when streamlines form bundles. Stalling and Zöckler [57] employed a realistic shading model to interactively render a large number of properly illuminated fieldlines using two-dimensional textures. A unique outward normal vector, which is well defined on surfaces, does not exist for line primitives. Instead, the traditional lighting equations are transformed into a form defined by the lighting vector and the tangent vector of the line primitives. This is based on a maximum lighting principle, which gives a good approximation of specular reflection. To improve diffuse reflection, Mallo et al. [40] proposed a view-dependent lighting model based on averaged Phong/Blinn lighting of infinitesimally thin cylindrical tubes, using a simplified expression of the cylinder averaging. To emphasize depth discontinuities, which intuitively present the depth separation in a projected view, Interrante and Grosch [26] used a visibility-impeding volumetric halo function to highlight the locations and strengths of depth discontinuities. Interactive clipping of three-dimensional LIC volumes [50] addresses the occlusion issue. Li et al. [36] used lighting, silhouettes, and tone shading to incorporate various depth cues in their rendering framework. Limb darkening [20, 21] can be used to convey the three-dimensional shape and depth relations of fieldlines by creating a halo effect around each line.

CHAPTER 4

VIEW-DEPENDENT MULTI-RESOLUTIONAL FLOW TEXTURE ADVECTION

4.1 Algorithm Overview

The primary goal is to visualize two-dimensional steady and unsteady flow fields defined on rectilinear or curvilinear surface meshes that exist in three-dimensional space. An example of such data is a computational plane from a curvilinear mesh, as shown in Figure 4.7 in Section 4.6. To provide interactivity at run time, texture advection is computed directly at each fragment in image space using graphics hardware. Particle paths and the input texture to be advected are defined in object space to generate accurate and correct appearances. For grid cells that cover multiple pixels, the goal is to generate flow patterns with enough granularity within the projected region of those cells in the output texture, without up-sampling the flow field or integrating additional particle paths.

The main idea is to generate textures of various resolutions to match the screen resolution as the user zooms in and out of the data. To do so, it is important to have an intermediate representation of the flow field whose resolution can be easily adjusted at run time to allow flexible and efficient texture advection. This intermediate representation should avoid the problems commonly encountered when up- or down-sampling a flow field: up-sampling the vector field would create a large overhead for computing additional particle traces, while down-sampling the vector field can generate incorrect flow paths. The latter is because a small error in each vector of the down-sampled field can accumulate into a large error as the numerical integration proceeds. Another important criterion is that such an intermediate representation can be used directly by the texture advection algorithm and allows for effective use of modern graphics hardware.

4.2 Flow Field Representation

The core of the algorithm is a novel representation of the underlying flow field, called a trace slice, which is used for texture advection under different viewing conditions. The primary advantage is that instead of generating multi-resolution vector fields and using the approximated field to perform texture advection, the advection paths of the flow textures can be obtained more accurately at arbitrary resolutions using the trace slices. The trace slices also make it feasible to exploit programmable GPUs to perform view-dependent texture advection for each fragment at interactive speed.

A trace slice S_{Ts}^{Td} is a two-dimensional array with the same dimensions as the input grid. Each element of the trace slice corresponds to a mesh vertex, which can be indexed by coordinates (i, j) defined on the parametric surface. The attribute stored in the trace slice, denoted S_{Ts}^{Td}(i, j), is a 2-tuple (a, b), which means that if a particle is released from (a, b) in the flow field at time step Ts, it reaches vertex (i, j) at time step Td, where Td > Ts. In essence, the information stored in a trace slice S_{Ts}^{Td} is primarily used to advect a texture N released at time step Ts to time step Td. This is done by using the 2-tuple (a, b) stored in S_{Ts}^{Td}(i, j) as the texture coordinates to look up the input texture N defined at Ts.

To create the trace slices for a given flow field, the following steps are performed for every grid point (i, j) in the mesh, for each time step Td = 0...Tmax, where Tmax is the maximal time step in the time-varying field. Given a time Td, we first perform a backward advection from the grid point (i, j), which results in a pathline travelling in the space-time domain. Then, we sample the pathline at a sequence of time instants ti = Td − i × ∆t, i = 1..K, to get K positions along the pathline for a constant K, as long as ti stays greater than zero. The discussion of the choice of K is deferred to Section 4.4. Since the pathline is advected backwards, a space-time position (a, b, ti) implies that a particle released from (a, b) at time step ti travels forwards and arrives at (i, j) at time Td. According to our definition of trace slices, the position (a, b) is stored into S_{ti}^{Td}(i, j). If the same process is repeated for all grid points of the input mesh, a two-dimensional array S_{ti}^{Td}(i, j), i ∈ [0, Imax], j ∈ [0, Jmax], is formed, which is a trace slice. Because the pathlines are sampled K times for different ti, there are K trace slices, which all have the same destination time step Td. If the underlying field is a steady flow field, a virtual time is used for computing streamlines in this process. Figure 4.1 shows the process of creating K trace slices.
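The construction loop above can be sketched as follows; `advect_backward` is an assumed helper standing in for the backward pathline integration described in the text, and the dense nested loops are for clarity rather than performance:

```python
import numpy as np

def build_trace_slices(advect_backward, dims, Td, K, dt):
    """Sketch of trace-slice construction.

    advect_backward(i, j, Td, t): returns the seed position (a, b) such
    that a particle released from (a, b) at time t arrives at grid point
    (i, j) at time Td, i.e. the backward pathline from (i, j) sampled
    at time t. Returns a dict mapping each sampled start time ti to a
    trace slice of shape (Imax, Jmax, 2)."""
    Imax, Jmax = dims
    slices = {}
    for i_step in range(1, K + 1):
        ti = Td - i_step * dt
        if ti <= 0:            # sample only while ti stays positive
            break
        S = np.zeros((Imax, Jmax, 2))
        for i in range(Imax):
            for j in range(Jmax):
                S[i, j] = advect_backward(i, j, Td, ti)
        slices[ti] = S
    return slices
```

Each returned array corresponds to one trace slice S_{ti}^{Td}; in the actual system these arrays would be uploaded to the GPU as textures.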

4.3 Texture Advection

This section first describes how the trace slices are used to perform flow texture advection at run time for a fixed resolution; multi-resolution texture advection is described in Section 4.5. Since the underlying flow field is defined on a two-dimensional structured rectilinear or curvilinear mesh, the mesh surface can be rendered using one polygon (for a regular Cartesian grid) or multiple polygons (one per cell for a structured curvilinear grid) mapped with flow textures that are computed at run time. The texture rendering algorithm takes as input an initial texture that is to be advected and a set of trace slices loaded as textures. Without loss of generality, in the following it is assumed that the texture to be advected is a noise texture, although any texture can be used as input for the advection.

Figure 4.1: The creation of trace slices by backward advection.

A mapping is needed from the trace slice to the mesh surface. This is necessary because when the mesh geometry is rasterized, each fragment looks up the trace slices and uses the 2-tuple stored at the corresponding location to look up the noise texture. In the algorithm, this mapping is established using the two-dimensional mesh parameters (i, j), i ∈ [0, Imax], j ∈ [0, Jmax], as the texture coordinates of the mesh vertices, where Imax and Jmax are the dimensions of the structured mesh.

Besides the mapping between the trace slices and the surface mesh, a mapping from the input noise texture to the mesh is also needed. Conceptually this mapping can be seen as distributing the noise on the surface in order for the advection to take place. In the algorithm, when the mesh is a regular Cartesian grid, the mapping is the same as the one used for mapping the trace slices to the mesh surface. For a curvilinear mesh, however, care should be taken so that the noise is mapped to the mesh in the physical domain as uniformly as possible regardless of cell size or shape.

With the mappings from the trace slices and the input noise to the mesh established, the texture advection algorithm can now be explained as follows. Given an input texture N released at time T_s, knowing that the texture color at (x, y) from time step T_s is to be advected to the point (i, j) at time step T_d if the trace slice S_{T_s}^{T_d}(i, j) = (x, y), a two-stage texture look-up can be performed to advect the texture N to time step T_d. First, the mesh polygon is rasterized, where each fragment interpolates the texture coordinates provided at the neighboring mesh vertices and then looks up the trace slice texture using the interpolated texture coordinates. Then, the 2-tuple (x, y) retrieved from the trace slice texture S_{T_s}^{T_d} for the fragment is used as the texture coordinates to look up the input noise texture N. The advection of the noise texture from T_s to T_d is expressed as:

C(i, j, T_d) = N(x, y) = N(S_{T_s}^{T_d}(i, j))    (4.1)

where C(i, j, T_d) is the color for the fragment with the texture coordinates (i, j) at T_d, (x, y) is the 2-tuple stored at (i, j) in the trace slice, and N(x, y) is the texel from the noise texture. Figure 4.2 shows the two-stage texture lookup algorithm for texture advection.
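A minimal software emulation of the two-stage lookup of equation 4.1, with nearest-neighbor sampling standing in for the GPU's texture filtering; the function name and array layout are assumptions.

```python
import numpy as np

def advect_texture(noise, trace_slice):
    """Two-stage lookup (equation 4.1): trace_slice[j, i] holds (x, y), the
    position at T_s of the particle that reaches (i, j) at T_d; the output
    color is the noise texel there (nearest-neighbor sampling for simplicity)."""
    ny, nx = trace_slice.shape[:2]
    out = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            x, y = trace_slice[j, i]
            xi = min(max(int(round(float(x))), 0), noise.shape[1] - 1)
            yi = min(max(int(round(float(y))), 0), noise.shape[0] - 1)
            out[j, i] = noise[yi, xi]        # C(i, j, T_d) = N(S(i, j))
    return out
```

An identity trace slice reproduces the noise texture; a trace slice shifted one texel upstream reproduces it shifted, with out-of-bounds lookups clamped to the border.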

Figure 4.2: Texture advection using two-stage texture lookups.

4.4 Spatial Coherence

The advection algorithm presented above only calculates the influence of the input texture released at T_s on the frame at T_d. In fact, a fragment at time T_d receives contributions from the noise texture released at multiple time steps. Therefore, to compute the output color for each fragment at time T_d, we can change our algorithm to:

C(i, j, T_d) = Σ_{i=1..K} N(S_{T_d − i×∆t}^{T_d}(i, j)) / K    (4.2)

where K represents the number of previous time steps before T_d that influence the final color of each fragment. An average of the color contributions from the K input noise textures is assigned to the fragment.
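Equation 4.2 amounts to averaging K single-slice lookups; a sketch under the same assumptions as before (nearest-neighbor sampling, arrays in place of textures, hypothetical helper name).

```python
import numpy as np

def convolve_slices(noise, trace_slices):
    """Equation 4.2: average the noise texels fetched through K trace slices
    (nearest-neighbor sampling; trace_slices is a list of (ny, nx, 2) arrays)."""
    acc = np.zeros(trace_slices[0].shape[:2])
    for S in trace_slices:
        x = np.clip(np.rint(S[..., 0]).astype(int), 0, noise.shape[1] - 1)
        y = np.clip(np.rint(S[..., 1]).astype(int), 0, noise.shape[0] - 1)
        acc += noise[y, x]                   # one slice's contribution
    return acc / len(trace_slices)
```

Averaging an identity slice with a one-texel-upstream slice blends each texel with its upstream neighbor, which is exactly the along-flow smearing that creates intensity coherence.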

As demonstrated by the LIC algorithm [5], a coherence of the pixel intensity along the flow lines provides effective cues to illustrate the underlying flow direction.

The combination of textures described here makes it possible to establish such pixel intensity coherence along pathlines that are relatively steady. This is because adjacent pixels along a pathline have a large overlap in their backward advection traces.

Note that the method used here to create the texture advection result at each animation frame T_d is different from some of the existing methods (LEA [27], IBFV [69], IBFVS

[70]) in the sense that there is no need to use the output of the previous frame as the input texture to compute the current frame. There are several reasons for this choice.

First, the removal of inter-frame dependency allows the user to change the camera position or transform the mesh surface continuously, since consecutive frames no longer need to be rendered under the same view, a requirement of the previous methods. Second, when the underlying domain is not a simple two-dimensional flat plane and hence occlusions may occur, it is not possible to transform the output from the frame buffer back to object space and continue the advection for the next frame. Furthermore, as will be described in the next section, the algorithm computes the output image using different resolutions of the trace slices S_{T_s}^{T_d} and the input texture N based on OpenGL mipmapping mechanisms. The resolution of the trace slice and noise texture to use is determined for each fragment independently. Therefore, unless the texture advection for the previous frame is computed at all levels, which is expensive, it is difficult to satisfy the needs of all the fragments, which may be rendered at different mipmap levels. Finally, the animation frames in the algorithm can be generated simultaneously, so it becomes possible to use different threads with multiple graphics cards to implement the algorithm when the underlying dataset is large.

4.5 Multi-resolution Texture Advection

In essence, the motivation for generating multi-resolution textures is to address the problems of aliasing and a lack of detail when visualizing the flow fields. With the

trace slices and the texture advection algorithm presented above, the algorithm can generate multi-resolution flow textures under various viewing conditions by adjusting the resolutions of the input noise and the trace slices on the fly. Specifically, to avoid rendering artifacts, it is important to ensure that both the ratio between the size of a texel from the trace slices and the size of a fragment within the object's projection area on the screen, and the ratio between the size of a texel from the trace slices and the size of a texel from the input noise, remain approximately one. This requirement can be enforced by using the classic mipmapping algorithm.

In the algorithm, OpenGL's mipmapping function is exploited to implement the idea of multi-resolution texture advection. Starting from a base level of the input noise texture at a pre-defined resolution, every 2x2 block of texels is averaged recursively to create a sequence of mip-mapped input textures. The same operation is applied to each of the trace slices, which is equivalent to creating a multi-resolution version of the particle trace locations. It is worth noting that although conceptually creating a lower resolution trace slice is similar to down-sampling the flow field, it is in fact fundamentally different in the sense that the trace slices are computed from the original field. Integrating a particle using a vector field with lower resolution is much more susceptible to accumulated errors, which make the pathline drift away from the correct path.
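The recursive 2x2 averaging can be illustrated as below; a sketch assuming a square, power-of-two texture, mirroring what OpenGL's mipmap generation does for both the noise texture and the trace slices.

```python
import numpy as np

def build_mipmaps(tex):
    """Build a mipmap chain by recursively averaging 2x2 texel blocks
    (assumes a square, power-of-two texture)."""
    levels = [tex]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        levels.append((t[0::2, 0::2] + t[1::2, 0::2] +
                       t[0::2, 1::2] + t[1::2, 1::2]) / 4.0)
    return levels

mips = build_mipmaps(np.arange(16, dtype=float).reshape(4, 4))
```

For a 4x4 base texture this yields three levels (4x4, 2x2, 1x1), each coarse texel being the mean of its 2x2 block.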

As the user zooms out of the field, when the density of the projected mesh cells exceeds that of the screen pixels, OpenGL mipmapping is triggered and automatically chooses a lower resolution trace slice for each fragment to perform the texture advection algorithm presented above. An appropriate level of the input noise texture is chosen when the trace slice 2-tuple S_{T_s}^{T_d}(i, j) is used to access the noise texture. In the algorithm, mipmapping mainly helps when multiple cells are projected to a single pixel, which prevents aliasing. When a cell is projected to multiple pixels, since the texture advection is computed per fragment, flow patterns of fine granularity within the cell are still generated. This is because each fragment within the cell looks up the trace slice based on the texture coordinates interpolated from the corners of the cell and performs the advection of noise texels along the interpolated pathline locations.

As long as the input noise texture has enough resolution, the resulting flow texture can convey discernible flow patterns.

4.5.1 Adjustment of Advection Step Size

According to equation 4.2, each fragment takes a sequence of samples from the input noise texture following the pathline traces. To avoid aliasing and ensure spatial coherence between adjacent fragments along the same pathline, it is important for each fragment to sample contiguous texels from the input noise. This allows adjacent fragments along the pathline to average a similar set of noise inputs, hence creating spatial coherence. Previously, van Wijk and Jobard [27, 69] made similar observations and suggested that the step size for the particle integration should satisfy the following rule:

|v| × ∆t ≤ w    (4.3)

where v is the velocity at the current fragment location, ∆t is the step size, and w is the texel width.

In the algorithm, ∆t should be adjusted based on the resolution of the noise texture used for the fragment, which is decided by OpenGL's mipmapping algorithm. However, since the level of detail for each fragment is determined by OpenGL independently at run time and is not directly known to the application program, it is difficult to determine ∆t for each fragment from the program. Fortunately, since ∆t is proportional to the texel size, and hence related to the noise texture resolution, a set of mipmapped ∆t textures can be computed to accompany the mipmapped noise texture. To do this, for the noise texture at the highest resolution, the texel size is first mapped to the space where pathlines are computed. Then, a ∆t is computed for each grid point based on its local velocity. This produces a two-dimensional array of ∆t at the same resolution as the noise texture. Next, a sequence of down-sampled ∆t textures can be created in a similar manner as the other mipmaps, except that at each level the down-sampled ∆t values need to be multiplied by two, since the corresponding texel size in the noise texture, when mapped to the mesh surface, becomes twice as large in each dimension every time the resolution is reduced by one level. Once the ∆t mipmaps are created, at run time the same texture coordinates used for the noise texture can be used to look up the mipmapped ∆t slices. Since the ∆t texture has the same resolution and the same number of mipmapping levels as the noise texture, each fragment uses an appropriate ∆t to match the noise texel size w.
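The ∆t mipmap construction can be sketched as follows; the helper name `build_dt_mipmaps`, the epsilon guard against zero velocity, and the square power-of-two resolution are assumptions, and w is the finest-level texel width mapped to the integration space.

```python
import numpy as np

def build_dt_mipmaps(speed, w):
    """Per-texel step sizes dt = w / |v| at the finest level (so |v|*dt <= w
    holds), then down-sampled like an ordinary mipmap but multiplied by two
    per level, since the texel width doubles each time the resolution halves."""
    levels = [w / np.maximum(speed, 1e-12)]  # finest-level dt
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        coarse = (t[0::2, 0::2] + t[1::2, 0::2] +
                  t[0::2, 1::2] + t[1::2, 1::2]) / 4.0
        levels.append(coarse * 2.0)          # texel is twice as wide here
    return levels
```

With a uniform speed of 2 and w = 1, the finest-level ∆t is 0.5 everywhere and it doubles at each coarser level, matching the doubled texel width.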

According to equation 4.2, the ∆t values accessed from the mipmapped texture are used to access the trace slices. It is possible that the value t_i = T_d − i×∆t in the equation falls in between the trace slices that were sampled. In this case, a linear interpolation from adjacent trace slice 2-tuples is performed in the fragment program before looking up the noise texture.

4.6 Results

The algorithm was implemented using OpenGL 1.5 and OpenGL Shading Language (GLSL) 2.0 running on a PC with an Intel Pentium 4 2.00 GHz processor, 768

MB memory, and an nVIDIA 6800 GT graphics card with 256 MB of video memory.

The flow advection algorithm described above is primarily implemented in a fragment program. Each fragment is provided with the texture coordinates used to access the trace slices. The textures input to the fragment program include the noise texture, the ∆t texture, and K trace slices where K is the convolution kernel size according to equation 4.2. The mipmaps for all the input textures are implicitly managed by the

OpenGL run-time system, so no special handling is needed in the fragment program.

Dataset    Dimensions    Total Size
Post       32x76         28.5
Shuttle    52x62         37.7
Vortex     100x100       3747

Table 4.1: Datasets used in the experiments. Note that the size for the vortex data includes all 31 time steps. The sizes are in KBytes.

Three datasets were used to test the algorithm, as listed in Table 4.1. The vortex dataset is a time-varying flow on a regular Cartesian grid, and the rest are steady-state flows on curvilinear grids. The trace slices were computed by first starting a backward pathline at every time step from each grid point, and then sampling the backward pathline locations. Each pathline was advected backwards as far as K time steps, where K is equal to the convolution size described in equation 4.2. When the underlying dataset is a steady field, a pseudo time is used for the particle integration. In all of the experiments, K was set to 10. The value of K affects whether the convolution algorithm can be completed in a single pass or not. If K exceeds the maximal number of active textures allowed by GLSL, the texture advection is implemented using multiple rendering passes. The experimental results presented here were all produced in a single rendering pass.

In the process of computing the trace slices, if a particle goes out of bounds before K time steps, its advection is terminated and the trace slice values for the remaining time steps are set to the location where the particle exits the domain. All trace slices were computed in a preprocessing stage. The second column in Table 4.2 lists the total preprocessing time for each dataset. For a steady-state dataset, the preprocessing time is within a few seconds. For the time-varying vortex data, the preprocessing time is slightly larger. It is worth noting that the preprocessing only needs to be done once, and the result can be used for all output resolutions and different viewing conditions.

Dataset    Pre-Processing    Texture Creation and Loading
Post       2.312             0.08
Shuttle    3.172             0.077
Vortex     8.875             0.24

Table 4.2: The time for trace slice preprocessing and texture creation and loading (in seconds).

When the user zooms in and out of the field, the levels of detail for both the trace slices and the noise are adjusted automatically and the texture advection is computed on

the fly. Using graphics hardware, this multi-resolution texture advection can be done very quickly. For all the datasets used, after the textures were loaded into the video memory, the frame rate to advect and render the texture exceeded several hundred frames per second while the level of detail was being adjusted automatically. In fact, the nVIDIA GeForce 6800 GT can render 5.6 billion texels and 525 million vertices per second; the amount of geometry and textures was considerably lower than the peak load that the graphics hardware can handle. The third column in Table 4.2 lists the time for creating and loading all the necessary textures to the video memory.

Note that this process only needs to be done at the beginning of the program, so it is part of the program set-up time.

Additional tests were performed to verify the core idea of trace slices, that is, creating a down-sampled version of a trace slice is more accurate than integrating particles using the down-sampled flow field. Using the vortex dataset with a resolution of 100x100, we first created the trace slices S_t^{10}(i, j), t ∈ [0, 9], at a resolution of 100x100, and then down-sampled them to resolutions of 50x50 and 25x25. The flow field was also down-sampled to 50x50 and 25x25, and backward pathlines were computed using the down-sampled data. The particle locations computed from the down-sampled trace slices and from the down-sampled fields were then compared with the particle locations computed using the original field. Figure 4.3 shows the results for the 50x50 resolution. It can be seen that as the particles travelled farther, larger errors were accumulated when the down-sampled field was used. For the trace slices, the errors were bounded and no accumulation took place. As the dataset was further down-sampled to 25x25, as shown in Figure 4.4, the error of the particle locations became even larger when using the down-sampled field.

Figure 4.3: Comparison of particle position errors for travelling 1 to 10 time steps using the down-sampled trace slices and the down-sampled vortex dataset reduced from 100x100 to 50x50. The X axis indicates the time steps that the particles have travelled, and the Y axis indicates the particle position errors compared to the accurate traces, using the Euclidean distance in the field.

Figure 4.4: Comparison of particle position errors for travelling 1 to 10 time steps using the down-sampled trace slices and the down-sampled vortex dataset reduced from 100x100 to 25x25. The X axis indicates the time steps that the particles have travelled, and the Y axis indicates the particle position errors compared to the accurate traces, using the Euclidean distance in the field.

The left image in Figure 4.5 shows a snapshot of the Post dataset with the multi-resolution texture advection, while the right one shows a snapshot without such control. The mesh of the Post dataset has some interesting characteristics: the cells toward the center of the mesh quickly become much smaller than those close to the outside boundary, with several orders of magnitude difference. It can be seen that the algorithm generates a result with less aliasing.


Figure 4.5: Rendering of the Post dataset (a) with and (b) without the multi-resolution level of detail control.

The two images in Figure 4.6 compare the results for the shuttle dataset generated by our algorithm and by traditional LIC. Each image was generated at the original resolution of 52x62 for the portion that is shown. The algorithm generates an image (left) with clearer flow patterns without up-sampling the field, while the LIC image on the right clearly does not include enough detail to show the flow directions.

Figure 4.7 shows the test results for the shuttle dataset. Starting from a close-up view and zooming out, it can be seen from the image on the upper right that the multi-resolution algorithm quickly switched to a lower resolution of traces and noise and thus still produced a clear flow pattern. The image on the lower right did not use the multi-resolution algorithm. The same test was performed on the vortex dataset. The image on the left of Figure 4.8 is a close-up view. The image on the upper right is the result generated with the multi-resolution algorithm and the image on the lower right without it. Similar to the shuttle dataset, the algorithm was able to generate a clearer flow pattern since aliasing was avoided.


Figure 4.6: (a) With the correction of noise distribution, no stretched pattern can be seen. (b) Rendering using LIC at the original resolution of 52x62.

The overhead of the algorithm is the additional space for storing the trace slices.

Table 4.3 shows the total size of the trace slices created for each dataset. For the datasets tested, the overhead was moderate.

Dataset    Size of Trace Slices
Post       1.00
Shuttle    1.32
Vortex     21.8

Table 4.3: The size of trace slices (in MBytes) including all time steps. Note that Vortex dataset is time-varying.

Figure 4.7: The image on the left was generated when zoomed in. As the user zoomed out from the image on the left, the algorithm was able to produce a clearer pattern by switching to a lower resolution of trace slices and noise texture (upper right), while the algorithm with no LOD control produced an aliased result (lower right).

Figure 4.8: A similar test as Figure 4.7 using the time-varying vortex dataset. It can be seen that this algorithm produced a better image (upper right) compared with no level of detail adjustment (lower right).

CHAPTER 5

ILLUSTRATIVE STREAMLINE PLACEMENT

5.1 Algorithm Overview

The primary goal is to generate streamlines succinctly for two-dimensional flow

fields by emphasizing the essential and deemphasizing the trivial or repetitive flow patterns. Fig. 5.1 shows an example of streamlines generated by the algorithm, in which the selection of streamlines is based on a similarity measure among streamlines in the nearby region. The similarity is measured locally by the directional difference between the original vector at each grid point and an approximate vector derived from the nearby streamlines, and globally by the accumulation of the local dissimilarity at every integrated point along the streamline path. To approximate the vector field from existing streamlines, two-dimensional distance fields recording the closest distances from each grid point in the field to the nearby streamlines are first computed. Then the approximate vector direction is derived from the gradients of the distance fields.

This algorithm greedily chooses the next candidate seed that has the least degree of similarity according to the metrics.

The algorithm has unique characteristics when compared with existing streamline seeding algorithms [63, 28, 72, 45, 39].

Figure 5.1: Streamlines generated by the algorithm.

First, the density of streamlines. Some of the existing techniques favor uniformly spaced streamlines. In this algorithm, however, the density of streamlines is allowed to vary in different regions. The different streamline densities reflect different degrees of coherence in the field, which allows the viewer to focus on more important flow features. Regions with sparse streamlines imply that the flow is relatively coherent, while regions with dense streamlines mean more seeds are needed to capture the essential flow features. This characteristic of the algorithm matches one of the general principles of visual design by Tufte [62]: different regions should carry different weights, depending on their importance. The information can be conveyed in a layered manner by means of distinctions in shape, color, density, or size.

Second, the representativeness of streamlines. The general goal of streamline placement is to visualize the flow field without missing important features, which can be characterized by critical points. Since the flow directions around critical points can change rapidly compared to those in non-critical regions, this algorithm is able to capture those regions and place more streamline seeds accordingly.

Finally, the completeness of flow patterns. In previous streamline placement algorithms that have explicit inter-streamline distance control, the advection of streamlines can be artificially terminated. This may cause visual discontinuity in the flow pattern, especially in the vicinity of critical points. The seeding algorithm in this chapter, however, only determines where to drop seeds and allows the streamlines to be integrated as long as possible until they leave the domain, reach a critical point, or form a loop. Without abruptly stopping the streamlines, the flow patterns shown in the visualization are much more complete and hence easier to understand.

5.1.1 Distance Field

A distance field [30] represents the distance from every point in the domain to the closest point on any object. The distance can be unsigned or signed, where the sign denotes whether the point in question is inside or outside of the object. From the distance field, some geometric properties can be derived, such as the surface normal [14]. The concept of distance fields has been used in various applications such as morphology [52], visualization [79], animation [13], and collision detection [3].

In this algorithm, unsigned distance fields are used to record the closest distance from every point in the field to the nearby streamlines that have been computed. In practice, a mathematically smooth streamline is approximated by a series of polylines integrated bidirectionally through numerical integration. Given a line segment s_i = {p_i, p_{i+1}}, where p ∈ R^3, i ∈ N, and a vector v_i = p_{i+1} − p_i, the nearest point p_q on the line segment s_i to an arbitrary point q can be computed by:

p_q = p_i + t v_i    (5.1)

where

t = clamp((q − p_i) · v_i / |v_i|^2),  clamp(x) = min(max(x, 0), 1)    (5.2)

The distance d(q, s_i) from the point q to the line segment s_i is the Euclidean distance between q and p_q. For a given streamline L, where L = {∪ s_i | s_i = {p_i, p_{i+1}}, i ∈ N, p ∈ R^3}, and s_i is a line segment of the line L, the unsigned distance function at a point q with respect to L is:

d(q, L) = min{d(q, s_i) | s_i ∈ L}    (5.3)
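Equations 5.1-5.3 translate directly into code; a sketch with assumed helper names, using 2D points for brevity even though the definitions above are stated in R^3.

```python
import numpy as np

def dist_point_segment(q, p0, p1):
    """Equations 5.1-5.2: project q onto segment p0-p1 with t clamped to [0, 1],
    then return the Euclidean distance from q to the projected point p_q."""
    q, p0, p1 = np.asarray(q), np.asarray(p0), np.asarray(p1)
    v = p1 - p0
    t = min(max(np.dot(q - p0, v) / np.dot(v, v), 0.0), 1.0)
    return float(np.linalg.norm(q - (p0 + t * v)))

def dist_point_polyline(q, points):
    """Equation 5.3: the minimum over the segments of a polyline streamline."""
    return min(dist_point_segment(q, points[i], points[i + 1])
               for i in range(len(points) - 1))
```

The clamp is what distinguishes segment distance from infinite-line distance: a point beyond an endpoint measures its distance to that endpoint.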

To speed up the computation of distance fields, this step is implemented on the GPU; the discussion of the GPU implementation is deferred to section 5.4. The resulting distance fields are used to derive an approximate vector field, which can be used to measure the dissimilarity between streamlines in local regions.

5.1.2 Computation of Local Dissimilarity

Because of spatial coherence in the field, neighboring points can have advection paths with similar shapes, even though they may not be exactly the same. Given a streamline, a distance field can be computed by considering the closest distance from every point in the field to this streamline. The iso-contours of this distance field will locally resemble the streamline, i.e., the closer a contour is to the streamline, the more similar their shapes will be. This is the basic idea of how to locally approximate streamlines in the empty regions from existing ones, which forms the basis for measuring the coherence of the vector field in local regions.

With the distance field, a gradient field is computed using the central difference operator. Rotating each vector of this gradient field by 90 degrees yields an approximate vector derived from the single streamline. Whether to rotate the gradient clockwise or counter-clockwise is decided based on the flow direction of the nearby streamline, so that the resulting approximate vector points in roughly the same direction as the flow. To measure the local coherence, a local dissimilarity metric is defined by measuring the direction difference between the true vector at the point in question and its approximate vector. For a point p ∈ R^3, the local dissimilarity D_l(p) at this point is written as:

D_l(p) = 1 − ((v′(p) · v(p) / (|v′(p)||v(p)|)) + 1) / 2    (5.4)

where v′(p) is the approximate vector at p, and v(p) is the original vector. The value is in the range of 0.0 to 1.0; the larger the value, the more dissimilar the true vector and the approximate vector are at that point. It is worth noting that this metric only denotes the local dissimilarity between the vectors at the point, not the dissimilarity between the streamline originating from this point and its nearby streamline. Also, so far only the case where a single streamline exists in the field has been considered. The next section discusses how to handle multiple streamlines existing in the field and how to modify the dissimilarity metric accordingly, which is the more general case assumed in the algorithm.
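Equation 5.4 in code form (the helper name is an assumption): the normalized dot product maps the angle between the approximate and true vectors to a dissimilarity in [0, 1].

```python
import numpy as np

def local_dissimilarity(v_approx, v_true):
    """Equation 5.4: map the angle between the approximate vector v'(p) and
    the true vector v(p) to [0, 1] -- 0 for parallel, 1 for opposite."""
    c = np.dot(v_approx, v_true) / (np.linalg.norm(v_approx) *
                                    np.linalg.norm(v_true))
    return 1.0 - (c + 1.0) / 2.0
```

Note the metric depends only on direction, not magnitude: parallel vectors of different lengths score 0, perpendicular vectors score 0.5.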

5.1.3 Influence from Multiple Streamlines

When multiple streamlines exist in the field, it is not sufficient to use only the standard definition of the distance field, compute a single smallest distance from each point to the streamlines, and evaluate the dissimilarity metric as presented above. This is because the distance field computed with this method generates a discrete segmentation of the field. For example, the left image in Fig. 5.2 shows the approximate vectors in orange given two existing streamlines S1 and S2 in black.

The points in the lower triangular region under the dotted line are classified as closest to streamline S2, while the points in the upper triangle are closest to streamline S1. If only a single distance field computed from the two lines is used to approximate the local vectors, the resulting vectors will be generated in a binary manner, as shown by the orange vectors. This binary segmentation causes discontinuity in the approximate vector field. Given the two lines shown in the example, for the empty space in between, a more reasonable approximation of the vectors should go through a smooth transition from one line to the other, as shown on the right in Fig. 5.2.

In the algorithm, a smooth transition of vector directions between streamlines is achieved by blending the influences from multiple nearby streamlines. The previous section discussed how to compute the dissimilarity metric when only one streamline exists. For the more general case where multiple streamlines are present, for each point the M nearest streamlines are picked, and for each streamline the dissimilarity function in equation 5.4 is evaluated. Finally, the M values D_{l_k}(p) are blended together to compute the final dissimilarity value at p as:

Figure 5.2: Assume the flow field is linear and streamlines are straight lines. The circle in the images denotes the region where a critical point is located. Black lines represent the exact streamlines seeded around the critical point. The orange lines represent the approximate vectors by considering the influence of only one closest streamline (left), and the blending influence of two closest streamlines (right).

D_l(p) = Σ_{k=1..M} w_k D_{l_k}(p)    (5.5)

where w_k is the weight of the influence from streamline k, decided by the distance between point p and streamline k, and D_{l_k}(p) is the dissimilarity value computed at point p using the distance field generated by streamline k. Analogously, the approximate vector at p is a blend of the vectors generated from the M nearest streamlines, where each vector is a 90-degree rotation of the gradient computed from the corresponding streamline, as described above. It is worth noting that different methods for assigning the weights can be used in the equation, depending on the requirements of the user. For all the images presented in this chapter, the blending of the two nearest streamlines is considered, that is, M equals 2 in equation 5.5.
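A sketch of equation 5.5; since the chapter leaves the weighting scheme open, the normalized inverse-distance weights below are one assumed choice, not the dissertation's, and the epsilon guard is likewise an assumption.

```python
import numpy as np

def blended_dissimilarity(dissims, dists):
    """Equation 5.5 for the M nearest streamlines: dissims[k] is D_lk(p) from
    streamline k's distance field, dists[k] is the distance from p to
    streamline k. Weights are normalized inverse distances."""
    w = 1.0 / (np.asarray(dists, dtype=float) + 1e-12)
    w /= w.sum()                             # weights sum to one
    return float(np.dot(w, dissims))
```

Equidistant streamlines contribute equally; a streamline three times farther away contributes a quarter of the weight under this scheme.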

5.1.4 Computation of Global Dissimilarity

As mentioned in the previous section, at each point there is a local dissimilarity measure that represents the direction difference between the true vector at that point and the approximate vector derived from the nearby streamlines. However, the local dissimilarity only captures the coherence of the local vectors, not the similarity between streamlines. If only the local dissimilarity were used to decide the seed placement, many streamlines could be generated in the final images, even though most of them resemble nearby streamlines and differ only in some local segments.

In order to capture the coherence between a streamline originating from a point and its nearby streamlines, a global dissimilarity measure is defined by accumulating the local dissimilarity at every integrated point along the streamline path. Written as an equation:

D_g(p) = Σ_{n=1..L} u_n D_l(x_n, y_n)    (5.6)

where D_g(p) is the global dissimilarity at point p, and (x_n, y_n) is the nth integrated point along the streamline originating from p. The length of the streamline is L. D_l(x_n, y_n) is computed by interpolating the local dissimilarity values at the four corner grid points. Based on different metrics, u_n can be computed differently. In the algorithm, the average of the local dissimilarity values along the streamline path is used, i.e., u_n is equal to 1/L.

5.1.5 Selection of Candidate Seeds

Before discussing the algorithm, two user-specified threshold values, Tl and Tg, are first introduced. Tl is the threshold for the minimum local dissimilarity, while Tg

is the threshold for the minimum global dissimilarity. To avoid drawing unnecessary streamlines, only seeds at grid points satisfying equation 5.7 are chosen.

D_l(i, j) > T_l  and  D_g(i, j) > T_g    (5.7)

The initial input is a streamline seeded at a random location in the field. For example, the central point of the domain can be used as the initial seed to generate the streamline. With the first streamline, the distance field is calculated and the dissimilarity value at each grid point is computed. The important step now is how to choose the next seed. A greedy but efficient method for this purpose is presented here. Given the two threshold values, the algorithm for choosing the next seed is described as follows:

1. Sort the grid points in descending order of the local dissimilarity values computed from equation 5.5.

2. Dequeue the first point (i, j) in the sorted queue. If D_l(i, j) is larger than T_l, integrate a streamline from this point bidirectionally and compute the global dissimilarity value D_g(i, j) using equation 5.6. Otherwise, if D_l(i, j) is smaller than T_l, the iteration terminates.

3. If D_g(i, j) is larger than T_g, this point is accepted as the new seed and the streamline being integrated is displayed. Otherwise, go back to step (2).
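The steps above can be sketched as a loop. This simplified version omits the re-computation of the dissimilarity values after each accepted streamline, and the helper `trace_and_Dg`, which integrates the streamline from p and evaluates equation 5.6, is an assumption.

```python
def choose_seeds(grid_points, Dl, trace_and_Dg, Tl, Tg):
    """Greedy selection loop of section 5.1.5. Dl maps a grid point to its
    local dissimilarity; trace_and_Dg(p) integrates a streamline from p and
    returns (streamline, global dissimilarity). Returns the accepted seeds."""
    seeds = []
    for p in sorted(grid_points, key=lambda p: -Dl[p]):   # step 1
        if Dl[p] <= Tl:              # step 2: all remaining points are smaller
            break
        _, dg = trace_and_Dg(p)
        if dg > Tg:                  # step 3: accept and keep the streamline
            seeds.append(p)
    return seeds
```

Because the queue is sorted in descending order, the first point below T_l ends the iteration: every remaining point is guaranteed to fail the same test.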

When a new streamline is generated, the nearest streamlines to each grid point are updated and the dissimilarity values described in section 5.1.3 are re-computed.

The above algorithm runs iteratively to place more streamlines. As more streamlines are placed, the dissimilarity values at the grid points become smaller. The program terminates when no seed can be found that satisfies equation 5.7. At this point, there are enough streamlines to represent the underlying flow field according to the user-desired coherence thresholds.

To speed up the process of choosing candidate seeds, when Dg(i, j) is smaller than Tg during the process above, the grid point (i, j) is marked, and the grid points at the four corners of the cells passed by the streamline originating from (i, j) are marked as well. These points are excluded from consideration in later iterations, because nearby streamlines similar to the streamlines that would have been computed from them already exist, so it is unnecessary to check those grid points again. Generally speaking, for a dataset with sufficient resolution, the flow within a cell is very likely to be coherent, so this heuristic hardly affects the quality of the visualization output: in most cases, streamlines from those grid points would be similar to the streamline that has already been rejected. This substantially reduces the number of streamlines to compute and test, without visible quality differences.

Fig. 5.3 shows an image of streamlines generated from the Oceanfield data using this algorithm. For rendering, since the algorithm allows streamlines to be integrated as long as possible until they leave the domain, reach critical points, or form a loop, the local density of ink in some regions may be higher than in others. To even out the distribution of ink, the streamlines are rendered with alpha blending, where the alpha value of each line segment is adjusted according to the density distribution of the projected streamline points in image space. Each sampling point on the streamlines is first mapped to image space, and the corresponding screen space point is treated as an energy source, which can be defined by a Gaussian function. Then, an energy distribution map based on all streamlines is generated. This energy map is mapped to an opacity map that controls the opacity of the streamline segments as they are drawn, which effectively reduces the intensity of the lines where they are cluttered together.
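The energy-based opacity control can be sketched as follows; `energy_map` and `opacity_from_energy` are hypothetical helper names, and the Gaussian width `sigma` and the opacity range are assumed parameters, not values from the thesis:

```python
import numpy as np

def energy_map(points, shape, sigma=2.0):
    """Splat projected streamline sample points into an image-space energy map.
    Each point contributes a Gaussian centered at its screen position."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    E = np.zeros(shape)
    for (px, py) in points:
        E += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
    return E

def opacity_from_energy(E, alpha_min=0.1, alpha_max=1.0):
    """Map energy to opacity: dense (high-energy) regions get lower alpha."""
    En = E / E.max() if E.max() > 0 else E
    return alpha_max - (alpha_max - alpha_min) * En
```

Each line segment would then be drawn with the opacity looked up at its projected position, dimming cluttered regions.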

Figure 5.3: Streamlines generated by the algorithm on the Oceanfield data.

5.2 Topology-Based Enhancement

Although the algorithm does not explicitly consider the flow topology, it naturally places more streamlines around critical points because of the lack of coherence there. Sometimes it is desirable to highlight the streamline patterns around the critical points so that the viewer can clearly identify their types. To achieve this goal, the algorithm can be adapted by placing an initial set of streamlines with specific patterns around the critical points, instead of randomly dropping the first seed. This is similar to the idea of seed templates proposed by Verma et al. [72]. For each type of critical point, a minimal set of streamlines that distinguishes it from the others is used. For a source or sink, four seeds are placed along the perimeter of a circle around the critical point, each at an intersection of the x or y axis with the circle; for a saddle, four seeds are placed along the two lines bisecting the eigendirections, with two seeds on each line; for a spiral or center, one seed is placed along a straight line emanating from the critical point.
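These seed templates can be sketched as below; the function name, the circle radius `r`, and the `eig_dirs` parameter are illustrative assumptions, not the thesis implementation:

```python
import math

def template_seeds(kind, cx, cy, r=1.0, eig_dirs=None):
    """Seed templates around a critical point at (cx, cy).
    kind: 'source_sink', 'saddle', or 'spiral_center'."""
    if kind == 'source_sink':
        # four seeds where the x/y axes intersect a circle of radius r
        return [(cx + r, cy), (cx - r, cy), (cx, cy + r), (cx, cy - r)]
    if kind == 'saddle':
        # two seeds on each of the two lines bisecting the eigendirections
        (e1x, e1y), (e2x, e2y) = eig_dirs
        b1 = (e1x + e2x, e1y + e2y)   # bisector of one eigendirection pair
        b2 = (e1x - e2x, e1y - e2y)   # bisector of the other pair
        seeds = []
        for bx, by in (b1, b2):
            n = math.hypot(bx, by)
            seeds += [(cx + r * bx / n, cy + r * by / n),
                      (cx - r * bx / n, cy - r * by / n)]
        return seeds
    if kind == 'spiral_center':
        # a single seed on a ray emanating from the critical point
        return [(cx + r, cy)]
    raise ValueError(kind)
```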

Fig. 5.4 shows an image of streamlines generated with the topology information taken into account.

Figure 5.4: Streamlines generated when the flow topology is considered. There are three saddle points and two attracting foci in this data.

Streamline placement guided by topology information alone is not always effective; this can happen when there is no critical point in the field, or when there are too many. When there are too many critical points, the final image can easily become cluttered; when there is no critical point at all, no rules can be applied to guide the placement of streamlines. The algorithm presented here can consider both the vector coherence and the flow topology.

5.3 Quality Analysis

As mentioned above, this algorithm generates representative streamlines to illustrate the flow patterns of the underlying field. Given appropriate threshold values, the algorithm selects streamlines based on the flow coherence via the dissimilarity measures defined above. The density of the selected streamlines can vary with the degree of coherence in the local regions. As in Fig. 5.3, there are void regions between the displayed streamlines, which indicates that the streamlines in those regions would look similar to each other and hence can be easily inferred; therefore the algorithm does not place many seeds there. Since only a small subset of the streamlines in the whole vector field is drawn, it is necessary to analyze the quality of the method. One method of analysis, which can be performed quantitatively, is to compare the original vector field with the approximate vector field derived from the streamlines selected by the algorithm. Another is to perform user studies to verify whether users can correctly interpret the field in the empty regions, and whether this representation is an effective way to depict vector fields. In the following, I first describe the approach used for quantitative analysis together with some results, and then present findings from the user studies.

5.3.1 Quantitative Comparison

The quantitative analysis consists of a data-level comparison and a streamline-level comparison. For the data-level comparison, a vector field is first reconstructed from the streamlines generated by the algorithm, and then the local vectors of the reconstructed field are compared with those of the original field. For the streamline-level comparison, two streamlines are integrated from each grid point, one in the original vector field and one in the reconstructed field, and the errors between these two streamlines are computed. It is worth noting that the errors are only used to study whether the algorithm misses any regions that require more streamlines to be drawn. The errors do not represent errors in the visualization, since every streamline presented to the user is computed using the original vector field. In the following, I first describe how to reconstruct a vector field from the streamlines that are displayed, and then present the data-level and streamline-level comparison results.

Reconstruction of Flow Field

The process used to reconstruct the approximate flow field from the selected streamlines is the same as the process presented in sections 5.1.2 and 5.1.3 for iteratively introducing streamline seeds. The main difference is that the final set of streamlines used to generate the gradient fields is now given. For each streamline in the final set, a distance field can be computed, from which its gradients can be derived. Section 5.1.3 discussed the computation of the local dissimilarity by considering multiple nearby streamlines. With the same idea, for each grid point, the nearest M streamlines are first identified, and the distances to these streamlines are used to generate M gradients at that point. After rotating the gradients by 90 degrees to obtain approximate vectors, the final reconstructed vector at the grid point is computed by interpolating the M vectors with weights inversely proportional to the distances from the point to the corresponding streamlines. As mentioned above, the current implementation only considers the nearest two streamlines for each grid point, that is, M = 2. For grid points that are selected as seeds, or that have streamlines passing through them, the original vectors are used as the reconstructed vectors.
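A sketch of the per-grid-point blending step, assuming the M distance-field gradients and streamline distances have already been computed; the helper name and the `eps` guard are illustrative:

```python
import numpy as np

def reconstruct_vector(grad_list, dist_list, eps=1e-9):
    """Blend M approximate vectors at a grid point (M = len(grad_list)).
    grad_list: gradients of the distance fields of the M nearest streamlines
    dist_list: distances from the grid point to those streamlines
    Each gradient is rotated by 90 degrees to approximate the flow direction,
    then the vectors are blended with weights inversely proportional to distance."""
    rotated = [np.array([-gy, gx]) for gx, gy in grad_list]   # rotate by 90 deg
    weights = np.array([1.0 / (d + eps) for d in dist_list])
    weights /= weights.sum()
    return sum(w * v for w, v in zip(weights, rotated))
```

With M = 2 as in the thesis, `grad_list` and `dist_list` each hold two entries for the two nearest streamlines.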

Data Level Comparison

Data-level comparison is performed between the original vector field and the reconstructed vector field at every grid point. The goal is to evaluate how well the streamlines displayed by the algorithm represent the original vectors in the empty regions, based on the computational model introduced above. One challenge in performing the data-level comparison is designing appropriate metrics to quantify the errors. Since the goal is to evaluate how well the true vector direction at each grid point aligns with the reconstructed vector, the cosine of the angle between the original vector and the reconstructed vector at each grid point is taken as the measure of similarity. Fig. 5.5 shows a result of the comparison using one vector dataset.

In the image, dark pixels indicate that the two vectors at the grid point are almost the same, while brighter pixels indicate larger errors. From the image, it can be seen that the displayed streamlines are representative of the original vector field, because in most of the empty regions the approximate vectors derived from the streamlines are well aligned with those in the original field. A few regions show higher errors, which mostly fall into the following cases. The first case is regions near the domain boundary. The algorithm explicitly excludes grid points on the boundary from being selected as candidate seeds, because vectors on boundaries are sometimes odd due to sampling issues, while the fieldlines downstream or upstream tend to be more regular and stable. The second case of error is due to the implementation: when selecting the next candidate seed, if a grid point is too near to an existing streamline, for example within a cell's distance of it, the point is excluded from being a candidate seed. This is not a cause for concern because, even if the streamline integrated from this point differed from the existing streamline, some point elsewhere on or near that streamline would eventually be picked as a seed. The third case might be caused by the linear interpolation operator used to blend the influence of multiple nearby streamlines based on the distances from the grid points to those streamlines.
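The angular error metric can be sketched as a hypothetical helper; the per-image normalization used for display in the figure is omitted here:

```python
import numpy as np

def direction_error(V_orig, V_recon, eps=1e-12):
    """Per-grid-point angular error between original and reconstructed vectors:
    one minus the cosine of the angle between them (0 = aligned, 2 = opposite).
    V_orig, V_recon: arrays of shape (H, W, 2)."""
    dot = np.sum(V_orig * V_recon, axis=-1)
    norms = np.linalg.norm(V_orig, axis=-1) * np.linalg.norm(V_recon, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return 1.0 - cos
```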


Figure 5.5: (a) Representative streamlines generated by the algorithm. (b) Gray scale image colored by one minus a normalized value of the cosine of the angle between vectors from the original field and the reconstructed field. Dark color means the two vectors are almost aligned with each other, while brighter color means larger errors. The maximal difference between the vector directions in this image is about 26 degrees, and the minimal difference is 0 degrees.

Streamline Level Comparison

Besides comparing the original and reconstructed vector fields at the raw-data level, the two fields can be compared in terms of global features such as streamlines. To do this, from every grid point, streamlines are integrated forward and backward simultaneously in the original vector field and the reconstructed field, and the distance between the two streamlines is computed at every integration step based on some metric, such as the Euclidean distance or the Manhattan distance. Fig. 5.6 shows a result of the streamline comparison on the same vector field as Fig. 5.5, where the average Euclidean distance between the two streamlines is computed. Similar to the cases discussed in section 5.3.1, some errors are detectable in some local regions, but they are quite small. Fig. 5.6(b) shows the histogram of the distance errors, which indicates that the streamlines originating from most grid points bear only small errors.
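The streamline-level error can be sketched as a hypothetical helper that averages the per-step Euclidean distances between the two integrated paths:

```python
import math

def streamline_error(path_a, path_b):
    """Average Euclidean distance between two streamlines compared step by step.
    path_a, path_b: lists of (x, y) integration points; the shorter length
    bounds the comparison."""
    n = min(len(path_a), len(path_b))
    total = sum(math.dist(path_a[k], path_b[k]) for k in range(n))
    return total / n
```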


Figure 5.6: (a) Gray scale image colored by the distance errors (in units of cells) between the two streamlines integrated from each grid point in the original vector field and the reconstructed one. Dark color means low errors, while brighter color means higher errors. (b) Histogram of the streamline errors collected from all grid points in the field. The X axis is the error, while the Y axis is the frequency of the corresponding error value. The maximal difference is 23.1 and the minimal is 0.0. The dimensions of the field are 100x100.

5.3.2 User Study

Abstract or illustrative representations have been widely used and accepted in non-photorealistic rendering and artistic design to depict information succinctly. User studies are a way to quantify the effectiveness of new methods, as in [32]. To evaluate the effectiveness of the illustrative streamlines generated by this algorithm, a user study was conducted that contained four questions categorized into two tasks. The tasks and questions were related to the visualization of four different two-dimensional vector fields.

Participants

Subjects for the user study were 12 unpaid graduate students from the Department of Computer Science and Engineering. Five of them major or will major in computer graphics, and the others are in other research groups, such as artificial intelligence and networking. Two of them knew a little about the concept of flow fields and streamlines, but none of them had studied fluid mechanics or related courses. There were four female students and eight male students. All had normal or corrected vision and could clearly see the images presented to them. The study took about 30 minutes per subject, and before the test, the subjects were given a tutorial introducing them to the application. I explained the purpose of using streamlines to visualize flow fields, and the different flow features depicted by different types of critical points. The tests did not start until the subjects could easily identify the flow features in the training datasets without help.

Tasks and Procedure

The first task was to evaluate whether the users were able to effectively identify the underlying flow features, including flow paths and critical points, from the visualization generated by the algorithm. In particular, I wanted to verify whether the streamline representation was as effective as, or more effective than, other existing algorithms in terms of allowing the users to understand the vector fields. This part was conducted on pieces of paper handed out to the subjects, and there were three questions involved.

To perform the test, I chose two existing two-dimensional streamline placement algorithms, by Mebarki et al. [45] and Liu et al. [39], plus the method presented here, and generated images using four datasets. The tasks to be completed were first described and a brief introduction of the related background knowledge was given. The subjects were shown 15 groups of images; each group included three images generated by the three algorithms respectively and was organized like Fig. 5.7. Within each group, the images generated by the algorithms of Mebarki and Liu had similar streamline densities, but between groups the densities of streamlines differed.

To avoid possible bias caused by a fixed ordering of the images from the three algorithms, the order of the three images was changed randomly in each group. Fig. 5.8 shows three groups of images used in the user study. At the beginning of this task, the subjects were given detailed instructions about the questions and were required to fully understand them before starting to give answers.

The first question asked the subjects to rate the three images in each group according to how easily they depicted the flow paths in the vector fields, where 1 was the best and 3 was the worst. The second question was about critical points: if there were critical points in the fields, subjects were asked to circle them and rate how helpful the streamlines presented in the visualization were for detecting those critical points. The third question was about the overall effectiveness of the visualization considering both the flow paths and the critical points.

In the study, the subjects were not asked to classify the critical points. If the subjects thought all three images were equally helpful, then they could rate them equally.

The second task was to evaluate how correctly the subjects were able to interpret the flow directions in the empty regions, where no streamlines were drawn, from the images generated by the algorithm. This task was run with a completely automated program using four datasets. I pre-generated streamlines using the algorithm on each dataset, which were used as input to the program. When the program started with each dataset, four random seed points were generated in the void regions. For each point, six circles with increasing radii were generated in sequence. The subjects were asked to mark where a streamline advected from the seed would intersect each circle. That is, given a seed point, the circle with the smallest radius was first shown to the subject, who would then mark the streamline intersection point on the circle. After that, another circle with a larger radius was shown around the same point. This process repeated six times for each seed point. For some seed points, if the subjects believed the advection would leave the domain or terminate at some point, such as a stagnation point, before reaching the circle, they could mark the last point inside the circle instead of on it. Fig. 5.9 shows a screen snapshot of the interface for this task with only one circle drawn.

This user study was not timed, so subjects had enough time to give their answers.

In summary, the questions involved were:

1. Rate images based on the easiness to follow the underlying flow paths.

2. Rate images based on the easiness to locate the critical points by observing the streamlines.

3. Rate images based on the overall effectiveness of visualization considering both the flow paths and critical points.

4. Predict where a particle randomly picked in the field will go in the subsequent steps.

Results and Discussions

For the task of rating how easily the streamline images allowed the subjects to follow the flow paths, the study results are shown in Table 5.1. The results show that most of the subjects preferred the images generated by this algorithm. When analyzing the results from individual subjects in detail, I found that, for some images generated by the algorithm that were too abstract, some subjects tended to rate the evenly-spaced methods higher. Even though the subjects could tell and follow the flow directions in images from this algorithm, the evenly-spaced methods were better for pinpointing the vectors at local points, because the streamlines were uniformly placed and covered the whole domain. I also found that six subjects liked the images generated by the algorithm very much and always rated them highest, while one subject disliked all the images generated by this algorithm and rated them all lowest.

Algorithm          Rank 1   Rank 2   Rank 3
Mebarki et al.'s    5.4%    45.5%    51.0%
Liu et al.'s       20.1%    46.9%    30.0%
Li et al.'s        74.5%     7.6%    19.0%

Table 5.1: The percentages of user rankings for each image based on the easiness to follow the underlying flow paths.

Even though the algorithm does not explicitly place more streamlines near critical points, it indeed captures most of the features around them. This is because vectors around critical points are less coherent, and the algorithm is designed to place streamlines based on streamline coherence. Additionally, streamlines converging or diverging around critical points contribute more ink in their neighborhoods, which makes the critical points much more noticeable. The second question in the first task asked the subjects to rank how helpful the streamlines in the images were for detecting critical points. The results, shown in Table 5.2, suggest that images generated by this algorithm are more helpful for detecting the critical points. This result is in accordance with the initial expectation, since the algorithm allows the viewer to focus on the more prominent flow features. The algorithm allows the streamlines to advect as far as possible once they start; around critical points, relatively speaking, the streamlines become dense and converge within a small region near each critical point. According to Tufte [61], more data ink should be accumulated around the more important regions.

The third question asked the users to rate the overall effectiveness of the visualization considering both the flow paths and the critical points, letting the subjects decide what they think is more important for visualizing a vector field and how to balance the

Algorithm          Rank 1   Rank 2   Rank 3
Mebarki et al.'s    3.3%    42.5%    60.0%
Liu et al.'s        7.7%    52.7%    37.8%
Li et al.'s        89.0%     4.8%     2.2%

Table 5.2: The percentages of user rankings for each image based on the easiness to locate the critical points by observing the streamlines.

possible conflict between those two criteria. It is possible that some images are good at depicting flow paths, while others are good at depicting critical points. The study results are shown in Table 5.3.

Algorithm          Rank 1   Rank 2   Rank 3
Mebarki et al.'s    3.5%    42.5%    57.0%
Liu et al.'s       19.9%    52.7%    37.8%
Li et al.'s        76.6%     4.8%     5.2%

Table 5.3: The percentages of user rankings for each image based on the overall effectiveness of visualization considering the flow paths and critical points.

For the task of predicting the advection paths of particles, error was measured as the Euclidean distance, in units of cells, between the user-selected point and the correct point obtained by integration using the actual vector data. Mean errors are shown in Fig. 5.10, with error bars depicting plus and minus one standard deviation. I observed that as the radius of the circle increased, the error became slightly larger.

In other words, the closer to the starting seed point, the easier it was for the subjects to pinpoint the particle path, except where the flow becomes convergent: in such regions, even as the radius of the circle grows, the spacing between streamlines shrinks, so it remains easy for the subjects to locate the advection path. Overall, the test results show that the errors were well bounded; that is, the subjects were able to predict the flow paths reasonably well given the illustrative streamlines drawn by the algorithm. In general, the error range is related to and constrained by the spacing between streamlines, which depends on how similar the nearby streamlines are.

5.4 Results

The algorithm was tested on a PC with an Intel Core 2 2.66GHz processor, 2 GB of memory, and an nVIDIA GeForce 7950 GX2 graphics card with 512 MB of video memory. The streamlines were numerically integrated using the fourth-order Runge-Kutta integrator with a constant step size. In an earlier section, comparative results generated by Mebarki et al.'s algorithm, Liu et al.'s algorithm, and this algorithm were presented in Fig. 5.8. Generally speaking, algorithms generating evenly-spaced streamlines are fast, and their performance is relatively independent of the flow features. The timings listed in Table 5.5 for the four datasets (Table 5.4) show that this algorithm, although it evaluates flow features both locally and globally, can also run at interactive speeds.

There are three main steps in the implementation: updating distance fields (section 5.1.1), computing local dissimilarity (section 5.1.2), and selecting seeds (section 5.1.5), which includes computing the global dissimilarity values. Updating distance fields takes place whenever a new streamline is generated. This part was implemented on GPUs: for each line segment of the newly generated streamline, a quadrilateral is drawn to a window with the same size as the flow field. The fragment shader computes the distance from each fragment to the line segment, and this distance is set as the depth of the fragment. After all line segments of a streamline are drawn, the depth test supported by the graphics hardware leaves in the depth buffer the smallest distance from every pixel to the streamline, which is then read back to main memory. On the CPU, the distances to the nearest M streamlines are recorded for each pixel. The computation of local dissimilarity is also performed on the CPU by blending the influence of multiple nearby streamlines. The timings show that as the size of the flow field increases, more time is spent on the portion of the algorithm that runs on the CPU. Although I have not done so, the computation of the dissimilarity metric for each pixel could potentially be implemented on GPUs as well, which would also reduce the overhead of transferring data from the CPU to the GPU and reading it back.
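For illustration, a brute-force CPU analogue of this GPU pass might look as follows; the explicit per-pixel minimum plays the role of the hardware depth test, and the function names are assumptions:

```python
import numpy as np

def point_segment_distance(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the segment from (ax, ay) to (bx, by)."""
    abx, aby = bx - ax, by - ay
    t = ((px - ax) * abx + (py - ay) * aby) / max(abx * abx + aby * aby, 1e-12)
    t = min(1.0, max(0.0, t))          # clamp to the segment
    return np.hypot(px - (ax + t * abx), py - (ay + t * aby))

def update_distance_field(dist, segments):
    """For each pixel, keep the minimum distance to any segment of the new
    streamline -- the role the depth test plays in the GPU implementation."""
    h, w = dist.shape
    for y in range(h):
        for x in range(w):
            for (ax, ay), (bx, by) in segments:
                d = point_segment_distance(x, y, ax, ay, bx, by)
                if d < dist[y, x]:
                    dist[y, x] = d
    return dist
```

The GPU version replaces the three nested loops with one rasterized quadrilateral per segment and a hardware depth comparison.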

Dataset       Dimension   # of lines   # of line segments
Fig. 5.8(c)   64x64       18           696
Fig. 5.5(a)   100x100     19           1204
Fig. 5.4      400x401     28           3697
Fig. 5.3      576x291     45           6129

Table 5.4: Information of four different datasets, and the number of streamlines generated by the algorithm.

Total Timing   Updating Distance Field   Computing Local Dissimilarity   Finding Seeds
0.078          0.031                     0.00                            0.047
0.156          0.079                     0.031                           0.046
2.562          0.799                     1.355                           0.172
4.453          1.08                      1.639                           1.375

Table 5.5: Timings (in seconds) measured for generating streamlines. Each row corresponds to a data set listed in the same row of Table 5.4.

Figure 5.7: A group of images used in the first task of the user study.


Figure 5.8: Streamlines generated by Mebarki et al.'s algorithm (left), Liu et al.'s algorithm (middle), and my algorithm (right).

Figure 5.9: Interface for predicting particle advection paths. Blue arrows on red streamlines show the flow directions. The red point is the seed from which the particle is advected.


Figure 5.10: Mean errors for the advection task on the four different datasets. The X axis stands for the radius of the circles around the selected points, and the Y axis depicts the mean error plus or minus the standard deviation; larger values along the Y axis mean higher error. The Y axis starts from -1 to make the graphs easier to read. Dimensions of the datasets: (a) 64x64, (b) 64x64, (c) 64x64, (d) 100x100.

CHAPTER 6

IMAGE BASED STREAMLINE GENERATION AND RENDERING

6.1 Algorithm Overview

The primary goal of this work is to control scene cluttering when visualizing three-dimensional streamlines and to allow the user to focus on important local features in the flow field. For three-dimensional data, however, addressing the issue of visual cluttering in object space is more challenging, since even if streamlines are well organized in object space, they might still clutter together after being projected to the screen. In real life, artists usually draw strokes one by one onto the canvas; when a region gets cluttered, fewer strokes are placed there, and vice versa. Inspired by the idea of applying this principle to flow visualization, I propose to place streamlines based on how they are distributed across the image plane.

Fig. 6.1 shows the visualization pipeline of the image-based streamline seeding and generation algorithm. The input to the algorithm is a three-dimensional vector field and a two-dimensional image with a depth map. The image and depth map can come from rendering properties related to the vector field, such as stream surfaces, or can be the output of other visualization techniques, such as isosurfaces or slicing planes of various scalar variables. The algorithm generates streamlines by placing seeds on the image plane. Seeds selected in image space, within the region covered by the depth map, can be unprojected back to object space, where the streamlines are then integrated in 3D. The algorithm ensures that streamlines do not come too close to each other after they are projected to the screen.

Figure 6.1: Visualization pipeline of the image-based streamline generation scheme.

With this algorithm, it is possible to avoid the scene cluttering caused by streamlines with very high depth complexity. Although researchers have previously proposed drawing haloed lines to resolve ambiguity in streamline depths [43], when a large number of line segments generated by the haloing effect are displayed, the relative depth relationships between the streamlines become very difficult to comprehend. By controlling the spacing of streamlines on the image plane, it is possible to prevent the visualization from becoming overly crowded. Another advantage of the image-based approach is that it enhances the understanding of the correlation between the underlying flow field and other scalar variables. When analyzing a flow field, the user often needs to visualize additional variables in order to understand the underlying physical properties in detail. Directly dropping seeds in regions of interest defined by those scalar properties helps the user build a better mental model of the data. Traditionally, visualization of streamlines and of other scalar properties is performed independently, and placing streamlines in regions signified by other data attributes often requires the seed placement algorithm to have knowledge of the specific features. In this work, a simple and unified framework is provided: when the users find interesting features in the image, they can directly drop seeds in those regions and visualize the corresponding vector fields. This way, the process of issuing queries to answer the user's hypotheses in both scalar and vector fields can be performed more coherently.

One key issue in realizing this idea is how to place seeds and generate streamlines so that the visual complexity of the output image is well controlled. Based on this image-space approach, better visualizations of streamlines can be created.

6.2 Image Space Streamline Placement

To generate well-organized streamlines in three-dimensional vector fields, one issue to address is how to control the spacing between streamlines when they are projected to the image plane. Researchers have previously proposed several two-dimensional evenly-spaced streamline placement algorithms [63, 28, 45] and extended the idea to three-dimensional vector fields by ensuring evenly spaced streamlines in object space [43]. However, such a straightforward extension of the two-dimensional streamline placement methods to three-dimensional space does not always produce the desired results, since evenly spaced streamlines in three-dimensional space do not guarantee visual clarity after they are projected to the screen.

The main idea of the algorithm is that, to ensure streamlines are well organized in the resulting image, it is more effective to place seeds directly on the image plane. These screen-space seed positions can be unprojected back to unique positions in object space if a depth value is given at the corresponding position. When a streamline is integrated in object space, it is necessary to make sure that the line does not come too close to existing streamlines in image space.

6.2.1 Evenly-spaced Streamlines in Image Space

To start the algorithm, a random seed is first selected on the image plane and mapped back to object space. Here it is assumed that a depth value is available for every pixel on the screen; different ways to generate the depth map are discussed in the next section. From the initial seed position, a streamline is integrated and placed into a queue Q. All streamlines are required to keep a distance of dsep away from each other on the image plane. To ensure this, the following steps are repeated until Q is empty:

1. Dequeue the oldest streamline in Q as the current streamline.

2. Select all candidate seed points on the image plane at a distance dsep away from the projection of the current streamline. For each projected sample point on the current streamline, there are two candidate positions for seeds, one on each side of the streamline.

3. For each candidate seed, a new streamline is integrated for as long as possible, until it comes within the distance dsep of other streamlines on the screen. The new streamline is then enqueued.

4. Go back to 1).
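A minimal sketch of this queue-driven loop in Python follows; the integration, candidate generation, and spacing test are abstracted as caller-supplied functions (`integrate`, `candidate_seeds`, and `is_valid_seed` are hypothetical interfaces, not the dissertation's actual code):

```python
from collections import deque

def place_streamlines(seed, integrate, candidate_seeds, is_valid_seed):
    """Queue-driven evenly-spaced placement controlled from the image plane.

    integrate(seed)       -> traces one streamline (list of samples) in
                             object space, stopping when the d_sep spacing
                             test fails or the domain is left
    candidate_seeds(line) -> screen positions d_sep away on both sides of
                             the projected line
    is_valid_seed(p)      -> True if p is still at least d_sep from all
                             existing streamlines on screen
    """
    first = integrate(seed)
    lines = [first]
    queue = deque([first])
    while queue:
        current = queue.popleft()               # step 1: oldest line first
        for cand in candidate_seeds(current):   # step 2: candidate seeds
            if not is_valid_seed(cand):
                continue
            line = integrate(cand)              # step 3: grow a new line
            if line:
                lines.append(line)
                queue.append(line)              # step 4: back to step 1
    return lines
```

The queue (first-in, first-out) makes the placement grow outward from the initial streamline, which is what keeps the spacing roughly uniform across the image.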

The algorithm above is very similar to the one presented in [28], which works well for two-dimensional flow fields. For three-dimensional vector fields, however, the projection from object space to image space introduces several issues that need to be addressed.

Perspective Projection

The algorithm in [28] approximates the distance between a seed point and the nearby streamlines using the distances from the seed point to the sample points of the streamlines, which are the points computed at every step of streamline integration. These distances are compared with the desired distance threshold dsep to make sure that streamlines are not too close to each other. For this approximation to be acceptable, the distance between the sample points along a streamline must be smaller than dsep. In the algorithm, the streamline distance threshold dsep is defined in image space. Since the integration step size is controlled in object space, after being projected to the screen through perspective projection, the distance between the sample points along a streamline might be shortened or lengthened, which may violate the minimum dsep requirement. To address this issue, for each integration step, the projected distance between two consecutive points on the streamline is computed in image space. If the distance is larger than dsep, intermediate sample points on the streamline are generated by interpolation.
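This resampling step can be sketched as follows; `project` is an assumed callback mapping an object-space point to screen coordinates, and linear interpolation in object space is used as an approximation of the streamline segment:

```python
import math

def densify_projected(p0, p1, project, d_sep):
    """Insert intermediate samples between two consecutive integration
    points so that no projected (screen-space) gap exceeds d_sep.
    project() maps an object-space tuple to screen coordinates."""
    gap = math.dist(project(p0), project(p1))
    if gap <= d_sep:
        return [p0, p1]
    n = int(math.ceil(gap / d_sep))        # number of sub-segments
    return [tuple(a + (i / n) * (b - a) for a, b in zip(p0, p1))
            for i in range(n + 1)]
```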

Depth Comparison

Streamlines can overlap or intersect with each other after being projected from object space to image space. In the two-dimensional evenly-spaced streamline algorithm [28], a new sample point on the streamline is invalid if it is within dsep of existing streamlines, or when it leaves the domain defining the flow field. In those cases, the streamline is terminated. In our algorithm, simply terminating a streamline when it is too close to existing streamlines' projections on the image plane is not always desirable, because a streamline closer to the viewpoint should not be terminated by those far behind it. To deal with this issue, when the newly generated point of the current streamline is too close to an existing streamline, the algorithm first checks whether this point is behind that existing streamline. If so, the integration is terminated. Otherwise, it checks whether the streamline segment connected to this new point intersects the existing streamline on the image plane. If they intersect and the new segment is closer to the viewpoint, the intersected segment of the old streamline becomes invalid and is removed, and the integration of the current streamline continues. If they do not intersect, the integration of the current streamline also continues. This ensures that a correct depth relationship between the streamlines is displayed.
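The decision logic can be summarized as a small sketch; the depth convention (larger values meaning farther from the viewpoint) and the function name are assumptions:

```python
def handle_proximity(new_depth, old_depth, segments_intersect, new_seg_closer):
    """Resolve the case where the newest point of the current streamline
    falls within d_sep of an existing streamline on screen. Returns
    'terminate', 'remove_old_segment', or 'continue'."""
    if new_depth >= old_depth:
        return 'terminate'                 # new point lies behind: stop
    if segments_intersect and new_seg_closer:
        return 'remove_old_segment'        # old segment is now occluded
    return 'continue'                      # keep integrating
```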

6.2.2 Streamline Placement Strategies

Having described how to control the spacing between streamlines, this section discusses several strategies to place streamlines on the image plane. Since the streamline integration is performed in object space by unprojecting the seeds back to object space, depth values for the screen pixels, i.e., a depth map, are needed. In the algorithm, this depth map is generated by rendering objects derived from the input data set, which define the regions of interest to the user.

Implicit Stream Surfaces

Visualizing stream surfaces can be an effective way to explore flow fields, since streamlines always adhere to the surface and the local flow direction is perpendicular to the surface normal. By visualizing different stream surfaces, the user can get a better understanding of the flow field's global structure. Showing only the stream surfaces, however, is not sufficient, since no information about the flow directions on the surface is displayed, as shown in the top images of Fig. 6.2. To create a more effective visualization, a stream surface is first rendered, and the depth map from the rendered result is then used as the input to the algorithm to create better organized streamlines.

To generate stream surfaces, a volumetric stream function needs to be computed.

Previously, van Wijk [68] proposed a method to generate implicit stream functions by computing a backward streamline from every grid point in the volume and recording its intersection point at the domain boundary. If scalar values are assigned to the boundary, those values can be assigned back to the grid points according to the intersection points of their backward streamlines to produce a stream function. Isosurfaces can then be generated from this function to represent the stream surfaces. He proposed to paint certain patterns on the boundary and observe how the patterns evolve as the flow moves from the boundary into the domain.

A new method is proposed to assign scalar values to the boundary based on pre-selected streamlines. The goal is to more clearly visualize the flows in the regions spanned by those streamlines. First, the intersections of those streamlines with the boundary are calculated. Each intersection point on the boundary is then treated as a source of a potential function that emits energy to its surrounding area on the boundary. The energy distribution is a Gaussian function whose intensity falls off with the distance to the source. For every grid point on the boundary, the energy contributions from all sources are summed, and the resulting scalar field on the boundary is used to create the implicit stream function. With such a setup, the stream surfaces generated with different isovalues enclose the input streamlines in layers, and the image space method places streamlines on each of the stream surfaces to depict the flow directions. Fig. 6.2 shows two examples of stream surfaces with different isovalues and the streamlines generated using the method proposed here. The data set was generated as part of a simulation that models the core collapse of a supernova.
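A sketch of this boundary energy computation, assuming a Gaussian kernel with an arbitrary spread parameter `sigma` (the dissertation does not specify the kernel width):

```python
import math

def boundary_energy(grid_points, sources, sigma=1.0):
    """Sum a Gaussian falloff from every streamline/boundary intersection
    ('source') over the boundary grid points; the result is the scalar
    field from which the implicit stream function is built."""
    field = []
    for p in grid_points:
        e = 0.0
        for s in sources:
            d2 = sum((a - b) ** 2 for a, b in zip(p, s))
            e += math.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel
        field.append(e)
    return field
```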

Flow Topology Based Templates

A great deal of insight about a flow field can often be obtained by visualizing the topology of the field, which is defined by the critical points and the tangent curves or surfaces connecting them. With the topology information, the behavior of the flow and, to some extent, the structure of the entire field can be inferred. Different types of critical points characterize different flow patterns in their neighborhoods.

Given a critical point, the eigenvalues and eigenvectors of its Jacobian matrix can be computed. The eigenvalues can be used to classify the type of the critical point, and the eigenvectors to find its invariant manifold. Previously, Globus et al. [15] proposed to use three-dimensional glyphs to visualize the flow patterns around critical points. Ye et al. [77] proposed a template-based seeding strategy for visualizing three-dimensional flow fields. Briefly speaking, the method in [77] first identifies and classifies the critical points in the field. Then seeds are placed on pre-defined templates around the critical points. Finally, Poisson seeding is used to populate the empty regions. The main goal of this method is to reveal the flow patterns in the vicinity of critical points. To incorporate this idea into the algorithm and highlight the flow topology, what is needed is a depth map that signifies the critical points. Solid objects can be used as the templates; rendering the templates generates the input depth map for seed placement.

Figure 6.2: Streamlines generated on two different stream surfaces.

Based on the eigenvalues and eigenvectors, there are different templates for different types of critical points. Fig. 6.3 illustrates the templates for the following types of critical points. Note that seeds are not dropped directly on the solid object templates in object space, but on the image obtained by rendering those templates from a given view. Thus only the shape, orientation, size, and type of the templates matter, rather than how many seeds to drop or where to drop them.

Figure 6.3: Seeding templates for different types of critical points - left: repelling or attracting node; middle: attracting or repelling saddle and spiral saddle; right: attracting or repelling spiral (critical point classification image courtesy of Alex Pang).

• Nodes: Critical points of this type are sources or sinks of streamlines. The template is a solid sphere centered at the position of the critical point, with the radius scaled by the real part of the eigenvalues.

• Node Saddles: Two cones are used as the template for this type, pointing in opposite directions away from the local eigenplane spanned by the eigenvectors. The radius and height are scaled by the real part of the eigenvalues.

• Spirals: Two cones are used as the template for this type, pointing toward each other from opposite sides of the local eigenplane spanned by the eigenvectors. The radius and height are scaled by the real part of the eigenvalues.

• Spiral Saddles: The template for this type is the same as that of Node Saddles.
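A possible sketch of the eigenvalue-based classification and template selection described above, taking the three (complex) eigenvalues of the Jacobian as input; the tolerance `eps` and the tie-breaking for near-zero real parts are assumptions:

```python
def classify_critical_point(eigenvalues, eps=1e-9):
    """Classify a 3D critical point from the three (complex) eigenvalues
    of its Jacobian and pick the corresponding seeding template."""
    reals = [ev.real for ev in eigenvalues]
    complex_pair = any(abs(ev.imag) > eps for ev in eigenvalues)
    mixed = any(r > eps for r in reals) and any(r < -eps for r in reals)
    if complex_pair:
        kind = 'spiral saddle' if mixed else 'spiral'
    else:
        kind = 'node saddle' if mixed else 'node'
    template = 'sphere' if kind == 'node' else 'two cones'
    scale = max(abs(r) for r in reals)     # scales radius/height
    return kind, template, scale
```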

For a three-dimensional flow field, it is possible that there is more than one critical point. When multiple critical points are present, each critical point has its corresponding template, and they are rendered together into the same image. The resulting depth map will have separate regions representing the different templates, from which seeds are dropped. Fig. 6.4 shows streamlines integrated from the depth map generated by rendering solid object templates. There are four critical points in this synthetic flow field: three sinks and one saddle.

Figure 6.4: Streamlines generated from critical point templates. The three sphere templates stand for the sinks, while the two-cone template stands for the saddle.

Isosurfaces of Flow Related Scalar Quantities

Many scalar variables are related to the properties of a flow field. For instance, vorticity magnitude can often reveal the degree of local rotation, while the Laplacian shows the second order derivatives of the flow. As described in [59], these scalar quantities are often important for understanding flow fields, even though they are not necessarily directly related to the flow directions. When exploring a flow field, one can first generate images from isosurfaces of those variables. When users find interesting features on the isosurface, they can use the image space method to drop seeds on the screen directly. This enriches the image and highlights the correlations between the scalar variable and the flow directions. Fig. 6.5 shows an example of streamlines generated from an isosurface of velocity magnitude using the algorithm. The data set is from a simulated flow field of thermal downflow plumes in the surface layer of the Sun.


Figure 6.5: (a) An isosurface of velocity magnitude colored by using the velocity (u,v,w) as (r,g,b). (b) Streamlines generated from the isosurface.

Slicing Planes

One effective way to visualize volume data, particularly for regions that are easily occluded, is to slice through the volume and visualize only the data on the slicing plane. Although slicing planes are used frequently for visualizing three-dimensional scalar fields, they are used less often for vector field visualization. One of the primary reasons is that visualizing only the vectors on the plane does not reveal enough insight about the global flow directions, while visualizing a large number of streamlines starting from a slicing plane can easily clutter the scene. With the image space method, the visual clarity can be enhanced by first rendering selected slicing planes to the screen, colored with optional scalar or vector attributes, and then dropping seeds on the planes to compute the streamlines. With the spacing control mechanism, it is feasible to control the depth complexity and show only the outer layer of the streamlines originating from the slice. Fig. 6.6 shows an example of streamlines computed from seeds dropped on a slicing plane. Note that the streamlines are computed in 3D space rather than constrained to the slice.


Figure 6.6: (a) A slicing plane colored by using the velocity (u,v,w) as (r,g,b). (b) Streamlines generated from the slicing plane.

External Objects

Another application of the image space method is to drop seeds on the surface of a user selected object. This external object can be thought of as a three-dimensional rake [25] from which streamline seeds are emitted. Widgets have previously been proposed as seed placement tools, but the seeds were explicitly placed on the surface of the widget, which requires an explicit discretization of the rake surface to determine the seed positions. In the image space method, all that is needed is a two-dimensional depth map from the rendering result of the object. The seed density is determined in image space and thus can easily be adapted to the resolution of the images.

Fig. 6.7 shows streamlines computed from seeds on the surface of a cylinder. Note that in this image the depth cue is enhanced by mapping the computed streamlines with a texture that emphasizes the outlines. Details about the rendering are described in Section 6.2.3.


Figure 6.7: (a) The cylinder as an external object. (b) Streamlines generated from the cylinder.

6.2.3 Additional Run Time Control

In this section, several additional controls and effects that can be achieved using the algorithm are described.

Level of Detail Rendering

In computer graphics, level of detail (LOD) is commonly used to save unnecessary rendering time for objects whose details are too small to be seen on the screen. Rendering low resolution data can also reduce rendering artifacts if the screen resolution is not high enough to sample the high frequency detail. One such example is the texture mip-mapping algorithm supported by OpenGL. When visualizing streamlines, to improve the clarity of visualization, Jobard and Lefer [29] proposed to compute a sequence of streamlines with different densities, while Mebarki et al. [45] proposed to elongate all previously generated streamlines before placing new ones when the density is increased. The idea of LOD can be adopted here by adjusting the number of streamlines displayed according to the projection size of the domain on the screen. To achieve this effect, a constant streamline spacing defined in screen space is used. As the user zooms out of the scene, the screen projection area of the domain becomes smaller, so fewer streamlines are generated in the attempt to keep the constant distance between streamlines. As the user zooms into the scene, on the other hand, a larger projection area of the domain is displayed, so more streamlines are generated and displayed. Fig. 6.8 shows an example of LOD streamlines generated at different zoom scales.

Figure 6.8: Level of detail streamlines generated at three different scales. It can be seen that as the field is projected to a larger area, more streamlines that can better reveal the flow features are generated.

Temporal Coherence

When the user zooms in and out of, or rotates, the scene, the projection of the surface changes. If the algorithm were re-run to generate a completely new set of streamlines whenever such changes occur, unwanted flickering and other annoyances could result. To avoid this, temporal coherence must be maintained for the streamlines generated between consecutive frames. When the user zooms into the surface, the projection area becomes larger. The streamlines from the previous projection are retained and placed into the queue as the initial set of streamlines (see Section 6.2.1). These streamlines are first elongated before new ones are generated. Some sample points along the streamlines may leave the view frustum and thus become invalid. When the user zooms out, the sample points along the streamlines from the previous frame are first verified, and those points that are too close to other streamlines under the new projection are invalidated. After this, new streamlines are added to fill the holes, if any. Rotations involve the elongation, validation, and insertion of new lines, similar to the zoom operations.


Figure 6.9: Streamlines computed using different offsets from a depth map generated by a sphere. (a) No offset from the original depth map. (b) Increasing the offset from the original depth map. (c) Further increasing the offset from the original depth map. (d) Decreasing the offset from the original depth map.

Layered Display of Streamlines

To improve the clarity of visualization, it is sometimes necessary to reduce the rendered streamlines to a few depth layers. One class of techniques related to controlling the depth of rendered scenes is depth peeling [11] for polygonal models. However, depth peeling for lines is not well defined, since lines cannot form effective occluders: the space between lines is not occupied. The image space method lends itself well to effective depth control and peeling, because seeds are placed on top of the depth map on the image plane. The user can "peel" into the flow by gradually increasing or decreasing an offset δz from the original depth map before dropping the initial seeds and generating streamlines. Fig. 6.9 shows examples where streamlines are computed using different offsets from a depth map generated by a sphere. The display of streamlines can also be controlled by constraining them to integrate within ±δz of the input depth map, which effectively controls the depth complexity of the rendered scene. This essentially creates clipping surfaces that remove streamlines outside the allowed depth range; the clipping surfaces conform to the shape of the initial depth map, which is more flexible than traditional planar clipping planes. Fig. 6.10 shows an example of opening up a portion of the streamlines in the middle section by not allowing streamlines to go beyond a small δz from the input depth map.

Figure 6.10: An example of peeling away one layer of streamlines by not allowing them to integrate beyond a fixed distance from the input depth map.
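The peeling and clipping controls can be sketched as follows, with the depth buffer modeled as a dictionary from pixel coordinates to depth values (an assumed simplification of the rendered depth map):

```python
def peel_seeds_and_clip(depth_map, seeds, delta_z, max_depth=1.0):
    """Offset the seeding depth by delta_z to 'peel' into the flow, and
    build a predicate that clips integration points straying more than
    delta_z from the input depth map."""
    seed_depths = {}
    for px in seeds:
        if px in depth_map:
            seed_depths[px] = min(depth_map[px] + delta_z, max_depth)

    def in_layer(px, depth):
        # keep points within +/- delta_z of the depth map
        return px in depth_map and abs(depth - depth_map[px]) <= delta_z

    return seed_depths, in_layer
```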

Generating Streamlines from Multiple Views

Figure 6.11: First row: rendered images of a stream surface from different viewpoints. Second row: streamlines generated at the corresponding viewpoints. Third row: the combined images of streamlines rendered from four different views.

Sometimes it can be beneficial to combine the streamlines generated from multiple views and display them all together. For each individual view, the spacing constraints are still enforced, i.e., streamlines are not allowed to come too close to each other. When combining the streamlines from multiple views, however, no constraint is enforced. The motivation is that even though the projections of streamlines from different views may intersect and overlap in image space, as long as the depth complexity in each view and the number of combined views are well controlled, the combined streamlines can enhance the depth perception of the scene.

In the algorithm, the selection of different views is done by the user: given an object displayed on the screen, a stream surface for example, the user can rotate the surface, identify a good view, and place streamlines based on the current view using the image space algorithm. The user can then rotate the scene again to reveal the region that was invisible in the previous view and place more streamlines. When the scene becomes too cluttered, the accumulation of streamlines can be stopped.

Fig. 6.11 illustrates this process by showing images from four different views and the combined results. Another strategy for combining streamlines is to keep the current camera view but move a probing object to different locations. For example, the user can use a cylinder to probe the flow field and place streamlines from the projection of the cylinder surface. Keeping the current camera view, the user changes the location of the cylinder and gradually populates the scene until the image reveals enough about the flow field without becoming overly crowded. Fig. 6.12 shows an example of combining the streamlines generated from three different cylinder probe locations.

Importance Driven Streamline Placement

To distinguish regions of different importance, different spacing thresholds can be used to place the streamlines. For instance, more streamlines can be placed in regions with higher velocity magnitudes, while fewer streamlines are placed in other regions.

Figure 6.12: Streamlines generated from three different cylinder locations (left three images) are combined and rendered in the image on the right.

To achieve this effect, the algorithm takes an importance map as input, which can be generated by evaluating any function of the flow field, such as velocity or vorticity magnitude. At every step of the streamline integration, the point is projected to the screen and the importance value is retrieved at the corresponding screen position. After mapping the importance value to a streamline distance threshold, the algorithm decides whether to continue or terminate the integration of the current streamline. Different transfer functions can be used to map the importance value to different streamline distance thresholds. For instance, the implementation uses the bias and gain functions proposed by Perlin and Hoffert [47] to create non-linear mapping effects. Fig. 6.13 shows an example of using the velocity magnitude as the importance value to determine the streamline density on a slicing plane, where more streamlines are placed in regions with higher velocity magnitudes.
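The bias and gain functions of Perlin and Hoffert [47] have the standard closed forms below; the final mapping from importance to a spacing threshold is an assumed linear blend, not necessarily the transfer function used in the implementation:

```python
import math

def bias(b, t):
    """Perlin-Hoffert bias: remaps t in [0,1]; bias(0.5, t) == t."""
    return t ** (math.log(b) / math.log(0.5))

def gain(g, t):
    """Perlin-Hoffert gain, built from bias around t = 0.5."""
    if t < 0.5:
        return bias(1.0 - g, 2.0 * t) / 2.0
    return 1.0 - bias(1.0 - g, 2.0 - 2.0 * t) / 2.0

def importance_to_dsep(importance, d_min, d_max, b=0.3):
    """Map importance in [0,1] to a spacing threshold: high importance
    gives small spacing, hence denser streamlines."""
    t = bias(b, importance)
    return d_max - t * (d_max - d_min)
```

With b < 0.5, the bias pushes mid-range importance values toward the dense end, concentrating streamlines in the most important regions.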


Figure 6.13: Streamline densities are controlled by velocity magnitude on a slice. (a) Larger velocity magnitudes are displayed in brighter colors. (b) The streamlines generated from the slice.

Stylish Drawing

One advantage of the image based streamline placement algorithm is that streamlines are well spaced out on the screen under user control. With the spacing controlled, it becomes much easier to draw patches of desired widths along the streamlines on the screen to enhance the visualization, since it is easy to keep the stream patches from overlapping each other. To compute the stream patches, the screen projection of the streamline is first computed as the skeleton. The stream patch is then extended along the direction perpendicular to the streamline's local tangent direction on the screen. The width of the stream patches is controlled by the local spacing of the streamlines, which is defined by the image based algorithm. With the stream patches, a variety of textures can be mapped to enhance depth cues and simulate different rendering styles. It is also possible to vary the width and transparency of the stream patches based on local flow properties. Fig. 6.14 shows three examples of the stylish drawing of streamlines using different textures.
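A sketch of the patch construction: for each projected sample, the patch is extruded by half the desired width along the normal of the local screen-space tangent (central differences at interior points are an assumed discretization):

```python
import math

def patch_vertices(screen_pts, width):
    """Extrude a 'stream patch' of the given screen-space width around a
    projected streamline, returning the left and right edge polylines."""
    left, right = [], []
    n = len(screen_pts)
    for i, (x, y) in enumerate(screen_pts):
        x0, y0 = screen_pts[max(i - 1, 0)]
        x1, y1 = screen_pts[min(i + 1, n - 1)]
        tx, ty = x1 - x0, y1 - y0                # local tangent
        length = math.hypot(tx, ty) or 1.0
        nx, ny = -ty / length, tx / length       # screen-space normal
        h = width / 2.0
        left.append((x + nx * h, y + ny * h))
        right.append((x - nx * h, y - ny * h))
    return left, right
```

Pairing `left[i]` and `right[i]` yields a triangle strip onto which a texture can be mapped along the line.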

Figure 6.14: Streamlines generated and rendered with three different styles by the image-based algorithm.

6.3 Results

The algorithm was tested on a PC with an Intel Pentium M 2GHz processor, 2 GB of memory, and an NVIDIA 6800 graphics card with 256 MB of video memory. Two synthetic data sets (Fig. 6.4, 6.9) and two 3D flow simulation data sets (Plume and TSI) were used to test the algorithm and generate the images shown throughout this chapter. The Plume data set (Fig. 6.5, 6.6, 6.7, 6.8, 6.10, 6.12, 6.13, 6.14) is a three-dimensional turbulent flow field with dimensions of 126x126x512. The original data set is a time-varying flow field that models turbulence in a solar simulation performed by National Center for Atmospheric Research scientists. The TSI data set (Fig. 6.2, 6.11) is a three-dimensional flow field with dimensions of 200x200x200. It models the core collapse of a supernova and was generated through collaboration between Oak Ridge National Laboratory and eight universities. A few time steps of these two data sets were used during the tests.

When running the algorithm, the user can control the streamline density by specifying different separating distances in screen space. The coverage of the visualized objects in the input depth map affects the generation of streamlines: the larger the area, the more streamlines are generated, if the separating distance remains the same. In the tests, the scene was zoomed in so that the geometries producing the depth map covered as much of the screen as possible. The fourth-order Runge-Kutta integrator with a constant step size was used to compute the streamlines.
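The fourth-order Runge-Kutta step with a constant step size h can be written as the following sketch, where `v` is an assumed callback sampling the (interpolated) vector field at an object-space position:

```python
def rk4_step(v, p, h):
    """One fourth-order Runge-Kutta step of dp/dt = v(p) with constant
    step size h; v maps a position tuple to a velocity tuple."""
    def offset(a, b, s):
        return tuple(x + s * y for x, y in zip(a, b))
    k1 = v(p)
    k2 = v(offset(p, k1, h / 2.0))
    k3 = v(offset(p, k2, h / 2.0))
    k4 = v(offset(p, k3, h))
    return tuple(x + h / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                 for x, a, b, c, d in zip(p, k1, k2, k3, k4))
```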

The performance of the algorithm was measured using the Plume data set. The main steps include transformations of streamline points between object and image space, streamline integration, seed point selection, and validation of streamline points.

In the program, since longer streamlines were generally preferred, streamlines that were too short were discarded. In the experiments, the threshold value for the minimum streamline length was 20 and the fixed integration step size was 1.0, both in voxels. This means that for a streamline to be accepted, it must have at least 20 integration points. The larger this threshold value, the higher the probability that a streamline generated by the distance control algorithm is discarded, and thus the higher the percentage of total computation time wasted on generating short streamlines. Fig. 6.15 shows the percentage of time spent on each of the main steps of the algorithm. From the figure, it can be seen that the streamline integration process is the most time consuming part. Fig. 6.16 shows the number of streamlines and line segments generated with different distance thresholds. Fig. 6.17 shows the timings for generating streamlines with different separating distances, which directly influence the number of streamlines computed. All images of streamlines in this chapter are rendered with stylish drawing; the average time to render one line segment is about 0.00259 ms.

Figure 6.15: The percentage of total time each main step used.

Figure 6.16: The pink curve (read against the left axis) shows the number of streamlines, while the blue curve (read against the right axis) shows the number of line segments generated.

Figure 6.17: The time (in seconds) to generate streamlines from an isosurface for different separating distances (pixels) using the Plume data set.

CHAPTER 7

CONCLUSIONS

In this dissertation, three new algorithms have been developed for visualizing vector fields.

The view-dependent flow texture advection algorithm allows for multi-resolution rendering of two-dimensional structured rectilinear and curvilinear data and adjusts the output texture resolution on the fly as the user zooms in and out of the field, so as to prevent aliasing as well as ensure enough detail in the image regardless of the mesh size and density. This is achieved through a novel representation of the underlying flow field, called a trace slice, which allows for flexible up- and down-sampling of the field without affecting the accuracy or introducing additional cost for particle advection. The algorithm is a hybrid of image space and object space methods, because the flow texture is computed directly at each fragment in image space while the particle integration and the texture advection are computed in object space. The trace slice representation can be used directly by GPUs to perform texture advection at highly interactive speeds.

The two-dimensional streamline placement algorithm, which places streamlines in an illustrative and representative manner, fully utilizes the spatial coherence in the underlying flow fields, such that the density of streamlines in the final images can be varied to reflect the coherence of the underlying flow patterns and provide visual focus. The method is based on measuring the dissimilarity between streamlines both locally and globally. The approach is innovative in three regards: (1) the density of streamlines is closely related to the intrinsic flow features of the vector fields; (2) the method does not explicitly rely on detecting the existence of critical points; and (3) the abstract and illustrative visualization can effectively reduce visual cluttering. User studies were conducted to evaluate the effectiveness of using illustrative streamlines generated by the algorithm to depict flow information. The results suggest that users can interpret the flow directions and capture important flow features.

The image-based approach for streamline generation and rendering reduces scene cluttering and allows users to flexibly place streamline seeds on the screen when they identify hot spots from the visualization of other scalar or flow related variables. The rendering output from a variety of visualization techniques, such as isosurfaces or slicing planes, can be used as the input to assist seed selection. As streamlines are integrated in object space, the algorithm monitors and controls their distances to the existing streamlines that have already been displayed. In addition to reducing visual cluttering, the approach can achieve various effects, such as level of detail rendering, depth layering, and stylish drawing of streamlines.

For the techniques proposed in this dissertation, there is future work that can be done to improve or enhance the original ideas.

Trace Slice Representation: The trace slice representation used in the view-dependent multi-resolution flow texture algorithm of Chapter 4 can be enhanced to reduce the storage needed. This is quite promising because much redundant information is stored due to the spatial coherence of the vector field itself. The compression would be performed on the trace slices rather than on the original vector field. This is one of the advantages of trace slices: an error introduced by compressing the original vector field accumulates along the advection path, whereas compressing the trace slices keeps the accumulated error within a controlled range. I believe there is a large degree of coherence between trace slices of adjacent time steps, or among neighboring grid points. For example, for a time-varying flow field, if the flow pattern has only a small variation in some regions along the time dimension, it is possible to predict the advection paths, which can achieve a higher compression ratio. The compression should reduce the redundant information between trace slices without sacrificing the correctness of the sampled advection paths or the image quality of the flow texture. Since the sampled advection paths are kept in floating point format, lossless compression will be difficult; however, the correctness and image quality should remain within a controllable range. Meanwhile, the time overhead caused by decompression is another consideration: algorithms for decompressing all trace slices either before texture advection or at run time need to be designed.

User Study: User studies are becoming more and more important for visualizing both information data and scientific data. They tell us not only how effective a visualization technique is, but also what we can do to further improve existing techniques. The motivation behind every visualization technique is to provide useful tools for users to explore the world around us. In the field of information visualization, user studies are widely used to evaluate whether a new system can better answer users' questions, find information hidden in the data, or predict future trends. In the field of scientific visualization, however, user studies are overlooked in most cases, even though their importance is well recognized. The user study presented in Chapter 5 can be improved in several respects. More tasks evaluating other two-dimensional streamline placement algorithms could be included. For example, the topology-based placement strategy of Verma et al. [72] preserves as much topology information as possible, such as critical points; for topology-related tasks it might score higher than other algorithms, but comprehensive study results are needed to decide. The user study should also involve domain experts, since they know what the field should look like and what representative information it is supposed to convey. The current user study included only nonexpert users, who answered the questions according to their common sense and basic knowledge; they might not be able to answer those questions from the viewpoint of exploring the underlying scientific principles. Likewise, the tasks in the current study may not cover all the questions experts would expect a visualization to answer. For a comprehensive and objective evaluation, performance data from both nonexpert users and experts are important.

Fieldline Placement on Time-varying Fields: Many techniques have been proposed to place streamlines on two-dimensional or three-dimensional static vector fields, but not on time-varying fields. Like the spatial coherence in a static field, there exists temporal coherence along the time dimension. For example, if the flow in a region neither converges nor diverges, pathlines there tend to be stable and exhibit similar dynamic patterns. A straightforward extension of the two-dimensional illustrative streamline placement strategy to time-varying fields is a potential research direction: the notion of similarity between streamlines can be extended to similarity between pathlines in the field. When evaluating local and global similarity between pathlines, information related to time needs to be considered.
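The time-aware similarity idea can be sketched minimally as follows. The function names are illustrative; the key point is that two pathlines are compared point-by-point at matching time steps, so both spatial and temporal information enter the distance, unlike the arc-length matching used for static streamlines.

```python
import numpy as np

def pathline(vel, seed, t0, dt, n):
    """Trace a pathline through a time-dependent field vel(p, t)
    with n forward-Euler steps of size dt, starting at time t0."""
    pts = [np.array(seed, dtype=float)]
    p, t = pts[0].copy(), t0
    for _ in range(n):
        p = p + dt * vel(p, t)
        t += dt
        pts.append(p.copy())
    return np.array(pts)

def pathline_distance(a, b):
    """Mean distance between two pathlines sampled at the same time steps.
    Matching samples by time (index) makes the comparison spatio-temporal."""
    n = min(len(a), len(b))
    return float(np.mean(np.linalg.norm(a[:n] - b[:n], axis=1)))
```

In a region where the flow neither converges nor diverges, two nearby pathlines keep a nearly constant separation, so this distance is small and one of the two could be suppressed, mirroring the redundancy argument made for streamlines in Chapter 5.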

BIBLIOGRAPHY

[1] D. K. Arrowsmith and C. M. Place. An Introduction to Dynamical Systems. Cambridge University Press, 1990.

[2] D. Asimov. Notes on the topology of vector fields and flows. Technical Report RNR-93-003, NASA Ames Research Center, 1993.

[3] R. Bridson, S. Marino, and R. Fedkiw. Simulation of clothing with folds and wrinkles. In Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation ’03, pages 28–36, 2003.

[4] A. Brun, H. Knutsson, H. J. Park, M. E. Shenton, and C.-F. Westin. Clustering fiber tracts using normalized cuts. In Seventh International Conference on Medical Image Computing and Computer-Assisted Intervention, Lecture Notes in Computer Science, pages 368–375, 2004.

[5] B. Cabral and C. Leedom. Imaging vector fields using line integral convolution. In Proceedings of SIGGRAPH ’93, pages 263–270, 1993.

[6] W. W. Cheney and D. Kincaid. Numerical Mathematics and Computing. Brooks/Cole Publishing Co., 1985.

[7] I. Corouge, G. Gerig, and S. Gouttard. Towards a shape model of white matter fiber bundles using diffusion tensor mri. In Proceedings of the International Symposium on Biomedical Imaging ’04, volume 1, pages 344–347, 2004.

[8] W. de Leeuw and R. van Liere. Comparing lic and spot noise. In Proceedings of IEEE Conference on Visualization ’98, pages 359–365, 1998.

[9] W. de Leeuw and J. J. van Wijk. Enhanced spot noise for vector field visualiza- tion. In Proceedings of IEEE Conference on Visualization ’95, pages 233–239, 1995.

[10] Q. Du and X. Wang. Centroidal voronoi tessellation based algorithms for vector fields visualization and segmentation. In Proceedings of IEEE Conference on Visualization ’04, pages 43–50, 2004.

[11] C. Everitt. Interactive order-independent transparency. Technical report, NVIDIA Corporation, 2001.

[12] L. K. Forssell and S. D. Cohen. Using line integral convolution for flow visualization: Curvilinear grids, variable-speed animation, and unsteady flows. IEEE Transactions on Visualization and Computer Graphics, 1(2):133–141, 1995.

[13] N. Gagvani and D. Silver. Parameter-controlled volume thinning. Graphical Models and Image Processing, 61(3):149–164, 1999.

[14] S. Gibson. Using distance maps for accurate surface representation in sampled volumes. In Proceedings of IEEE Symposium on Volume Visualization ’98, pages 23–30, 1998.

[15] A. Globus, C. Levit, and T. Lasinski. A tool for visualizing the topology of three-dimensional vector fields. In Proceedings of IEEE Conference on Visualization ’91, pages 33–40, 1991.

[16] H. Hagen, M. Müller, and G. M. Nielson. Focus on Scientific Visualization. Springer-Verlag, 1993.

[17] C. Hansen and C. Johnson. The Visualization Handbook. Academic Press, 2004.

[18] B. Heckel, G. Weber, B. Hamann, and K. Joy. Construction of vector field hierarchies. In Proceedings of IEEE Conference on Visualization ’99, pages 19–26, 1999.

[19] H.-C. Hege and D. Stalling. Fast LIC with piecewise polynomial filter kernels. In Mathematical Visualization – Algorithms and Applications, pages 295–314, 1998.

[20] A. Helgeland and O. Andreassen. Visualization of vector fields using seed LIC and volume rendering. IEEE Transactions on Visualization and Computer Graphics, 10(6):673–682, 2004.

[21] A. Helgeland and T. Elboth. High-quality and interactive animations of 3D time-varying vector fields. IEEE Transactions on Visualization and Computer Graphics, 12(6):1535–1546, 2006.

[22] J. Helman and L. Hesselink. Representation and display of vector field topology in fluid flow data sets. IEEE Computer, 22(8):27–36, 1989.

[23] J. Helman and L. Hesselink. Visualizing vector field topology in fluid flows. IEEE Computer Graphics and Applications, 11(3):36–46, 1991.

[24] J. L. Helman and L. Hesselink. Surface representations of two- and three-dimensional fluid flow topology. In Proceedings of IEEE Conference on Visualization ’90, pages 6–13, 1990.

[25] K. Herndon and T. Meyer. 3d widgets for exploratory scientific visualization. In ACM Symposium on User Interface Software and Technology, pages 69–70, 1994.

[26] V. Interrante and C. Grosch. Strategies for effectively visualizing 3d flow with volume LIC. In Proceedings of IEEE Conference on Visualization ’97, pages 421–424, 1997.

[27] B. Jobard, G. Erlebacher, and Y. Hussaini. Lagrangian-eulerian advection for unsteady flow visualization. In Proceedings of IEEE Conference on Visualization ’01, pages 53–60, 2001.

[28] B. Jobard and W. Lefer. Creating evenly-spaced streamlines of arbitrary density. In Visualization in Scientific Computing ’97, pages 43–56, 1997.

[29] B. Jobard and W. Lefer. Multiresolution flow visualization. In WSCG (Posters), pages 34–35, 2001.

[30] M. Jones, J. Baerentzen, and M. Sramek. 3d distance fields: A survey of techniques and applications. IEEE Transactions on Visualization and Computer Graphics, 12(4):581–599, 2006.

[31] M.-H. Kiu and D. Banks. Multi-frequency noise for lic. In Proceedings of IEEE Conference on Visualization ’96, pages 121–126, 1996.

[32] D. H. Laidlaw, M. Kirby, C. Jackson, J. S. Davidson, T. Miller, M. DaSilva, W. Warren, and M. Tarr. Comparing 2D vector field visualization methods: A user study. IEEE Transactions on Visualization and Computer Graphics, 11(1):59–70, 2005.

[33] R. Laramee, H. Hauser, H. Doleisch, F. Post, B. Vrolijk, and D. Weiskopf. The state of the art in flow visualization: Dense and texture-based techniques. Com- puter Graphics Forum, 23(2):203–222, 2004.

[34] R. Laramee, B. Jobard, and H. Hauser. Image space based visualization of unsteady flow on surfaces. In Proceedings of IEEE Conference on Visualization ’03, pages 131–138, 2003.

[35] D. Le Bihan, J.-F. Mangin, C. Poupon, C. Clark, S. Pappata, N. Molko, and H. Chabriat. Diffusion tensor imaging: Concepts and applications. Journal of Magnetic Resonance Imaging, 13:534–546, 2001.

[36] G.-S. Li, H.-W. Shen, and U. Bordoloi. Chameleon: An interactive texture- based rendering framework for visualizing three-dimensional vector fields. In Proceedings of IEEE Conference on Visualization ’03, pages 241–248, 2003.

[37] L. Li and H.-W. Shen. View-dependent multiresolutional flow texture advection. In Visualization and Data Analysis ’06, pages 1–11, 2006.

[38] L. Li and H.-W. Shen. Image based streamline generation and rendering. IEEE Transactions on Visualization and Computer Graphics, 13(3):630–640, 2007.

[39] Z. Liu, R. Moorhead, and J. Groner. An advanced evenly-spaced streamline placement algorithm. IEEE Transactions on Visualization and Computer Graphics, 12(5):965–972, 2006.

[40] O. Mallo, R. Peikert, C. Sigg, and F. Sadlo. Illuminated lines revisited. In Proceedings of IEEE Conference on Visualization ’05, pages 19–26, 2005.

[41] X. Mao, Y. Hatanaka, H. Higashida, and A. Imamiya. Image-guided streamline placement on curvilinear grid surfaces. In Proceedings of IEEE Conference on Visualization ’98, pages 135–142, 1998.

[42] X. Mao, L. Hong, A. Kaufman, N. Fujita, M. Kikukawa, and A. Imamiya. Multi- granularity noise for curvilinear grid lic. In Proceedings of Graphics Interface ’98, pages 193–200, 1998.

[43] O. Mattausch, T. Theußl, H. Hauser, and M. Gröller. Strategies for interactive exploration of 3d flow using evenly-spaced illuminated streamlines. In Proceedings of Spring Conference on Computer Graphics, pages 213–222, 2003.

[44] N. Max and B. Becker. Flow visualization using moving textures. In Proceedings of the ICAS/LaRC Symposium on Visualizing Time-Varying Data, NASA Conference Publication 3321, pages 77–87, 1996.

[45] A. Mebarki, P. Alliez, and O. Devillers. Farthest point seeding for efficient placement of streamlines. In Proceedings of IEEE Conference on Visualization ’05, pages 479–486, 2005.

[46] B. Moberts, A. Vilanova, and J. J. van Wijk. Evaluation of fiber clustering methods for diffusion tensor imaging. In Proceedings of IEEE Conference on Visualization ’05, pages 65–72, 2005.

[47] K. Perlin and E. M. Hoffert. Hypertexture. In Proceedings of ACM SIGGRAPH ’89, pages 253–262, 1989.

[48] D. F. Perrens. Flow visualization in low speed wind tunnels. Physics Education, 5:262–265, 1970.

[49] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes in C. Cambridge University Press, Cambridge, UK, 1993.

[50] C. Rezk-Salama, P. Hastreiter, C. Teitzel, and T. Ertl. Interactive exploration of volume line integral convolution based on 3d-texture mapping. In Proceedings of IEEE Conference on Visualization ’99, pages 233–240, 1999.

[51] A. Sanna, B. Montrucchio, and P. Montuschi. A survey on visualization of vector fields by texture-based methods. Research Developments in Pattern Recognition, 1(1):13–27, 2000.

[52] J. Serra. Image Analysis and Mathematical Morphology. Academic Press, Inc., 1983.

[53] H.-W. Shen, C.R. Johnson, and K.-L. Ma. Visualizing vector fields using line integral convolution and dye advection. In Proceedings of 1996 Symposium on Volume Visualization, pages 63–69, 1996.

[54] H.-W. Shen and D. Kao. A new line integral convolution algorithm for visualizing time-varying flow fields. IEEE Transactions on Visualization and Computer Graphics, 4(2):98–108, 1998.

[55] H.-W. Shen, G.-S. Li, and U. Bordoloi. Interactive visualization of three-dimensional vector fields with flexible appearance control. IEEE Transactions on Visualization and Computer Graphics, 10(4):434–445, 2004.

[56] D. Stalling and H.-C. Hege. Fast and resolution independent line integral convolution. In Proceedings of ACM SIGGRAPH ’95, pages 249–256, 1995.

[57] D. Stalling and M. Zöckler. Fast display of illuminated field lines. IEEE Transactions on Visualization and Computer Graphics, 3(2):118–128, 1997.

[58] A. Sundquist. Dynamic line integral convolution for visualizing streamline evolution. IEEE Transactions on Visualization and Computer Graphics, 9(3):273–282, 2003.

[59] N. Svakhine, Y. Jang, D. Ebert, and K. Gaither. Illustration and photography inspired visualization of flows and volumes. In Proceedings of IEEE Conference on Visualization ’05, pages 687–694, 2005.

[60] A. Telea and J. J. van Wijk. Simplified representation of vector fields. In Proceedings of IEEE Conference on Visualization ’99, pages 35–42, 1999.

[61] E. Tufte. The Visual Display of Quantitative Information. Graphics Press, 1986.

[62] E. Tufte. Envisioning Information. Graphics Press, 1990.

[63] G. Turk and D. Banks. Image-guided streamline placement. In Proceedings of SIGGRAPH ’96, volume 30, pages 453–460, 1996.

[64] S.-K. Ueng, C. Sikorski, and K.-L. Ma. Efficient streamline, streamribbon, and streamtube constructions on unstructured grids. IEEE Transactions on Visualization and Computer Graphics, 2(2):100–110, 1996.

[65] T. Urness, V. Interrante, I. Marusic, E. Longmire, and B. Ganapathisubramani. Effectively visualizing multi-valued flow data using color and texture. In Proceedings of IEEE Conference on Visualization ’03, pages 115–121, 2003.

[66] M. Van Dyke. An Album of Fluid Motion. Parabolic Press, 1982.

[67] J. J. van Wijk. Spot noise: Texture synthesis for data visualization. Computer Graphics, 25(4):309–318, 1991.

[68] J. J. van Wijk. Implicit stream surfaces. In Proceedings of IEEE Conference on Visualization ’93, pages 245–252, 1993.

[69] J. J. van Wijk. Image based flow visualization. In Proceedings of SIGGRAPH ’02, pages 745–754, 2002.

[70] J. J. van Wijk. Image based flow visualization on curved surfaces. In Proceedings of IEEE Conference on Visualization ’03, pages 123–131, 2003.

[71] V. Verma, D. Kao, and A. Pang. PLIC: Bridging the gap between streamlines and LIC. In Proceedings of IEEE Conference on Visualization ’99, pages 341–348, 1999.

[72] V. Verma, D. Kao, and A. Pang. A flow-guided streamline seeding strategy. In Proceedings of IEEE Conference on Visualization ’00, pages 163–170, 2000.

[73] R. Wegenkittl and E. Gröller. Fast oriented line integral convolution for vector field visualization via the internet. In Proceedings of IEEE Conference on Visualization ’97, pages 309–316, 1997.

[74] R. Wegenkittl, E. Gröller, and W. Purgathofer. Animating flow fields: Rendering of oriented line integral convolution. In Proceedings of the Computer Animation ’97, pages 15–21, 1997.

[75] D. Weiskopf, G. Erlebacher, and T. Ertl. A texture-based framework for spacetime-coherent visualization of time-dependent vector fields. In Proceedings of IEEE Conference on Visualization ’03, pages 107–114, 2003.

[76] D. Xue, C. Zhang, and R. Crawfis. Rendering implicit flow volumes. In Proceed- ings of IEEE Conference on Visualization ’04, pages 99–106, 2004.

[77] X. Ye, D. Kao, and A. Pang. Strategy for seeding 3d streamlines. In Proceedings of IEEE Conference on Visualization ’05, pages 471–478, 2005.

[78] M. Zöckler, D. Stalling, and H.-C. Hege. Parallel line integral convolution. Parallel Computing, 23(7):975–989, 1997.

[79] K.J. Zuiderveld, A.H.J. Koning, and M.A. Viergever. Acceleration of ray-casting using 3d distance transforms. In Visualization in Biomedical Computing II, Proc. SPIE 1808, pages 324–335, 1992.
