
Johnson/Hansen: The Visualization Handbook Page Proof 20.5.2004 12:32pm page 181

9 Multi-Dimensional Transfer Functions for Volume Rendering

JOE KNISS, GORDON KINDLMANN, and CHARLES HANSEN
Scientific Computing and Imaging Institute, School of Computing

9.1 Introduction

Direct volume-rendering has proven to be an effective and flexible visualization method for 3D scalar fields. Transfer functions are fundamental to direct volume-rendering because their role is essentially to make the data visible: by assigning optical properties like color and opacity to the data, the volume can be rendered with traditional computer-graphics methods. Good transfer functions reveal the important structures in the data without obscuring them with unimportant regions. To date, transfer functions have generally been limited to 1D domains, meaning that the 1D space of scalar data value has been the only variable to which opacity and color are assigned. One aspect of direct volume-rendering that has received little attention is the use of multidimensional transfer functions.

Often, there are features of interest in volume data that are difficult to extract and visualize with 1D transfer functions. Many medical datasets created from CT or MRI scans contain a complex combination of boundaries between multiple materials. This situation is problematic for 1D transfer functions because of the potential for overlap between the data-value intervals spanned by the different boundaries. When one data value is associated with multiple boundaries, a 1D transfer function is unable to render them in isolation. Another benefit of higher-dimensional transfer functions is their ability to portray subtle variations in properties of a single boundary, such as thickness. When working with multivariate data, a similar difficulty arises with features that can be identified only by their unique combination of multiple data values. A 1D transfer function is simply not capable of capturing this relationship.

Unfortunately, using multidimensional transfer functions in volume rendering is complicated. Even when the transfer function is only 1D, finding an appropriate transfer function is generally accomplished by trial and error. This is one of the main challenges in making direct volume-rendering an effective visualization tool. Adding dimensions to the transfer-function domain only compounds the problem. While this is an ongoing research area, many of the proposed methods for transfer-function generation and manipulation are not easily extended to higher-dimensional transfer functions. In addition, fast volume-rendering algorithms that assume the transfer function can be implemented as a linear lookup table (LUT) can be difficult to adapt to multidimensional transfer functions due to the linear interpolation imposed on such LUTs.

This chapter provides a detailed exposition of the multidimensional transfer function concept, a generalization of multidimensional transfer functions for both scalar and multivariate data, as well as a novel technique for the interactive generation of volumetric shadows. To resolve the potential complexities in a user interface for multidimensional transfer functions, we introduce a set of direct manipulation widgets


that make finding and experimenting with transfer functions an intuitive, efficient, and informative process. In order to make this process genuinely interactive, we exploit the fast rendering capabilities of modern graphics hardware, especially 3D texture memory and pixel-texturing operations. Together, the widgets and the hardware form the basis for new interaction modes that can guide users towards transfer-function settings appropriate for their visualization and data-exploration interests.

9.2 Previous Work

9.2.1 Transfer Functions

Even though volume-rendering as a visualization tool is more than ten years old, only recently has research focused on making the space of transfer functions easier to explore. He et al. [12] generated transfer functions with genetic algorithms driven either by user selection of thumbnail renderings or by some objective image-fitness function. The Design Gallery [23] creates an intuitive interface to the entire space of all possible transfer functions based on automated analysis and layout of rendered images. A more data-centric approach is the Contour Spectrum [1], which visually summarizes the space of isosurfaces in terms of metrics like surface area and mean gradient magnitude, thereby guiding the choice of iso-value for isosurfacing, and also providing information useful for transfer-function generation. Another recent paper [18] presents a novel transfer-function interface in which small thumbnail renderings are arranged according to their relationship with the spaces of data values, color, and opacity.

The application of these methods is limited to the generation of 1D transfer functions, even though 2D transfer functions were introduced by Levoy in 1988 [22]. Levoy introduced two styles of transfer functions, both 2D and both using gradient magnitude for the second dimension. One transfer function was intended for the display of interfaces between materials, the other for the display of iso-value contours in more smoothly varying data. The previous work most directly related to our approach for visualizing scalar data facilitates the semiautomatic generation of both 1D and 2D transfer functions [17,29]. Using principles of computer-vision edge detection, the semiautomatic method strives to isolate those portions of the transfer-function domain that most reliably correlate with the middle of material-interface boundaries. Other work closely related to our approach for visualizing multivariate data uses a 2D transfer function to visualize data derived from multiple MRI pulse sequences [20].

Scalar volume-rendering research that uses multidimensional transfer functions is relatively scarce. One paper discusses the use of transfer functions similar to Levoy's as part of visualization in the context of wavelet volume representation [27]. More recently, the VolumePro graphics board uses a 12-bit 1D lookup table for the transfer function, but also allows opacity modulation by gradient magnitude, effectively implementing a separable 2D transfer function [28]. Other work involving multidimensional transfer functions uses various types of second derivatives in order to distinguish features in the volume according to their shape and curvature characteristics [15,34].

Designing colormaps for displaying nonvolumetric data is a task similar to finding transfer functions. Previous work has developed strategies and guidelines for colormap creation, based on visualization goals, types of data, perceptual considerations, and user studies [3,32,36].

9.2.2 Direct Manipulation Widgets

Direct manipulation widgets are geometric objects rendered with a visualization and are designed to provide the user with a 3D interface [5,14,31,35,38]. For example, a frame widget can be used to select a 2D plane within a volume. Widgets are typically rendered from basic geometric primitives such as spheres, cylinders, and cones.
Widget construction is often

guided by a constraint system that binds elements of a widget to one another. Each sub-part of a widget represents some functionality of the widget or a parameter to which the user has access.

9.2.3 Hardware Volume Rendering

Many volume-rendering techniques based on graphics hardware utilize texture memory to store a 3D dataset. The dataset is then sampled, classified, rendered to proxy geometry, and composited. Classification typically occurs in hardware as a 1D table lookup.

2D texture-based techniques slice along the major axes of the data and take advantage of hardware bilinear interpolation within the slice [4]. These methods require three copies of the volume to reside in texture memory, one per axis, and they often suffer from artifacts caused by under-sampling along the slice axis. Trilinear interpolation can be attained using 2D textures with specialized hardware extensions available on some commodity graphics cards [6]. This technique allows intermediate slices along the slice axis to be computed in hardware. These hardware extensions also permit diffuse shaded volumes to be rendered at interactive frame rates.

3D texture-based techniques typically sample view-aligned slices through the volume, leveraging hardware trilinear interpolation [11]. Other elements of proxy geometry, such as spherical shells, may be used with 3D texture methods to eliminate artifacts caused by perspective projection [21]. The pixel-texture OpenGL extension has been used with 3D texture techniques to encode both data value and a diffuse illumination parameter, which allows shading and classification to occur in the same look-up [25]. Engel et al. showed how to significantly reduce the number of slices needed to adequately sample a scalar volume, while maintaining a high-quality rendering, using a mathematical technique of preintegration and hardware extensions such as dependent textures [10].

Another form of volume-rendering graphics hardware is the Cube-4 architecture [30] and the subsequent VolumePro PCI graphics board [28]. The VolumePro board implements ray casting combined with the shear-warp factorization for volume-rendering [19]. It features trilinear interpolation with supersampling, gradient estimation, and shaded volumes, and provides interactive frame rates for scalar volumes with sizes up to 512³.

9.3 Multidimensional Transfer Functions

Transfer-function specification is arguably the most important task in volume visualization. While the transfer function's role is simply to assign optical properties such as opacity and color to the data being visualized, the value of the resulting visualization will be largely dependent on how well these optical properties capture features of interest. Specifying a good transfer function can be a difficult and tedious task for several reasons. First, it is difficult to uniquely identify features of interest in the transfer-function domain. Even though a feature of interest may be easily identifiable in the spatial domain, the range of data values that characterize the feature may be difficult to isolate in the transfer-function domain, due to the fact that other, uninteresting regions may contain the same range of data values. Second, transfer functions can have an enormous number of degrees of freedom. Even simple 1D transfer functions using linear ramps require two degrees of freedom per control point. Third, typical user interfaces do not guide the user in setting these control points based on dataset-specific information. Without this type of information, the user must rely on trial and error. This kind of interaction can be especially frustrating, since small changes to the transfer function can result in surprisingly large and unintuitive changes to the volume-rendering.

Rather than classifying a sample based on a single scalar value, multidimensional transfer functions allow a sample to be classified based on a combination of values. Multiple data

values tend to increase the probability that a feature can be uniquely isolated in the transfer-function domain, effectively providing a larger vocabulary for expressing the differences between structures in the dataset. These values are the axes of a multidimensional transfer function. Adding dimensions to the transfer function, however, greatly increases the degrees of freedom necessary for specifying a transfer function and the need for dataset-specific guidance.

In the following sections, we demonstrate the application of multidimensional transfer functions to two distinct classes of data: scalar data and multivariate data. The scalar-data application is focused on locating surface boundaries in a scalar volume. We motivate and describe the axes of the multidimensional transfer function for this type of data. We then describe the use of multidimensional transfer functions for multivariate data. We use two examples, color volumes and meteorological simulations, to demonstrate the effectiveness of such transfer functions.

9.3.1 Scalar Data

For scalar data, the gradient is a first-derivative measure. As a vector, it describes the direction of greatest change. The normalized gradient is often used as the normal for surface-based volume shading. The gradient magnitude is a scalar quantity that describes the local rate of change in the scalar field. For notational convenience, we will use f′ to indicate the magnitude of the gradient of f, where f is the scalar function representing the data:

f′ = ‖∇f‖    (9.1)

This value is useful as an axis of the transfer function since it discriminates between homogeneous regions (low gradient magnitudes) and regions of change (high gradient magnitudes). This effect can be seen in Fig. 9.1. Fig. 9.1a shows a 1D histogram based on data value and identifies the three basic materials in the Chapel Hill CT Head: air (A), soft tissue (B), and bone (C). Fig. 9.1b shows a log-scale joint histogram of data value versus gradient magnitude. Since materials are relatively homogeneous, their gradient magnitudes are low. They can be seen as the circular regions at the bottom of the histogram. The boundaries between the materials are shown as the arches: the air and soft tissue boundary (D), the soft tissue and bone boundary (E), and the air and bone boundary (F). Each of these materials and boundaries can be isolated using a 2D transfer function based on data value and gradient magnitude. Fig. 9.1c shows a volume-rendering with the corresponding features labeled. The air–bone boundary, (F) in Fig. 9.1, is a good example of a surface that cannot be isolated using a simple 1D transfer function. This type of boundary appears in CT datasets as the sinuses and mastoid cells.

Often, the arches that define material boundaries in a 2D transfer function overlap. In some cases this overlap prevents a material from being properly isolated in the transfer function. This effect can be seen in the circled region of the 2D data value–gradient magnitude joint histogram of the human tooth CT in Fig. 9.2a. The background–dentin boundary (F) shares the same ranges of data value and gradient magnitude as portions of the pulp–dentin (E) and background–enamel (H) boundaries. When the background–dentin boundary (F) is emphasized in a 2D transfer function, the boundaries (E) and (H) are erroneously colored in the volume rendering, as seen in Fig. 9.2c. A second-derivative measure enables a more precise disambiguation of complex boundary configurations such as this. Some edge-detection algorithms (such as Marr-Hildreth [24]) locate the middle of an edge by detecting a zero crossing in a second-derivative measure, such as the Laplacian. We compute a more accurate but computationally expensive measure, the second directional derivative along the gradient direction, which involves the Hessian (H), a matrix of second partial derivatives. We will use f″ to indicate this second derivative.


(a) A 1D histogram. The black region represents the number of data-value occurrences on a linear scale; the grey is on a log scale. The colored regions (A, B, C) identify basic materials.

(b) A log-scale 2D joint histogram. The lower image shows the location of materials (A, B, C) and material boundaries (D, E, F).

(c) A volume-rendering showing all of the materials and boundaries identified above, except air (A), using a 2D transfer function.

Figure 9.1 Material and boundary identification of the Chapel Hill CT Head with data value alone (a) versus data value and gradient magnitude (f′), seen in (b). The basic materials captured by CT, air (A), soft tissue (B), and bone (C), can be identified using a 1D transfer function, as seen in (a). 1D transfer functions, however, cannot capture the complex combinations of material boundaries: the air and soft tissue boundary (D), the soft tissue and bone boundary (E), and the air and bone boundary (F), as seen in (b) and (c).


(a) Joint histogram of data value versus gradient magnitude (f′), with labeled regions (A–H).

(b) Joint histogram of data value versus second directional derivative (f″).

(c) 2D transfer function. (d) 3D transfer function.

Figure 9.2 Material and boundary identification of the human tooth CT with data value and gradient magnitude (f′), seen in (a), and data value and second derivative (f″), seen in (b). The background–dentin boundary (F) cannot be adequately captured with data value and gradient magnitude alone. (c) shows the results of a 2D transfer function designed to show only the background–dentin (F) and dentin–enamel (G) boundaries. The background–enamel (H) and dentin–pulp (E) boundaries are erroneously colored. Adding the second derivative as a third axis to the transfer function disambiguates the boundaries. (d) shows the results of a 3D transfer function that gives lower opacity to nonzero second-derivative values.
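Joint histograms like those in Figs. 9.1 and 9.2 can be computed for any scalar volume. The following NumPy sketch is illustrative; the bin count and log scaling are our choices, not the authors' settings:

```python
import numpy as np

def joint_histogram(value, grad_mag, bins=64):
    """Log-scale joint histogram of data value vs. gradient magnitude.

    Homogeneous materials appear as blobs at low gradient magnitude;
    material boundaries appear as arches connecting them.
    """
    counts, _, _ = np.histogram2d(value.ravel(), grad_mag.ravel(), bins=bins)
    return np.log1p(counts)  # compress dynamic range for display

rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))
gz, gy, gx = np.gradient(vol)
gmag = np.sqrt(gx**2 + gy**2 + gz**2)
hist = joint_histogram(vol, gmag)
```

Displaying such a histogram behind the transfer-function widget is what lets the user see where the material blobs and boundary arches lie.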

f″ = (1/‖∇f‖²) (∇f)ᵀ H (∇f)    (9.2)

More details on these measurements can be found in previous work on semiautomatic transfer-function generation [16,17]. Fig. 9.2b shows a joint histogram of data value versus this second directional derivative. Notice that the boundaries (E), (F), and (G) no longer overlap. By reducing the opacity assigned to nonzero second-derivative values, we can render the background–dentin boundary in isolation, as seen in Fig. 9.2d. The relationship between data value, gradient magnitude, and the second directional derivative is made clear in Fig. 9.3. Fig. 9.3a shows the behavior of these values along a line through an idealized boundary between two homogeneous materials (inset). Notice that at the center of the boundary, the gradient magnitude is high and the second derivative is zero. Fig. 9.3b shows the behavior of the gradient magnitude and second derivative as a function of data value. This shows the curves as they appear in a joint histogram or a transfer function.

9.3.2 Multivariate Data

Multivariate data contains, at each sample point, multiple scalar values that represent different simulated or measured quantities. Multivariate data can come from numerical simulations that calculate a list of quantities at each time step, from medical scanning modalities such as MRI, which can measure a variety of tissue characteristics, or from a combination of different scanning modalities, such as MRI, CT, and PET. Multidimensional transfer functions are an obvious choice for volume visualization of multivariate data, since we can assign different data values to the different axes of the transfer function. It is often the case that a feature of interest in these datasets cannot be properly classified using any single variable by itself. In addition, we can compute a kind of first derivative in the multivariate data in order to create more information about local structure. As with scalar data, the use of a first-derivative measure as one axis of the multidimensional transfer function can increase the specificity with which we can isolate and visualize different features in the data.
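Equation 9.2 can be evaluated per voxel with finite differences. This is a hedged NumPy sketch (ours, not the chapter's implementation), using one np.gradient pass for ∇f and a second pass over its components for the Hessian:

```python
import numpy as np

def second_directional_derivative(vol, eps=1e-12):
    """Evaluate Eq. 9.2: f'' = (grad f)^T H (grad f) / ||grad f||^2."""
    grads = np.gradient(vol.astype(float))            # [d/dz, d/dy, d/dx]
    g = np.stack(grads, axis=-1)                      # gradient vectors
    # Hessian rows: the gradient of each first-derivative component.
    H = np.stack([np.stack(np.gradient(gi), axis=-1) for gi in grads],
                 axis=-2)
    num = np.einsum('...i,...ij,...j->...', g, H, g)  # (grad f)^T H grad f
    den = np.einsum('...i,...i->...', g, g)           # ||grad f||^2
    return num / np.maximum(den, eps)

# For f = x**2, the second derivative along the gradient direction is 2.
quad = np.fromfunction(lambda z, y, x: x**2, (4, 4, 16))
fsecond = second_directional_derivative(quad)
```

The eps guard avoids division by zero in perfectly homogeneous regions, where the measure is undefined anyway.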



Figure 9.3 The behavior of primary data value (f), gradient magnitude (f′), and the second directional derivative (f″) as a function of position (a) and as a function of data value (b).

One example of data that benefits from multidimensional transfer functions is volumetric color data. A number of volumetric color datasets are available, such as the Visible Human Project's RGB data. The process of acquiring color data by cryosection is becoming common for the investigation of anatomy and histology. In these datasets, the differences in materials are expressed by their unique spectral signatures. A multidimensional transfer function is a natural choice for visualizing this type of data. Opacity can be assigned to different positions in the 3D RGB color space.

Fig. 9.4a shows a joint histogram of the RGB color data for the Visible Male; regions of this space that correspond to different tissues are identified. Regions (A) and (B) correspond to the fatty tissues of the brain, white and gray matter, as seen in Fig. 9.4b. In this visualization, the transition between white and gray matter is intentionally left out to better emphasize these materials and to demonstrate the expressivity of the multidimensional transfer function. Fig. 9.4c shows a visualization of the color values that represent the muscle structure and connective tissues (C) of the head and neck with the skin surface (D), given a small amount of opacity for context. In both of these figures, a slice of the original data is mapped to the surface of the clipping plane for reference.

The kind of first derivative that we compute in multivariate data is based on previous work in color image segmentation [8,33,7]. While the gradient magnitude in scalar data represents the magnitude of local change at a point, an analogous first-derivative measure in multivariate data captures the total amount of local change across all the data components. This derivative has proven useful in color image segmentation because it allows a generalization of gradient-based edge detection. In our system, we use this first-derivative measure as one axis in the multidimensional transfer function in order to isolate and visualize different regions of a multivariate volume according to the amount of local change, analogous to our use of gradient magnitude for scalar data.

If we represent the dataset as a multivariate function f(x, y, z): R³ → Rᵐ, so that

f(x, y, z) = (f1(x, y, z), f2(x, y, z), ..., fm(x, y, z))

then the derivative Df is a matrix of first partial derivatives:


(a) Histograms of the Visible Male RGB dataset.

(b) The white (A) and gray (B) matter of the brain.

(c) The muscle and connective tissues (C) of the head and neck, showing skin (D) for reference.

Figure 9.4 The Visible Male RGB (color) data. The opacity is set using a 3D transfer function, and color is taken directly from the data. The histogram (a) is visualized as projections on the primary planes of the RGB color space.
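The total-change measure ‖G‖ described in the text can be sketched numerically. This illustrative NumPy fragment (the function name and the channel-first array layout are our assumptions) normalizes each channel, forms G = (Df)ᵀDf per voxel, and takes the Frobenius norm:

```python
import numpy as np

def total_local_change(vol):
    """||G|| per voxel, with G = (Df)^T Df, for vol of shape (m, Z, Y, X).

    Channels are first normalized to [0, 1] so each contributes equally.
    """
    chans = []
    for c in vol.astype(float):
        lo, hi = c.min(), c.max()
        chans.append((c - lo) / (hi - lo) if hi > lo else np.zeros_like(c))
    # Rows of Df: the spatial gradient of each channel; shape (m, 3, Z, Y, X).
    Df = np.stack([np.stack(np.gradient(c)) for c in chans])
    # G = (Df)^T Df per voxel; shape (3, 3, Z, Y, X).
    G = np.einsum('ci...,cj...->ij...', Df, Df)
    return np.sqrt((G ** 2).sum(axis=(0, 1)))  # Frobenius (L2) norm of G

rng = np.random.default_rng(1)
rgb = rng.random((3, 8, 8, 8))  # e.g., an RGB color volume
change = total_local_change(rgb)
```

Because the Frobenius norm is rotation invariant, this scalar can serve as the derivative axis of the transfer function regardless of the volume's orientation.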

     | ∂f1/∂x  ∂f1/∂y  ∂f1/∂z |
     | ∂f2/∂x  ∂f2/∂y  ∂f2/∂z |
Df = |  ...     ...     ...   |
     | ∂fm/∂x  ∂fm/∂y  ∂fm/∂z |

By multiplying Df with its transpose, we can form a 3×3 tensor G that captures the directional dependence of total change:

G = (Df)ᵀ Df    (9.3)

In the context of color edge detection [8,33,7], this matrix (specifically, its 2D analog) is used as the basis of a quadratic function of direction n, which Cumani [7] terms the squared local contrast in direction n:

S(n) = nᵀ G n

S(n) can be analyzed by finding the principal eigenvector (and associated eigenvalue) of G to determine the direction n of greatest local contrast, or fastest change, and the magnitude of that change. Our experience, however, has been that in the context of multidimensional transfer functions, it is sufficient (and perhaps preferable) to simply take the L2 norm of G, ‖G‖, which is the square root of the sum of the squares of the individual matrix components. As the L2 norm is invariant with respect to rotation, this is the same as the L2 norm of the three eigenvalues of G, motivating our use of ‖G‖ as a directionally independent (and rotationally invariant) measure of local change. Other work on volume-rendering of color data has used a non-rotationally-invariant measure of G [9]. Since it is sometimes the case that the dynamic ranges of the individual channels (fi) differ, we normalize the ranges of each channel's data values to be between zero and one. This allows each channel to have an equal contribution in the derivative calculation.

9.4 Interaction and Tools

While adding dimensions to the transfer function enhances our ability to isolate features of interest in a dataset, it tends to make the already unintuitive space of the transfer function even more difficult to navigate. This difficulty can be considered in terms of a conceptual gap between the spatial and transfer-function domains. The spatial domain is the familiar 3D space for

geometry and the volume data being rendered. The transfer-function domain, however, is more abstract. Its dimensions are not spatial (i.e., the ranges of data values), and the quantities at each location are not scalar (i.e., opacity and three colors). It can be very difficult to determine the regions of the transfer function that correspond to features of interest, especially when a region is very small. Thus, to close this conceptual gap, we developed new interaction techniques, which permit interaction in both domains simultaneously, and a suite of direct manipulation widgets that provide the tools for such interactions. Fig. 9.5 shows the various direct manipulation widgets as they appear in the system.

In a typical session with our system, the user creates a transfer function using a natural process of exploration, specification, and refinement. Initially, the user is presented with a volume-rendering using a predetermined transfer function that is likely to bring out some features of interest. This can originate with an automated transfer-function generation tool [16], or it could be the default transfer function described later in Section 9.6. The user would then begin exploring the dataset.

Exploration is the process by which a user familiarizes him or herself with the dataset. A clipping plane can be moved through the volume to reveal internal structures. A slice of the original data can be mapped to the clipping plane, permitting a close inspection of the entire range of data values. Sample positions are probed in the spatial domain, and their values, along with values in a neighborhood around that point, are visualized in the transfer-function domain. This feedback allows the user to identify the regions of the transfer function that correspond to potential features of interest, made visible by the default transfer function or the sliced data. Once these regions have been identified, the user can then begin specifying a custom transfer function.

During the specification stage, the user creates a rough draft of the desired transfer function. While this can be accomplished by manually adding regions to the transfer function, a simpler method adds opacity to the regions in the transfer function at and around locations queried in the spatial domain. That is, the system can track, with a small region of opacity in the transfer-function domain, the data values at the user-selected locations, while continually updating the volume-rendering. This visualizes, in the spatial domain, all other voxels with similar transfer-function values. If the user decides that an important feature is captured by the current transfer function, he or she can add that region into the transfer function and continue querying and investigating the volume.

Once these regions have been identified, the user can refine them by manipulating control points in the transfer-function domain to better visualize features of interest. An important feature of our system is the ability to manipulate portions of the transfer function as discrete entities. This permits the modification of regions corresponding to a particular feature without affecting other classified regions.

Figure 9.5 The direct-manipulation widgets.


Finally, this is an iterative process. A user continues the exploration, specification, and refinement steps until they are satisfied that all features of interest are made visible. In the remainder of this section, we will introduce the interaction modalities used in the exploration and specification stages and briefly describe the individual direct manipulation widgets.

9.4.1 Probing and Dual-Domain Interaction

The concept of probing is simple: the user points at a location in the spatial domain and visualizes the values at that point in the transfer-function domain. We have found this feedback to be essential for making the connection between features seen in the spatial domain and the ranges of values that identify them in the transfer-function domain. Creating the best transfer function for visualizing a feature of interest is only possible with an understanding of the behavior of data values at and around that feature. This is especially true for multidimensional transfer functions, where a feature is described by a complex combination of data values. The value of this dataset-specific guidance can be further enhanced by automatically setting the transfer function based on these queried values.

In a traditional volume-rendering system, setting the transfer function involves moving the control points (in a sequence of linear ramps defining color and opacity) and then observing the resulting rendered image. That is, interaction in the transfer-function domain is guided by careful observation of changes in the spatial domain. We prefer a reversal of this process, in which the transfer function is set by direct interaction in the spatial domain, with observation of the transfer-function domain. Furthermore, by allowing interaction to happen in both domains simultaneously, we significantly lessen the conceptual gap between them, effectively simplifying the complicated task of specifying a multidimensional transfer function to pointing at a feature of interest. We use the term ''dual-domain interaction'' to describe this approach to transfer-function exploration and generation.

The top of Fig. 9.6 illustrates the specific steps of dual-domain interaction. When a position inside the volume is queried by the user with the data probe widget (a), the values associated with that position (multivariate values, or the data value, first and second derivative) are graphically represented in the transfer-function widget (b). Then, a small region of high opacity (c) is temporarily added to the transfer function at the data values determined by the probe location. The user has now set a multidimensional transfer function simply by positioning a data probe within the volume. The resulting rendering (d) depicts (in the spatial domain) all the other locations in the volume that share values (in the transfer-function domain) with those at the data probe tip. If the features rendered are of interest, the user can copy the temporary transfer function to the permanent one (e) by, for instance, tapping the keyboard space bar with the free hand. As features of interest are discovered, they can be added to the transfer function quickly and easily with this type of two-handed interaction. Alternately, the probe feedback can be used to manually set other types of classification widgets (f), which are described later.

The outcome of dual-domain interaction is an effective multidimensional transfer function built up over the course of data exploration. The widget components that participated in this process can be seen in the bottom of Fig. 9.6, which shows how dual-domain interaction can help volume-render the CT tooth dataset. The remainder of this section describes the individual widgets and provides additional details about dual-domain interaction.

9.4.2 Data Probe Widget

The data probe widget is responsible for reporting its tip's position in volume space and its slider sub-widget's value. Its pencil-like shape is designed to give the user the ability to point at a feature in the volume being rendered. The other end of the widget orients the widget about its tip.
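The probing feedback loop just described (steps a-c of Fig. 9.6) can be sketched in a few lines. This is a minimal illustration in Python; the function and field names are hypothetical, not the actual widget system's API.

```python
# Minimal sketch of dual-domain probing (steps a-c of Fig. 9.6).
# All names here are illustrative, not the actual widget API.

def probe(volume, probe_ijk, half_width=0.05):
    """Query the (data value, gradient magnitude) pair at the probe tip
    and return a temporary classification region of high opacity
    centered on those values in the transfer-function domain."""
    value, grad_mag = volume[probe_ijk]
    return {
        "center": (value, grad_mag),      # where the probe lands in TF space
        "extent": (half_width, half_width),
        "opacity": 1.0,                   # temporarily fully opaque (step c)
    }

# Toy volume: voxel index -> (data value, gradient magnitude).
volume = {(4, 4, 4): (0.62, 0.30)}
temporary = probe(volume, (4, 4, 4))

# Step (e): the user commits the temporary region, e.g. on a key press.
permanent_tf = []
permanent_tf.append(temporary)
```

In the real system the commit is the only explicit step; merely moving the probe continuously updates the temporary region and the rendering.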


[Figure 9.6 diagram: (a) user moves probe in volume; (b) position is queried, and values are displayed in the transfer function; (c) a region is temporarily set around the value in the transfer function; (d) changes are observed in the rendered volume; (e) the queried region can be permanently set in the transfer function; (f) user sets the transfer function by hand.]

Figure 9.6 Dual-Domain Interaction

When the volume-rendering's position or orientation is modified, the data probe widget's tip tracks its point in volume space. A natural extension is to link the data probe widget to a haptic device, such as the SensAble PHANTOM, which can provide a direct 3D location and orientation [26].

9.4.3 Clipping Plane Widget

The clipping plane is responsible for reporting its orientation and position to the volume renderer, which handles the actual clipping when it draws the volume. In addition to clipping, the volume widget will also map a slice of the data to the arbitrary plane defined by the clip widget, and blend it with the volume by a constant opacity value determined by the clip widget's slider. It is also responsible for reporting the spatial position of a mouse click on its clipping surface. This provides an additional means of querying positions within the volume, distinct from the 3D data probe. The balls at the corners of the clipping plane widget are used to modify its orientation, and the bars on the edges are used to modify its position.

9.4.4 Transfer-Function Widget

The main role of the transfer-function widget is to present a graphical representation of the transfer-function domain, in which feedback from querying the volume (with the data probe or clipping plane) is displayed, and in which the transfer function itself can be set and altered. The balls at the corners of the transfer-function widget are used to resize it, as with a desktop window, and the bars on the edges are used to translate its position. The inner plane of the frame is a polygon texture-mapped with the lookup table containing the transfer function. A joint histogram of data, seen with the images in Section 9.3, can also be blended with the transfer function to provide valuable information about the behavior and relationship of data values in the transfer-function domain.

The data values at the position queried in the volume (via either the data probe or the clipping plane widget) are represented with a small ball in the transfer-function widget. In addition to the precise location queried, the eight data sample points at the corners of the voxel containing the query location are also represented by balls in the transfer-function domain, and are connected together with edges that reflect the connectivity of the voxel corners in the spatial domain. By ''reprojecting'' a voxel from the spatial domain to a simple graphical representation in the transfer-function domain, the user can learn how the transfer-function variables (data values at each sample point) are changing near the probe location. The values for the third, or unseen, axis are indicated by coloring on the balls. For instance, with scalar data, second-derivative values that are negative, zero, and positive are represented by blue, white, and yellow balls, respectively. When the projected points form an arc, with the color varying through these assigned colors, the probe is at a boundary in the volume, as seen in Fig. 9.5. When the reprojected data points are clustered together, the probe is in a homogeneous region. As the user gains experience with this representation, he or she can learn to ''read'' the reprojected voxel as an indicator of the volume characteristics at the probe location.

9.4.5 Classification Widgets

In addition to the process of dual-domain interaction described above, transfer functions can also be created in a more manual fashion by adding one or more classification widgets to the main transfer-function window. Classification widgets are designed to identify regions of the transfer function as discrete entities. Each widget type has control points that modify its position or size. Optical properties, such as opacity and color, are modified by selecting the widget's inner surface. The opacity and color contributions from each classification widget are blended together to form the transfer function. We have developed two types of classification widget: triangular and rectangular.

The triangular classification widget, shown in Figs. 9.5, 9.6, and 9.8, is based on Levoy's ''isovalue contour surface'' opacity function [22]. The widget is an inverted triangle with a base point attached to the horizontal data-value axis. The triangle's size and position are adjusted with control points. There are upper and lower thresholds for the gradient magnitude, as well as a shear. Color is constant across the widget; opacity is maximal along the center of the widget, and it linearly ramps down to zero at the left and right edges.

The triangular classification widgets are particularly effective for visualizing surfaces in scalar data. More general transfer functions, for visualizing data that may not have clear boundaries, can be created with the rectangular classification widget. The rectangular region spanned by the widget defines the data values that receive opacity and color. Like the triangular widget, color is constant, but the opacity is more flexible. It can be constant, or fall off in various ways: quadratically as an ellipsoid with axes corresponding to the rectangle's aspect ratio, or linearly as a ramp, tent, or pyramid.

As noted in the description of the transfer-function widget, even when a transfer function has more than two dimensions, only two dimensions are shown at any one time. For 3D transfer functions, classification widgets are shown as their projections onto the visible axes. In this case, a rectangular classification widget becomes a box in the 3D domain of the transfer function. Its appearance to the user, however, as 2D projections, is identical to the rectangular widget. When the third axis of the transfer function plays a more simplified role, interactions along this axis are tied to sliders seen along the top bar of the transfer function. For instance, since our research on scalar data has focused on visualizing boundaries between material regions, we have consistently used the second derivative to emphasize the regions where the second-derivative magnitude is small or zero. Specifically, maximal opacity is always given to zero second derivatives, and decreases linearly towards the second-derivative extrema values. How much the opacity changes as a function of second-derivative magnitude is controlled with a single slider, which we call the ''boundary emphasis slider.'' With the slider in its left-most position, zero opacity is given to extremal second derivatives; in the right-most position, opacity is constant with respect to the second derivative. We have employed similar techniques for manipulating other types of third-axis values using multiple sliders.

While the classification widgets are usually set by hand in the transfer-function domain, based on feedback from probing and reprojected voxels, their placement can also be somewhat automated. This further reduces the difficulty of creating an effective higher-dimensional transfer function. The classification widget's location and size in the transfer-function domain can be tied to the distribution of the reprojected voxels determined by the data probe's location. For instance, the rectangular classification widget can be centered at the transfer-function values interpolated at the data probe's tip, with the size of the rectangle controlled by the data probe's slider. The triangular classification widget can be located horizontally at the data value queried by the probe, with the width and height determined by the horizontal and vertical variance in the reprojected voxel locations. This technique produced the changes in the transfer function for the sequence of renderings in Fig. 9.6.

9.4.6 Shading Widget

The shading widget is a collection of spheres that can be rendered in the scene to indicate and control the light direction and color. Fixing a few lights in view space is generally effective for renderings; therefore, changing the lighting is an infrequent operation.

9.4.7 Color-Picker Widget

The color picker is an embedded widget that is based on the hue-lightness-saturation (HLS) color space. Interacting with this widget can be thought of as manipulating a sphere with hues mapped around the equator, gradually becoming black at the top and white at the bottom. To select a hue, the user moves the mouse horizontally, rotating the ball around its vertical axis. Vertical mouse motion tips the sphere toward or away from the user, shifting the color towards white or black. Saturation and opacity are selected independently using different mouse buttons with vertical motion. While this color picker can be thought of as manipulating an HLS sphere, no geometry for this is rendered. Rather, the triangular and rectangular classification widgets embed the color picker in the polygonal region, which contributes opacity and color to the transfer-function domain. The user specifies a color simply by clicking on that object, then moving the mouse horizontally and vertically until the desired hue and lightness are visible. In most cases, the desired color can be selected with a single mouse-click and gesture.

9.5 Rendering and Hardware

While this chapter is conceptually focused on the matter of setting and applying higher-dimensional transfer functions, the quality of interaction and exploration described would not be possible without the use of modern graphics hardware. Our implementation relies heavily on an OpenGL extension known as dependent texture reads. This extension can be used for both classification and shading. In this section, we describe our modifications to the classification portion of the traditional 3D texture-based volume-rendering pipeline. We


also describe methods for adding interactive volumetric shading and shadows to the pipeline.

Our system supports volumes that are stored as 3D textures with one, two, or four values per texel. This is due to memory-alignment restrictions of graphics hardware. Volumes with three values per sample utilize a four-value texture, where the fourth value is simply ignored. Volumes with more than four values per sample could be constructed using multiple textures.

9.5.1 Dependent Texture Reads

Dependent texture reads are hardware extensions that are similar to, but more efficient implementations of, a previous extension known as pixel texture [10,13,25,37]. Dependent texture reads and pixel texture are names for operations that use color fragments to generate texture coordinates, and replace those color fragments with the corresponding entries from a texture. This operation essentially amounts to an arbitrary function evaluation with up to three variables via a lookup table. If we were to perform this operation on an RGB fragment, each channel value would be scaled between zero and one, and these new values would then be used as texture coordinates of a 3D texture. The color values produced by the 3D texture lookup replace the original RGB values. The nearest-neighbor or linear-interpolation methods can be used to generate the replacement values. The ability to scale and interpolate color-channel values is a convenient feature of the hardware. It allows the number of elements along a dimension of the texture containing the new color values to differ from the dynamic range of the component that generated the texture coordinate. Without this flexibility, the size of a 3D dependent texture would be prohibitively large.

9.5.2 Classification

Dependent texture reads are used for the transfer-function evaluation. Data values stored in the color components of a 3D texture are interpolated across some proxy geometry, a plane, for instance. These values are then converted to texture coordinates and used to acquire the color and alpha values in the transfer-function texture per-pixel in screen space.

For eight-bit data, an ideal transfer-function texture would have 256 color and alpha values along each axis. For 3D transfer functions, however, the transfer-function texture would then be 256³ × 4 bytes. Besides the enormous memory requirements of such a texture, the size also affects how fast the classification widgets can be rasterized, thus affecting the interactivity of transfer-function updates. We therefore limit the number of elements along an axis of a 3D transfer function based on its importance. For instance, with scalar data, the primary data value is the most important, the gradient magnitude is secondary, and the second derivative serves an even more tertiary role. For this type of multidimensional transfer function, we commonly use a 3D transfer-function texture with dimensions 256 × 128 × 8 for data value, gradient magnitude, and second derivative, respectively. 3D transfer functions can also be composed separably as a 2D and a 1D transfer function. This means that the total size of the transfer function is 256² + 256. The tradeoff, however, is in expressivity. We can no longer specify a transfer function based on the unique combination of all three data values.

Separable transfer functions are still quite powerful. Applying the second derivative as a separable 1D portion of the transfer function is quite effective for visualizing boundaries between materials. With the separable 3D transfer function for scalar volumes, there is only one boundary-emphasis slider that affects all classification widgets, as opposed to the general case where each classification widget has its own boundary-emphasis slider. We have employed a similar approach for multivariate data visualization. The meteorological example used a separable 3D transfer function. Temperature and humidity were classified using a 2D transfer function, and the multiderivative of these values was classified using a 1D transfer function. Since our specific goal was to show only regions

with high values of ‖G‖, we needed only two sliders to specify the beginning and ending points of a linear ramp along this axis of the transfer function.

9.5.3 Surface Shading

Shading is a fundamental component of volume-rendering because it is a natural and efficient way to express information about the shape of structures in the volume. However, much previous work with texture-memory-based volume-rendering lacks shading. Many modern graphics hardware platforms support multitexture and a number of user-defined operations for blending these textures per-pixel. These operations, which we will refer to as fragment shading, can be leveraged to compute a surface-shading model.

The technique originally proposed by Rezk-Salama et al. [6] is an efficient way to compute the Blinn-Phong shading model on a per-pixel basis for volumes. This approach, however, can suffer from artifacts caused by denormalization during interpolation. While future generations of graphics hardware should support the square-root operation needed to renormalize on a per-pixel basis, we can utilize cube-map dependent texture reads to evaluate the shading model. This type of dependent texture read allows an RGB color component to be treated as a vector and used as the texture coordinates for a cube map. Conceptually, a cube map can be thought of as a collection of six textures that make up the faces of a cube centered about the origin. Texels are accessed with a 3D texture coordinate (s,t,r) representing a direction vector. The accessed texel is the point corresponding to the intersection of a line through the origin in the direction of (s,t,r) and a cube face. The color values at this position represent incoming diffuse radiance if the vector (s,t,r) is a surface normal, or specular radiance if (s,t,r) is a reflection vector. The advantage of using a cube-map dependent texture read is that the vector (s,t,r) does not need to be normalized, and the cube map can encode an arbitrary number of lights or a full environment map. This approach, however, comes at the cost of reduced performance. A per-pixel cube-map evaluation can be as much as three times slower than evaluating the dot products for a limited number of light sources in the fragment stage.

Surface-based shading methods are well suited for visualizing the boundaries between materials. However, since the surface normal is approximated by the normalized gradient of a scalar field, these methods are not robust for shading homogeneous regions, where the gradient magnitude is very low or zero and its measurement is sensitive to noise. Gradient-based surface shading is also unsuitable for shading volume-renderings of multivariate fields. While we can assign the direction of greatest change for a point in a multivariate field to the eigenvector (e1) corresponding to the largest eigenvalue (λ1) of the tensor G from Equation 9.3, e1 is a valid representation of only orientation, not the absolute direction. This means that the sign of e1 can flip in neighboring regions even though their orientations may not differ. Therefore, the vector e1 does not interpolate, making it a poor choice of surface normal. Furthermore, this orientation may not even correspond to the surface normal of a classified region in a multivariate field.

9.5.4 Shadows

Shadows provide important visual cues relating to the depth and placement of objects in a scene. Since the computation of shadows does not depend on a surface normal, they provide a robust method for shading homogeneous regions and multivariate volumes. Adding shadows to the volume lighting model means that light gets attenuated through the volume before being reflected back to the eye.

Our approach differs from previous hardware shadow work [2] in two ways. First, rather than creating a volumetric shadow map, we utilize an off-screen render buffer to accumulate the amount of light attenuated from the light's point of view. Second, we modify the slice axis

to be the direction halfway between the view and light directions. This allows the same slice to be rendered from both the eye and the light points of view. Consider the situation for computing shadows when the view and light directions are the same, as seen in Fig. 9.7a. Since the slices for both the eye and the light have a one-to-one correspondence, it is not necessary to precompute a volumetric shadow map. The amount of light arriving at a particular slice is equal to one minus the accumulated opacity of the slices rendered before it. Naturally, if the projection matrices for the eye and the light differ, we need to maintain a separate buffer for the attenuation from the light's point of view. When the eye and light directions differ, the volume would be sliced along each direction independently. The worst-case scenario happens when the view and light directions are perpendicular, as seen in Fig. 9.7b. In this case, it would seem necessary to save a full volumetric shadow map that can be resliced with the data volume from the eye's point of view, providing shadows. This approach, however, suffers from an artifact referred to as attenuation leakage. The visual consequences of this are blurry shadows and surfaces that appear much darker than they should, due to the image-space high frequencies introduced by the transfer function. The attenuation at a given sample point is blurred when light intensity is stored at a coarse resolution and interpolated during the observer rendering phase. This use of a 2D shadow buffer is similar to the method described in Chapter 8, except we address slice-based volume-rendering while they address splatting.

Rather than slice along the vector defined by the view direction or the light direction, we modify the slice axis to allow the same slice to be rendered from both points of view. When the dot product of the light and view directions is positive, we slice along the vector halfway between the light and view directions, seen in Fig. 9.7c. In this case, the volume is rendered in

Figure 9.7 Modified slice axis for light transport: (a) view and light directions aligned; (b) view and light directions perpendicular; (c) slicing along the vector halfway between l and v when their dot product is positive; (d) slicing along the vector halfway between l and −v when their dot product is negative.

front-to-back order with respect to the observer. When the dot product is negative, we slice along the vector halfway between the light and the inverted view directions, seen in Fig. 9.7d. In this case, the volume is rendered in back-to-front order with respect to the observer. In both cases the volume is rendered in front-to-back order with respect to the light. Care must be taken to ensure that the slice spacing along the view and light directions is maintained when the light or eye positions change. If the desired slice spacing along the view direction is dv and the angle between v and l is θ, then the slice spacing along the slice direction is

ds = cos(θ/2) dv    (9.4)

This is a multi-pass approach. Each slice is first rendered from the observer's point of view, using the results of the previous pass from the light's point of view, which modulates the brightness of samples in the current slice. The same slice is then rendered from the light's point of view to calculate the intensity of the light arriving at the next layer.

Since we must keep track of the amount of light attenuated at each slice, we utilize an off-screen render buffer, known as a pixel buffer. This buffer is initialized to 1 − (light intensity). It can also be initialized using an arbitrary image to create effects such as spotlights. The projection matrix for the light's point of view need not be orthographic; a perspective projection matrix can be used for point light sources. However, the entire volume must fit in the light's view frustum. Light is attenuated by simply accumulating the opacity for each sample using the over operator. The results are then copied to a texture that is multiplied with the next slice from the eye's point of view before it is blended into the frame buffer. While this copy-to-texture operation has been highly optimized on the current generation of graphics hardware, we have achieved a dramatic increase in performance using a hardware extension known as render to texture. This extension allows us to directly bind a pixel buffer as a texture, avoiding the unnecessary copy operation.

This approach has a number of advantages over previous volume shadow methods. First, attenuation leakage is no longer a concern because the computation of the light transport (slicing density) is decoupled from the resolution of the data volume. Computing light attenuation in image space allows us to match the sampling frequency of the light transport with that of the final volume-rendering. Second, this approach makes far more efficient use of memory resources than those that require a volumetric shadow map. Only a single additional 2D buffer is required, as opposed to a potentially large 3D volume. One disadvantage of this approach is that, due to the image-space sampling, artifacts may appear at shadow boundaries when the opacity makes a sharp jump from low to high. This can be overcome by using a higher resolution for the light buffer than for the frame buffer. We have found that 30%–50% additional resolution is adequate.

As noted at the end of the previous section, surface-based shading models are inappropriate for homogeneous regions in a volume. However, it is often useful to have both surface-shaded and shadowed renderings regardless of whether or not homogeneous regions are being visualized. To ensure that homogeneous regions are not surface shaded, we simply interpolate between surface-shaded and unshaded using the gradient magnitude. Naturally, regardless of whether or not a particular sample is surface shaded, it is still modulated by the light attenuation providing shadows. In practice we have found that interpolating based on 1 − (1 − ‖∇f‖)² produces better results, since mid-range gradient magnitudes can still be interpreted as surface features. Fig. 9.8 shows a rendering that combines surface shading and shadows in such a way. Fig. 9.1 shows a volume-rendering using shadows with the light buffer initialized to simulate a spotlight. Fig. 9.2 shows a volume-rendering using only surface-based shading. Fig. 9.4 uses only shadows for illumination.
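The half-angle slicing rule and the spacing of Equation 9.4 can be sketched as follows. This is an illustrative Python fragment with a hypothetical function name, not the renderer's actual code; the view and light directions are assumed to be unit-length 3-tuples.

```python
import math

def slice_axis_and_spacing(v, l, dv):
    """Half-angle slicing: slice along the vector halfway between the
    light direction l and the view direction v (or the inverted view
    direction when dot(v, l) < 0), with spacing ds = cos(theta/2) * dv
    from Equation 9.4, theta being the angle between the two directions."""
    dot = sum(a * b for a, b in zip(v, l))
    view = v if dot >= 0 else tuple(-c for c in v)   # front-to-back vs. back-to-front
    half = tuple(a + b for a, b in zip(view, l))
    norm = math.sqrt(sum(c * c for c in half)) or 1.0
    axis = tuple(c / norm for c in half)             # normalized slice axis
    theta = math.acos(max(-1.0, min(1.0, abs(dot))))
    ds = math.cos(theta / 2.0) * dv                  # Equation 9.4
    return axis, ds
```

When the view and light directions coincide, the slice axis is the shared direction and the spacing reduces to dv; at the perpendicular worst case, the spacing shrinks by cos(45°).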


Figure 9.8 Volume renderings of the Visible Male CT (frozen) demonstrating combined surface shading and shadows.
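The combination shown in Fig. 9.8, a gradient-magnitude blend between unshaded and surface-shaded color with every sample modulated by the attenuated light, can be sketched per-sample as below. This is a schematic Python sketch with hypothetical names, not the fragment-shading implementation; grad_mag is assumed normalized to [0,1].

```python
def shade_sample(unshaded, surface_shaded, grad_mag, light_attenuation):
    """Blend surface shading in by s = 1 - (1 - |grad f|)^2, so that
    homogeneous (low-gradient) regions stay unshaded while mid-range
    gradients still read as surface features; every sample is then
    modulated by the attenuated light, which provides the shadows."""
    s = 1.0 - (1.0 - grad_mag) ** 2
    color = tuple((1.0 - s) * u + s * sh
                  for u, sh in zip(unshaded, surface_shaded))
    light = 1.0 - light_attenuation      # light arriving at this slice
    return tuple(light * c for c in color)
```

With grad_mag = 0 the sample is purely unshaded (but still shadowed); with grad_mag = 1 it is fully surface shaded.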

9.6 Discussion

Using multidimensional transfer functions heightens the importance of densely sampling the voxel data in rendering. With each new axis in the transfer function, there is another dimension along which neighboring voxels can differ. It becomes increasingly likely that the data sample points at the corners of a voxel straddle an important region of the transfer function (such as a region of high opacity) instead of falling within it. Thus, in order for the boundaries to be rendered smoothly, the distance between view-aligned sampling planes through the volume must be very small. Most of the figures in this paper were generated with sampling rates of about 3 to 6 samples per voxel. At this sample rate, frame updates can take nearly a second for a moderately sized (256 × 256 × 128) shaded and shadowed volume. For this reason, we lower the sample rate during interaction, and rerender at the higher sample rate once an action is completed. During interaction, the volume-rendered surface will appear coarser, but the surface size and location are usually readily apparent. Thus, even with lower volume-sampling rates during interaction, the rendered images are effective feedback for guiding the user in transfer-function exploration.

While the triangular classification widget is based on Levoy's iso-contour classification function, we have found it necessary to have additional degrees of freedom, such as a shear. Shearing the triangle classification along the data-value axis, so that higher values are emphasized at higher gradients, allows us to follow the center of some boundaries more accurately. This is a subtle but basic characteristic of boundaries between a material with a narrow distribution of data values and another material with a wide value distribution. This pattern can be observed in the boundary between soft tissue (narrow value distribution) and bone (wide value distribution) of the Visible Male CT, seen in Fig. 9.9. Thresholding the minimum gradient magnitude allows better feature discrimination.

While multidimensional transfer functions are quite effective for visualizing material boundaries, we have also found them to be useful for visualizing the materials themselves. For instance, if we attempt to visualize the dentin of the Human Tooth CT using a 1D transfer function, we erroneously color the background–enamel boundary, seen in Fig. 9.10a. The reason for this can be seen in Fig. 9.2a, where the range of data values that define the background–enamel boundary overlap with the dentin's data values. We can


(a) Soft tissue (b) Bone

Figure 9.9 The soft tissue–bone boundary of the Visible Male CT. It is necessary to shear the triangular classification widget to follow the center of this boundary.

(a) A 1D transfer function (b) A 2D transfer function

Figure 9.10 The dentin of the Human Tooth CT (a) shows that a 1D transfer function, simulated by assigning opacity to data values regardless of gradient magnitude, will erroneously color the background–enamel boundary. A 2D transfer function, shown in (b), can avoid assigning opacity to the range of gradient magnitudes that define this boundary. easily correct this erroneous coloring with a 2D A further benefit of dual-domain interaction transfer function that gives opacity only to is the ability to create feature-specific multidi- lower-gradient magnitudes. This can be seen in mensional transfer functions, which would be Fig. 9.10b. extremely difficult to produce by manual place- Johnson/Hansen: The Visualization Handbook Page Proof 20.5.2004 12:32pm page 200

ment of classification widgets. If a feature can be visualized in isolation with only a very small and accurately placed classification widget, the best way to place the widget is via dual-domain interaction.

Dual-domain interaction has utility beyond setting multidimensional transfer functions. It also helps answer other questions about the limits of direct volume-rendering for displaying specific features in the data. For example, the feedback in the transfer-function domain can show the user whether a certain feature of interest detected during spatial-domain interaction is well localized in the transfer-function domain. If reprojected voxels from different positions in the same feature map to widely divergent locations in the transfer-function domain, then the feature is not well localized, and it may be hard to create a transfer function that clearly visualizes it. Similarly, if probing inside two distinct features indicates that the reprojected voxels from both features map to the same location in the transfer-function domain, then it may be difficult to selectively visualize one feature or the other.

Acknowledgments

The authors would like to thank Al McPherson from the ACL at LANL for fruitful and provocative conversations about volumetric shadows. This research was funded by grants from the Department of Energy (VIEWS 0F00584), the National Science Foundation (ASC 8920219, MRI 9977218, ACR 9978099), and the National Institutes of Health National Center for Research Resources (1P41RR12553-2). We would also like to thank NVIDIA, ATI, and sgi for making their latest generations of hardware available.
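The localization check described in the dual-domain interaction discussion above can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: it assumes a 2D transfer-function domain of data value versus gradient magnitude, reprojects a set of probed voxels into that domain, and reports their per-axis spread as a fraction of the domain extent. A small spread suggests the feature can be captured by a compact classification widget; a large spread suggests it is poorly localized. The function name and normalization scheme are illustrative choices.

```python
import numpy as np

def tf_domain_spread(volume, grad_mag, voxel_idx):
    """Reproject probed voxels into a 2D (value, |gradient|)
    transfer-function domain and measure how tightly they cluster.

    volume, grad_mag: 3D arrays of data values and gradient magnitudes.
    voxel_idx: tuple of index arrays selecting the probed voxels.
    Returns the per-axis standard deviation of the reprojected
    points, normalized by the full range of each axis.
    """
    # Transfer-function coordinates of the probed voxels.
    coords = np.stack([volume[voxel_idx], grad_mag[voxel_idx]], axis=1)
    # Normalize each axis by its full data range so the two
    # dimensions are comparable.
    lo = np.array([volume.min(), grad_mag.min()])
    hi = np.array([volume.max(), grad_mag.max()])
    norm = (coords - lo) / np.maximum(hi - lo, 1e-12)
    # Spread per axis, as a fraction of the domain extent.
    return norm.std(axis=0)
```

Comparing the centroids of two probed voxel sets in the same normalized domain would similarly indicate whether two distinct spatial features can be separated by the transfer function at all.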

