
Illustrative Visualization of Anatomical Structures

Erik Jonsson

LiU-ITN-TEK-A--11/045--SE

2011-08-19

Department of Science and Technology, Linköping University, SE-601 74 Norrköping, Sweden

Illustrative Visualization of Anatomical Structures
Master's thesis in Media Technology, carried out at the Institute of Technology at Linköping University
Erik Jonsson

Examiner: Karljohan Lundin Palmerius

Norrköping 2011-08-19

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/

© Erik Jonsson

Abstract

Illustrative visualization is a term for visualization techniques inspired by traditional technical and medical illustrations. These techniques are based on knowledge of human perception and provide effective visual abstraction to make visualizations more understandable. Within this field, expressive visualizations can be achieved using non-photorealistic rendering that combines different levels of abstraction to convey the most important information to the viewer. In this thesis I look at illustrative techniques and show how these can be used to visualize anatomical structures in medical volume data. The result of the thesis is a prototype of an anatomy education application that makes use of illustrative techniques to provide a focus+context visualization with feature enhancement, tone shading and labels describing the anatomical structures. This results in an expressive visualization and interactive exploration of the human anatomy.

Acknowledgements

I would like to thank my supervisor Karl-Johan Lundin Palmerius and Lena Tibell at the Department of Science and Technology, Linköping University for their help and assistance throughout the thesis work. Thanks also to Daniel Forsberg at the Department of Biomedical Engineering, Linköping University for providing the human body data set together with the segmented data.

Contents

1 Introduction
  1.1 Motivation
  1.2 Purpose & Goal
  1.3 Limitations
  1.4 Outline

2 Background
  2.1 Anatomy Education
    2.1.1 Dissections
  2.2 Volume Rendering
    2.2.1 Volume Rendering Integral
    2.2.2 Segmented Volume Data
    2.2.3 Ray Casting
    2.2.4 GPU-based Ray Casting
    2.2.5 Transfer Functions
    2.2.6 Local Illumination
  2.3 Illustrative Visualization
    2.3.1 Medical Illustrations
    2.3.2 Visual Abstraction
    2.3.3 Cut-away Views and Ghosted Views
    2.3.4 Visibility Control
    2.3.5 Textual Annotations
  2.4 Voreen

3 Theory
  3.1 The Importance-aware Composition Scheme
  3.2 The Tone Shading Model

4 Implementation
  4.1 Illustrative Ray Casting
    4.1.1 Segmentation Classification
    4.1.2 Tone Shading
    4.1.3 Importance-aware Composition
  4.2 Labeling of Segmented Data
    4.2.1 Segment Description File
    4.2.2 Layout Algorithm
    4.2.3 Rendering
  4.3 Anatomy Application
    4.3.1 Design and User Interface
    4.3.2 Focus+Context Widget
    4.3.3 Labeling Widget

5 Conclusion
  5.1 Results
    5.1.1 Result of the Importance-aware Composition
    5.1.2 Result of the Tone Shading
    5.1.3 Result of the Anatomy Application
    5.1.4 Performance
  5.2 Discussion
    5.2.1 The Illustrative Techniques
    5.2.2 The Anatomy Application
  5.3 Future work
    5.3.1 Additional Features

List of Figures

2.1 The front and back face from the bounding box of the volume
2.2 The ray casting technique via rasterization
2.3 A transfer function represented by a 1D texture
2.4 Cut-away and ghosted illustration of a sphere
2.5 Medical illustrations by Leonardo da Vinci
2.6 The standard workspace in VoreenVE

3.1 Tone shading of a red object with blue/yellow tones

4.1 1D TF textures stored in a 2D segmentation TF texture
4.2 Tone shading parameters
4.3 Importance Measurements Parameters
4.4 Convex hull: A set of points enclosed by an elastic band
4.5 The placement of labels
4.6 The network of the anatomy application
4.7 Layout of the Labeling widget

5.1 The intensity measurement
5.2 The gradient magnitude, silhouetteness and background measurement
5.3 Focus+context visualization
5.4 Comparison of Blinn-Phong shading and Tone shading
5.5 The Anatomy Application: Selection on Pericardium
5.6 The Anatomy Application: The Digestive and Urinary System

List of Tables

5.1 Performance measurements of front-to-back composition and importance-aware composition with different settings of importance measurements (IM) and early ray termination (ERT)
5.2 Performance measurement of tone shading and Blinn-Phong shading using front-to-back composition

Chapter 1

Introduction

In this Master's thesis an illustrative volume rendering system has been developed at the division for Media and Information Technology, Department of Science and Technology at Linköping University. Illustrative techniques are used in the system to achieve an expressive visualization of anatomical structures. The thesis serves as a fulfillment of a Master of Science degree in Media Technology at Linköping University, Sweden.

1.1 Motivation

The study of medicine and biology has always relied on visualizations to learn about anatomical relationships and structures. In these studies, dissections are often used to support anatomical learning with both visual and tactile experience. However, the use of dissection is declining in schools that offer anatomical education [14]. High schools and universities increasingly use other aids such as textbooks, plastic specimens and simulators to support their anatomy education. The computerized aids offer many new possibilities, where simulators and educational software let the user explore anatomical structures in three dimensions. Often these applications use surface rendering to render pre-modeled 3D models. However, through a technique called volume rendering the structures can be rendered directly from the medical data. Volume rendering has long been considered much slower than surface rendering, but with newer GPUs it is possible to achieve interactive frame rates. With volume rendering it is possible to acquire renderings that correspond better to the real material. The density values in the medical data sets are directly mapped to RGBA values for the pixels in the rendered images. This allows for fuzzy surfaces with varying opacity, where surface and internal details can be rendered together, for example materials such as soft tissue and blood vessels.

1.2 Purpose & Goal

In this thesis an interactive volume visualization system for illustrative visualization and exploration of medical volume data is proposed. The purpose of the thesis is to develop a volume rendering application for anatomy education, which allows the user to interactively explore anatomical structures in a medical

data set. The goal of using illustrative techniques is to achieve an expressive visualization, where complex data is conveyed in an intuitive and understandable way. Otherwise, the information can quickly overwhelm the user and become harder to comprehend. The goal of the thesis is to achieve illustrative visualization of anatomical structures and to show its use in an application for anatomy education.

1.3 Limitations

The application in this thesis is based on research material and is developed as a proof-of-concept, where the potential of the methods is evaluated. This means that user satisfaction is not evaluated and no user requirements are collected; otherwise, the users' needs and opinions about such an application would have been surveyed. The potential users are medical students and other medical experts who would be interested in an application for anatomy education.

1.4 Outline

The structure of the thesis is outlined as follows.

Chapter 1: Introduction Describes the motivation, purpose, goal and limitations of the thesis.

Chapter 2: Background Presents anatomy education and how it is performed at schools. Explains the theory and background behind volume rendering and illustrative visualization.

Chapter 3: Theory Explains the theory behind the illustrative methods used in the thesis.

Chapter 4: Implementation Explains the implementation of the illustrative methods and how they have been used in the anatomy application.

Chapter 5: Conclusion Presents the result of the implementation and the performance of the methods. Discusses the result and the future work arising from the thesis.

Chapter 2

Background

2.1 Anatomy Education

In medicine and biology education, the anatomy of animals and humans is studied to learn about anatomical structures, functions and relationships. With this knowledge we can understand how our bodies work and how evolution has shaped us and other living creatures. Textbooks are often used as an aid in anatomy education, where illustrations give a better understanding of the anatomical structures. Another aid is the use of dissections, which give both visual and tactile experience to the anatomy education. Dissections can be traced back to the Renaissance [9], when they were performed on human cadavers. In modern times, dissections are often introduced in high school, where animal cadavers are studied. In veterinary and medical school, the studies are done on both animal and human cadavers. However, this has started to change and dissections are declining as an aid in anatomy education, as described by Winkelmann [14].

2.1.1 Dissections

The role of dissections as an anatomy teaching tool for medical students is described by McLachlan et al. [9] as an opportunity to study real material as opposed to textbooks and other teaching material. The dissections also give an important three-dimensional view of the anatomy, where knowledge from lectures and tutorials can be applied. Moreover, McLachlan et al. [9] mention that it increases self-directed learning and team working. However, the use of dissections also has its shortcomings, where practical problems concern ethical and moral issues, cost-effectiveness and safety. Cadavers might be dealt with improperly, the preservation is expensive and it can pose potential health risks. Other problems are more about the educational value, where the major consideration is whether dissections are the most suitable way for high school students to study anatomy, and also for those medical and veterinary students who will not work with real material in their future work. These students may only encounter anatomy through medical imaging, and then the knowledge from dissections would be hard to translate to the views produced by imaging, as described by McLachlan et al. [9].


2.2 Volume Rendering

Volume rendering is a technique to visualize three-dimensional data and has grown into a major field in scientific visualization. The volume data can be acquired from many different sources such as simulations of water, wind, clouds, fog, fire or other natural phenomena. However, the major application area for volume visualization is medical imaging, where the data is acquired from computed tomography (CT) or magnetic resonance imaging (MRI). These techniques use either x-ray beams or magnetic fields to scan and visualize bodies or objects. With modern graphics hardware, more efficient volume rendering techniques have evolved, which make it possible to achieve volume rendering at interactive frame rates. Graphics processing units (GPUs) allow for hardware-accelerated volume rendering techniques that take advantage of the parallelism of modern graphics hardware. The ray casting techniques in volume rendering benefit especially from this parallelism, where multiple rays can be processed at the same time, thus achieving real-time volume rendering. In this section I will explain the fundamental parts of volume rendering and how it can be produced efficiently by modern graphics hardware. Most of this material can be found in the book by Engel et al. [5], which presents the fundamental parts of real-time volume graphics.

2.2.1 Volume Rendering Integral

The volume rendering integral is the physical description of the volume rendering technique. The integral uses an optical model to find a solution to the light transport, where the flow of light is followed to produce the virtual imagery. In the light transport the light can interact with participating media and be emitted, absorbed and scattered. However, after a number of interactions the light transport becomes very complex and the complete solution becomes a computationally intensive task. Simplified optical models are therefore often used to achieve more efficient volume rendering. The most common models are the following.

• Absorption only (light can only be absorbed)

• Emission only (light can only be emitted)

• Emission-Absorption model (light can be absorbed and emitted)

• Single scattering and shadowing (local illumination)

• Multiple scattering (global illumination)

In the classic volume rendering integral 2.1 the emission-absorption model is used. In this model the light can be emitted and absorbed; however, it cannot be scattered as in other more complete illumination models.

I(D) = I_0 e^{−∫_{s_0}^{D} κ(t) dt} + ∫_{s_0}^{D} q(s) e^{−∫_{s}^{D} κ(t) dt} ds    (2.1)

In the volume rendering integral the light flow is followed from the background of the volume s_0, through the volume and towards the position of the eye D.


The result is the total outgoing intensity I(D). In equation 2.1 the optical properties of emission and absorption are described by the terms q(s) and κ(t), respectively. To simplify the integral, the term

τ(s_1, s_2) = ∫_{s_1}^{s_2} κ(t) dt    (2.2)

is defined as the optical depth between the positions s_1 and s_2, and the corresponding transparency is defined as

T(s_1, s_2) = e^{−∫_{s_1}^{s_2} κ(t) dt} = e^{−τ(s_1, s_2)}    (2.3)

With these definitions of optical depth and transparency the following volume rendering integral can be obtained.

I(D) = I_0 T(s_0, D) + ∫_{s_0}^{D} q(s) T(s, D) ds    (2.4)

In the first term of equation 2.4 the initial intensity I_0 from the background is attenuated through the volume, where the optical depth τ controls the transparency T of the medium. For small values of τ the medium is rather transparent and for larger values the medium is more opaque. In the second term the contribution of the emission source term q(s) is attenuated by the participating media along the remaining path through the volume to the viewer. To be able to compute the volume rendering integral 2.1 it needs to be discretized. This is commonly done by partitioning the integration domain into several parts, thus approximating the integral with a Riemann sum. The discrete volume rendering integral can then be written as

I(D) = ∑_{i=0}^{n} c_i ∏_{j=i+1}^{n} T_j    (2.5)

with c_0 = I(s_0), where the integral is approximated from the starting point s_0 to the eye point D with n intervals.
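As a brief reminder of where these terms come from (a common identification following, e.g., Engel et al. [5], rather than something stated explicitly above): with a constant sampling distance Δs along the ray, the terms of equation 2.5 can be taken as

c_i ≈ q(s_i) Δs,    T_j ≈ e^{−κ(s_j) Δs} = 1 − α_j,

where α_j is the opacity of sample j. The identification T_j = 1 − α_j is what connects the discrete integral to the composition schemes in section 2.2.3.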

2.2.2 Segmented Volume Data

The volume data consists of a 3D scalar field and is represented on a discrete uniform grid, where each of the cubic volume elements is called a voxel. In segmented volume data, each voxel is tagged as belonging to a segment. The segments can be seen as individual objects that have been separated from the volume by a process called volume segmentation. This is done before the actual volume rendering and is used to distinguish individual objects. In medical visualization this can for example be used to visualize a specific organ in a human body data set.

2.2.3 Ray Casting

Ray casting is an image-based volume rendering technique, where the volume integral is evaluated along rays through the volume data. Usually the traversal order is front-to-back, where the volume data is traversed from the eye and into the volume. For each pixel in the image to be rendered, a ray is cast into the volume and data is sampled at discrete positions along the ray. At

each sample point on the ray an interpolation is done to reconstruct the data value at the sample position. Transfer functions are then used to map the scalar data values to optical properties such as color and opacity. In the last step the samples are composited together to get the resulting pixel color. With the discrete volume rendering integral in equation 2.5 the composition schemes can be obtained, where the illumination I is represented with RGBA components, with the color as C and the opacity as α. The composition equation for front-to-back traversal is given as follows.

C'_i = C'_{i−1} + (1 − α'_{i−1}) C_i
α'_i = α'_{i−1} + (1 − α'_{i−1}) α_i    (2.6)

The new values C'_i and α'_i are calculated from the color C'_{i−1} and opacity α'_{i−1} accumulated at the previous location i − 1, and the color C_i and opacity α_i at the current location i. With these steps the color and opacity are accumulated along the ray, which results in an RGBA value for the current pixel. In a similar way the back-to-front composition scheme is obtained as follows,

C'_i = (1 − α_i) C'_{i+1} + C_i    (2.7)

where the value of C'_i is calculated from the color C_i and opacity α_i at the current location i, and the color C'_{i+1} from the previous location i + 1. In the back-to-front composition the opacity α'_i is not updated as in the front-to-back composition 2.6, since the color contribution C'_i can be determined without the accumulated opacity in a back-to-front traversal. However, a major advantage of the front-to-back traversal is that the traversal can be terminated when the accumulated opacity α'_i ∈ [0, 1] approaches one. Then the most opaque material has been evaluated along the ray and further traversal is unnecessary. This technique is called early ray termination; it is an efficient way to optimize the rendering and can easily be executed in the ray casting loop. For this reason front-to-back composition is the most commonly used composition scheme in volume rendering.
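To make this concrete, a minimal GLSL sketch of the front-to-back composition in equation 2.6 with early ray termination is shown below; the names and the 0.95 termination threshold are illustrative choices, not code from a particular implementation.

// Front-to-back compositing with early ray termination (a sketch of eq. 2.6).
uniform sampler3D volume_;        // scalar volume data
uniform sampler1D transferFunc_;  // 1D transfer function, see section 2.2.5

vec4 compositeRay(vec3 first, vec3 last, int numSteps) {
    vec4 result = vec4(0.0);                 // accumulated color C' (rgb) and opacity alpha' (a)
    vec3 delta = (last - first) / float(numSteps);
    vec3 pos = first;
    for (int i = 0; i < numSteps; ++i) {
        float s = texture(volume_, pos).r;   // tri-linearly interpolated scalar value
        vec4 c = texture(transferFunc_, s);  // classification: scalar to color and opacity
        // eq. 2.6, with the sample color weighted by its opacity:
        result.rgb += (1.0 - result.a) * c.a * c.rgb;
        result.a   += (1.0 - result.a) * c.a;
        if (result.a >= 0.95)                // early ray termination: nearly opaque
            break;
        pos += delta;
    }
    return result;
}

Note that the loop terminates slightly before full opacity, since the remaining samples can no longer contribute visibly.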

2.2.4 GPU-based Ray Casting

In GPU-based ray casting the entire volume is stored in a 3D texture. The texture is transferred to a fragment shader and the rays are cast through the volume on a per-pixel basis. In order to calculate the ray direction, different approaches can be used. The most basic solution is to compute the direction from the camera position and the screen space coordinates, but another way is to use rasterization [8]. In this technique the range of depths from where the ray enters the volume to where the ray exits the volume is computed in a ray setup prior to the ray casting. This yields the front face and the back face of the bounding box of the volume, as can be seen in figure 2.1. The front and back face coordinates can then be used to compute the ray direction as follows,

D(x, y) = T_{exit}(x, y) − T_{entry}(x, y)    (2.8)


Figure 2.1: The front face (a) and the back face (b) of the bounding box of the volume

where the coordinates can be seen as the entry and exit points of the ray traversing the volume. After the ray setup the ray casting is performed in a ray casting loop, where equation 2.8 is used to determine when the ray has reached the exit point of the volume. The ray casting technique via rasterization is illustrated in figure 2.2, where the rays (r) are traversed from the front faces (f) to the back faces (b).


Figure 2.2: The ray casting technique via rasterization

In the ray casting loop the rays are cast through the volume, where each ray is iteratively stepped through and the 3D texture is sampled using tri-linear interpolation. The sample is then used to apply the transfer function and get the color and opacity at the given sample. Finally, a composition scheme is used to blend the samples together. When the last sample has been reached, the final RGBA value of the pixel has been computed and can be returned from the fragment shader. The expensive stage in this algorithm is the actual ray casting loop, and many optimization techniques have therefore been developed to make it more efficient. Early ray termination is one technique that has already been presented in section 2.2.3, but another powerful technique is called empty

space skipping. This technique tries to avoid sampling empty space in the volume, which occurs when the visible parts of the volume do not fill up the entire bounding box. If the volume is subdivided into smaller blocks, we can determine for each block whether it is empty or not. To achieve this we can use the front and back faces of the smaller blocks, giving a much tighter bounding geometry that more closely resembles the visible parts of the volume.
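A fragment shader sketch of the ray setup in equation 2.8, using the rasterized entry and exit point textures; the uniform names are illustrative and the compositing body is elided.

// Ray setup from rasterized entry/exit points (a sketch of eq. 2.8).
uniform sampler2D entryPoints_;   // front-face texture coordinates (figure 2.1a)
uniform sampler2D exitPoints_;    // back-face texture coordinates (figure 2.1b)
uniform vec2 viewportSize_;
uniform float stepSize_;

void main() {
    vec2 p = gl_FragCoord.xy / viewportSize_;   // screen-space lookup coordinate
    vec3 entry = texture(entryPoints_, p).rgb;  // T_entry(x, y)
    vec3 exit  = texture(exitPoints_, p).rgb;   // T_exit(x, y)
    vec3 dir = exit - entry;                    // D(x, y)
    float tEnd = length(dir);                   // ray length inside the volume
    if (tEnd <= 0.0)
        discard;                                // ray misses the volume
    dir /= tEnd;
    vec4 result = vec4(0.0);
    for (float t = 0.0; t < tEnd; t += stepSize_) {
        vec3 samplePos = entry + t * dir;       // current sample position
        // ... sample, classify, shade and composite as in section 2.2.3 ...
    }
    gl_FragColor = result;
}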

2.2.5 Transfer Functions

The transfer functions are applied in the ray casting process as explained in section 2.2.3. They are used to map the scalar values in the volume to optical properties such as absorption and emission, and thereby evaluate the volume rendering integral. In medical volume data the scalar values most commonly represent material density. The transfer functions classify the data and map it to color contributions, where each scalar value between 0 and 255 corresponds to a color and opacity. The transfer functions are commonly applied with the use of lookup tables, which contain discrete samples from the transfer function and are stored in a 1D or 2D texture. An example of a transfer function stored in a 1D texture can be seen in figure 2.3.


Figure 2.3: A transfer function represented by a 1D texture

2.2.6 Local Illumination

The emission-absorption model presented in section 2.2.1 does not involve local illumination. However, the volume rendering integral in equation 2.1 can be extended to handle local illumination by adding an illumination term to the emission source term q(s):

q_{extended}(s) = q_{emission}(s) + q_{illumination}(s)    (2.9)

where q_{emission}(s) is identical to the emission source term in the emission-absorption model. The term q_{illumination}(s) describes the local reflection of light that comes directly from the light source. With this term it is possible to achieve single scattering effects using local illumination models similar to traditional methods for surface lighting. In these, the surface normal is used to calculate the light reflection. However, to use the local illumination models in volume rendering, the normal is substituted by the normalized gradient vector of the volume. To do this, the gradient is computed in the fragment shader using finite differencing schemes. These are based on Taylor expansion and can estimate the gradients by forward, backward or central differences. The most common approach in volume rendering is central differencing, as seen in equation


2.10; its approximation error is of a higher order than that of forward and backward differences, which gives a better estimate.

f'(x) = (f(x + h) − f(x − h)) / (2h)    (2.10)

With the central difference formula in equation 2.10 the three components of the gradient vector ∇f(x, y, z) are estimated and can be used in a local illumination model, for example the Blinn-Phong model. This model is the most common shading technique and computes the light reflected by an object as a combination of three terms: ambient, diffuse and specular reflection.

I_{BlinnPhong} = I_{ambient} + I_{diffuse} + I_{specular}    (2.11)

The ambient term I_{ambient} is used to compensate for the missing indirect illumination. This is achieved by modeling a constant global ambient light that prevents the shadows from being completely black. With the diffuse and specular terms the reflected incident light is modeled to create matte and shiny surfaces. The diffuse term I_{diffuse} corresponds to the light that is scattered in all directions and the specular term I_{specular} to the light that is scattered around the direction of the perfect reflection. The local illumination model in equation 2.11 can be integrated into the emission-absorption model by adding the scattered light to the emission term as explained in equation 2.9. This means that the illumination of the volume can be determined by adding the local illumination to the emission of the volume.
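As an illustration, a GLSL sketch of the central-difference gradient (equation 2.10) and the Blinn-Phong terms (equation 2.11) follows; the ambient weight and specular exponent are arbitrary example values.

// Central-difference gradient estimation (eq. 2.10); h is the voxel spacing.
vec3 gradient(sampler3D vol, vec3 p, vec3 h) {
    return vec3(
        texture(vol, p + vec3(h.x, 0.0, 0.0)).r - texture(vol, p - vec3(h.x, 0.0, 0.0)).r,
        texture(vol, p + vec3(0.0, h.y, 0.0)).r - texture(vol, p - vec3(0.0, h.y, 0.0)).r,
        texture(vol, p + vec3(0.0, 0.0, h.z)).r - texture(vol, p - vec3(0.0, 0.0, h.z)).r
    ) / (2.0 * h);
}

// Blinn-Phong shading (eq. 2.11): n is the normal, l the light direction,
// v the view direction, and color the sample color from the transfer function.
vec3 blinnPhong(vec3 n, vec3 l, vec3 v, vec3 color) {
    vec3 ambient  = 0.3 * color;                                      // constant ambient light
    vec3 diffuse  = max(dot(n, l), 0.0) * color;                      // matte reflection
    vec3 halfway  = normalize(l + v);                                 // half-way vector
    vec3 specular = pow(max(dot(n, halfway), 0.0), 32.0) * vec3(1.0); // shiny highlight
    return ambient + diffuse + specular;
}

In volume rendering the normal would then be taken as the normalized (negated) gradient, for example n = normalize(-gradient(volume_, pos, voxelSpacing)).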

2.3 Illustrative Visualization

Volume rendering is often concerned with photorealistic rendering, where the goal is to produce highly realistic images. This is important for many applica- tions, but photorealism can also prohibit the effective depiction of features of interest as described by Rautek et al. [10]. Important features may not be rec- ognizable among the other visual content. Non-photorealistic rendering (NPR) has therefore emerged to visualize features that cannot be shown using a physi- cally correct light transport. These techniques have been inspired by the artistic styles used in pen-and-ink drawings, hatching, stippling and water color paint- ings. The techniques that are inspired by technical and medical illustrations are called illustrative visualization techniques [10]. These make use of visual ab- straction to effectively convey information to the viewer, where the techniques concerns about what and how to visualize the features in order to achieve an expressive visualization.

2.3.1 Medical Illustrations

Scientific illustrations are often used for educational purposes to instruct and explain complex technical information. They can for example illustrate a surgical procedure, the anatomy of an animal or the assembly of a technical device. Medical illustrations are used extensively in the medical field to represent anatomical structures in a clear and informative way. An illustration of a heart can for example give insight into its function and relation to other organs. These illustrations are often drawn using traditional or digital techniques and used in

textbooks, advertisements, presentations and many other contexts. The illustrations can also be three-dimensional and be used as material in educational applications, instructional films or medical simulators. In the educational process the illustrations have an impact on the learning, where they provide insight by effectively conveying the information. The main goal of scientific illustrations is to convey information to the viewer, which is done by letting the viewer focus on the important parts instead of the parts that are not interesting. This approach is called visual abstraction and is commonly used in medical illustrations to emphasize important structures without removing them completely from their context. Visual abstraction and its use in medical illustrations are further explained in sections 2.3.2 and 2.3.3.

2.3.2 Visual Abstraction

Visual abstraction is an important component in illustrative visualization and is inspired by the abstraction techniques from traditional illustration. With abstraction the most important information is conveyed to the viewer, where the visual overload is reduced by letting the viewer focus on what is important. This is often done by emphasizing certain structures and suppressing others to ensure the visibility of the important structures and reduce the visual clutter. The different ways to provide abstraction can be divided into low-level and high-level abstraction techniques, as described by Rautek et al. [10], where the low-level techniques deal with how to visualize features of interest and the high-level techniques deal with what to visualize. The low-level techniques represent the artistic style of the illustration. Some examples of handcrafted techniques are silhouettes (or contours), hatching and stippling. The silhouette technique draws lines along the contours to enhance the shape depiction. Hatching and stippling are handcrafted shading techniques, which draw the illustration using only strokes or small points. Many of these techniques have been simulated in computer graphics to provide computerized stylized depiction, and the most effective is the silhouette technique, which is often used in surface and volume rendering. The high-level abstraction techniques concern the visibility in the illustration, where the most important information is uncovered to provide visibility of the more important features. Some examples of illustration techniques that are used in technical and medical illustrations are cut-away views, ghosted views and exploded views. These change the level of abstraction or the spatial arrangement of features to reveal the important information. These techniques are also called focus+context techniques and are by Viola et al. [12] referred to as smart visibility techniques.

2.3.3 Cut-away Views and Ghosted Views

In technical and medical illustrations it is often important to visualize the interiors to understand the relation between different parts. However, without the context it is hard to see the spatial relationship and put together a mental picture of how parts are related. Different techniques have therefore been developed to be able to focus on important features while still maintaining the context. In the

area of information visualization these are often called focus+context techniques and have become one key component in illustrative visualization [10]. Cut-away views and ghosted views are techniques used in traditional illustrations to apply this sort of abstraction to the data. The techniques use different approaches to reveal the most important parts in an illustration. In cut-away views the occluding parts are simply cut away to make the important parts visible, whereas ghosted views remove the occluding parts by fading them away. Both techniques operate on the occluding parts, which are either removed or faded. This results in an illustration with focus on the important parts, which are not completely removed from their context. An example of a cut-away and a ghosted view can be seen in figure 2.4.

(a) Whole sphere (b) Cut-away sphere (c) Ghosted sphere

Figure 2.4: Cut-away and ghosted illustration of a sphere

Medical illustrations have used cut-away views, ghosted views and other similar techniques for centuries, where they helped the viewers recognize what they were looking at. They were already used by Leonardo da Vinci in the beginning of the 16th century in his drawings of anatomical structures, as shown in figure 2.5. Nowadays these techniques are frequently used, since we still gain the most information from unknown data by seeing only small portions of it, as described by Krüger et al. [7].

Figure 2.5: Medical illustrations by Leonardo da Vinci (Courtesy of ’The Royal Collection © 2005, Her Majesty Queen Elizabeth II’)


2.3.4 Visibility Control

High-level visual abstraction, as explained in section 2.3.2, is one of the main components in illustrative visualization. This abstraction technique reveals the most important features by controlling the visibility in illustrations. In order to achieve something similar in illustrative visualization, the visibility needs to be controlled during volume rendering. Importance-driven volume rendering is introduced in the work by Viola et al. [11], where importance is defined as a visibility priority to determine the visibility of features in the rendering. In their work a high importance gives a high visibility priority in the rendering, which ensures the visibility of the important features. The rendering is based on segmented data, uses two rendering passes and consists of the following steps:

1. Importance values are assigned to the segmented data

2. The volume is traversed to estimate the level of sparseness

3. The final image is rendered with respect to the object importance

Objects occluding more important structures in the rendering are rendered more sparsely to reveal the important structures. With this feature enhancement it is possible to achieve both cut-away and ghosted views. Another approach is context-preserving volume rendering, introduced by Bruckner et al. [2]. This approach modulates the opacity based on volume illumination, where regions that receive little illumination are emphasized, for example the silhouettes of an object. The opacity is modulated using the shading intensity, gradient magnitude, distance to the eye and previously accumulated opacity. This makes it possible to explore the interior of a data set without the need for segmentation. The context-preserving volume rendering can be implemented in a single-pass fragment shader on the GPU, which makes this approach much more efficient than importance-driven volume rendering, which requires multiple rendering passes. However, both of these approaches depend on data parameters and give only indirect control over the focus location. A different approach, called ClearView, was introduced by Krüger et al. [7]: a context-preserving hot spot visualization technique. With this approach the user has direct control over the focus and can interactively explore the data sets. It uses a context layer and a focus layer, which are rendered separately and composed into a final image. The contents of the layers are defined by the user together with a focus point, which allows for an interactive focus+context exploration.

2.3.5 Textual Annotations

Textual annotations, labels or legends are often seen in illustrations. These are used to describe the illustration and thus make the identification of different parts easier. This creates more meaningful illustrations, which the viewer can relate to and understand better. This is often used in medical illustrations, where educational material uses textual annotations to explain anatomical structures. In anatomy education this is used to help medical students identify structures and see their relation to other structures. In the work by Bruckner et al. [3] an illustrative volume rendering system called VolumeShop was developed with the intent to provide a system for medical

illustrators. Within this system, textual annotations were implemented to match traditional illustrations and simplify the orientation in the interactive system. In this implementation an anchor point is connected with a line to a label, which is placed according to the following guidelines for all objects in the data:

• Labels shall not overlap

• Lines connecting a label and its anchor point shall not cross

• A label shall be placed as close as possible to its anchor point

Moreover, the annotations are placed along the silhouette of the object, in order not to be occluded by the object. To do this the algorithm approximates the silhouette and places the labels at the closest distance to their anchor points, but outside of the silhouette. This results in rendered textual annotations that for example describe a medical data set with a text label for each anatomical structure.

2.4 Voreen

Voreen [13] is a volume rendering engine developed by the Visualization and Computer Graphics Research Group (VisCG) at the Department of Computer Science at the University of Münster. The software is open source and built in C++. Voreen provides a framework for rapid prototyping of ray-casting-based volume visualizations, where a data-flow network concept is used to provide flexibility and reusability to the system. The network consists of processors, ports and properties. The nodes in the network are called processors, which have ports of different types (e.g. volume, data and geometry) to transfer data between them. The properties are used to control the processors and can be linked between different processor nodes. An example of a Voreen network is shown in figure 2.6.

Figure 2.6: The standard workspace in VoreenVE

The environment in figure 2.6 is called VoreenVE and is developed together with Voreen. VoreenVE provides an environment to visualize the network, where

processors are visualized as nodes and can interactively be added, removed or connected to other processors. The environment also simplifies changing user-defined parameters with interactive GUI widgets, for example sliders, color pickers and transfer function widgets.

Chapter 3

Theory

This chapter describes the theory behind the illustrative techniques used in the thesis: the importance-aware composition scheme and the tone shading model. These have been chosen to achieve both high- and low-level abstraction in the illustrative visualization of medical data.

3.1 The Importance-aware Composition Scheme

In order to have visibility control in the visualization, the method importance-aware composition [4] was chosen. This is a method closely related to the visibility control techniques presented in section 2.3.4. In the importance-aware composition method the front-to-back composition equation 2.6 is modified to also measure sample importance, which makes it possible to achieve importance-based visibility control in a single rendering pass. In the composition equation 2.6 the visibility (transparency) can be obtained as one minus the accumulated opacity (1 − α'_i), where the visibility is a value in [0, 1]. This means that the visibility of a sample i can be controlled by modulating the accumulated opacity α'_i of the previous samples. Obviously, a sample would be fully visible if all previous samples were invisible. To modulate the opacity based on importance, we thus need to control the visibility through sample importance and accumulated importance, as explained by Pinto et al. [4]. This is done with a visibility function of the sample importance I_i and accumulated importance I'_i, which computes the minimum visibility required for a sample i.

vis(I_i, I'_i) = 1 − e^{I'_i − I_i}    (3.1)

Using the visibility function in equation 3.1, the opacity and color can be modulated with a scale (modulation) factor m as follows,

m_i = 1    if I_i ≤ I'_i,
m_i = 1    if 1 − α'_i ≥ vis(I_i, I'_i),
m_i = (1 − vis(I_i, I'_i)) / α'_i    otherwise    (3.2)

which is applied for each sample i in the composition step. The scale factor m in equation 3.2 modifies the accumulated opacity and color when the sample importance is greater than the accumulated importance and the visibility from the accumulated opacity is less than the required minimum visibility. With this we can obtain an importance-aware composition scheme that is valid for opaque samples, as described by Pinto et al. [4],

C'_i = m C'_{i−1} + (1 − m α'_{i−1}) C_i
α'_i = m α'_{i−1} + (1 − m α'_{i−1}) α_i
I'_i = max(I'_{i−1}, I_i)    (3.3)

where the accumulated opacity is always one for opaque samples. However, for translucent samples the accumulated importance computation also needs to involve the sample opacity. This can be seen as a measurement of sample relevance, where a sample with zero opacity should not have any influence on the visualization, as explained by Pinto et al. [4]. This leads to equation 3.4, where the accumulated importance I'_i is computed based on the previously accumulated importance I'_{i−1}, the current sample importance I_i and the sample opacity α_i.

I'_i = max(I'_{i−1}, ln(α_i + (1 − α_i) e^{I'_{i−1} − I_i}) + I_i)    (3.4)

With this we can finally write the complete importance-aware composition scheme:

Ĉ'_i = m C'_{i−1} + (1 − m α'_{i−1}) C_i
α̂'_i = m α'_{i−1} + (1 − m α'_{i−1}) α_i
α'_i = α'_{i−1} + (1 − α'_{i−1}) α_i
C'_i = 0 if α̂'_i = 0,  otherwise C'_i = (α'_i / α̂'_i) Ĉ'_i
I'_i = max(I'_{i−1}, ln(α_i + (1 − α_i) e^{I'_{i−1} − I_i}) + I_i)    (3.5)

In equation 3.5 the opacity α'_i is used to scale up the accumulated color Ĉ'_i to ensure a desired composition of a low-opacity/high-importance sample followed by a high-opacity/low-importance sample, as described by Pinto et al. [4]. To use the importance-aware composition scheme, the sample importance needs to be measured for each sample in the composition step. Several importance measurements are presented in the work by Pinto et al. [4], but only some are used in this implementation: the measurements of intensity, gradient and silhouetteness. The implementation of these measurements together with the importance-aware composition scheme is further explained in section 4.1.3.
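As a quick consistency check of equation 3.4: for a fully opaque sample, α_i = 1, the expression reduces to

I'_i = max(I'_{i−1}, ln(1) + I_i) = max(I'_{i−1}, I_i),

which recovers the opaque-sample rule in equation 3.3, while for a fully transparent sample, α_i = 0, we get I'_i = max(I'_{i−1}, ln(e^{I'_{i−1} − I_i}) + I_i) = I'_{i−1}, so an invisible sample leaves the accumulated importance unchanged.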

3.2 The Tone Shading Model

Tone shading, presented by Gooch et al. [6], is a non-photorealistic (NPR) shading technique based on technical illustrations, where surfaces are often shaded in both hue and luminance. From observations it is known that we perceive warm tones, such as red, orange or yellow, as closer to us, and cool tones like blue, purple or green as farther away. Shadows are for example perceived in a bluish tone closer to the horizon, due to the scattering effect. Therefore it is possible to improve

22 23 Theory the depth perception by interpolating from a warm tone to a cool tone in the shading. By that a clearer picture of shapes and structures can be obtained as described by Gooch et al. [6]. Tone shading can either be used as a variant or extend the existing local illumination model. Most commonly is it used to modify the diffuse term in the Blinn-Phong model, which is described in section 2.2.6. The diffuse term of the Blinn-Phong model determines the intensity of diffuse reflected light with (n · l), where n is the surface normal and l is the light vector. This gives the full range of angles [-1, 1] between the vectors, but to avoid surfaces being lit from behind, the model uses max((n · l, 0), which restricts the range to [0, 1]. However, by doing this the shape information in the dark regions is hidden, which makes the actual shape of the object harder to perceive. Unlike this, the Tone shading model uses the full range [-1, 1] to interpolate from a cool color to a warm color as shown in the following equation,

I = ((1 + (n · l)) / 2) k_a + (1 − (1 + (n · l)) / 2) k_b    (3.6)

where l is the light vector and n is the normalized gradient of the volume. In equation 3.6 the terms k_a and k_b are derived from a linear blend between the colors k_cool and k_warm and the color of the transfer function k_t, as shown in the following equations,

k_a = k_{cool} + α k_t    (3.7)

k_b = k_{warm} + β k_t    (3.8)

where the factors α and β are parameters between 0 and 1 that control the contribution of the sample color k_t. With these equations the tone shading model can be evaluated in a fragment shader, where the shading is applied to the samples on a per-fragment basis. An example of tone shading is shown in figure 3.1.

Figure 3.1: Tone shading of a red object with blue/yellow tones

Chapter 4

Implementation

In this thesis, an application for anatomy education has been developed using the Voreen volume-rendering engine and the visualization environment VoreenVE [13]. In Voreen a module with a new set of processors has been developed to create an illustrative visualization of anatomical structures. The application uses the Qt framework for the graphical user interface, and the graphics library OpenGL and the shading language GLSL for the visualization and volume rendering implementation. The following sections describe the implementation of the chosen illustrative methods.

4.1 Illustrative Ray Casting

This section describes the ray casting process and how it was implemented to achieve an illustrative visualization. The ray caster was constructed to handle both non-segmented and pre-segmented data. The ray caster processor renders the segmented volume data by receiving entry and exit points as well as volume data and segmented volume data. The ray casting loop is performed in a single-pass fragment shader implemented in the OpenGL Shading Language (GLSL). The entry and exit points are used to utilize rasterization in the GPU ray casting process, as explained in section 2.2.4. In the fragment shader each ray is cast from the eye towards the volume, where the ray direction is computed from the entry and exit points of the volume. The ray casting loop then iteratively steps through the volume along the ray, samples the 3D volume texture using tri-linear interpolation, applies the transfer function and shading, and performs composition to achieve the final rendering. To achieve illustrative visualization of the segmented data, a couple of additional methods have been implemented: a segmentation transfer function, tone shading and importance-aware composition. These are described in the following sections.

4.1.1 Segmentation Classification

In the classification step the data is mapped to optical properties in the volume rendering integral. This is often done using a transfer function, which maps the samples to color and opacity. In this implementation the ray casting is done on the GPU and the transfer functions are stored as 1D textures. The textures are passed to the fragment shader, where RGBA samples are found through texture

24 25 Implementation lookups as described in section 2.2.5. However, with segmented volume data this becomes a bit different. In this case each segment can have its own transfer function, where different color and opacity can be applied for each segment. The implementation of the segmentation classification is based on the Seg- mentationRaycaster processor in Voreen. In this processor the volume and seg- mented volume are sent as 3D textures to the shader. Whereas the multiple transfer functions are sent as a single 2D texture, where all the segments 1D transfer functions are stored as shown in figure 4.1. segment id segment


Figure 4.1: 1D TF textures stored in a 2D segmentation TF texture

In figure 4.1, each row in the 2D texture corresponds to a segment ID. In order to find the correct 1D transfer function for a segment, the implementation determines the segment ID from the 3D texture of the segmentation volume. This ID is then used to look up the segment's 1D TF texture within the 2D segmentation TF texture, and the resulting transfer function is used in the rendering. In the implementation of the importance-aware ray caster this is used to achieve rendering of segmented volume data with a different transfer function for each segment.
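A GLSL sketch of this lookup; the uniform names, and the assumption that segment IDs are stored as normalized 8-bit values, are illustrative rather than taken from the SegmentationRaycaster source.

// Per-segment transfer function lookup (sketch of figure 4.1).
uniform sampler3D segmentation_;    // segment id per voxel
uniform sampler2D segmentationTF_;  // one 1D TF per row, row index = segment id
uniform float numSegments_;         // number of rows in the 2D TF texture

vec4 classifySegmented(vec3 pos, float intensity) {
    // assumption: ids are stored as normalized 8-bit values in the red channel
    float segId = floor(texture(segmentation_, pos).r * 255.0 + 0.5);
    float row = (segId + 0.5) / numSegments_;              // sample the row center
    return texture(segmentationTF_, vec2(intensity, row)); // the segment's 1D TF
}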

4.1.2 Tone Shading

Tone shading is implemented as a shading technique in the fragment shader to achieve illustrative shading in the visualization. The technique uses a warm and a cool color to increase the perception of depth and shape, as described in section 3.2, and is implemented similarly to the other shading techniques in Voreen, such as Blinn-Phong shading. The diffuse term of the Blinn-Phong model is replaced with the tone shading model in equation 3.6; this way the ambient and specular terms are kept as part of the model. To make the ray caster processor more flexible, the tone shading technique was added to a drop-down box containing the existing shading techniques. The parameters of the technique were also made updatable, where the blend factors and the warm and cool colors can be defined in VoreenVE through sliders and color pickers. The parameter setup for the tone shading technique can be seen in figure 4.2.


Figure 4.2: Tone shading parameters
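A GLSL sketch of the tone shading evaluation (equations 3.6-3.8); the uniform names mirror the parameters in figure 4.2 but are otherwise illustrative.

// Tone shading (eqs. 3.6-3.8), used in place of the Blinn-Phong diffuse term.
uniform vec3 kCool_;    // cool color parameter
uniform vec3 kWarm_;    // warm color parameter
uniform float alpha_;   // contribution of the sample color to k_a
uniform float beta_;    // contribution of the sample color to k_b

vec3 toneShade(vec3 n, vec3 l, vec3 kt) {
    vec3 ka = kCool_ + alpha_ * kt;     // eq. 3.7
    vec3 kb = kWarm_ + beta_ * kt;      // eq. 3.8
    float t = (1.0 + dot(n, l)) / 2.0;  // full [-1, 1] range mapped to [0, 1]
    return t * ka + (1.0 - t) * kb;     // eq. 3.6
}

The ambient and specular terms of equation 2.11 are then added on top of this result, as described above.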

4.1.3 Importance-aware Composition

The importance-aware composition method was implemented to provide visibility control in the illustrative visualization. This was achieved by replacing the traditional composition method in the GPU ray casting loop with one based on sample importance, as described in section 3.1. The composition method was implemented in the fragment shader as shown in algorithm 1.

Algorithm 1 Importance-aware Composition
  Set accumulated importance (I'_i) to zero
  for all samples do
    Compute sample importance (I_i)
    Set scale factor (m)
    Perform composition scheme and scale the result
    Accumulate importance (I'_i)
  end for
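A GLSL sketch of one step of algorithm 1, implementing equations 3.1, 3.2, 3.4 and 3.5; the names are illustrative, and the computation of the sample importance I_i is described below.

// One compositing step of the importance-aware scheme (eq. 3.5).
// accC/accA hold the rescaled color C' and the unmodulated opacity alpha';
// accI holds the accumulated importance I'. Ci/ai/Ii describe the current
// sample, with Ci an opacity-weighted (associated) color.
void compositeStep(inout vec3 accC, inout float accA, inout float accI,
                   vec3 Ci, float ai, float Ii) {
    float m = 1.0;                                  // eq. 3.2
    if (Ii > accI && accA > 0.0) {
        float vis = 1.0 - exp(accI - Ii);           // eq. 3.1
        if (1.0 - accA < vis)
            m = (1.0 - vis) / accA;                 // enforce the required visibility
    }
    vec3 Chat  = m * accC + (1.0 - m * accA) * Ci;  // modulated color
    float ahat = m * accA + (1.0 - m * accA) * ai;  // modulated opacity
    accA = accA + (1.0 - accA) * ai;                // unmodulated opacity alpha'
    accC = (ahat == 0.0) ? vec3(0.0)
                         : (accA / ahat) * Chat;    // rescaled color C'
    accI = max(accI, log(ai + (1.0 - ai) * exp(accI - Ii)) + Ii);  // eq. 3.4
}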

The sample importance can be computed with several measurements, as proposed by Pinto et al. [4]. These are calculated for each sample during rendering and are used to emphasize features in different ways based on their importance. However, these measurements do not depend on segmented data, which this thesis also covers. For this reason a measurement for segmented data has also been implemented alongside measurements proposed by Pinto et al. [4], such as the intensity, gradient and silhouetteness measurements, and measurements for suppressing structures and achieving focus+context visualizations. The importance measurement is implemented in the fragment shader to compute the sample importance value, where the measurements are combined in a weighted sum. This is done together with a global weight that scales the weighted sum, I_i = W_{global} · (W_1 I_1 + ... + W_n I_n), where a global weight of zero results in zero importance and thus a traditional composition. In the implementation every weight is passed to the shader and can be changed in VoreenVE with sliders from 0 to 1, as seen in figure 4.3. The following importance measurements have been implemented.

Intensity: The intensity measurement was implemented as described by Pinto et al. [4]. In this measurement the visibility is ensured for samples with high intensity,

I_S = W_{intensity} · intensity    (4.1)

where W_{intensity} is the corresponding weight, which can be used to control the sample importance. This was implemented using the intensity of the samples.


Figure 4.3: Importance Measurements Parameters

Gradient Magnitude: The gradient magnitude measurement ensures the visibility of the strongest boundaries with

I_S = W_{gradient} · |gradient|    (4.2)

where the magnitude of the gradient is obtained from the sample in the implementation.

Silhouetteness: The silhouetteness measurement was implemented to emphasize the silhouettes in the rendering. It measures how much a sample belongs to a silhouette using the normalized view vector V, the normalized gradient N and the gradient magnitude m_G, as described by Pinto et al. [4].

sil = m_G^p · smoothstep(s_1, s_2, 1 − |V · N|)    (4.3)

I_S = W_{sil} · sil    (4.4)

By changing the influence of the gradient magnitude (p) or the slope of the smoothstep function (s_1, s_2), the look of the silhouettes is controlled. However, this only ensures the visibility of the silhouettes, so to make them more distinguishable the sample color C_i is scaled with a factor e^{−ρ · sil}, which makes the silhouettes darker as the silhouetteness importance increases. This was implemented in the composition step by using the gradient obtained from the sample and the normalized view vector obtained as the difference between the view position and the sample position.

Segment Visibility: Another measurement was implemented to control the visibility of segments in segmented volume data. First, a 1D texture is generated in the ray caster processor that stores a visibility value in [0, 1] for each segment. This is passed to the shader, where the segment ID of the sample is used to look up the corresponding visibility value. The sample importance is then measured with the visibility value and a visibility weight, which is used to control the importance of the visible segments.

Background: The intensity, gradient and silhouetteness measurements can be used to emphasize important structures. However, to get a much clearer picture of the important structures, the unimportant structures can be de-emphasized and

suppressed. As described by Pinto et al. [4] this can be accomplished by considering the background as a layer of opaque samples that have an importance assigned to them together with an adjustable weight. This is implemented by considering the last sample in the ray traversal as the background, where the sample color is set to the background color. When a background sample is reached, the sample importance is set to the background weight. By adjusting the weight, the background is made visible through the volume, as it suppresses the less important structures.

Focus+Context: The combined weight sum is scaled with a global weight to control all the weights with the same variable, as described previously. However, we can also scale the weights using a per-ray global weight, where the weights are scaled differently for each ray. This can be used to achieve focus+context visualizations, as described by Pinto et al. [4]. In the implementation a focus is defined by a circular area, which can be interactively dragged and resized in the view. This is implemented as a global weight in the composition step, where the weight is obtained from the current ray (pixel) position P_coord, and the position P_focus and radius r_focus of the focus area, as shown in algorithm 2.

Algorithm 2 Focus+context weight
  P = P_focus · D
  r = min(D_x, D_y) · r_focus
  l = length(P_coord − P)
  Compute the weight W_focus = smoothstep(r − 1, r + 1, l)

In algorithm 2 the viewport dimension D is used together with the min() function, which makes it possible to handle a tall and narrow viewport as well as a wide and low one. By using r − 1 and r + 1 as the step bounds of the smoothstep() function, the circle is anti-aliased independently of the viewport resolution. It is also possible to achieve a softer circle by widening the step interval.
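A GLSL sketch of algorithm 2 together with the weighted measurement sum; the weights mirror the sliders in figure 4.3, the combination of the global and per-ray weights is one plausible arrangement, and all names are illustrative.

// Per-ray focus weight (algorithm 2) and the weighted importance sum.
uniform vec2 viewport_;       // viewport dimension D
uniform vec2 focusPos_;       // P_focus, normalized to [0,1]^2
uniform float focusRadius_;   // r_focus, relative to the shorter viewport side
uniform float wIntensity_, wGradient_, wSil_, wGlobal_;

float focusWeight(vec2 fragCoord) {
    vec2 p = focusPos_ * viewport_;              // focus center in pixels
    float r = min(viewport_.x, viewport_.y) * focusRadius_;
    float l = length(fragCoord - p);             // distance from this ray's pixel
    return smoothstep(r - 1.0, r + 1.0, l);      // anti-aliased circle edge
}

float computeImportance(float intensity, float gradMag, float sil, vec2 fragCoord) {
    float sum = wIntensity_ * intensity          // eq. 4.1
              + wGradient_ * gradMag             // eq. 4.2
              + wSil_ * sil;                     // eq. 4.4
    return wGlobal_ * focusWeight(fragCoord) * sum;  // per-ray global scaling
}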

4.2 Labeling of Segmented Data

Textual annotations (or labels) are implemented to add descriptions to the visualization of segmented data, as described in section 2.3.5. This allows a user to more easily identify the different segments in the visualization. The labeling implementation is based on the Labeling processor in Voreen, which can be used to add illustrative labels to visualizations of segmented data. However, this processor has been extended and modified to be more interactive and to include an information panel (or labeling widget), where the segments are presented in a hierarchical list together with an information view. The implementation of this panel is further explained in section 4.3.3. The labeling process consists of the following steps: read the segment description file, generate the labels, position the labels and render the labels to the screen. These steps are described in the following sections.


4.2.1 Segment Description File

The Labeling processor in Voreen uses an XML file to describe the segments with information about id and caption. However, this was extended with group, name and info nodes to make a tree hierarchy of labels and label groups possible, as shown in the following example file (the tag names shown here are reconstructed for illustration):

    <group>
      <name>Top Level</name>
      <info>The top level item</info>
      <group>
        <name>Node</name>
        <info>The node item</info>
        <segment>
          <id>0</id>
          <name>Leaf</name>
          <info>The leaf item</info>
        </segment>
      </group>
    </group>

Listing 4.1: Example of a segment description file

Here the Top Level item is a parent of the Node item, and the Node item is a parent of the Leaf item. This results in the following tree structure: Top Level → Node → Leaf. In this way a tree hierarchy list can be built, which is used in the labeling widget presented in section 4.3.3.

4.2.2 Layout Algorithm

The Labeling processor uses an IDRaycaster that renders an ID map to position the labels. The IDRaycaster receives the entry and exit points of the volume, the segmented volume data and the first hit points of the volume ray casting result. The resulting ID map is a color coded map, where the segmentation IDs are stored in the alpha channel together with the first hit positions in the three color channels. In the Labeling processor this is used to place the labels at the correct positions, where the ID map tells which segments are currently visible. To place the labels, the processor applies a distance transform (or distance map) to the ID map, which stores for each pixel the closest distance to the segment border. This is used to place the anchor points according to the size of the segment and the distance from the particular pixel to the segment border. The labels are then placed according to the guidelines in section 2.3.5: a label should be placed near its anchor point but outside of the object's borders, without overlapping another label or intersecting another connection line. This is done by approximating the silhouette of the object with a convex hull algorithm, which computes the convex shape of a set of points. The convex hull can be seen as an elastic band that is stretched open and released to fit the boundary of the object, as seen in figure 4.4. The convex hull is calculated in the Labeling processor, where the silhouette points of the ID map are used to give an approximation of the silhouette.


Figure 4.4: Convex hull: A set of points enclosed by an elastic band

The convex hull is then used in the placement of the labels, where each label is placed outside of the convex hull at the closest distance to its anchor point. Finally the label positions are corrected for line intersections and label overlaps, after which the labels can be rendered to the screen. The placement of labels is illustrated in figure 4.5.

Figure 4.5: The placement of labels
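The hull computation happens inside the Labeling processor; as a standalone sketch, a standard choice for 2D point sets such as the silhouette points of the ID map is Andrew's monotone chain, shown below in C++ (the function names are illustrative, not Voreen's).

    #include <algorithm>
    #include <vector>
    #include <glm/glm.hpp>

    // Cross product of (a - o) and (b - o); positive for a left turn.
    static float cross2(const glm::vec2& o, const glm::vec2& a,
                        const glm::vec2& b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    // Andrew's monotone chain: returns the hull in counter-clockwise order.
    std::vector<glm::vec2> convexHull(std::vector<glm::vec2> pts) {
        if (pts.size() < 3)
            return pts;
        std::sort(pts.begin(), pts.end(),
                  [](const glm::vec2& a, const glm::vec2& b) {
                      return a.x < b.x || (a.x == b.x && a.y < b.y);
                  });
        std::vector<glm::vec2> hull(2 * pts.size());
        size_t k = 0;
        for (size_t i = 0; i < pts.size(); ++i) {                // lower hull
            while (k >= 2 && cross2(hull[k-2], hull[k-1], pts[i]) <= 0.0f) --k;
            hull[k++] = pts[i];
        }
        for (size_t i = pts.size() - 1, t = k + 1; i > 0; --i) { // upper hull
            while (k >= t && cross2(hull[k-2], hull[k-1], pts[i-1]) <= 0.0f) --k;
            hull[k++] = pts[i-1];
        }
        hull.resize(k - 1);  // the last point repeats the first
        return hull;
    }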

4.2.3 Rendering

In the rendering step the labels, anchor points and connection lines are rendered. This is done in two rendering passes, where the first renders halos around the anchor points and connection lines, using thicker lines and points colored with a specified halo color. The second pass renders the anchor points and connection lines at normal thickness and colors them with the same color as the label text. After that, quads are rendered at the label positions and the font texture is mapped onto them. The font texture is pre-generated for each label in the XML file: the caption of the label is rendered to a bitmap using the font rendering library FreeType [1] and bound to a texture. To also be able to mark certain labels, a selection color was added to the labeling processor. This is used in the font rendering to highlight the selected labels with a different font color than the specified label color. How the label selection is implemented is further explained in section 4.3.3.
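A minimal sketch of the two-pass halo rendering could look as follows, using the immediate-mode OpenGL of the era; the line widths and the separation into a helper function are illustrative, not taken from the Voreen code.

    #include <GL/gl.h>

    // Draw one connection line with a halo: first a thicker line in the halo
    // colour, then the line itself at normal thickness in the label colour.
    void renderConnectionLine(float x0, float y0, float x1, float y1,
                              const float haloColor[4],
                              const float lineColor[4]) {
        glLineWidth(4.0f);            // pass 1: the halo
        glColor4fv(haloColor);
        glBegin(GL_LINES);
        glVertex2f(x0, y0);
        glVertex2f(x1, y1);
        glEnd();
        glLineWidth(1.5f);            // pass 2: the actual line
        glColor4fv(lineColor);
        glBegin(GL_LINES);
        glVertex2f(x0, y0);
        glVertex2f(x1, y1);
        glEnd();
    }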


4.3 Anatomy Application

Dissections, plastic models and textbooks are often used as aids in anatomy education, as explained in section 2.1. However, computerized technology offers new possibilities for how the teaching can be done, and for this purpose a prototype of an anatomy application has been implemented. Medical illustrations in textbooks provide abstraction, which is crucial for effective illustrations. The prototype is for this reason based on illustrative visualization techniques to achieve abstraction in the visualization. In the prototype a pre-segmented human body data set is used, which has been provided by the Center for Medical Image Science and Visualization (CMIV) in Linköping.

4.3.1 Design and User Interface

In the design of the anatomy application the illustrative ray casting and labeling components are used together, where a Compositor processor is used to blend the renderings. The network of the system, taken from VoreenVE, can be seen in figure 4.6. The user interface is designed to allow a user to interactively explore the anatomical structures: the illustrative rendering can be controlled and information about the segmented data can be presented. This is achieved through the implementation of a focus+context widget and a labeling widget, as described in the following sections.

4.3.2 Focus+Context Widget

The focus+context widget is implemented to interactively control the position and radius of the focus area. It is implemented as a geometry renderer in Voreen, which is used together with a geometry processor to make multiple geometry rendering processors possible, as seen in figure 4.6. In the focus+context widget a draggable and resizable 2D circle is rendered on the view plane. The circle is rendered and made clickable through the methods render() and renderPicking(), derived from the GeometryRendererBase class in Voreen. In the render() method the outer border of the circle is rendered using its color, position and radius, where the lines are anti-aliased using GL_LINE_SMOOTH. The renderPicking() method is used to render the pickable regions to an IDManager object, which color codes the pickable regions and stores them in a render target. The method performs a rendering similar to the render method, but the inner part of the circle is rendered instead of the outer border lines. This allows the user to pick the circle by clicking anywhere within it. The circle can then be dragged or resized by checking isHit() in the IDManager object to see whether the circle has been picked. The circle is dragged by saving its initial position (px, py) and the mouse coordinates (x0, y0) when the circle is hit. The circle position P is then updated according to the new mouse coordinates (x, y) until the user releases the circle. To resize the circle, the initial radius r is saved instead. The circle radius R is then updated according to the change in the y direction: for a positive change in the y direction the circle is enlarged, and vice versa. The drag and resize computations can be seen in algorithm 3.

Algorithm 3 Drag and resize circle with mouse
  if isClicked then
    ∆x = (x − x0) / Dx
    ∆y = (y0 − y) / Dy
    if dragCircle then
      // Update circle position P
      Px = px + ∆x
      Py = py + ∆y
    else if resizeCircle then
      // Update circle radius R
      offset = (∆x, ∆y)
      resizeDir = (0, 1)
      factor = 1 / (1 + offset · resizeDir)
      R = r · factor
    end if
  end if
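As plain code, the drag and resize logic could be sketched as follows (C++ with GLM; the parameter names are illustrative, not the widget's actual members).

    #include <glm/glm.hpp>

    // p0 and r0 are the circle position and radius saved when the circle was
    // hit, mouse0 the mouse position at that moment, D the viewport size.
    void updateCircle(bool dragCircle, bool resizeCircle,
                      const glm::vec2& p0, float r0,
                      const glm::vec2& mouse0, const glm::vec2& mouse,
                      const glm::vec2& D, glm::vec2& P, float& R) {
        float dx = (mouse.x - mouse0.x) / D.x;
        float dy = (mouse0.y - mouse.y) / D.y;  // screen y grows downwards
        if (dragCircle) {
            P = p0 + glm::vec2(dx, dy);         // follow the mouse
        } else if (resizeCircle) {
            glm::vec2 offset(dx, dy);
            glm::vec2 resizeDir(0.0f, 1.0f);    // only vertical motion resizes
            R = r0 / (1.0f + glm::dot(offset, resizeDir));
        }
    }

Moving the mouse downwards makes the dot product negative, which enlarges the circle, matching the behaviour described above.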

Figure 4.6: The network of the anatomy application

4.3.3 Labeling Widget

A labeling widget was created to be able to interactively change which organs are important to see in the visualization. It is added as a processor widget to the labeling processor. The widget is created as an abstraction layer in the Labeling class and is implemented in VoreenQt, the Qt GUI library of Voreen. In the Qt implementation a view is set up to hold a text label, a text area, a tree view and three buttons, as seen in figure 4.7. The tree view is implemented to organize the anatomical structures by the biological system they belong to, and to group structures that belong together.



Figure 4.7: Layout of the Labeling widget

For example, a group was created for the heart, and its different parts were included as children of the group. The different biological systems are groups of organs that work together to achieve a certain task, for example the circulatory, digestive and respiratory systems; the heart, for instance, belongs to the circulatory system. These systems were chosen for the implementation since they are often studied in human anatomy. The tree view holds a tree hierarchy, which is filled with labels and label groups from the segmentation description file described in listing 4.1. When traversing the XML file, the labels and label groups are added to the tree view according to their parent: if an item has a parent, the parent is found in the tree and the item is added as a child to it. This way the hierarchy in the XML file is translated to the tree view. For each item a checkbox is also added. The text area and text label are used to show information about the labels. They are updated when a label or label group is selected in the tree view, which is done by linking the selection in the tree view to the text area and text label. The selection is also linked to the labels in the rendering, where a selected label is highlighted with a chosen color.

The rendered labels were also made clickable through an IDManager object, similarly to the focus+context widget in section 4.3.2. A render target for picking is used to render the quads of the labels as color coded regions. This is then used to determine whether a label is hit, using the mouse coordinates and the isHit() method of the IDManager. A picked label is highlighted in the view and set as selected in the tree view.

In order to change which organ or system should be visible, a process was implemented to toggle the visibility of segments (organs). Using the segment visibility measurement described in section 4.1.3, the visibility is changed by updating the 1D texture. The visibility can be changed in several ways: by toggling the checkbox of one of the items in the tree view, by pressing the button Show all segments, or by pressing one of the buttons Show/Hide or Hide others when an item is selected. When an action is performed on a label group or label, it is propagated to the ray caster processor using property linkage. This linkage is done between two processors in VoreenVE and allows a property to be updated by another property of the same type. For example, when choosing an action on a selected label, the action and segment ID are set in the labeling processor, which automatically sets the same properties in the ray caster processor. The ray caster processor then updates the visibility texture according to the action and segment ID. If the action is set to Hide, for example, the corresponding segment ID is found in the visibility texture and set to 0, which means that the segment has no importance in the importance-aware composition and will not be rendered.
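A minimal sketch of such a visibility update, assuming the visibility values are mirrored in a CPU-side array that backs the 1D texture (the names and the texture format are assumptions, not the actual Voreen code):

    #include <GL/gl.h>
    #include <vector>

    // Toggle one segment and re-upload the 1D visibility texture. A value of
    // 0 gives the segment no importance in the importance-aware composition.
    void setSegmentVisibility(GLuint visibilityTex,
                              std::vector<float>& visibility,
                              int segmentId, bool visible) {
        visibility[segmentId] = visible ? 1.0f : 0.0f;
        glBindTexture(GL_TEXTURE_1D, visibilityTex);
        glTexSubImage1D(GL_TEXTURE_1D, 0, 0,
                        static_cast<GLsizei>(visibility.size()),
                        GL_LUMINANCE, GL_FLOAT, visibility.data());
    }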

Chapter 5

Conclusion

5.1 Results

In this section the results of the implementation are presented. The importance-aware composition result is presented first, followed by the tone shading result. Finally, the result of the anatomy application is presented, which uses the two previous components together with the labeling component. A human hand data set is used as test data for both the importance-aware composition and the tone shading component, and a pre-segmented human body data set is used for the anatomy application.

5.1.1 Result of the Importance-aware Composition

The importance-aware composition is implemented with different sample importance measurements, as explained in section 4.1.3. Measurements for intensity, gradient magnitude, silhouetteness, background and focus+context are implemented, together with a measurement for segmented data that is used in the anatomy application. To combine multiple importance measurements, a weighted sum with a global weight is used, where every measurement has its own weight that controls its contribution to the visualization. The results of the different sample importance measurements are shown in the following figures. The intensity measurement is shown in figure 5.1, where the importance weight is changed from no weight (5.1a), to a moderate (5.1b) and a high weight (5.1c). The gradient magnitude, silhouetteness and background measurements are seen in figure 5.2. In figure 5.2a the gradient magnitude is combined with the intensity measurement in a weighted sum. This increases the importance of the boundaries, which makes the shape more distinct than using the intensity measurement alone. In figure 5.2b the silhouetteness and background measurements are added to the weighted sum. The silhouetteness parameters s1 = 0.4, s2 = 3.0, p = 0.5 are used to create the emphasized contours in the image, and the background is suppressed by using a non-zero background weight. In figure 5.3 the combined weighted sum of the result in 5.2b is scaled with a per-ray global weight. With this a focus+context visualization is produced, where the focus is defined as a circular area. A wider step width, [radius/2, radius + 1], is used to achieve the soft circular area.


(a) No intensity weight (b) Moderate intensity weight (c) High intensity weight

Figure 5.1: The intensity measurement

(a) Rendered with intensity and gradient measurements (b) Rendered as in (a) but combined with a silhouetteness measurement and background measurement

Figure 5.2: The gradient magnitude, silhouetteness and background measurements

Figure 5.3: Focus+context visualization


5.1.2 Result of the Tone Shading

The result of the tone shading implementation is seen in figure 5.4, where tone shading (5.4b) is compared with traditional Blinn-Phong shading (5.4a). In the figure the tone shading is set up using an orange warm tone with factor α = 0.8 and a blue cool tone with factor β = 0.3.

(a) Blinn-Phong shading (b) Tone shading

Figure 5.4: Comparison of Blinn-Phong shading and tone shading
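For reference, one common formulation of Gooch-style tone shading [6] that is consistent with these parameters is sketched below in C++ with GLM. The thesis defines its exact shader variant in an earlier chapter, so this should be read as an assumption-laden sketch rather than the implemented code.

    #include <glm/glm.hpp>

    // Warm/cool tones are blended with the object colour using alpha and
    // beta, then mixed by the diffuse term: lit regions shade towards the
    // warm tone, shadowed regions towards the cool tone.
    glm::vec3 toneShade(const glm::vec3& kd,    // object (diffuse) colour
                        const glm::vec3& warm,  // e.g. orange, alpha = 0.8
                        const glm::vec3& cool,  // e.g. blue, beta = 0.3
                        float alpha, float beta,
                        const glm::vec3& n, const glm::vec3& l) {
        glm::vec3 kWarm = glm::clamp(warm + alpha * kd, 0.0f, 1.0f);
        glm::vec3 kCool = glm::clamp(cool + beta * kd, 0.0f, 1.0f);
        float t = (glm::dot(glm::normalize(n), glm::normalize(l)) + 1.0f) * 0.5f;
        return t * kWarm + (1.0f - t) * kCool;
    }

Because surfaces facing away from the light receive the cool tone rather than black, no information is lost in the dark regions.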

5.1.3 Result of the Anatomy Application

The implementation of the anatomy application resulted in an educational tool for anatomy. An illustrative visualization is achieved by using the importance-aware composition, tone shading and labeling implementations, which increase the expressiveness of the volume rendering. Within the application a user can explore a human body through a focus+context technique and view information about selected organs. The application interface consists of a 3D canvas view and an information panel. In the canvas view the user controls the volume visualization by rotating, zooming and panning the view. Using the focus+context widget, the user can also control the size and position of the circular focus area. The information panel holds the list of organ structures available in the human body data set and presents them in a hierarchical list based on their biological systems. Through the panel a user can hide and show specific organs or biological systems. In figure 5.5 the pericardium is selected, which contains the heart and belongs to the circulatory system. The anatomical structures that do not belong to the circulatory, digestive or respiratory systems, for example the skin, muscle and skeleton structures, have been hidden to give a clear view of the pericardium. The information panel is shown to the left in figure 5.5, where information about the pericardium is presented and its place in the tree list view is shown. In the canvas view the visualization of the human anatomy is shown, where the pericardium label is highlighted to show the current selection. Another view of the application is shown in figure 5.6, where the digestive and urinary systems are visualized. In this view the user has hidden the other systems in the data set to make only the current ones visible.


Figure 5.5: The Anatomy Application: Selection on Pericardium

5.1.4 Performance

The performance of the composition and shading methods can be seen in tables 5.1 and 5.2, where the performance is measured for two data sets with different settings. The result is rendered in a 256x256 viewport on the following system: 2.0GHz Intel Core 2 Duo T6400, 4GB RAM, ATI Mobility Radeon HD 4370.

                                 CT Human Hand       CT Human Thorax
Composition scheme               244x124x256         256x256x256
Front-to-back                    18.2 ms (55 fps)    29.4 ms (34 fps)
Front-to-back (no ERT)           27.8 ms (36 fps)    37.0 ms (27 fps)
Importance-aware (no IM's)       28.6 ms (35 fps)    38.5 ms (26 fps)
Importance-aware (all IM's)      38.5 ms (26 fps)    43.5 ms (23 fps)

Table 5.1: Performance measurements of front-to-back composition and importance-aware composition with different settings for importance measurements (IM) and early ray termination (ERT).

                                 CT Human Hand       CT Human Thorax
Shading method                   244x124x256         256x256x256
Blinn-Phong shading              66.7 ms (15 fps)    76.9 ms (13 fps)
Tone shading                     71.4 ms (14 fps)    83.3 ms (12 fps)

Table 5.2: Performance measurements of tone shading and Blinn-Phong shading using front-to-back composition.


(a) The front of the human body

(b) The back of the human body

Figure 5.6: The Anatomy Application: The Digestive and Urinary System


5.2 Discussion

Dissection simulators and anatomy education software often use surface rendering to visualize the anatomical structures. However, the aim of this thesis was to develop an educational software that would use volume rendering of real material data instead of surface rendering of pre-modeled meshes. Volume rendering is often concerned with highly realistic rendering, but in educational software it is often more important to have expressive visualizations that convey a meaning than to have a realistic look. In the visualization a user should be able to focus on the important parts and not be distracted by complex information. This is often achieved in illustrations, which inspired this thesis to use illustrative visualization techniques. The work resulted in the implementation of illustrative compositing, shading and labeling methods, which were used in the anatomy application.

5.2.1 The Illustrative Techniques

The importance-aware composition and tone shading methods were implemented in the ray casting loop of the GLSL shader. With the importance-aware composition method it was possible to achieve feature enhancement and focus+context visualizations in a single rendering pass. However, the method cannot be optimized with early ray termination, since the whole ray must be traversed to know the importance of each sample; for example, a sample at the end of the ray can have a high importance even if it is not visible to the eye. For this reason, the method has a low performance reduction compared to a ray casting method that has not been optimized with early ray termination, as seen in table 5.1. The combined importance measurements also affect the performance, as seen in table 5.1, but this depends on the computational complexity of the measurements and on how many measurements are involved in the combined weighted sum.

The tone shading technique is controlled by setting two colors and two factors for the warm and cool tones. It was not hard to find good parameters for tone shading, but compared to Blinn-Phong shading, where no parameters are needed, it was a bit more difficult to set up. On the other hand, no information was lost in the dark regions, and the depth perception was increased by the warm-to-cool tones. With Blinn-Phong shading it can be hard to distinguish the shape of the object, especially when only one light source is used. Looking at the performance of tone shading in table 5.2, the performance reduction from using this illustrative shading technique instead of the traditional one is very small.
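The early ray termination limitation discussed above can be made concrete with a minimal loop skeleton (all names are illustrative):

    // A later sample may carry high importance even when occluded, so the
    // loop must run over the whole ray before the weights are known.
    struct Sample { float intensity; float importance; };

    float composeRay(const Sample* samples, int n) {
        float weightedSum = 0.0f, weightTotal = 0.0f;
        for (int i = 0; i < n; ++i) {
            weightedSum += samples[i].importance * samples[i].intensity;
            weightTotal += samples[i].importance;
            // A front-to-back ray caster would break here once the opacity
            // saturates (early ray termination); doing so would ignore later
            // high-importance samples.
        }
        return weightTotal > 0.0f ? weightedSum / weightTotal : 0.0f;
    }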

5.2.2 The Anatomy Application

The prototype implemented in this thesis made use of illustrative volume rendering techniques to enhance the visualization in an anatomy education application. With the importance-aware composition method it was possible to enhance the important features and suppress the background elements in a focus+context visualization. However, since a segmented data set was used, the intensity and gradient measurements were not needed: the features are already identified by the segmentation, so there is no longer a need to find them in the data set.

In the application, the tone shading and labeling techniques were used to improve the expressiveness of the visualization. With the tone shading technique the shapes of the anatomical structures were made easier to perceive, and the labels increased the understanding of the human anatomy. From the labels a user can access the information panel, which gives an overview of the biological systems in the human body and presents more detailed information about selected structures. Together with the illustrative visualization this resulted in an application where a user can interactively explore the anatomical structures and learn about their relationships and functions.

5.3 Future work

In this section the future of the thesis work is discussed: which features need to be improved and which new features would be desirable.

One of the limitations of the thesis was that the anatomy application was not evaluated and no requirements were collected. In continued work on the prototype this should be prioritized, since it is important to know the users' opinions and needs. The end users of the system are medical students and teachers, so it would be good to first learn their thoughts on the prototype.

Improvements and optimizations of the current methods are necessary to increase the performance of the anatomy application. The choice of methods should also be questioned, for example the importance-aware composition method, which is able to find important structures in data sets that are not pre-segmented. This ability is not used in the anatomy application, where a segmented data set is instead used to visualize the different anatomical structures in the human body. This means that there may be better and less expensive ways to achieve similar effects of silhouettes, suppressed structures and focus+context visualization than those used in the anatomy application.

5.3.1 Additional Features

In the current system it is not possible to select multiple organs or systems. This would be a nice feature, since it would make it possible to select a couple of organs from different systems and make them visible by hiding the others. Currently the user needs to hide or show one selected organ or system at a time.

Another feature that would be nice to have is shadows cast by occlusion. This would increase the depth perception and make it easier to see the spatial relationships between structures. It might be possible to use an approximate technique, such as ambient occlusion, and still have an interactive visualization, instead of introducing a much more computationally expensive global illumination model in the volume rendering.

In the anatomy application a user can interactively learn about the anatomical structures in the human body. However, since the data is obtained from medical imaging it would be simple to introduce other data sets in the application. It would for example be possible to study the anatomy of animals, which could be useful for both veterinarians and high school students. The only requirement is that the data is pre-segmented, so the different anatomical structures can be distinguished by the application.


Another interesting feature for the anatomy application would be to develop it for a multi-touch environment. In such an application a user would be able to use his or her fingers to interact with and explore the anatomical structures on the screen. In this setting it would be interesting to see the focus+context approach, where a user would be able to drag and resize the focus lens directly on the screen.

References

[1] The FreeType Project. http://www.freetype.org/, 2011. Accessed 2011-08-04.

[2] Stefan Bruckner, Sören Grimm, Armin Kanitsar, and Meister Eduard Gröller. Illustrative context-preserving exploration of volume data. IEEE Transactions on Visualization and Computer Graphics, 12(6):1559–1569, 2006.

[3] Stefan Bruckner and Meister Eduard Gröller. VolumeShop: An interactive system for direct volume illustration. In C. T. Silva, E. Gröller, and H. Rushmeier, editors, Proceedings of IEEE Visualization 2005, pages 671–678, 2005.

[4] Francisco de Moura Pinto and Carla M. D. S. Freitas. Importance-aware composition for illustrative volume rendering. In 2010 23rd SIBGRAPI Conference on Graphics, Patterns and Images, pages 134–141, 2010.

[5] Klaus Engel, Markus Hadwiger, Joe M. Kniss, Christof Rezk-Salama, and Daniel Weiskopf. Real-time Volume Graphics. A. K. Peters, Ltd., Natick, MA, USA, 2006.

[6] Amy Gooch, Bruce Gooch, Peter Shirley, and Elaine Cohen. A non-photorealistic lighting model for automatic technical illustration. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98, pages 447–452, New York, NY, USA, 1998. ACM.

[7] Jens Krüger, Jens Schneider, and Rüdiger Westermann. ClearView: An interactive context preserving hotspot visualization technique. IEEE Transactions on Visualization and Computer Graphics, 12(5), 2006.

[8] Jens Krüger and Rüdiger Westermann. Acceleration Techniques for GPU-based Volume Rendering. In Proceedings of IEEE Visualization 2003, 2003.

[9] John C. McLachlan, John Bligh, Paul Bradley, and Judy Searle. Teaching anatomy without cadavers. Medical Education, 38:418–424, 2004.

[10] Peter Rautek, Stefan Bruckner, Eduard Gröller, and Ivan Viola. Illustrative visualization: New technology or useless tautology? SIGGRAPH Computer Graphics, 42:4:1–4:8, 2008.


[11] Ivan Viola, Armin Kanitsar, and Meister E. Gröller. Importance-driven feature enhancement in volume visualization. IEEE Transactions on Visualization and Computer Graphics, 11(4):408–418, 2005.

[12] Ivan Viola and Mario Costa Sousa. Focus of attention+context and smart visibility in visualization. In ACM SIGGRAPH 2006 Courses, SIGGRAPH ’06, New York, NY, USA, 2006. ACM.

[13] Visualization and Computer Graphics Research Group. Voreen. http://www.voreen.org/, 2011. Accessed 2011-08-04.

[14] Andreas Winkelmann. Anatomical dissection as a teaching method in medical school: A review of the evidence. Medical Education, 41:15–22, 2007.
