DUAL MARCHING SQUARES: IMPLEMENTATION

AND ANALYSIS USING VTK

by

MANU GARG

A thesis submitted to the Graduate Faculty of the

University of Colorado Colorado Springs

in partial fulfillment of the

requirements for the degree of

Master of Science

Department of Computer Science

2017

© 2017

MANU GARG

ALL RIGHTS RESERVED

As is a common practice in Computer Science, journal and conference articles are published based on the MS/Ph.D. thesis work, which means that wording, tables, figures, and sentence structure are sometimes identical in the dissertation document, and journal and conference articles co-authored by the student and their advisor.

This thesis for the Master of Science degree by

Manu Garg

has been approved for the

Department of Computer Science

by

Sudhanshu K. Semwal, Chair

T. S. Kalkur

Al. Glock

Date October 3, 2017


Garg, Manu (M.S., Computer Science)

Dual Marching Squares: Implementation and Analysis Using VTK

Thesis directed by Professor Sudhanshu K. Semwal

ABSTRACT

In the past few decades, isosurface extraction has been perhaps one of the most visited research topics in the field of scientific visualization. Since volume datasets are large and require considerable computing power to process, the issue of supporting real-time interaction has received much attention. Extracting a polygonal mesh from a scalar field identified in the volume data has been a focus since the nineteen-eighties. Many algorithms, such as Marching Cubes and Marching Squares, have been developed to extract a polygonal mesh from the scalar interpretation of the volume data. However, only a few of these techniques claim to solve all known problems arising from the concave nature of the surfaces embedded inside the volume data. Some extract meshes with too many polygons. Many such polygons, with the same orientation, could be combined.

Sharp features or small details in the underlying surface can be lost due to polygonal approximation. Other techniques suffer from topological inconsistencies, self-intersections, inter-cell dependencies, and similar issues. The recently published Dual Marching Squares (DMS) produces smoother results than the Marching Squares algorithm. To the best of our knowledge, there have been no other implementations apart from the original research paper on DMS, which was published in February 2016. In this thesis, we implement DMS using the VTK pipeline. A comparison of MS and DMS is provided.

Keywords: Isosurface Extraction, Implicit Surfaces, Scalar Field Polygonization, Meshing, Marching Cubes, Marching Squares, Dual Marching Cubes, Dual Marching Squares, Volume Visualization, Volume Rendering.


TABLE OF CONTENTS

CHAPTER

I. INTRODUCTION .......................................... 1

II. BACKGROUND ........................................... 6

III. TECHNICAL BACKGROUND ................................ 9

IV. ANALYSIS AND IMPLEMENTATION ......................... 15

V. RESULTS AND DISCUSSION ............................... 30

VI. CONCLUSION AND FUTURE WORK .......................... 34

REFERENCES .............................................. 35

APPENDICES

A. Steps to install VTK-Python on Mac ................... 38

B. Codes ................................................ 39

C. Code for getting attributes for DICOM file ........... 42


LIST OF TABLES

TABLE

5.1 Time taken to run the algorithm to generate the 3D dataset……………..32


LIST OF FIGURES

FIGURE

3.1 A marching square. The points at the corners denote the sample points. In this example, red points are outside the isoline (or isoband) and yellow points are inside. The dotted line marks the isoline, and the blue color defines the inside (Drawn using draw.io) .......... 9

3.2 The 16 configurations of the Marching Squares algorithm (Drawn using draw.io) .......... 10

3.3 The 4 unique configurations of the Marching Squares algorithm, which are necessary to reproduce all others (Drawn using draw.io) .......... 11

3.4 The 15 configurations into which the 256 variations of straddling cubes decompose in the MC algorithm (Drawn using draw.io) .......... 11

3.5 The curve achieved by Dual Marching Squares (a) compared to Marching Squares (b) (Drawn using draw.io) .......... 14

4.1 Testing shape for the algorithm .......... 15

4.2 First five quadtree levels for test data “h” .......... 17

4.3 Merging of cells to form larger leaves (green) after ADFs .......... 18

4.4 Quadtree created for test shape “h” after ADFs. Leaf cells (green), empty cells (grey), and full cells (black) .......... 19

4.5 Dual grid (black) over a primary quadtree (grey) .......... 20

4.6 Vertices (red) of the test shape “h” for Dual Marching Squares. Leaf cells (green), empty cells (grey), and full cells (black) .......... 21

4.7 Left: “h” generated using Marching Squares. Right: “h” generated using Dual Marching Squares .......... 22

4.8 DICOM file attributes .......... 25


5.1 Left: Female Head_Front, Marching Squares (duration 0:00:00.246355). Right: Female Head_Front, Dual Marching Squares (duration 0:00:00.247965) .......... 30

5.2 Left: Female Head_Side, Marching Squares (duration 0:00:00.246355). Right: Female Head_Side, Dual Marching Squares (duration 0:00:00.247965) .......... 30

5.3 Left: Female Head_Back, Marching Squares (duration 0:00:00.246355). Right: Female Head_Back, Dual Marching Squares (duration 0:00:00.247965) .......... 31

5.4 Left: Female Eye_Front, Marching Squares (duration 0:00:00.213779). Right: Female Eye_Front, Dual Marching Squares (duration 0:00:00.223355) .......... 31

5.5 Left: Female Right_Eye_WireFrame, Marching Squares (duration 0:00:00.213779). Right: Female Right_Eye_WireFrame, Dual Marching Squares (duration 0:00:00.223355) .......... 31

5.6 Left: Female_Ear_Front, Marching Squares (duration 0:00:00.244979). Right: Female_Ear_Front, Dual Marching Squares (duration 0:00:00.253785) .......... 31

5.7 Left: Female_Ear_Front, Marching Squares wireframe (duration 0:00:00.244979). Right: Female_Ear_Front, Dual Marching Squares wireframe (duration 0:00:00.253785). Durations do not include rendering time; only the mesh generation time is reported .......... 32


LIST OF ABBREVIATIONS

• CAD Computer Aided Design

• CT Computed Tomography

• DC Dual Contouring

• DICOM Digital Imaging and Communications in Medicine

• DMC Dual Marching Cube

• DMS Dual Marching Square

• MC Marching Cubes

• MRI Magnetic Resonance Imaging

• MS Marching Squares

• VTK Visualization Toolkit

• CSG Constructive Solid Geometry


CHAPTER I

INTRODUCTION

“Imagination or visualization, and in particular the use of diagrams, has a crucial part to play in scientific investigation.” - Rene Descartes, 1637.

The term visualization, as Ware [1] describes, means the construction of a visual image in the mind (Oxford English Dictionary, 1973). But it has also come to mean something more tangible: the graphical representation of data or concepts in multi-participant virtual environments. Some of the earliest visualizations can be found in China, dating to the year 1137 [2]. For example, Volume 86 of the historical text Records of the Grand Historian (Shi Ji), dated 227 BC, contains an early mention of a map.

Visualization once was a manually intensive task, involving ink and paper. Sometimes, models were created using stick-and-ball constructions in classrooms and texts. The advent of the computing era has brought an ability to process, and subsequently visualize, large amounts of data. Real-time interaction provides an ability to concentrate on interpreting and understanding the data. Some believe that human beings will always have a vital role in visualization, because ultimately human beings are the ones who can perceive the best way of visually representing volumetric data. Computational processes work as tools to aid in this.

Computational scientific visualization is a framework that enables scientists to computationally analyze, understand, and communicate the numerical data generated by scientific research. In recent years, volume data has been collected at a rate beyond what can possibly be studied and comprehended by a person. Scientific visualization uses computer graphics and Human Computer Interaction (HCI) techniques to process numerical data into two- and three-dimensional visual images. This visualization process includes gathering, processing, displaying, analyzing, and interpreting data.

Volume visualization is a set of techniques used to extract meaningful information from volumetric data using image processing and interactive graphics techniques. It helps with the representation, modeling, manipulation, and rendering of volume data. Volume datasets can be collected by sampling, simulation, or modeling techniques. For example, Computed Tomography (CT) can be used to get a sequence of 2D slices, or a Magnetic Resonance Imaging (MRI) data set can be acquired. MRI is a diagnostic tool which can produce a detailed 3D model of the inside of the human body. This technology has also been used for non-destructive inspection of composite materials and mechanical parts. Likewise, the data obtained from confocal microscopes can be visualized to study the morphology of biological structures. Volume visualization is also used in many computational fields, e.g. computational fluid dynamics (CFD), where the results of a simulation generated on a supercomputer can be used to analyze and verify aspects of fluid flow, including compressible, non-isothermal, non-Newtonian, multiphase, and porous-media flows. One of the methods used for volume rendering in CFD is ray casting, introduced by Ebert [33]. Many traditional computer graphics and geometric modeling applications, such as Computer-Aided Design (CAD), benefit from volume visualization techniques (volume graphics) for modeling, manipulation, and visualization; image-based meshing can create computer models automatically from 3D image data, an approach known as Volume CAD (V-CAD) [34]. We can now store relevant 3D physical attributes as well as shape data. V-CAD allows sharing of robust data with decisive accuracy across various simulations and flexible manufacturing methods.

Over the years, many algorithms have been developed to visualize volumetric data. Most of these techniques approximate a surface contained within the data using geometric primitives. Common methods include contour tracking [3], opaque cubes [4], marching cubes [5], marching tetrahedra [6], and dividing cubes [7].


These algorithms fit geometric primitives, such as polygons or patches, to constant-value contour surfaces in volumetric datasets. After extracting this intermediate polygonal representation, hardware-accelerated rendering can be used to display the surface primitives. These methods face several challenges. One hurdle is having hardware capable of handling the size of the dataset generated. Also, these methods require deciding, for every data sample, whether the surface passes through it or not. This can produce false positives (spurious surfaces identified as, e.g., cancer) or false negatives (erroneous holes in surfaces, or missing cancerous cells), particularly in the presence of small or poorly defined features [35]. Since the geometric information of the objects (voxels) is generally not retained, difficulties may arise when rendering discrete surfaces [35], especially those obtained from a discretized volume data set.

In response to the problems mentioned above, direct volume rendering techniques were developed that attempt to capture the entire 3D data in a single 2D image by considering the contribution of every point of the given volume data set. Volume rendering techniques convey more information than surface rendering methods, but at the cost of increased algorithm complexity and, consequently, increased rendering times.

One of the most basic volume rendering algorithms is ray casting. It uses the geometric algorithm of ray tracing: a 'ray cast' involves intersecting a ray with the objects in an environment. One ray-casting technique, provided by Buchanan and Semwal [37], renders volumes of shaded color and opacity and is known as front-to-back ray casting.

Another volume rendering technique, provided by Swann and Semwal [39], uses bi-linear interpolation and B-spline approximation to produce an image from multiple slices.

To model images as 2D scalar fields, a 2D isocontouring method is defined in [16]. 2D isocontouring is used in many image processing and analysis applications to find a boundary description of objects, structures, or phenomena, for example from intensity or range images. Given a scalar field f(x, y), an isocontour can be defined as the collection of locations in the field having a particular scalar value c. The value c is called the isovalue associated with that isocontour. One advantage of using an isocontouring method for boundary finding in intensity images is that it directly produces a gap-free description, because every location either lies on the isocontour (1) or does not (0). In contrast, edge detection seldom directly produces a gap-free boundary. Isocontouring also produces a boundary with sub-pixel precision (as opposed to methods like classic chain coding [17], which have only pixel-level precision). Its contours can also be used to estimate properties such as a region's area or perimeter.

There are multiple ways to find isocontours in a 2D scalar field. One popular way is the Marching Squares (MS) algorithm, described in more detail in Chapter 3. It is restricted to scalar data on 2D rectilinear grids and produces a piecewise-linear approximation of the isocontour. The MS algorithm is the 2D scalar-field analogue of the Marching Cubes (MC) volume visualization technique. MC has been extended in many ways since its introduction [18]. Many of its extensions (e.g., [19]) have analogues for 2D scalar fields. One MC extension that has no 2D analogue implementation is Dual Marching Cubes (DMC), proposed in 2004 [20]. DMC can produce smoother contours than MC in some cases. DMC [21] helps reduce MC's disadvantage of creating many triangles even in flat areas where they are not needed. The algorithm also fixes other MC issues, for example its inability to produce sharp edges and its tendency to miss thin features. Each of these methods has its pros and cons, and they are often used as appropriate for the task at hand.


Some of these methods extract meshes with too many polygons, even where they are not necessary; some lose sharp features or small details in the underlying surface; others suffer from topological inconsistencies (as they still use a simple lookup table), self-intersections, and inter-cell dependencies. In our opinion, a robust and standard way of handling all of these problems has still not been truly developed and agreed upon. There are, however, a few candidates with strong claims; one of them is the new 2D analogue of DMC, called Dual Marching Squares (DMS) [16]. As discussed in later chapters, the focus of this thesis is a DMS implementation using VTK libraries.

Applications of Volume Visualization

Medical Visualization

One of the primary usages of an isosurface extracting algorithm is extracting a boundary representation from medical volumetric datasets. One can argue that medical visualization has pushed the computer graphics field forward and has played a crucial role in the development of many of the algorithms known to us today. The medical data is in most cases a set of discrete samples in three-dimensional space, which produce a volumetric dataset, as in the case of Magnetic Resonance Imaging (MRI) scans. In most cases, this data is rendered straight away with the help of a volumetric renderer, which usually produces a gray-scale image of the rendered region. Sometimes a boundary representation of a layer should be constructed. This is where an isosurface extraction algorithm can create a polygonal representation of a certain isolevel of the provided discrete scalar field. This model could be used for a detailed 3D view of individual layers or for a better polygonal rendering of certain organs, tissues, or any object of interest, which could be extracted from within the data set.


CHAPTER II

BACKGROUND

“An implicit surface {p ∈ R³ : f(p) = 0} is a two-dimensional manifold, provided that f is continuous and 0 is a regular value of f (that is, the gradient is defined for all points p on the surface). This means that the surface may be triangulated.” (Bloomenthal and Wyvill (1997), p. 128 [22])

Overview

This chapter describes the underlying concepts, techniques and algorithms commonly used for the visualization of volumetric data.

Volume Data

Volumes are special cases of scalar data: regular 3D grids of scalars, typically interpreted as density values. Each data value is contained inside a cubic cell, or voxel. Typical scalar volume data is composed of a 3-D array of data and three coordinate arrays of the same dimensions. The coordinate arrays specify the x, y, and z coordinates for each data point. The units of the coordinates depend on the type of data; for example, flow data might have coordinate units of inches and data units of psi.
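As a minimal, VTK-free sketch of this layout (plain Python; the grid dimensions, unit spacing, and the analytic density used here are illustrative assumptions, not data from the thesis), a scalar volume can be stored as a flat data array plus x, y, and z coordinate arrays of matching length:

```python
# Build a small scalar volume: a data array plus x/y/z coordinate
# arrays of the same dimensions, as described above.
def make_volume(nx=4, ny=4, nz=4, spacing=1.0):
    xs, ys, zs, density = [], [], [], []
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                x, y, z = i * spacing, j * spacing, k * spacing
                xs.append(x); ys.append(y); zs.append(z)
                # Illustrative density: squared distance from the origin.
                density.append(x * x + y * y + z * z)
    return xs, ys, zs, density

xs, ys, zs, density = make_volume()
print(len(density))  # 64: one scalar per (x, y, z) sample
```

A real pipeline would hold the same information in a VTK image or structured grid object; the point here is only the one-scalar-per-coordinate-triple correspondence.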

According to Lichtenbelt et al. [13], we understand a voxel as a point in space (with an infinitesimal size). Elvins [14] describes volumetric datasets as rectilinear/curvilinear and regular/irregular: a rectilinear dataset is one where the voxels are arranged on a Cartesian grid (as opposed to curvilinear), and a regular dataset is one where the spacing between voxels is constant (as opposed to irregular).

We can also view a volumetric dataset as a stack of 2D slices (as typically happens in medical applications). In this case, even with regular datasets, the distance between slices can differ from the spacing within a slice: this characterizes an anisotropic dataset, as opposed to an isotropic one, where the sample spacing is the same in the x, y, and z directions.


Isosurface Extraction

The earliest examples date from 1970 (Keppel [23]), but this field has exploded with activity during the past two decades. Marching Cubes (MC) by Lorensen and Cline [8] is probably one of the most well-known algorithms in computer graphics and by far the most cited resource in the field according to the ACM Digital Library (as of June 2016 it had 1,658 citations). Lorensen and Cline laid the foundation for what would become one of the most prominent techniques for extracting a polygonal mesh from a function of space, i.e., a three-dimensional discrete scalar field. This led to a great number of techniques based on MC's principles of spatial partitioning. When it comes to isosurface extraction, there are a few distinct methodologies that take very different approaches to the same problem. They can be classified into three main groups:

Spatial Partitioning

All such algorithms (e.g., MC) operate on a constrained domain in 3D space. They begin by applying spatial decomposition of that domain into sub-domains (also referred to as cells, cubes, voxels, etc.), discarding sub-domains which do not contain (intersect, straddle) the isosurface. These algorithms then proceed to reproduce the original isosurface by approximating it with polygons in every surface-containing sub-domain. This means that the cells the surface does not pass through are identified as either inside or outside, leaving the cells that intersect the surface.

Surface Tracking

Surface tracking methods can be grouped into three classes based on the approach they take. "Cellular approaches" (Allgower and Gnutzmann [24]), given as input a cell straddling the surface, procedurally find other such neighboring cells, thus tracking the surface. "Delaunay-based or particle approaches", as described in Szeliski and Tonnesen [25] and Witkin and Heckbert [26], generate particles on the boundary of the implicit surface and then polygonize the particles to create the approximating mesh. In most cases, Delaunay triangulation is employed, hence the name. One such algorithm is the original Marching Triangles (MT) technique proposed by Hilton et al. [27], along with its more recent improvements such as adaptive meshing, as explained in Akkouche et al. [28]. A third approach, "3D morphing of data", is provided by Semwal and Chandrashekher [36].

Surface Fitting

Surface fitting techniques sequentially approximate an initial 'seed mesh' to the implicit surface. Such algorithms can be further classified under two main types. The first is the element-driven approaches of Desbrun et al. [28] and Crespin et al. [29], which provide a base mesh enclosing each primitive, approximate the surface, and combine all resultant meshes into one global mesh. This technique offers efficiency and robustness due to its hierarchical approach; however, it also suffers from drawbacks, of which possible inconsistent topology and wrong tessellation are only some. The second is the shrink-wrap approaches, proposed by Overveld and Wyvill [30], with a later improved version that handles arbitrary geometry. Bottino et al. [32], on the other hand, provide a global surface algorithm, unlike the element-driven ones: a base mesh is supplied surrounding the implicit surface, and this mesh then systematically converges towards the implicit surface. This procedure results in the loss of any concavity details. Further improvements on the technique add so-called critical points, established on the boundary of the implicit surface, so that such details are detected and preserved.


CHAPTER III

TECHNICAL BACKGROUND

The goal of this thesis is to investigate the field of robust polygonization of a scalar field and to create a VTK (Visualization Toolkit, C++) pipeline for isosurface extraction. Other similar algorithms worth mentioning are Marching Squares, Marching Cubes, Dual Marching Cubes (DMC) (Nielson [20]), and Dual Marching Squares (DMS) [16]. There are others which claim more or less similar results, but those mentioned above stand out from the rest. They all have their pros and cons, and none of them is perfect. This thesis implements DMS using the VTK pipeline.

Marching Square

Before proceeding, it is important that some of the main concepts of the foundational techniques are briefly described. MS is a special case of the MC algorithm, restricted to two-dimensional space. It is therefore used for the extraction of isocurves and isolines. This method can be used to give a piecewise-linear approximation to a two-dimensional object.

Figure 3.1: A marching square. The points at the corners denote the sample points. In this example, red points are outside the isoline (or isoband) and yellow points are inside. The dotted line marks the isoline, and the blue color defines the inside (Drawn using draw.io).

R² space is sampled on a regular grid f(x, y); each square is defined by four of those sampling points, as denoted in Figure 3.1. An image is usually electronically scanned, and the bitmap pixel values are then used to give function values over a grid. A shape may also be defined by a series of equations over intervals; these equations can then be used to determine the two-dimensional object they enclose, which in turn can be used to define the pixel values over a uniform grid. In a manner analogous to that of marching cubes, we assign each vertex of each square a value of either 0 or 1: one corresponding to a vertex on the boundary or in the interior of the shape with which we are concerned, and zero corresponding to all other vertices. These values can then be used to assign a value to each square that denotes the way the shape boundary passes through the square. If we allow the lower-left vertex of a square to contribute 0 or 1 to the square's value, the lower-right to contribute 0 or 2, and so on round the four vertices, then the binary numbers 0000 to 1111 can be used to uniquely identify this intersection. We store the decimal equivalent of these binary numbers. The 16 possible ways in which the boundary and square intersect are given in Figure 3.2, along with the relevant index.
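As a minimal sketch of this indexing scheme (plain Python; the corner order lower-left, lower-right, upper-right, upper-left with bit weights 1, 2, 4, 8 follows the description above, while classifying a corner as "inside" when its sample value reaches the isovalue is an assumption, since the text works with 0/1 vertex labels):

```python
def cell_index(ll, lr, ur, ul, iso):
    """Classify a square's corners against the isovalue and pack the
    four inside/outside bits into a single case index 0..15."""
    index = 0
    if ll >= iso: index |= 1   # lower-left contributes 1
    if lr >= iso: index |= 2   # lower-right contributes 2
    if ur >= iso: index |= 4   # upper-right contributes 4
    if ul >= iso: index |= 8   # upper-left contributes 8
    return index

# A square entirely inside the shape maps to case 15 (binary 1111),
# entirely outside to case 0 (binary 0000).
print(cell_index(1.0, 1.0, 1.0, 1.0, 0.5))  # 15
print(cell_index(0.0, 0.0, 0.0, 0.0, 0.5))  # 0
print(cell_index(1.0, 0.0, 0.0, 0.0, 0.5))  # 1
```

The resulting index is exactly the decimal equivalent of the binary number described above, and would be used to look up the segment configuration in a 16-entry table such as the one shown in Figure 3.2.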

The surface is then approximated by creating lines between the edges corresponding to the correct configuration in each cell which crosses the isoline. Two of those sixteen cases in Figure 3.2 are ambiguous, as they could potentially produce different line results. This is known as a face ambiguity.

Figure 3.2: The 16 configurations of the Marching Squares algorithm (Drawn using draw.io).

There are techniques which deal with the disambiguation of face ambiguities; however, staying consistent with a concrete line configuration guarantees a curve which will not be ruptured. These sixteen cases can be further simplified into only four unique cases, using symmetry and mirroring, one of which has the mentioned face ambiguity (Figure 3.3).

Figure 3.3: The 4 unique configurations of the Marching Squares algorithm, which are necessary to reproduce all others (Drawn using draw.io).

Marching Cube

In 1987, Lorensen and Cline [8] presented an algorithm that creates a triangular mesh for medical data. Known as "marching cubes" due to the way it "marches" from one cube to the next, the algorithm is considered the basic method for surface rendering in applications. They use Marching Cubes (MC15) to process computed tomography slices in scan-line order while maintaining inter-slice connectivity. A lookup table (Figure 3.4) containing the 15 possible surface intersections keeps the algorithm fast. MC15 can be applied in other areas as well, for example in the visualization of implicitly specified functions or of calculation results. They also note that the generated surface contains a large number of triangles.

Figure 3.4: The 15 configurations into which the 256 variations of straddling cubes decompose in the MC algorithm (Drawn using draw.io).


Nielson et al. [9] found that MC15 has no topological guarantees for consistency and produces surfaces containing small holes or cracks due to certain voxel-face ambiguities. Using definitions of separated and non-separated voxel vertices, they proposed a modification to MC15 that implements face tests to resolve the ambiguities. This modification still does not guarantee the correct topology either, so they propose an internal test to resolve other ambiguities. In 1995, Chernyaev [10] showed that there are 33 topologically distinct intersections, not 15; Chernyaev's algorithm is referred to as MC33. Chernyaev found ambiguous cases that are not visibly obvious and uses the suggested internal tests to resolve the remaining ambiguous voxel arrangements. Although the marching approach is elegant in 2D, it does not carry over to 3D without the ambiguity issues discussed above.

Montani et al. noted topological inconsistency, computational inefficiency, and excessive data fragmentation as disadvantages of MC15. They propose a method to minimize the number of triangular patches in the marching cubes surface lookup table, reducing the amount of data output and improving computational efficiency.

In a paper by Lewiner et al. [11], an efficient and robust implementation of Chernyaev's MC33 algorithm is described. Detailed information covering lookup tables, voxel labelling systems, and tests to resolve voxel ambiguities is also provided. Their system guarantees a consistent surface without holes or cracks. Tarini et al. [12] developed a fast and efficient version of the marching cubes algorithm, called marching intersections, to implement a volumetric visual-hull extraction technique. Using a large 3D mesh configuration, scans of the target are made, and the point at which each ray intersects the target is stored. The paper also provides a good definition of the visual hull computed from images captured with a turntable.


Dual Marching Cube

The DMC [20] algorithm bases its structure on the MC algorithm but improves it in many ways. In DMC, the dual of an octree is tessellated via the standard marching cubes method. This algorithm eliminates or reduces poorly shaped triangles and irregular or crooked specular highlights.

DMC always generates topologically manifold surfaces. Nielson unifies MC surface fragments into polygonal patches whose vertices are located on the lattice edges. Since each lattice edge is adjacent to four cells, each patch vertex is touched by four patches. The dual surface is then defined (1) by replacing each patch by a vertex and (2) by replacing each patch vertex by a quadrilateral face. In contrast to DC, this approach results in a classification of 23 cell configurations that are dual to the 23 MC configurations required for extracting topological manifolds. Each configuration may create up to 4 vertices, and the connectivity is well defined via the lattice edges. More precisely, when a lattice edge intersects the isosurface, this edge is associated with four vertices forming a quadrilateral surface fragment.

Dual Marching Square

DMS is the 2D analogue of DMC and is an elegant extension of the 2D MC algorithm. Dual Marching Squares can be considered a post-processing of the segments produced by Marching Squares. DMS appears to improve the smoothness of the curve, at least for objects with smoothly curved boundaries, as shown in Figure 3.5.

It does so by considering the dual graph of a quadtree, one of the basic data structures in 2D graphics. A quadtree is hierarchical: a 2D region is recursively divided into four quadrants, and each quadrant is either a leaf cell or is subdivided further. The quadtree contains one type of hierarchical node and three types of terminal node (leaf, empty, full).

• Root nodes contain four subtrees.

• Leaf nodes are minimum-size cells and contain the contour.

• Empty and full nodes are collapsed cells that are entirely empty or full, respectively.

In a quadtree, the root node represents the entire image. Quadtree building (pre-stage):

• Every grid cell becomes a leaf node.

• Allocate a parent node to each set of four nodes.

• Calculate the {min, max} of each parent node from its children.

The basic steps of the top-down quadtree algorithm are, starting with the entire grid:

• If the {min, max} of the grid values does not enclose the isovalue, the node is empty or full; stop.

• Else, split the grid into four sub-grids and repeat the check on each sub-grid.
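The steps above can be sketched as a recursive split (plain Python; the node representation, the minimum cell size of one grid cell, and the convention that "inside" means a sample value at or above the isovalue are illustrative assumptions):

```python
def build_quadtree(grid, x, y, size, iso):
    """Recursively split a square region of a sampled grid into
    quadrants. Returns 'empty'/'full' for collapsed cells, a leaf
    record for minimum-size cells containing the contour, or a node
    with four subtrees."""
    # A region of `size` cells spans size+1 corner samples per axis.
    values = [grid[y + j][x + i]
              for j in range(size + 1) for i in range(size + 1)]
    lo, hi = min(values), max(values)
    # {min, max} does not enclose the isovalue: uniformly below
    # (empty) or uniformly at/above (full) -- stop subdividing.
    if hi < iso:
        return 'empty'
    if lo >= iso:
        return 'full'
    if size == 1:
        return {'leaf': (x, y)}   # minimum-size cell holding the contour
    half = size // 2
    return {'children': [build_quadtree(grid, x + dx, y + dy, half, iso)
                         for dy in (0, half) for dx in (0, half)]}

# Illustrative field: distance from the grid center (2, 2) on a
# 4x4-cell grid (5x5 samples); the root straddles iso=1.5 and splits.
n = 4
grid = [[((i - 2) ** 2 + (j - 2) ** 2) ** 0.5 for i in range(n + 1)]
        for j in range(n + 1)]
tree = build_quadtree(grid, 0, 0, n, iso=1.5)
print('children' in tree)  # True
```

Merging uniform regions into single 'empty'/'full' nodes is what later lets the dual grid place fewer vertices over flat areas.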

After building the quadtree, create a dual grid and perform marching squares over that grid. This is an example where DMS works at a higher resolution than MS; we thus have finer data and hence better surface results.

Figure 3.5: The curve achieved by Dual Marching Squares (a) compared to Marching Squares (b) (Drawn using draw.io).


CHAPTER IV

ANALYSIS AND IMPLEMENTATION

This chapter explains the backbone of the technique and lays the foundation of the implementation of the DMS algorithm. The algorithm is implemented using the Visualization Toolkit (VTK).

Dual Marching Square Analysis

DMS is the 2D analogue of Dual Marching Cubes. Its contour is the dual of the contour produced by Marching Squares.

For the DMS analysis, the first step is to generate an image using Constructive Solid Geometry (CSG). The "h" shape is generated using CSG operators.

Figure 4.1: Testing shape for the algorithm.
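As a VTK-free sketch of the CSG idea (plain Python implicit functions, where a shape returns a negative value inside and positive outside, so union is a pointwise min); the particular rectangles and disc assembled into an "h"-like shape here are illustrative guesses, not the thesis's exact operators:

```python
# CSG on 2-D implicit shapes: union = min of signed inside-ness
# values (negative inside, positive outside).
def rect(x0, y0, x1, y1):
    return lambda x, y: max(x0 - x, x - x1, y0 - y, y - y1)

def disc(cx, cy, r):
    return lambda x, y: ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - r

def union(*shapes):
    return lambda x, y: min(s(x, y) for s in shapes)

# A rough "h": a tall left stem joined to a round shoulder and a
# shorter right stem.
h_shape = union(rect(0, 0, 1, 6),   # left vertical stem
                disc(2, 3, 2),      # round shoulder
                rect(3, 0, 4, 3))   # right vertical stem

print(h_shape(0.5, 1.0) < 0)    # True: inside the left stem
print(h_shape(10.0, 10.0) < 0)  # False: far outside
```

Sampling such an implicit function on a grid yields exactly the kind of scalar field the quadtree and marching steps below operate on; intersection and difference would analogously use max() and max(f, -g).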

The test data "h" consists of boundaries comprised largely of smooth curves. The use of test data helps in analyzing the performance of the algorithms.

The first part of DMS is the generation of a quadtree. The quadtree is responsible for producing fewer primitives in flat areas and more in curved ones. The recursive quadtree levels generated are shown below.



Figure 4.2: First five quadtree levels for test data “h”.

After the quadtrees are generated, cells are merged whenever a cell's distance field is entirely determined by a single line, using the concept of Adaptively Sampled Distance Fields [41]. Using this concept, larger leaf cells are created along a single flat edge, as shown in Figures 4.3 and 4.4.
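The merge criterion can be sketched as follows: a parent cell may replace its children when a single linear (one-line) distance function reproduces every sampled value within a tolerance. This is a simplified stand-in for the ADF test of [41]; the sampling pattern and tolerance are assumptions.

```python
# Simplified ADF-style merge test: fit a plane d(x, y) = a*x + b*y + c to
# three of a cell's distance samples, then check that it predicts every
# other sample within a tolerance. If so, the cell's field is "flat"
# (its zero set is a single line) and child cells may be merged.

def fits_single_line(samples, tol=1e-6):
    """samples: list of ((x, y), d) distance samples over the cell."""
    (x0, y0), d0 = samples[0]
    (x1, y1), d1 = samples[1]
    (x2, y2), d2 = samples[-1]
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    if abs(det) < 1e-12:
        return False                   # degenerate sample placement
    a = ((d1 - d0) * (y2 - y0) - (d2 - d0) * (y1 - y0)) / det
    b = ((x1 - x0) * (d2 - d0) - (x2 - x0) * (d1 - d0)) / det
    c = d0 - a * x0 - b * y0
    return all(abs(a * x + b * y + c - d) <= tol for (x, y), d in samples)

# Signed distances to the vertical line x = 0.5 on a 3x3 grid: flat field.
flat = [((x * 0.5, y * 0.5), x * 0.5 - 0.5) for x in range(3) for y in range(3)]
# A field that bends around a corner is not reproduced by one line.
curved = [((x * 0.5, y * 0.5), min(x, y) * 0.5) for x in range(3) for y in range(3)]
```

Cells passing the test collapse into the larger green leaves of Figures 4.3 and 4.4.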


Figure 4.3: Merging of cells to form larger leaf (green color) after ADFs.


Figure 4.4: Quadtree created for test shape “h” after ADFs. Leaf cells (Green Color), Empty cells (Grey Color), and Full cells (Black Color).

The next step is to derive the dual grid from the quadtree, where each dual-grid vertex is placed at the center of its quadtree square. This grid is topologically dual to the quadtree.

Figure 4.5: Dual grid (Black color) over a primary quadtree (Grey color).
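The vertex placement just described amounts to emitting one dual-grid vertex at the center of every leaf square. A tiny sketch, with a hypothetical (x, y, size) leaf representation:

```python
# Place one dual-grid vertex at the center of each quadtree leaf square.
# Leaves are (x, y, size) squares in grid coordinates.

def dual_vertices(leaves):
    return [(x + size / 2.0, y + size / 2.0) for x, y, size in leaves]

# An adaptive tree: one large flat leaf and two small leaves near a feature.
leaves = [(0, 0, 2), (2, 0, 1), (2, 1, 1)]
print(dual_vertices(leaves))   # → [(1.0, 1.0), (2.5, 0.5), (2.5, 1.5)]
```

Feature isolation, described next, then moves these centers toward sharp features where the data warrants it.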

The vertices are generated by the process defined in DMC [21]. According to DMC, the vertices of the dual grid from which the surface will be extracted are generated using feature isolation. For each vertex of the quadtree, a dual-grid cell is created whose vertices are the feature vertices inside each square of the quadtree. This technique helps to preserve sharp features; e.g., a vertex will be placed on the corner for a cell that contains a corner. Figure 4.5 shows the topology of the example quadtree where each vertex is placed in the center of a square. For the test data “h”, the vertices generated for each green square are shown in Figure 4.6.

The feature extraction is based on the local information of the field and its gradients, as given by Kobbelt [42]. This is achieved by finding the position and normal for all the points along the cell's sides that intersect the isocontour, then performing a least-squares fit to find the feature position (vertex).
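In 2D this least-squares fit reduces to solving a 2x2 normal-equation system built from the intersection points and their normals. The sketch below illustrates the idea in the spirit of Kobbelt et al. [42]; the sample points, normals, and fallback behavior are assumptions, not the thesis implementation.

```python
# 2D feature-vertex sketch: given intersection points p_i on the cell
# boundary and unit normals n_i, find the point x minimizing
# sum_i (n_i . (x - p_i))^2 via the 2x2 normal equations A x = b,
# with A = sum n n^T and b = sum (n . p) n.

def feature_vertex(points, normals):
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (nx, ny) in zip(points, normals):
        d = nx * px + ny * py          # n . p
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1  += d * nx;  b2  += d * ny
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:               # nearly parallel normals: no sharp feature
        return None                    # a caller could fall back to the cell center
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a12 * b1) / det
    return (x, y)

# Two edges meeting at a right-angle corner at (0.25, 0.75):
pts = [(0.25, 0.0), (0.0, 0.75)]       # intersection points on the cell sides
nrm = [(1.0, 0.0), (0.0, 1.0)]         # unit normals of the two boundary lines
print(feature_vertex(pts, nrm))        # → (0.25, 0.75), the recovered corner
```

When the normals disagree (a corner), the minimizer snaps the vertex onto the corner; when they agree (a flat edge), the system is rank-deficient and the cell has no sharp feature.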

Figure 4.6: Vertices (Red Color) of the test shape “h” for Dual Marching Square. Leaf cells (Green Color), Empty cells (Grey Color), and Full cells (Black Color).

Once the vertices are generated, the next step is to join them together to form the contour. This can be done by marching using three functions similar to the ones defined in DMC [21].

1. faceProc: It is called on one cell. It returns nothing if the cell is a leaf. For an internal cell, it calls itself on each subtree, calls hEdgeProc on each horizontal pair of cells, and calls vEdgeProc on each vertical pair of cells.

2. vEdgeProc: It is called on a vertical pair of cells. If both cells are leaves, it creates the contour between the two cells; otherwise it calls itself again.

3. hEdgeProc: It is the same as vEdgeProc, but works on a horizontal pair of cells.
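The three marching functions can be sketched as a recursive traversal over a tuple-encoded quadtree. This is an illustration of the control flow only; the node layout and segment bookkeeping are hypothetical, and the sign tests of the real DMC/DMS procedures are omitted.

```python
# Sketch of the recursive traversal described above, on a quadtree stored
# as nested tuples: ("leaf", vertex) for a terminal cell carrying its
# dual-grid vertex, or ("node", nw, ne, sw, se) for an internal cell.
# "Creating the contour" is reduced to recording the vertex pair.

segments = []

def face_proc(cell):
    if cell[0] == "leaf":
        return                               # nothing to do on a single leaf
    _, nw, ne, sw, se = cell
    for child in (nw, ne, sw, se):           # recurse into each subtree
        face_proc(child)
    h_edge_proc(nw, ne); h_edge_proc(sw, se) # horizontal neighbor pairs
    v_edge_proc(nw, sw); v_edge_proc(ne, se) # vertical neighbor pairs

def v_edge_proc(top, bottom):
    if top[0] == "leaf" and bottom[0] == "leaf":
        segments.append((top[1], bottom[1])) # contour between two leaves
    else:
        # descend along the shared edge, splitting only non-leaf sides
        t_sw, t_se = (top[3], top[4]) if top[0] == "node" else (top, top)
        b_nw, b_ne = (bottom[1], bottom[2]) if bottom[0] == "node" else (bottom, bottom)
        v_edge_proc(t_sw, b_nw); v_edge_proc(t_se, b_ne)

def h_edge_proc(left, right):
    if left[0] == "leaf" and right[0] == "leaf":
        segments.append((left[1], right[1]))
    else:
        l_ne, l_se = (left[2], left[4]) if left[0] == "node" else (left, left)
        r_nw, r_sw = (right[1], right[3]) if right[0] == "node" else (right, right)
        h_edge_proc(l_ne, r_nw); h_edge_proc(l_se, r_sw)

# A root with four leaf children yields the four neighbor-pair segments:
tree = ("node", ("leaf", 0), ("leaf", 1), ("leaf", 2), ("leaf", 3))
face_proc(tree)
```

Because the recursion only ever descends along shared edges, cells of different depths are paired correctly, which is what lets the dual contour cross resolution boundaries without cracks.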

Here are the results generated by MS and DMS.

Figure 4.7: Left: “h” generated using Marching Square. Right: “h” generated using Dual Marching Square.

The test data shows that the contour generated by DMS is faithful to the original shape, and that the point density along the contour is directly proportional to the curvature.

Using this analysis, DMS is implemented using VTK on each slice, and the 3D isosurface is constructed by connecting points on an isoline with the closest points on the isolines from the previous and next slices.
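The slice-stitching step can be sketched as nearest-point matching between consecutive isolines; the point lists below are hypothetical stand-ins for extracted contours.

```python
# Join each point on one slice's isoline to its nearest point on the
# next slice's isoline, the correspondence used to assemble the 3D
# surface from stacked 2D contours.

def nearest(p, pts):
    return min(range(len(pts)),
               key=lambda i: (pts[i][0] - p[0]) ** 2 + (pts[i][1] - p[1]) ** 2)

def stitch(slice_a, slice_b):
    """Return (i, j) index pairs linking slice_a points to slice_b points."""
    return [(i, nearest(p, slice_b)) for i, p in enumerate(slice_a)]

a = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]   # isoline on slice k
b = [(0.1, 0.0), (1.0, 0.1), (0.9, 1.1)]   # isoline on slice k+1
print(stitch(a, b))                        # → [(0, 0), (1, 1), (2, 2)]
```

This works because the dataset is dense: adjacent slices are close enough that nearest-point pairing does not cross over between contour branches.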

Dual Marching Square Algorithm

DMS is the 2D analogue of DMC. The DMS algorithm is implemented using VTK.

Algorithm:

Step 1: Start.


Step 2: Read the DICOM file.

Step 3: Extract the region of interest.

Step 4: Generate Quad tree and dualgrid.

Step 5: Provide geometry to the grid generated.

Step 6: Generate Isosurface.

Step 7: Take the isosurface data and create geometry.

Step 8: Create Renderer.

Step 9: Create a window for the renderer of size 250x250.

Step 10: Set a user interface interactor for the render window.

Step 11: Start the initialization and rendering.

Step 12: End.

VTK

The Visualization Toolkit is open-source software. It is free and is used for 3D graphics, image processing, and visualization. It consists of C++ libraries and has several filter classes for manipulating various data representations.

VTK includes reader classes that import various data file formats. VTK can be accessed through scripting languages like Tcl or Python. To implement this algorithm, a VTK pipeline is built.

VTK pipeline for this algorithm:

vtkDICOMImageReader
  |
  +-- vtkExtractVOI
  |
  +-- vtkHyperTreeGridSource
  |
  +-- vtkHyperTreeGridGeometry
  |
  +-- vtkDiscreteMarchingSquares

vtkDICOMImageReader class

The vtkDICOMImageReader class is used to read data from a file that already contains an image in .dcm format with regular ordering. DICOM (Digital Imaging and Communications in Medicine) is a medical image file format widely used to exchange data provided by various modalities. The data used here is from the Visible Human Project. It contains a digital image dataset of a complete human female cadaver in MRI mode.

DICOM

DICOM stands for “Digital Imaging and Communications in Medicine”. It was developed jointly by the National Electrical Manufacturers Association (NEMA) and the American College of Radiology (ACR) to permit interoperability between imaging equipment as well as with other devices. The standard defines the image format. It also specifies the network protocols required for the exchange of medical image information generated by the many healthcare-related imaging “modalities”, such as magnetic resonance, nuclear medicine, computed tomography, and ultrasound.

All DICOM files are collections of serialized DICOM objects (known as IODs, or “Information Object Definitions”). A DICOM file consists of the data belonging to these IODs, stored in the form of elements (or tags).
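At the file level, a DICOM Part 10 file begins with a 128-byte preamble followed by the 4-byte prefix "DICM", after which the tagged data elements follow. A minimal check of that layout (not a parser) could look like:

```python
# Check the DICOM Part 10 preamble: 128 filler bytes, then b"DICM".
# This only verifies the magic value; real tag parsing is left to
# toolkits such as PixelMed, used in Appendix C.

def looks_like_dicom(data: bytes) -> bool:
    return len(data) >= 132 and data[128:132] == b"DICM"

# A synthetic header for illustration (a real file would come from disk):
fake = bytes(128) + b"DICM" + b"\x02\x00\x00\x00UL\x04\x00"
print(looks_like_dicom(fake))              # → True
print(looks_like_dicom(b"not a dicom"))    # → False
```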

For the analysis of the DCM image, a Java program was written using the PixelMed Java toolkit [40]. PixelMed is a stand-alone DICOM toolkit. It implements code for:

• Reading and creating DICOM data.

• DICOM network and file support.

• A database of DICOM objects.

• Support for display of directories, images, reports and spectra, and DICOM object validation.

The attributes obtained by running this code on one of the DICOM files (“vhf.1501.dcm”) are:


Figure 4.8: DICOM file attributes.

To successfully read the directory, the following functions are used:

• void SetDirectoryName(const char *dn) This function specifies the directory path to be read by the class.

• void Update() This function brings the algorithm's outputs up to date.

• vtkImageData *GetOutput() This function gets the output data object for a port on this algorithm. The output object is of type vtkImageData, the data type produced by vtkDICOMImageReader. The output of this class is set as the input of another object, vtkExtractVOI.

vtkExtractVOI class

As the female cadaver dataset is large, some sort of focus is needed. Working with the whole dataset might lead the program to read in a lot of unnecessary data, or it might not be able to hold the structures needed for the data in the computer's memory. vtkExtractVOI is a filter that selects a portion of an input structured-points dataset, or subsamples an input dataset, in addition to resampling the data from the image file. The selected portion of interest is called the Volume Of Interest, or VOI. The output of this filter is a structured-points dataset.
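The effect of subsampling on data volume can be illustrated with a quick size computation. The extent below matches the VOI used in the appendix code; the helper function itself is a hypothetical illustration, not a VTK call.

```python
# Point count of an extracted VOI before and after subsampling. With a
# sample rate of (2, 2, 2), every other point is kept along each axis,
# so the output has roughly 1/8th as many points.

def voi_points(voi, rate):
    x0, x1, y0, y1, z0, z1 = voi
    nx = (x1 - x0) // rate[0] + 1
    ny = (y1 - y0) // rate[1] + 1
    nz = (z1 - z0) // rate[2] + 1
    return nx * ny * nz

voi = (150, 250, 230, 300, 80, 180)        # x-y-z (min, max) pairs
full = voi_points(voi, (1, 1, 1))          # 101 * 71 * 101 = 724271 points
sub = voi_points(voi, (2, 2, 2))           # 51 * 36 * 51 = 93636 points
print(full, sub, full / sub)               # roughly an 8x reduction
```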

To resample the data and select a volume of interest, the following functions are used:

• void SetVOI(int, int, int, int, int, int) This function specifies the region of interest that needs to be extracted from the directory. The parameters are the x-y-z (min, max) pairs to extract.

• void SetSampleRate(int, int, int) This function allows resampling of the input data. It sets the sampling rate in the x, y, and z directions. According to VTK, the rate must be 1 or greater. According to the VTK documentation, if the sample-rate parameters are 2, 2, 2, every other point is selected, resulting in a volume 1/8th the original size.

• void SetInputConnection(vtkAlgorithmOutput *) vtkExtractVOI receives as input an object of type vtkImageData, the output of vtkDICOMImageReader.

vtkHyperTreeGridSource class

This class is used to generate the quadtree and dual grid. The functions used to generate them are:

• void SetDimension(int) Sets the dimensionality of the tree.

• void SetGridSize(unsigned int, unsigned int, unsigned int) Sets the grid size.

• void SetValue(isovalue) This function sets the isovalue for the quadtree.

• void SetInputConnection(vtkAlgorithmOutput *) The output of vtkExtractVOI is passed as a parameter to this function.

• void SetOrigin(0, 0, 0) This function sets the origin.

vtkHyperTreeGridGeometry class

This class provides the geometry for the grid generated.

vtkDiscreteMarchingSquares class

It is a filter that takes a structured point set as input and generates a model as output. Below are the marching squares functions used:

• void SetInputConnection(vtkAlgorithmOutput *) The output from vtkHyperTreeGridGeometry is passed as a parameter to this function. The superclass of this filter is vtkStructuredPointsToPolyData.

• void SetValue(int i, double value) This function sets the value of the ith contour. The index i ranges over 0 <= i < the number of contours.

• void ComputeNormalsOn() This function is used to improve the model presentation, as it takes normals into account. The VTK documentation states that the computation of normals is expensive, so it can be disabled with ComputeNormalsOff() for better response time.

vtkPolyDataMapper class

This class is found at the end of the visualization pipeline. It receives a vtkPolyData object and converts it to geometric primitives for computer graphics. The functions used are below:

• void SetInputConnection(vtkAlgorithmOutput *) The output from vtkMarchingSquares is the input for this function. This function terminates the VTK visualization pipeline.

• void ScalarVisibilityOff() This function prevents the scalars from affecting the appearance (colors) of the isosurface. The opposite of this function is ScalarVisibilityOn(), which permits the scalar values to affect the color and appearance of the visualization.

vtkLODActor class

This class serves as a temporary link for objects that are to be rendered. vtkLODActor estimates the load on the renderer and, on that basis, decides how detailed an object to pass to the renderer. The following vtkLODActor functions are used:

• void SetMapper(vtkMapper *) This function sets the vtkPolyDataMapper as the mapper of this actor.

• void SetNumberOfCloudPoints(int) This function sets the number of random cloud points used to show a lower level of detail when the complete geometry is not visible.

• GetProperty()->SetOpacity(float) This call modifies the opacity of the actor's vtkProperty. The floating-point value ranges from 0 to 1, with 0 making the object completely transparent and 1 making it opaque.

vtkRenderer class

This class implements the rendering process. To display the result of the rendering process, the object (vtkLODActor) is supplied to vtkRenderWindow. The vtkRenderer sets the background color, adds actor objects, and resets the camera settings. To configure the rendering object, the following functions are used:

• void SetBackground(float, float, float) This function is defined in vtkViewport, the superclass of vtkRenderer. It allows the user to set the background color by defining the red, green, and blue values.

• void ResetCameraClippingRange() This function alters the clipping range of the actors added to the renderer.

• void AddActor(vtkProp *p) This function is used to add an actor to the renderer.

vtkRenderWindow class

This class displays the renderer in the window. The following functions are used for vtkRenderWindow configuration:

• void AddRenderer(vtkRenderer *) This function adds the renderer to the window, allowing the actors added to the renderer to be displayed in the rendering window.

• void PolygonSmoothingOn() This function is used to make polygons smoother.

• void Render() This function pushes updated objects from the pipeline into the rendering window. It is needed to update the changes.

The code for Marching Square and Dual Marching Square is in Appendix B.


CHAPTER V

RESULTS AND DISCUSSION

This section demonstrates the results obtained by implementing the Marching Square and Dual Marching Square algorithms. Here, the 3D isosurface is constructed by assembling the set of 2D contours located on a set of parallel slices. The Visible Human dataset is dense, i.e., the slice planes are close to each other and do not exhibit too-sharp variations. So, the 3D isosurface is constructed by connecting points on an isoline with the closest points on the isolines from the previous and next slices.

Following are the results obtained by implementing Marching Square and Dual Marching Square.

Figure 5.1: Left Female head_Front Marching Square (duration 0:00:00.246355) Right Female head_Front Dual Marching Square (duration 0:00:00.247965)

Figure 5.2: Left Female Head_side Marching Square (duration 0:00:00.246355) Right Female Head_side Dual Marching Square (duration 0:00:00.247965)


Figure 5.3: Left Female Head_back Marching Square (duration 0:00:00.246355), Right Female Head_back Dual Marching Square (duration 0:00:00.247965)

Figure 5.4: Left Female Eye_Front Marching Square (duration 0:00:00.213779), Right Female Eye_Front Dual Marching Square (duration 0:00:00.223355)

Figure 5.5: Left Female Right_Eye_WireFrame Marching Square (duration 0:00:00.213779), Right Female Right_Eye_WireFrame Dual Marching Square (duration 0:00:00.223355)

Figure 5.6: Left Female_Ear_Front Marching Square (duration 0:00:00.244979), Right Female_Ear_Front Dual Marching Square (duration 0:00:00.253785)


Figure 5.7: Left Female_Ear_Front Marching Square Wireframe (duration 0:00:00.244979), Right Female_Ear_Front Dual Marching Square Wireframe (duration 0:00:00.253785). The durations do not include rendering time; only the mesh generation time is specified in Table 5.1.

DISCUSSIONS

The implementation resulted in a VTK library for Dual Marching Square.

From the results depicted above, it is evident that the DMS algorithm performs better than MS. The wireframe images for the eyes and ear show that DMS produces more geometry than MS to capture curvature and corners.

A runtime comparison of the two algorithms was also made; the runtimes are almost identical.

Data Set                 Dual Marching Square Time (sec)   Marching Square Time (sec)
Female Head              0.247965                          0.246355
Female Head_Side         0.247965                          0.246355
Female Head_Back         0.247965                          0.246355
Female Eye               0.223355                          0.213779
Female_Eye_Wireframe     0.223355                          0.213779
Female_Ear               0.253785                          0.244979
Female_Ear_Wireframe     0.253785                          0.244979

Table 5.1: Time taken to run the algorithms to generate the 3D dataset.

The comparisons made throughout this thesis between MS and DMS mainly aim to portray the benefit of the DMS algorithm qualitatively, based on visual inspection of the images produced with our implementations. Analysis of the DMS and MS contours suggests that DMS produces a contour that better captures corners and deviates less from the actual object boundary, at least for objects with smoothly curved boundaries and corners.

Measures need to be defined in the future that can quantify the curvature, corners, and smoothness of the generated surfaces.

The implementation done for this thesis shows that there is in fact great potential in the DMS algorithm, as it captures the curvature of areas with irregular geometry, such as the eyes and ears, because its contour is the dual of the contour produced by Marching Squares. Visually, the DMS algorithm provides a more accurate contour for these regions. Overall, it offers a robust isosurface extractor that could potentially outperform most other algorithms, or combinations of algorithms, in terms of mesh quality. As the ideal surface for the data is not known, we could not measure the actual curvature/effectiveness of DMS over MS; this may be an extension for a future MS or PhD thesis.


CHAPTER VI

CONCLUSION AND FUTURE WORK

We implemented the DMS isosurface extraction algorithm, a variant of the standard MS algorithm. The DMS technique is described in enough detail that it could be reproduced solely from this document. Dual Marching Square (DMS) is compared with Marching Squares (MS) on identical input data. DMS produces contour segments to approximate an isocontour; its contour is the dual of the contour produced by Marching Squares. Results suggest that DMS produces a contour with less extreme deviation from the actual object boundary, at least for objects with smoothly curved boundaries.

In future work, we hope to further study other aspects of DMS behavior. One of the most basic extensions to the work is adding a GUI to the library to make it user friendly, and implementing DMS in three dimensions.

Certainly, improving the code quality and optimizing some of the algorithms, in terms of data structures and programming practices, is a matter of future work. One optimization suggestion is adaptive sampling, similar to the ADFs [38]; another is a GPU implementation. Lastly, we also need measures of generated surface quality for the given volume data, so that we can compare the surfaces generated by DMS and DMC to the desired outcome and provide a more quantitative analysis of our results.


REFERENCES

1. Colin Ware. Information Visualization: Perception for Design. Morgan Kaufmann, 2nd edition, 2004.

2. Brian M. Collins. Data visualization – has it all been seen before? In R. A. Earnshaw and D. Watson, editors, Animation and Scientific Visualization – Tools and Applications, chapter 1, pages 3–28. Academic Press, London, 1st edition, 1993.

3. E. Keppel. Approximating complex surfaces by triangulation of contour lines. IBM Journal of Research and Development, 19(1): 2–11, 1975.

4. G. T. Herman and H. K. Liu. Three-dimensional display of human organs from computed tomograms. Computer Graphics and Image Processing, 9: 1–21, 1979.

5. W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. Computer Graphics, 21(4): 163–168, 1987.

6. P. Shirley and A. Tuchman. A polygonal approximation to direct scalar volume rendering. Computer Graphics, 24(5): 63–70, 1990

7. H. E. Cline, W. E. Lorensen, S. Ludke, C. R. Crawford, and B. C. Teeter. Two algorithms for the reconstruction of surfaces from tomograms. Medical Physics, 15(3): 320–327, 1988.

8. William E. Lorensen and Harvey E. Cline. Marching Cubes: A high resolution 3D surface construction algorithm. Computer Graphics, 21(4): 163–166, July 1987.

9. Gregory M. Nielson and Bernd. Hamann. The asymptotic decider: Resolving the ambiguity in marching cubes. In Proceedings of Visualization '91, pages 29-38, October 1991.

10. Evgeni V. Chernyaev. Marching Cubes 33: Construction of topologically correct isosurfaces. Technical report CN 95-17, CERN, 1995.

11. Thomas Lewiner, Hlio Lopes, Antnio Wilson Viera, and Geovan Tavares. Efficient implementation of marching cubes cases with topological guarantees. Journal of graphics Tools, 8(2):1-15, 2003

12. M. Tarini, M. Callieri, C. Montani, C. Rocchini, K. Olsson, and T. Persson. Marching intersections: An efficient approach to shape-from-silhouette. In 7th International Fall Workshop on Vision Modeling, and Visualization, November 2002.

13. Barthold Lichtenbelt, Randy Crane, and Shaz Naqvi. Introduction to volume rendering. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1998.

14. T. Todd Elvins. A survey of algorithms for volume visualization. Computer Graphics, 26(3):194–201, 1992.

15. Ken Brodlie and Jason Wood. Recent advances in volume visualization. Computer Graphics Forum, 20(2):125–148, 2001.

16. S. Gong and T. S. Newman, "Dual Marching Squares: Description and analysis," 2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), Santa Fe, NM, 2016, pp. 53-56.

17. H. Freeman, "Computer processing of line-drawing images," ACM Computing Surveys, vol. 6, no. 1, pp. 57-97, 1974.

18. T. Newman and H. Yi, "A survey of the marching cubes algorithm," Computers and Graphics, vol. 30, no. 5, pp. 854-879, 2006.

19. G. Treece, P. Prager, and A. Gee, "Regularised marching tetrahedra: Improved iso- surface extraction," Computers and Graphics, vol. 23, no. 4, pp. 583-598, 1999.

20. G. Nielson, "Dual marching cubes," Proc., Vis. '04, pp. 489-496, 2004.

21. S. Schaefer and J. Warren, "Dual marching cubes: primal contouring of dual grids", 12th Pacific Conference on Computer Graphics and Applications. 2004. Proceedings, pp. 70-76.

22. Bloomenthal J. and Wyvill B., editors, Introduction to Implicit Surfaces. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1997.

23. Keppel E., January. Approximating complex surfaces by triangulation of contour lines. IBM J. Res. Dev., 19(1), 2–11. 1975.

24. Allgower E. L. and Gnutzmann S. Simplicial pivoting for mesh generation of implicitly defined surfaces. Comput. Aided Geom. Des., 8(4), 305–325. 1991.

25. Szeliski R. and Tonnesen D., Surface modeling with oriented particle systems. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’1992, New York, NY, USA. ACM, 185–194.

26. Witkin A. P. and Heckbert P. S. Using particles to sample and control implicit surfaces. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’94, New York, NY, USA. ACM, 269–277. 1994.

27. Hilton A., Stoddart A. J., Illingworth J. and Windeatt T., 1996. Marching triangles: range image fusion for complex object modelling. In ICIP (2), 381–384.

28. Akkouche S., Galin E. and Centrale E., 2001. Adaptive implicit surface polygonization using marching triangles. COMPUTER GRAPHICS FORUM, 20, 67–80.

29. Desbrun M., Tsingos N. and paule Gascuel M., 1995. Adaptive sampling of implicit surfaces for interactive modeling and animation. In Computer Graphics Forum, 171–185.

30. Crespin B., Guitton P. and Schlick C., 1998. Efficient and accurate tessellation of implicit sweep objects. In In Constructive Solid Geometry, 49–63.


31. Wyvill G., Kunii T. L. and Shirai Y., April 1986. Space division for ray tracing in csg. IEEE Comput. Graph. Appl., 6(4), 28–34.

32. Bottino A., Nuij W. and Overveld K. V.,1996. How to shrinkwrap through a critical point: an algorithm for the adaptive triangulation of isosurfaces with arbitrary topology. In Proc. Implicit Surfaces ’96, 53–72.

33. D. S. Ebert, R. Yagel, J. Scott and Y. Kurzion, "Volume rendering methods for computational fluid dynamics visualization," Visualization, 1994., Proceedings., IEEE Conference, Washington, DC, 1994, pp. 232-239, CP26.

34. K. Kase , Y. Teshima , S. Usami , H. Ohmori , C. Teodosiu , A. Makinouchi, Volume CAD, Proceedings of the 2003 Eurographics/IEEE TVCG Workshop on Volume graphics, July 07-08, 2003, Tokyo, Japan

35. "Volume Graphics" IEEE Computer, Vol. 26, No. 7 July 1993 pp. 51-64

36. SK Semwal and K Chandrashekher, 3D Morphing for Volume Data, pp 1-7, The 18th conference in Central Europe, on Computer Graphics, Visualization, and Computer Vision, WSCG 2005 Conference, January 2005.

37. Buchanan D.L., Semwal S.K. (1990) A New Front to Back Composition Technique for Volume Rendering. In: Chua TS., Kunii T.L. (eds) CG International ’90. Springer, Tokyo.

38. Frisken S. F., Perry R. N., Rockwood A. P. and Jones T. R., 2000. Adaptively sampled distance fields: A general representation of shape for computer graphics. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’00, 78 New York, NY, USA. ACM Press/Addison-Wesley Publishing Co., 249–254.

39. Paul Gene Swann and Sudhanshu Kumar Semwal. 1991. Volume rendering of flow-visualization point data. In Proceedings of the 2nd conference on Visualization '91 (VIS '91), Gregory M. Nielson and Larry Rosenblum (Eds.). IEEE Computer Society Press, Los Alamitos, CA, USA, 25-32.

40. David A. Clunie. DICOM Structured Reporting. PixelMed Publishing, 2000.

41. Sarah F. Frisken, Ronald N. Perry, Alyn P. Rockwood, and Thouis R. Jones. 2000. Adaptively sampled distance fields: a general representation of shape for computer graphics. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques (SIGGRAPH '00). ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 249-254.

42. Leif P. Kobbelt, Mario Botsch, Ulrich Schwanecke, and Hans-Peter Seidel. 2001. Feature sensitive surface extraction from volume data. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (SIGGRAPH '01). ACM, New York, NY, USA, 57-66.


Appendix

A. Steps to install VTK-Python on Mac

• Prerequisites and preinstallation

  o Download and unzip the latest source code tar.gz from the VTK site. (Note: We downloaded to /Users/manugarg/Downloads)

  o Check if Python is installed on your Mac (else install it).

  o Install the latest cmake, tcl and tk.

    ▪ On my machine: sudo port install -v cmake (same for tcl and then tk)

    ▪ You might need MacPorts installed if you don't already have it.

• Installation tasks

  o manugarg~$ cd Downloads/VTK/

  o manugarg ~/Downloads/VTK$ mkdir build

  o manugarg ~/Downloads/VTK$ cd build/

  o Run cmake in interactive mode:

    manugarg ~/Downloads/VTK/build$ ccmake

    ** Note: To configure, hit 'c', and when it comes up with options to toggle, make sure you toggle BUILD_SHARED_LIBS to ON, and also PYTHON_WRAPPING to ON. This is very important to do, or else you will miss Python support. Press 'c' a few times and when the 'g' option appears at the bottom, press it to generate.

  o Make:

    manugarg ~/Downloads/VTK/build$ make
    ... [100%] Built target ChartsCxxTests

• Testing the installation

  o manugarg ~$ export LD_LIBRARY_PATH=/Users/manugarg/Downloads/VTK/build/bin/

  o manugarg ~$ export DYLD_FALLBACK_LIBRARY_PATH=/Users/manugarg/Downloads/VTK/build/bin/

  o manugarg ~$ export PYTHONPATH=/Users/manugarg/Downloads/VTK/build/bin/

  o manugarg ~$ export PYTHONPATH=$PYTHONPATH:/Users/manugarg/Downloads/VTK/Wrapping/Python/

Manu:Downloads manugarg$ python
Python 2.6.1 (r261:67515, May 11 2017, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import vtk

B. Codes

Code for Marching Square

import vtk
from datetime import datetime

startTime = datetime.now()

readerVolume = vtk.vtkDICOMImageReader()
readerVolume.SetDirectoryName('/Users/manug/Desktop/head')
readerVolume.Update()

# Extract the region of interest
voiHead = vtk.vtkExtractVOI()
voiHead.SetInputConnection( readerVolume.GetOutputPort() )
voiHead.SetVOI( 150,250, 230,300, 80,180 )
voiHead.SetSampleRate( 2,2,2 )

# Generate an isosurface
contourBoneHead = vtk.vtkDiscreteMarchingSquares()
contourBoneHead.SetInputConnection( voiHead.GetOutputPort() )
contourBoneHead.ComputeNormalsOn()
contourBoneHead.SetValue( 0, 40 )  # Bone isovalue

# Take the isosurface data and create geometry
geoBoneMapper = vtk.vtkPolyDataMapper()
geoBoneMapper.SetInputConnection( contourBoneHead.GetOutputPort() )
geoBoneMapper.ScalarVisibilityOff()

# Create the actor
actorBone = vtk.vtkActor()  # vtkLODActor()
#actorBone.SetNumberOfCloudPoints( 100000 )
actorBone.SetMapper( geoBoneMapper )
actorBone.GetProperty().SetRepresentationToWireframe()  # SetColor( 1, 1, 1 )

# Calculate and print time to generate the model
print (datetime.now() - startTime)

# Create renderer
ren = vtk.vtkRenderer()
ren.SetBackground( 0.329412, 0.34902, 0.427451 )  # ParaView blue
ren.AddActor( actorBone )

# Create a window for the renderer of size 250x250
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer( ren )
renWin.SetSize( 250, 250 )

# Set a user interface interactor for the render window
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow( renWin )

# Start the initialization and rendering
iren.Initialize()
renWin.Render()
iren.Start()

Code for Dual Marching Square

import vtk
from datetime import datetime

startTime = datetime.now()

readerVolume = vtk.vtkDICOMImageReader()
readerVolume.SetDirectoryName('/Users/manug/Desktop/head')
readerVolume.Update()

# Extract the region of interest
voiHead = vtk.vtkExtractVOI()
voiHead.SetInputConnection( readerVolume.GetOutputPort() )
voiHead.SetVOI( 150,250, 230,300, 80,180 )
voiHead.SetSampleRate( 2,2,2 )

# Start of Dual Marching Square
# Generate quad tree
voiQuadTree = vtk.vtkHyperTreeGridSource()
voiQuadTree.SetInputConnection( voiHead.GetOutputPort() )
voiQuadTree.SetGridSize( 10, 10, 10 )
voiQuadTree.SetValue( 0, 40 )
voiQuadTree.SetOrigin( 0, 0, 0 )

# Create the geometry
voiQuadGeometry = vtk.vtkHyperTreeGridGeometry()
voiQuadGeometry.SetInputConnection( voiQuadTree.GetOutputPort() )

# Generate an isosurface
contourBoneHead = vtk.vtkMarchingSquares()
contourBoneHead.SetInputConnection( voiQuadGeometry.GetOutputPort() )
contourBoneHead.ComputeNormalsOn()
contourBoneHead.SetValue( 0, 40 )  # Bone isovalue

# Take the isosurface data and create geometry
geoBoneMapper = vtk.vtkPolyDataMapper()
geoBoneMapper.SetInputConnection( contourBoneHead.GetOutputPort() )
geoBoneMapper.ScalarVisibilityOff()

# Create the actor
actorBone = vtk.vtkActor()  # vtkLODActor()
#actorBone.SetNumberOfCloudPoints( 100000 )
actorBone.SetMapper( geoBoneMapper )
actorBone.GetProperty().SetRepresentationToWireframe()  # SetColor( 1, 1, 1 )

# Calculate and print time to generate the model
print (datetime.now() - startTime)
# End of Dual Marching Square

# Create renderer
ren = vtk.vtkRenderer()
ren.SetBackground( 0.329412, 0.34902, 0.427451 )  # ParaView blue
ren.AddActor( actorBone )

# Create a window for the renderer of size 250x250
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer( ren )
renWin.SetSize( 250, 250 )

# Set a user interface interactor for the render window
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow( renWin )

# Start the initialization and rendering
iren.Initialize()
renWin.Render()
iren.Start()

C. Code for getting attributes for DICOM file

package dicom_basic;

import com.pixelmed.dicom.Attribute;
import com.pixelmed.dicom.AttributeList;
import com.pixelmed.dicom.AttributeTag;
import com.pixelmed.dicom.OtherWordAttribute;
import com.pixelmed.dicom.TagFromName;
import com.pixelmed.display.SourceImage;

public class dicom_extract {
    private static AttributeList list = new AttributeList();

    public static void main(String[] args) {
        String dicomFile = "/Users/manug/desktop/head/vhf.1501.dcm";
        try {
            list.read(dicomFile);
            System.out.println("Transfer Syntax:" + getTagInformation(TagFromName.TransferSyntaxUID));
            System.out.println("SOP Class:" + getTagInformation(TagFromName.SOPClassUID));
            System.out.println("Modality:" + getTagInformation(TagFromName.Modality));
            System.out.println("Samples Per Pixel:" + getTagInformation(TagFromName.SamplesPerPixel));
            System.out.println("Photometric Interpretation:" + getTagInformation(TagFromName.PhotometricInterpretation));
            System.out.println("Pixel Spacing:" + getTagInformation(TagFromName.PixelSpacing));
            System.out.println("Bits Allocated:" + getTagInformation(TagFromName.BitsAllocated));
            System.out.println("Bits Stored:" + getTagInformation(TagFromName.BitsStored));
            System.out.println("High Bit:" + getTagInformation(TagFromName.HighBit));

            SourceImage img = new com.pixelmed.display.SourceImage(list);
            System.out.println("Number of frames " + img.getNumberOfFrames());
            System.out.println("Width " + img.getWidth());   // all frames will have the same width
            System.out.println("Height " + img.getHeight()); // all frames will have the same height
            System.out.println("Is Grayscale? " + img.isGrayscale());
            System.out.println("Pixel Data present:" + (list.get(TagFromName.PixelData) != null));

            OtherWordAttribute pixelAttribute = (OtherWordAttribute) (list.get(TagFromName.PixelData));
            // Get the 16-bit pixel data values
            short[] data = pixelAttribute.getShortValues();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static String getTagInformation(AttributeTag attrTag) {
        return Attribute.getDelimitedStringValuesOrEmptyString(list, attrTag);
    }
}

Note: To run this code, download the PixelMed toolkit library and ensure that the PixelMed.jar library is included in your Java project’s class path.
