SOFTWARE DEVELOPER’S QUARTERLY   Issue 12 • Jan 2010

Editor’s Note ...... 1
Recent Releases ...... 1
A Synthetic LiDAR Scanner for VTK ...... 3
New Variational Level-Set Classes with Region Fitting Energy in ITK ...... 6
Alternative Memory Models for ITK ...... 9
N3 Implementation for MRI Bias Field Correction ...... 11
Exporting Contours to DICOM-RTSTRUCT ...... 13
Kitware News ...... 15

EDITOR’S NOTE
Kitware is pleased to present a special edition of the Source which features several of the strongest Insight Journal submissions from 2009. The Insight Journal was designed to provide a realistic support system for disseminating scientific research in the medical image processing domain. Recognizing the need for a mechanism whereby the medical image analysis community can collectively share their ideas, code, data and results, Dr. Luis Ibáñez has worked with the ISC and championed the open-science cause to make the Insight Journal a reality.

By providing a platform for open-access scientific publication, Kitware continues to foster its commitment to the open-science community. To continue celebrating this cause, we will annually publish this special Insight Journal edition of the Source. Anyone may submit their work to the Insight Journal by registering, for free, at insight-journal.org; perhaps your work will be featured here next year!

The Kitware Source is just one of a suite of products and services that Kitware offers to assist developers in getting the most out of its open-source products. Each project’s website contains links to free resources including mailing lists, documentation, FAQs and Wikis. In addition, Kitware supports its open-source projects with technical books, user’s guides, consulting services, support contracts and training courses. For more information on Kitware’s suite of products and services, please visit our website at www.kitware.com.

RECENT RELEASES

MIDAS 2.4 RELEASED AS OPEN SOURCE
Kitware is proud to announce the release of MIDAS 2.4, a major release implementing more than 20 new features. We are also happy to announce that the MIDAS source code is now freely available under an unrestricted (BSD) license. For the past year MIDAS, Kitware’s digital archiving and distributed processing system, has been generating a lot of interest from the research community, and we believe that opening its source code will lead to a better archiving and processing system. We encourage users to download the latest release and join the mailing list.

Among the new features of the 2.4 release:
• Redesigned image gallery
• Improved LDAP support
• Support for plug-ins
• Improved server-side upload
• Custom upload workflows
• Better BatchMake integration with custom reporting

Figure: Improved image gallery with color selection.

Kitware’s public instance of MIDAS is available at http://insight-journal.org/midas, and is currently host to hundreds of freely available scientific and medical datasets. MIDAS can be downloaded from kitware.com/midas.

CDASH 1.6
CDash, the open-source, web-based software testing server, has had another major release. CDash aggregates, analyzes and displays the results of software testing processes submitted from clients around the world, conveying the state of a software system to continually improve its quality. This release adds more than 20 new features including:
• Support for persistent login
• Better database compression
• New graphs and reports
• Improved coverage visualization
• Better CTest communication
• Expected submission time based on historic average
• Faster load of the main dashboard page
• Improved navigation links
• Remote build management (beta)

To get started with CDash, host your project for free at my.cdash.org. For more info on CDash 1.6 visit cdash.org.

ITK 3.18
ITK 3.18 will be released in early 2010. The main changes in this release include improvements in the following areas:
• Support for mathematical routines in itkMath
• Management of integer rounding and casting
• 64-bit platform support, particularly Windows and Mac
• Consistency in streaming large image files
• Infrastructure for running many tests in parallel (ctest -jN)

in addition to:
• Added options for installing ITK in a flat directory. This will facilitate the use of ITK from external projects.
• Fixes for the Mattes Mutual Information Metric, enabling this metric to work on binary images.

For up-to-date information on ITK 3.18, please visit the ITK mailing lists or search the ITK Wiki for “Release 3.18”.

PARAVIEW 3.6.2
Kitware, Sandia National Laboratories and Los Alamos National Lab are proud to announce the release of ParaView 3.6.2. The binaries and sources are available for download from paraview.org. ParaView 3.6.2 contains the following new features and improvements.

The Python interface has been revamped; an exciting new extension to the ParaView Python interface is Python trace. The goal of trace is to generate human readable, not overly verbose, Python scripts that mimic a user’s actions in the GUI. See the “Python Trace” article on page 6 of the October 2009 Kitware Source for more details.

A collection of statistics algorithms is now available. Using these statistics algorithms, you can compute descriptive statistics (mean, variance, min, max, skewness, kurtosis), compute contingency tables, perform k-means analysis, examine correlations between arrays, and perform principal component analysis on arrays. More information about these filters is available on the ParaView Wiki by searching “Statistical Analysis”.
This release also includes the VisTrails Provenance Explorer plugin in the Windows and Linux packages. VisTrails is an open-source scientific workflow and provenance management system developed at the University of Utah that provides support for data exploration and visualization. The VisTrails plugin brings provenance tracking and the benefits of provenance to ParaView users. It automatically and transparently tracks the steps a user follows to create a visualization.

In contrast to the traditional undo/redo stack, which is cleared whenever new actions are performed, the plugin captures the complete exploration trail as a user explores different parameters and techniques. A tree-based view of the history of actions allows a user to return to a previous version in an intuitive way, undo bad changes, compare different visualizations, and be reminded of the actions that led to a particular result.

There is no limit on the number of operations that can be undone, no matter how far back in the history of the visualization they are, and the history is persistent across sessions. The VisTrails plugin can save all of the information needed to restore any state of the visualization in .vt files, which can be reloaded across ParaView sessions and shared among collaborators. This also allows multiple visualizations to be shared with a single file. For more information on the VisTrails Provenance Explorer plugin, see page 8 of the July 2009 Kitware Source.

Figure: ParaView 3.6.2 demonstrating the VisTrails Provenance Explorer plugin for scientific workflow and provenance management.

LANL’s cosmo plug-in is now distributed with ParaView. This plug-in allows ParaView to read and process *.cosmo format files, in which particles are described by mass, velocity and identification tags. These particles typically represent stellar masses. The halo finder filter is a friend-of-a-friend particle clustering algorithm. It creates groups containing particles that satisfy a tolerance/threshold linking distance criterion. The cosmology data format, halo finding algorithm, and related (experimental) filter implementations are made possible by the LANL cosmology researchers, the LANL visualization team, and international collaborators.

The Mac application bundle and command line tools are now built as universal binaries (PPC and Intel i386). This simplifies managing ParaView on Mac, as there is now only a single binary to download for any architecture.

As always, we rely on your feedback to improve ParaView. We are experimenting with a new user-feedback mechanism. Please use http://paraview.uservoice.com/ or click on the “Tell us what you think” link on paraview.org to leave your feedback and vote for new features.

A SYNTHETIC LIDAR SCANNER FOR VTK

In recent years, Light Detection and Ranging (LiDAR) scanners have become more prevalent in the scientific community. They capture a “2.5-D” image of a scene by sending out thousands of laser pulses and using time-of-flight calculations to determine the distance to the first reflecting surface in the scene. However, they are still quite expensive, limiting the availability of the data which they produce. Even if a LiDAR scanner is available to a researcher, it can be quite time consuming to physically set up a collection of objects and scan them.

Here, we present a set of classes which allow researchers to compose a digital scene of 3D models and “scan” the scene by finding the ray and scene intersections using techniques from ray tracing. This allows researchers to quickly and easily produce their own LiDAR data. The synthetic LiDAR scan data can also be used to produce datasets for which a ground truth is known. This is useful to ensure algorithms are behaving properly before moving to non-synthetic LiDAR scans. If more realistic data is required, noise can be added to the points to attempt to simulate a real LiDAR scan.

SCANNER MODEL
We have based the synthetic scanner on the Leica HDS3000 LiDAR scanner. This scanner acquires points on a uniform angular grid. The first point acquired is in the lower left of the grid. The scanner proceeds bottom to top, left to right. The order of acquisition is usually not important, but the code could be easily changed to mimic a different scanner if desired.

INPUT PARAMETERS
It is necessary to set several parameters before performing a synthetic scan. These parameters are:
• A scene to scan (triangulated 3D mesh)
• Scanner position (3D coordinate)
• Min/max ϕ angle (how far “up and down” the scanner should scan)
• Min/max θ angle (how far “left and right” the scanner should scan)
• Scanner “forward” (the ϕ = 0, θ = 0 direction)
• Number of θ points
• Number of ϕ points

OUTPUTS
Two outputs are possible depending on the user’s requirements. The first is a PTX file. A PTX file implicitly maintains the structure of the scan: the scan points in the file are listed in the same order as they were acquired. This point list, together with the size of the scan grid, is sufficient to represent the grid of points acquired by the synthetic scanner. The second type of output is simply an unorganized point cloud stored in a VTP file. This representation is useful for algorithms which do not assume any structure is known about the input.

To demonstrate the angles which must be specified, a (ϕ = 5, θ = 4) scan of a flat surface was acquired, as shown in Figure 1. Throughout these examples the red sphere indicates the scanner location, cylinders represent the scan rays, and blue points represent scan points (intersections of the rays with the object/scene).

Figure 1: 3D view of the scan.

NEW CLASSES
We introduce three new classes to implement a synthetic LiDAR scanner. The first two, vtkRay and vtkLidarPoint, are supporting classes of vtkLidarScanner, which is the class that does the majority of the work.

vtkRay
This is a container class to hold a point (ray origin) and a vector (ray direction). It also contains functions to:
• Get a point along the ray a specified distance from the origin of the ray (double* GetPointAlong(double))
• Determine if a point is in the half-space that the ray is pointing “toward” (bool IsInfront(double*))
• Transform the ray (void ApplyTransform(vtkTransform* Trans))

vtkLidarPoint
This class stores all of the information about the acquisition of a single LiDAR point. It stores:
• The ray that was cast to obtain the point (vtkRay* Ray)
• The coordinate of the closest intersection of the ray with the scene (double Coordinate[3])
• The normal of the scene triangle that was intersected to produce the point (double Normal[3])
• A boolean to determine if the ray in fact hit the scene at all (bool Hit)

vtkLidarScanner
This class does all the work of acquiring the synthetic scan.

Coordinate System
The default scanner is located at (0,0,0) with the following orientation:
• Z axis = (0,0,1) = up
• Y axis = (0,1,0) = forward
• X axis = (1,0,0) = right (a consequence of maintaining a right-handed coordinate system)

Positioning (“aiming”) a vtkLidarScanner
A rigid transformation can be applied to the scanner via the SetTransform function, which positions and orients the scanner relative to the scene.

Casting Rays
Rays cast by the scanner are relative to the scanner’s frame after the transformation is applied to the default frame.
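To make the vtkRay interface above concrete, here is a minimal sketch of driving a ray by hand. GetPointAlong, IsInfront and ApplyTransform are the functions listed above; the SetOrigin and SetDirection calls are assumed accessor names (the class is only described as holding an origin point and a direction vector), not quoted from the header.

#include "vtkRay.h"      // from this set of classes
#include <vtkTransform.h>

// Build a ray at the origin pointing "forward" (+Y in the
// default scanner frame).
vtkRay* ray = vtkRay::New();
double origin[3]    = {0.0, 0.0, 0.0};
double direction[3] = {0.0, 1.0, 0.0};
ray->SetOrigin(origin);       // assumed setter name
ray->SetDirection(direction); // assumed setter name

// A point 5 units along the ray: expected to be (0, 5, 0).
double* alongRay = ray->GetPointAlong(5.0);

// Is a query point in the half-space the ray points toward?
double query[3] = {1.0, 2.0, 0.0};
bool inFront = ray->IsInfront(query);

// Rigidly reposition the ray, e.g. with the scanner's transform.
vtkTransform* transform = vtkTransform::New();
transform->Translate(10.0, 0.0, 0.0);
ray->ApplyTransform(transform);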

Order of Acquisition
The points were acquired in the order shown in Figure 2.

Figure 2: Scan points labeled in acquisition order.

Theta angle
The angle in the Forward-Right plane (a rotation around Up), measured from Forward. Its range is −π to π; negative angles are to the left, positive angles to the right. Figure 3 shows a top view of the scan of a flat surface. The min and max θ angles are labeled.

Figure 3: Diagram of Theta angle settings.

Phi angle
The elevation angle, in the Forward-Up plane (a rotation around Right), measured from Forward. Its range is −π/2 (down) to π/2 (up). This is obtained by rotating around the “right” axis (AFTER the new right axis is obtained by setting Theta). Figure 4 shows a side (left) view of the scan of a flat surface. The min and max ϕ angles are labeled.

Figure 4: Diagram of Phi angle settings.

Normals
Each LiDAR point stores the normal of the surface that it intersected. A scan of a sphere is shown in Figure 5 to demonstrate this. The normal vector coming from the scanner (red sphere) is the “up” direction of the scanner.

Figure 5: Scene intersections and their normals.

OUTPUTS
vtkPolyData
A typical output from a VTK filter is a vtkPolyData. The vtkLidarScanner filter stores and returns the valid LiDAR returns in a vtkPolyData. If the CreateMesh flag is set to true, a Delaunay triangulation is performed to create a triangle mesh from the LiDAR points. vtkDelaunay2D is used to triangulate the 3D points, utilizing the grid structure of the scan. Figure 6 shows the setup of a scan, the result with CreateMesh = false, and the result with CreateMesh = true.

Figure 6: Effect of the CreateMesh flag.

PTX file
All of the LiDAR ray returns, valid and invalid, are written to an ASCII PTX file. The PTX format implicitly maintains the order of point acquisition by recording the points in the same order in which they were acquired. A “miss” point is recorded as a row of zeros. Upon reading the PTX file (not covered by this set of classes), the best test to see if a row of the file is valid is to check whether the intensity of the return is 0. This prevents corner cases, such as a valid return from (0,0,0), from creating a problem or confusion.
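That validity test is easy to apply while parsing a scan back in. A minimal sketch, assuming the common “x y z intensity” point-row layout (header handling is omitted):

#include <sstream>
#include <string>

// Parse one PTX point row; returns false for a "miss" row.
// A miss is written as a row of zeros, so a zero intensity is
// the recommended validity test.
bool ParsePtxRow(const std::string& line, double point[3])
{
  std::stringstream ss(line);
  double intensity = 0.0;
  ss >> point[0] >> point[1] >> point[2] >> intensity;
  return intensity != 0.0;
}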

Scanner coordinate frame
Using void vtkLidarScanner::WriteScanner(const std::string &Filename) const, a VTP file of the scanner can be written. A coordinate frame indicates the location and orientation of the scanner. The green axis is “up”, the yellow axis is “forward”, and the red axis is “right”. Figure 7 shows the synthetic scan of a sphere along with the scanner that was used to perform the scan.

Figure 7: Synthetic scan of a sphere with the scanner displayed.

DATA STRUCTURE SPEED UP
Instead of intersecting each ray with every triangle in the scene, this problem immediately lends itself to using a spatial data structure to achieve an enormous speedup. We originally tried an OBB tree (vtkOBBTree), but we found that a modified BSP tree (vtkModifiedBSPTree) gives a 45x speedup, even over the OBB tree! The current implementation includes this speedup and is therefore very fast.

NOISE MODEL
By default, a synthetic scan is “perfect” in the sense that the scan points actually lie on a surface of the 3D model, as in Figure 8. In a real world scan, however, this is clearly not the case. To make the synthetic scans more realistic, we have modeled the noise in a LiDAR scan using two independent noise sources: line-of-sight and orthogonal.

Figure 8: A noiseless synthetic scan.

Line-of-Sight (LOS) Noise
Line-of-sight noise appears as an error in the distance measurement performed by the scanner. It is a vector parallel to the scanner ray whose length is chosen randomly from a Gaussian distribution. This distribution is zero mean and has a user-specified variance (double LOSVariance). An example of a synthetic scan with LOS noise added is shown in Figure 9. The important thing to note is that the orange (noisy) rays are exactly aligned with the gray (noiseless) rays.

Figure 9: A synthetic scan with line-of-sight noise added.

Orthogonal Noise
Orthogonal noise models the angular error of the scanner. It is implemented by generating a vector orthogonal to the scanner ray whose length is chosen from a Gaussian distribution. This distribution is also zero mean and has a user-specified variance (double OrthogonalVariance). An example of a synthetic scan with orthogonal noise added is shown in Figure 10. Note that the green (noisy) rays are not aligned with the gray (noiseless) rays, but they are the same length.

Figure 10: A synthetic scan with orthogonal noise added.

Combined Noise
A simple vector sum is used to combine the orthogonal noise vector with the LOS noise vector.
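The combined model thus amounts to perturbing each return along the ray direction and along a random perpendicular direction. A minimal sketch of that perturbation, using only standard vtkMath utilities (the function itself is illustrative, not part of vtkLidarScanner’s interface):

#include <vtkMath.h>
#include <cmath>

// Perturb a noiseless return 'point' along the unit ray direction
// 'rayDir' (LOS noise) and along a random perpendicular direction
// (orthogonal noise). The variances mirror the LOSVariance and
// OrthogonalVariance members described above.
void AddScannerNoise(double point[3], const double rayDir[3],
                     double losVariance, double orthVariance)
{
  // LOS noise: length drawn from a zero-mean Gaussian.
  double los = vtkMath::Gaussian(0.0, sqrt(losVariance));

  // Orthogonal noise: pick a random direction perpendicular to
  // the ray, with its own zero-mean Gaussian length.
  double perp1[3], perp2[3];
  double angle = vtkMath::Random(0.0, 2.0 * vtkMath::Pi());
  vtkMath::Perpendiculars(rayDir, perp1, perp2, angle);
  double orth = vtkMath::Gaussian(0.0, sqrt(orthVariance));

  // Combined noise is the simple vector sum of the two components.
  for (unsigned int i = 0; i < 3; ++i)
    {
    point[i] += los * rayDir[i] + orth * perp1[i];
    }
}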

EXAMPLE SCENE
As an example, a car model with 20k triangles was scanned with a 100x100 grid. On a P4 3GHz machine with 2GB of RAM, the scan took 0.6 seconds. Figure 11 shows the model and the resulting synthetic scan.

Figure 11: A car model and the resulting synthetic scan.

EXAMPLE CODE
An example (TestScanner.cpp) is provided with the code zip file, but the basics are demonstrated below with hard-coded values:

//read a scene
vtkXMLPolyDataReader* reader = vtkXMLPolyDataReader::New();
reader->SetFileName(InputFilename.c_str());
reader->Update();

//construct a vtkLidarScanner
vtkLidarScanner* Scanner = vtkLidarScanner::New();

//Set all of the scanner parameters
Scanner->SetPhiSpan(vtkMath::Pi()/4.0);
Scanner->SetThetaSpan(vtkMath::Pi()/4.0);
Scanner->SetNumberOfThetaPoints(5);
Scanner->SetNumberOfPhiPoints(6);
Scanner->SetStoreRays(true);

//"Aim" the scanner. This is a very simple
//translation, but any transformation will work
vtkTransform* transform = vtkTransform::New();
transform->PostMultiply();
//this is so we can specify the operations in the
//order they should be performed (i.e. this is a
//rotation followed by a translation)
//(any of these 4 lines can be omitted if they
//are not required)
transform->RotateX(0.0);
transform->RotateY(0.0);
transform->RotateZ(1.0);
transform->Translate(Tx, Ty, Tz);
Scanner->SetTransform(transform);

//indicate to use uniform spherical spacing
Scanner->MakeSphericalGrid();

Scanner->SetCreateMesh(true);
Scanner->SetInput(reader->GetOutput());
Scanner->Update();

//create a writer and write the output VTP file
vtkSmartPointer<vtkXMLPolyDataWriter> writer =
  vtkSmartPointer<vtkXMLPolyDataWriter>::New();
writer->SetFileName("test_scan.vtp");
writer->SetInput(Scanner->GetOutput());
writer->Write();

David Doria is a PhD student in Electrical Engineering at Rensselaer Polytechnic Institute. He received his BS in EE in 2007 and his MS in EE in 2008, both from RPI. David is currently working on 3D object detection in LiDAR data. He is passionate about reducing the barrier of entry into 3D data processing. Find out more about David on his website rpi.edu/~doriad or email him at [email protected].

NEW VARIATIONAL LEVEL-SET CLASSES WITH REGION FITTING ENERGY IN ITK

The level-set methodology involves numerically evolving a contour or surface according to a rate-of-change partial differential equation (PDE). The contour or surface is embedded as the zero level-set of a higher dimensional implicit function φ(x,t), also called the level-set function. Other techniques in the literature with the same objective include snakes and parametric surfaces. The main advantage of using level-set techniques over other methods is that they can handle complex shapes and topological changes, such as merging and splitting, with ease. Different constraints on the contour smoothness, speed, size and shape are easily specified.

Level-sets are very popular in the image analysis community for use in image segmentation, smoothing and tracking problems. There are two main classes of the level-set methodology: explicit and implicit methods. The explicit methods have a generic PDE to determine the temporal rate of change in the level-set function. It is expressed as a sum of speed functions that contain propagation, smoothing, and advection terms. Each of these terms is constructed from the image intensity function and involves the usage of image gradients. Segmentation is accomplished by detecting edges in the object of interest. Hence, these techniques are also referred to as edge-based level-sets.

Implicit level-sets are driven by a PDE that is derived using a variational formulation. Here, an energy functional is first constructed and then minimized. An example of such a variant is that proposed by Chan and Vese [1]. In this variant, objects are segmented in images where edge information may be poor or entirely absent. The core idea is that the region statistics of the foreground region are much different from those of the background region. The region mean intensity is a popularly used statistic for segmentation, as shown in Figure 1. These techniques are also referred to as region-based or active contours without edges. The current level-set framework in ITK is completely explicit, with no support for the implicit techniques; for more information, see our submissions to the Insight Journal [4], [5] and [6].
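For reference, the fitting energy minimized in the Chan and Vese model [1] has the following standard form, restated here in the usual notation (weights named to match the SetCurvatureWeight (μ), SetAreaWeight (ν), SetLambda1 and SetLambda2 calls in the usage example later in this article; sign conventions for inside/outside vary):

$$ F(c_1, c_2, \phi) \;=\; \mu \,\mathrm{Length}(\phi = 0) \;+\; \nu \,\mathrm{Area}(\phi \ge 0) \;+\; \lambda_1 \int_{\phi \ge 0} |I(\mathbf{x}) - c_1|^2 \, d\mathbf{x} \;+\; \lambda_2 \int_{\phi < 0} |I(\mathbf{x}) - c_2|^2 \, d\mathbf{x} $$

where c₁ and c₂ are the mean intensities of the foreground and background regions. Minimizing F drives the zero level-set toward a partition whose two regions are each well fit by their mean, which is exactly the “mean intensity as a region statistic” idea illustrated in Figure 1.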
Figure 1: Implicit segmentation of cells using mean intensity as a region statistic to separate foreground and background. (a) Top row: dense filter iterations at 0, 10, 20 and 30. (b) Bottom row: sparse filter iterations at 0, 10, 20 and 30. Note the difference in the structures local to the initialization in the sparse case.

Normally, the solution of the level-set update equation can be done on the entire image domain. The dense filter iteratively updates every single pixel in the image regardless of its position with respect to the zero level-set. As a result, the convergence is much quicker and structures far from the zero level-set spawn contours for segmentation. The sparse implementation of the level-set solver maintains a band of pixels around the zero level-set which are iteratively updated. The PDE is solved exactly on those pixels that are on the zero level-set or are immediate neighbors of the zero level-set. A user-defined bandwidth of pixels is used in maintaining the level structure and for calculating the derivatives. Pixels in the image beyond this band around the zero level-set are not considered. As a result, the evolution gradually proceeds from the initialization and converges on local structures. Structures far off from the zero level-set may remain unaffected.

Figure 2: Inheritance model of the dense solver:
itk::MultiphaseFiniteDifferenceImageFilter< TInputImage, TOutputImage >
→ itk::DenseSegmentationFiniteDifferenceImageFilter< TInputImage, Image< TOutputPixel, ::itk::GetImageDimension< TInputImage >::ImageDimension > >
→ itk::DenseMultiphaseLevelSetImageFilter< TInputImage, TFeatureImage, TFunction, TOutputPixel >

Figure 3: Inheritance model of the sparse solver:
itk::MultiphaseFiniteDifferenceImageFilter< TInputImage, TOutputImage >
→ itk::MultiphaseSparseFieldLevelSetImageFilter< TInputImage, Image< TOutputPixel, ::itk::GetImageDimension< TInputImage >::ImageDimension > >
→ itk::SparseMultiphaseLevelSetImageFilter< TInputImage, TFeatureImage, TFunctionType, TOutputPixel >

Figure 4: Inheritance model of the region-based level-set function:
itk::RegionBasedLevelSetFunction< TInputImage, TFeatureImage, TSharedData >
→ itk::ScalarRegionBasedLevelSetFunctionBase< TInputImage, TFeatureImage, TSharedData >
→ itk::ScalarChanAndVeseLevelSetFunction< TInputImage, TFeatureImage, TSharedData >

MULTI OBJECT SEGMENTATIONS WITH LEVEL-SETS AND KD-TREE OPTIMIZATIONS
In biomedical image analysis, we are often interested in segmenting more than a single object from a given image. This especially happens when the objects to be segmented are adjacent to each other and the delineation of one object automatically affects the neighboring object. In such situations, it makes sense to concurrently process their segmentation in order to optimally segment the objects. As a simple example, there is significant interest in segmenting nuclei or cells in microscopy images as shown in Figures 5 and 6. Each image is acquired at high resolution and could contain thousands of cells. These cells often cluster in regions and appear to overlap. The challenge is to split these cells into individual components. An iterative (or linear) cell extraction procedure using level-sets can cause inconsistent splits since each level-set function does not compete with the neighboring cells. There could be an overlap of the level-set functions. In such cases, it is necessary to use multiphase methods for segmentation.

There are several research papers devoted to multiphase methods that optimize the number of level-set functions used for a generic case of N phases or objects [2]. We are largely motivated by microscopy applications where cells are segmented and tracked with constraints placed on the area, volume and shape of each individual cell. Each cell has a unique fluorescence intensity level; therefore it is best to have a unique level-set function per cell. This is the approach adopted in the popular work by Dufour, et al. [3].

Figure 5: Space optimizations. (a) ROI defined around individual cells. (b) kd-tree structure constructed from ROI centroids.

MEMORY OPTIMIZATION USING KD-TREES
Computationally, it is memory intensive to have N level-set functions defined on the whole image domain. For a large image with many small objects (such as cells in microscopy images), this becomes a memory-intensive problem. Hence, we make the implementation robust by defining regions-of-interest (ROIs) and by using spatial data structures such as kd-trees and cached lookup tables.

Each level-set function is first defined in a region-of-interest (ROI) within the image domain. This is illustrated in Figure 5(a). The ROI should encompass the object to be segmented, and its extent is specified by the origin and size attributes. The spacing is the same as that of the feature or raw intensity image. This saves us considerable computational memory space. The centroid of each ROI is then placed in a kd-tree structure. In the update of each level-set function, the overlaps of ROI regions are calculated by querying the kd-tree for the k-nearest neighbors, as illustrated in Figure 5(b). This saves considerable computational time. Note that there is a cost associated with building the kd-tree that can be avoided for a small number of phases. We only instantiate the kd-tree search mechanism upon user request, or when there are more than 20 phases involved.
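The filters manage this bookkeeping internally; purely as an illustration of the underlying query, the sketch below indexes ROI centroids with ITK’s statistics framework and asks for the k nearest neighbors (names and the value of k are illustrative, and this is not the submission’s exact code path):

#include "itkVector.h"
#include "itkListSample.h"
#include "itkKdTreeGenerator.h"

typedef itk::Vector<float, 3>                        CentroidType;
typedef itk::Statistics::ListSample<CentroidType>    SampleType;
typedef itk::Statistics::KdTreeGenerator<SampleType> GeneratorType;

// Collect one centroid per level-set ROI.
SampleType::Pointer centroids = SampleType::New();
centroids->SetMeasurementVectorSize( 3 );
// ... centroids->PushBack( centroid ) for each ROI ...

// Build the kd-tree over the centroids.
GeneratorType::Pointer generator = GeneratorType::New();
generator->SetSample( centroids );
generator->SetBucketSize( 16 );
generator->Update();

// Query the 5 nearest ROI centroids around one ROI; only these
// neighbors need to be checked for overlap during the update.
CentroidType query;
query.Fill( 0.0 );
GeneratorType::KdTreeType::InstanceIdentifierVectorType neighbors;
generator->GetOutput()->Search( query, 5, neighbors );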

USAGE EXAMPLE
We consider an example of using the multiphase level-set filters. After initialization, it is imperative that the user specify the number of level-set functions, and set the feature image and the initialization for each level-set function. We illustrate our example for N = 3.

MultiLevelSetType::Pointer levelSetFilter = MultiLevelSetType::New();
levelSetFilter->SetFunctionCount( 3 );
levelSetFilter->SetFeatureImage( featureImage );
levelSetFilter->SetLevelSet( 0, contourImage1 );
levelSetFilter->SetLevelSet( 1, contourImage2 );
levelSetFilter->SetLevelSet( 2, contourImage3 );

Appropriate global settings of the level-set filter include the number of iterations, the maximum permissible change in RMS values, and whether to use the image spacing.

levelSetFilter->SetNumberOfIterations( nb_iteration );
levelSetFilter->SetMaximumRMSError( rms );
levelSetFilter->SetUseImageSpacing( 1 );

Using a for-loop over all the level-set functions, we call the i-th difference function and set the corresponding attributes of that level-set function.

for ( unsigned int i = 0; i < 3; i++ )
  {
  levelSetFilter->GetDifferenceFunction(i)->SetDomainFunction( &Heaviside );
  levelSetFilter->GetDifferenceFunction(i)->SetCurvatureWeight( mu );
  levelSetFilter->GetDifferenceFunction(i)->SetAreaWeight( nu );
  levelSetFilter->GetDifferenceFunction(i)->SetLambda1( l1 );
  levelSetFilter->GetDifferenceFunction(i)->SetLambda2( l2 );
  levelSetFilter->GetDifferenceFunction(i)->SetOverlapPenaltyWeight( gamma );
  levelSetFilter->GetDifferenceFunction(i)->SetLaplacianSmoothingWeight( eta );
  levelSetFilter->GetDifferenceFunction(i)->SetVolumeMatchingWeight( tau );
  levelSetFilter->GetDifferenceFunction(i)->SetVolume( volume );
  }

TRACKING MULTIPLE OBJECTS USING LEVEL-SETS
Many biological experiments that involve microscopic imaging require segmentation and temporal tracking of cells as part of the analysis protocol. For example, developmental biologists are interested in reconstructing cell lineages during embryonic development. Migratory behavior and rearrangement of cells is a fascinating topic of research. Cancer researchers track cells in colonies to determine growth kinetics and the effects of different chemical agents. The cell forms the fundamental biological entity of interest, and its tracking is essential in these applications.

Dufour, et al. [3] proposed a solution to the tracking problem by using coupled active surfaces. In this method, each cell is represented by a unique level-set function. Energy functions involving the level-set contours are defined to partition the image into constant intensity background and constant intensity foreground components. The foreground components are regularized, in terms of their area and length, for smoothness. Several other properties such as continuity in volume and shape across time-points are maintained. Their solution as proposed is robust and elegant for small datasets, since each cell requires a unique level-set function of the same size as the image domain. In our implementation of the method in [6], we make use of performance optimization using ROI bounding boxes that now makes the tracking filter scale up to larger datasets containing many cells.

Figure 6: 3D confocal images of a developing zebrafish embryo. (a-c) Raw images at 1, 5 and 10 time-points. (d-f) Tracking results at 1, 5 and 10 time-points.

CONCLUSION AND FUTURE WORK
In this work, we developed region-based methods for variational contour evolution using the level-set strategy. We extended the methods to simultaneously segment and track multiple objects in images, and thereby use a kd-tree spatial partitioning to be efficient. Our goal was to use these methods in cell segmentation and tracking analysis. We obtained very good results in our work. The main limitation of multi-object segmentations using the current level-set formulation is that their interactions are not well defined. We often observe absorption of one level-set function by a neighboring one. We plan on further investigating these methods to make them robust.

REFERENCES
[1] T. Chan and L. Vese. An active contour model without edges. In Scale-Space Theories in Computer Vision, pages 141–151, 1999.
[2] L. Vese and T. Chan. A multiphase level-set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision, 50:271–293, 2002.
[3] A. Dufour, V. Shinin, S. Tajbakhsh, N. Guillen-Aghion, J. C. Olivo-Marin, and C. Zimmer. Segmenting and tracking fluorescent cells in dynamic 3-D microscopy with coupled active surfaces. IEEE Transactions on Image Processing, 14(9):1396–1410, 2005.
[4] K. Mosaliganti, B. Smith, A. Gelas, A. Gouaillard, and S. Megason. Level-set segmentation: Active contours without edges. The Insight Journal, 2008.
[5] K. Mosaliganti, B. Smith, A. Gelas, A. Gouaillard, and S. Megason. Segmentation using coupled active surfaces. The Insight Journal, 2008.
[6] K. Mosaliganti, B. Smith, A. Gelas, A. Gouaillard, and S. Megason. Cell tracking using coupled active surfaces for nuclei and membranes. The Insight Journal, 2008.

ACKNOWLEDGEMENTS
This work was funded by a grant from the NHGRI (P50HG004071-02) to found the Center for in-toto genomic analysis of vertebrate development. Benjamin Smith at Simon Fraser University, Vancouver, Canada first developed a working prototype of the code using ITK. It was significantly improved with bug corrections, enhanced with newer C++ constructs, and optimized for speed by the team comprising Kishore Mosaliganti, Arnaud Gelas, Alexandre Gouaillard and Sean Megason at Harvard Medical School. We are currently working on the development of GoFigure2, an open source application for biomedical image analysis, visualization and archival using ITK, VTK and Qt.

Kishore Mosaliganti is a Research Fellow in the Megason Lab at Harvard Medical School. He is currently developing algorithms for the extraction of zebrafish ear lineages using confocal microscopy images, to be included in GoFigure2.

Arnaud Gelas is a Research Associate in the Megason Lab at Harvard Medical School, where he is in charge of the development of GoFigure2. Currently, he manages the software development process of GoFigure2.

Alexandre Gouaillard is the President of CoSMo Software, which proposes software development services in medical image processing and modeling. Formerly he was a Research Associate in the Megason Lab at Harvard Medical School, where he was in charge of the design and development of GoFigure2. He is now a PI at the Singapore Immunology Network (SIgN) in a Systems Immunology research group using Complex Systems Modeling.

Sean Megason is an Assistant Professor in the Department of Systems Biology at Harvard Medical School, where he overlooks the development of GoFigure2. He is working in systems biology research on the developmental circuits of zebrafish.

ALTERNATIVE MEMORY MODELS FOR ITK

An ITK image employs an implicit contiguous memory representation. Implicit means the representation cannot be configured or changed. Contiguous means the pixel elements are stored as a 1D array, where each element is adjacent in memory to the previous element. Unfortunately, in certain situations a contiguous memory representation is not desirable.

This article describes three alternative memory models for ITK images: slice contiguous, sparse, and single-bit. A slice contiguous image is a three-dimensional image whereby each slice is stored in a contiguous 1D array, but the slices are not necessarily adjacent in memory. Slice contiguous images are well suited for interoperability with applications representing images using DICOM. A sparse image is an n-dimensional image in which each pixel is stored in a hash table data structure. This memory model is well suited for images with very large dimensions, but few pixels that are actually relevant. A single-bit binary image is an n-dimensional image that internally stores a boolean as a single bit, rather than the typical eight bits. Single-bit images allow very compact representations for on-off binary masks.

ITK images are tightly coupled with their underlying memory representation. Performing a quick "grep" over the code base reveals numerous classes which directly access the image pixel array via itk::Image::GetBufferPointer(). Some of these include:
• Image iterators
• Image adapters
• Image file readers/writers
• VTK image export filter
• Octree
• Watershed segmenter
• BSpline deformable transform
• Optimized Mattes mutual information metric

The existence of these classes, as well as ITK’s strict backward compatibility policy, makes it difficult—but fortunately not impossible—to introduce images with alternative memory models. This article describes three new image memory models which use a similar mechanism to the existing itk::VectorImage. The new image types do not require changes to existing code, and they function with iterators and the majority of existing filters. However, they will not function with classes that directly access the pixel array. For example, it is not possible to directly read or write the proposed images to/from disk, export the images to VTK, or use the images with the optimized registration framework.

Figure 1: Contiguous and slice contiguous memory models. (a) Contiguous memory model. (b) Slice contiguous memory model.

Slice Contiguous Images
A slice contiguous image is implicitly three-dimensional. Each slice is stored in a contiguous 1D array, but the slices are not necessarily adjacent in memory. This representation is shown in Figure 1. The image can be realized by creating a new image class with custom pixel and neighborhood accessor functions. The pixel and neighborhood accessor functions provide a layer of indirection for iterators to access the image pixel data. Given the incoming offset for a contiguous memory model, the new accessors compute the index of the slice containing the pixel and the offset within that slice. The new image class is templated over the pixel type; it is not templated over the number of dimensions, as it is always three.

One important difference between itk::Image and itk::SliceContiguousImage is that for the latter GetBufferPointer() always returns NULL (i.e., the pixel buffer makes no sense). Ideally this method would not even exist for slice contiguous images, but unfortunately too many other classes assume its existence.
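The indirection performed by those accessors boils down to a divide and a modulo. A minimal sketch of the idea (illustrative only, not the actual accessor implementation):

// Translate a flat offset, as used by a contiguous image, into a
// slice index plus an offset within that slice.
inline float GetPixelFromSlices(float* const* slices, // one pointer per slice
                                unsigned long pixelsPerSlice,
                                unsigned long flatOffset)
{
  const unsigned long slice  = flatOffset / pixelsPerSlice; // which slice
  const unsigned long offset = flatOffset % pixelsPerSlice; // where in it
  return slices[slice][offset];
}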
A slice contiguous image is created in a very similar fashion to a “normal” image:

//Typedefs
typedef float Pixel;
typedef itk::SliceContiguousImage< Pixel > ImageType;

// Create image
ImageType::Pointer image = ImageType::New();
ImageType::IndexType start;
start.Fill( 0 );
ImageType::SizeType size;
size[0] = 256; size[1] = 256; size[2] = 4;
ImageType::RegionType region( start, size );
image->SetRegions( region );
image->Allocate();

An additional step requires the pixel container to be configured with a list of slice pointers:

// Set slice pointers
ImageType::PixelContainer::SliceArrayType slices;
slices.push_back( slice2->GetBufferPointer() );
slices.push_back( slice4->GetBufferPointer() );
slices.push_back( slice3->GetBufferPointer() );
slices.push_back( slice1->GetBufferPointer() );
ImageType::PixelContainerPointer container =
  ImageType::PixelContainer::New();
container->SetImportPointersForSlices( slices, size[0]*size[1], false );
image->SetPixelContainer( container );

The slice contiguous image is now able to be used like any other image:

// Use the image with a filter
typedef itk::ShiftScaleImageFilter< ImageType, ImageType > FilterType;
FilterType::Pointer filter = FilterType::New();
filter->SetInput( image );
filter->SetShift( 10 );
filter->Update();

Sparse Images
A sparse image is an n-dimensional image in which each pixel is stored in a hash table data structure. Each time a pixel is set, a new offset-value pair is added to the hash table. Such a memory model means that little or no memory is allocated when the image is created, but the memory footprint grows as more and more pixels are set. This memory model is well suited for images with very large dimensions, but few pixels which are actually relevant. It should be noted that (presently) filters using sparse images must be single-threaded.

A sparse image can be created in the typical fashion:

//Typedefs
const unsigned int Dim = 2;
typedef unsigned short Pixel;
typedef itk::SparseImage< Pixel, Dim > ImageType;

//Create image
ImageType::Pointer image = ImageType::New();
ImageType::IndexType start;
start.Fill( 0 );
ImageType::SizeType size;
size.Fill( 1000000 ); //try this with a normal image!
ImageType::RegionType region( start, size );
image->SetRegions( region );
image->Allocate();

A sparse image requires a “background” value to be specified for undefined pixels:

image->FillBuffer( 100 );

Pixels can be retrieved or set directly (using Get/SetPixel) or via an iterator:

ImageType::IndexType indexA = {100, 100};
Pixel pixelA = image->GetPixel( indexA );
ImageType::IndexType indexB = {10000, 10000};
image->SetPixel( indexB, 5 );
Pixel pixelB = image->GetPixel( indexB );

Single-bit Binary Images
It is very common in image processing to create mask images which indicate “on” or “off” regions. Currently, the smallest pixel element available in ITK for representing such images is unsigned char, which is stored in a single byte (8 bits). Even though a boolean can only represent 0 or 1, it too is stored in a single byte—check this yourself using std::cout << sizeof(bool) << std::endl.

Single-bit binary images internally represent each pixel as a single bit. As such, the memory footprint for on-off masks can be lowered by (nearly) a factor of eight. Similar to slice contiguous images, single-bit binary images provide custom pixel accessor functions which convert the incoming offset to the relevant bit mask for the underlying data storage. Unlike slice contiguous images, single-bit binary images fit slightly better within the existing ITK framework, so GetBufferPointer() makes sense in this context.

A single-bit binary image can be created as follows:

//Typedefs
const unsigned int Dim = 2;
typedef bool PixelType;
typedef itk::SingleBitBinaryImage< Dim > ImageType;

// Create image
ImageType::Pointer image = ImageType::New();
ImageType::IndexType start;
start.Fill( 0 );
ImageType::SizeType size;
size.Fill( 65 );
ImageType::RegionType region( start, size );
image->SetRegions( region );
image->Allocate();
image->FillBuffer( true );

The bits are stored in blocks of 32, so the above code will actually allocate a buffer of size 96×96. As with all the presented image memory models, the binary image can be used with any iterator and/or filter which does not directly access the pixel buffer via GetBufferPointer().
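The bit-mask translation mentioned above is equally small. A sketch (illustrative only; the block size of 32 matches the text):

// Read one pixel of a single-bit image stored in 32-bit blocks.
inline bool GetBit(const unsigned int* blocks, unsigned long offset)
{
  const unsigned long block = offset / 32;         // which 32-bit block
  const unsigned int  mask  = 1u << (offset % 32); // bit within the block
  return (blocks[block] & mask) != 0;
}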

Performance
Although the memory models presented above may be advantageous for reducing memory usage in certain scenarios, they have a performance penalty. Accessing pixels stored in a contiguous array can be highly efficient, whereas the three new images require additional computation whenever a pixel is accessed. A simple performance test was undertaken to compare the new memory models. The test measured four properties: (1) time to allocate the buffer, (2) time to fill the buffer, (3) time to set all pixels using an iterator, and (4) time to get all pixels using an iterator. Each test was run on a 256×256×256 image, executed on my notebook (Intel Core 2 Duo T7250 @ 2GHz, 3GB RAM, Windows Vista SP1 32-bit) a total of 5 times, with mean times (in seconds) reported in the following table.

Table: Performance timings (in seconds) for a 256×256×256 image.

All of the images have similar values for buffer allocation and getting pixels using an iterator. The sparse image was the fastest for filling the buffer because no memory is actually set at this moment, only a single “background” value. However, it was also the slowest (by quite a margin) for setting pixels using an iterator, because each pixel being set must be added to the hash table. The single-bit binary image was also fast for filling the buffer because the pixels are set in groups of 32 bits, rather than individual elements.

Conclusion
This article discussed three images with alternative memory models: slice contiguous, sparse, and single-bit binary images. The proposed images should work with most of the existing ITK filters—assuming they access the pixel data using iterators rather than GetBufferPointer().

Acknowledgments
Thanks to Karthik Krishnan (Kitware) and Glen Lehmann (Calgary Scientific) for helpful discussions.

Dan Mueller writes software to analyze and view images. At the moment he’s doing this for digital pathology images at Philips Healthcare in Best, Netherlands. When he’s not working with images, Dan enjoys making them with his camera. This year he’s taken over 6000 photos in 14 different countries.

N3 ITK IMPLEMENTATION FOR MRI BIAS FIELD CORRECTION

We forego theoretical discussions of MRI bias field correction and defer to relevant references [2, 7, 1]. Instead, we discuss our implementation and how it relates to both Sled’s paper [3] and the original N3 public offering. For notational purposes in this article, we denote the MNI N3 implementation as ‘N3MNI’ and the ITK implementation we offer as ‘N4ITK’.

IMPLEMENTATION
The N4ITK implementation is given as a single class, itk::N3MRIBiasFieldCorrectionImageFilter. It is derived from the itk::ImageToImageFilter class (as is the related class itk::MRIBiasFieldCorrectionFilter), since its operation takes the MR image (with an associated mask) corrupted by a bias field as input, and outputs the corrected image. For the user that wants to reconstruct the bias field once the algorithm terminates, we demonstrate how that can be accomplished with the additional class itk::BSplineControlPointImageFilter, which is included with this submission. Note that it is only needed if the bias field is to be reconstructed after the N3 algorithm terminates.

Algorithmic Overview
The steps for the N3 algorithm are illustrated in Figure 4 of [3]. Initially, the intensities of the input image are transformed into the log space and an initial log bias field of all zeros is instantiated. In N3MNI, an option is given whereby the user can provide an initial bias field estimate but, to keep the options to a minimum, we decided to omit that option. However, given the open-source nature of the code, the ITK user can modify the code according to preference. After initialization, we iterate by alternating between estimating the unbiased log image and estimating the log of the bias field.
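For orientation, the model underlying these steps is the standard N3 formulation from [3]: the measured image v is the true image u corrupted by a multiplicative, smoothly varying field f, which becomes additive in log space:

$$ v(\mathbf{x}) = u(\mathbf{x})\, f(\mathbf{x}) \quad\Longrightarrow\quad \log v(\mathbf{x}) = \log u(\mathbf{x}) + \log f(\mathbf{x}) $$

The iteration above thus alternates between estimates of log u and log f, and the corrected image is recovered by exponentiation at the end.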

Parameters
One of the attractive aspects of the N3 algorithm is the minimal number of parameters available to tune, and the relatively good performance achieved with the default parameters, which we tried to maintain, where we could, for both N3MNI and [3]. The available parameters are:
• m_MaskLabel (default = 1): The algorithm requires a mask be supplied by the user with the corresponding mask label. According to Sled, mask generation is not crucial and good results can be achieved with a simple scheme like Otsu thresholding.
• m_NumberOfHistogramBins (default = 200): One of the steps of N3 requires intensity profile construction from the intensities of the uncorrected input image and a triangular Parzen windowing scheme. The default value is the same as in N3MNI.
• m_WeinerFilterNoise (default = 0.01): Field estimation is performed by deconvolution using a Wiener filter which has an additive noise term to prevent division by zero (see Equation (12) of [3]). This is identical to the noise variable in N3MNI and equal to Z² in [3].
• m_BiasFieldFullWidthAtHalfMaximum (default = 0.15): A key contribution of N3 is the usage of a simple Gaussian to model the bias field. This variable characterizes that Gaussian and is the same as the FWHM variable in both N3MNI and [3].
• m_MaximumNumberOfIterations (default = 50): Optimization occurs iteratively until the number of iterations exceeds the maximum specified by this variable.
• m_ConvergenceThreshold (default = 0.001): In [3], the authors propose the coefficient of variation of the ratio between subsequent field estimates as the convergence criterion. However, in both N3MNI and N4ITK, the standard deviation of the ratio between subsequent field estimates is used.
• m_SplineOrder (default = 3): A smooth field estimate is produced after each iterative correction using B-splines. In both N3MNI and [3], cubic splines are used. Although any feasible order of spline is available, the default in N4ITK is also cubic.

• m_NumberOfControlPoints: Since the bias field is usually low frequency, by default we set the number of control points to the minimum, m_SplineOrder + 1.
• m_NumberOfFittingLevels (default = 4): The B-spline fitting algorithm [6] is different from what is used in N3MNI and proposed in [3]. The version we use was already available in ITK as one of our earlier contributions [5] and is not susceptible to ill-conditioned fitting matrices. One of the parameters for that fitting is the number of hierarchical levels to fit, where each successive level doubles the B-spline mesh resolution.

Bias Field Generation
Oftentimes, the user would like to see the calculated bias field. One of the more obvious reasons for this would be when the bias field is calculated on a downsampled image (suggested in [3] and given as an option in N3MNI and included in the testing code). One would then like to reconstruct the bias field to estimate the corrected image in full resolution. Since the B-spline bias field is a continuous object defined by the control point values and spline order, we can reconstruct the bias field of the full resolution image without loss of accuracy. We demonstrate how this is done in the test code itkN3MRIBiasFieldCorrectionImageFilterTest.cxx. Note that the control points describe a B-spline scalar field in log space, so the itk::ExpImageFilter has to be used after reconstruction.
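A sketch of that reconstruction follows. Types are simplified for brevity (in the submission the control point lattice pixel is a one-component vector, so an extra cast or index selection may be needed), and the variable names controlPointLattice and fullResImage are illustrative:

// Evaluate the log bias field on the full-resolution grid from the
// B-spline control point lattice, then map it out of log space.
typedef itk::Image<float, 3> ImageType;
typedef itk::BSplineControlPointImageFilter<ImageType, ImageType>
  ReconstructerType;

ReconstructerType::Pointer reconstructer = ReconstructerType::New();
reconstructer->SetInput( controlPointLattice ); // from the N3 filter
reconstructer->SetSplineOrder( 3 );
reconstructer->SetSize( fullResImage->GetLargestPossibleRegion().GetSize() );
reconstructer->SetOrigin( fullResImage->GetOrigin() );
reconstructer->SetSpacing( fullResImage->GetSpacing() );
reconstructer->Update();

// The control points describe the field in log space.
typedef itk::ExpImageFilter<ImageType, ImageType> ExpFilterType;
ExpFilterType::Pointer expFilter = ExpFilterType::New();
expFilter->SetInput( reconstructer->GetOutput() );
expFilter->Update(); // output: the full-resolution bias field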

Test Code
The test code included with this submission, itkN3MRIBiasFieldCorrectionImageFilterTest.cxx, is designed to allow the user to immediately apply the N4ITK classes to their own images. Usage is given as follows:

itkN3MRIBiasFieldCorrectionImageFilterTest imageDimension inputImage outputImage [shrinkFactor] [maskImage] [numberOfIterations] [numberOfFittingLevels] [outputBiasField]

This program takes the input image, subsamples it according to the optional shrinkFactor option, and creates the bias field corrected output image. Other optional parameters are the maskImage (if not available, one is created using the itk::OtsuThresholdImageFilter), the number of iterations (default = 50), the number of fitting levels (default = 4), and a file name for writing out the resulting bias field.

SAMPLE RESULTS
We demonstrate usage with two MR images—a 2D brain slice and a volume from a hyperpolarized helium-3 image. We use ITK-SNAP to visualize the results.

2D Brain Slice
Figure 1(a) is the uncorrected image used in our 2D brain test. Close inspection demonstrates a darkening in the white matter toward the upper right of the image. This darkening is corrected in Figure 1(c).

itkN3MRIBiasFieldCorrectionImageFilterTest 2 t81slice.nii.gz t81corrected.nii.gz 2 t81mask.nii.gz 50 4 t81biasfield.nii.gz

Figure 1: (a) Uncorrected image. (b) Mask image. (c) Bias field corrected image. (d) Uncorrected image with the calculated bias field superimposed.

3D Hyperpolarized Helium-3 Lung MRI
Figure 2(a) is the uncorrected image used in our 3D helium-3 MR image volume test. Close inspection demonstrates a darkening toward the upper portion of the given axial slice. This darkening is corrected in Figure 2(c).

itkN3MRIBiasFieldCorrectionImageFilterTest 3 he3volume.nii.gz he3corrected.nii.gz 2 he3mask.nii.gz 50 4 he3biasfield.nii.gz

Figure 2: (a) Uncorrected image. (b) Mask image. (c) Bias field corrected image. (d) Uncorrected image with the calculated bias field superimposed.

References
[1] R. G. Boyes, J. L. Gunter, C. Frost, A. L. Janke, T. Yeatman, D. L. G. Hill, M. A. Bernstein, P. M. Thompson, M. W. Weiner, N. Schuff, G. E. Alexander, R. J. Killiany, C. DeCarli, C. R. Jack, N. C. Fox, and A. D. N. I. Study. Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils. Neuroimage, 39(4):1752–1762, Feb 2008.
[2] Zujun Hou. A review on MR image intensity inhomogeneity correction. International Journal of Biomedical Imaging, 2006:1–11, 2006.
[3] J. G. Sled, A. P. Zijdenbos, and A. C. Evans. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Transactions on Medical Imaging, 17(1):87–97, Feb 1998.
[4] M. Styner, C. Brechbuhler, G. Szckely, and G. Gerig. Parametric estimate of intensity inhomogeneities applied to MRI. IEEE Transactions on Medical Imaging, 19(3):153–165, March 2000.
[5] N. J. Tustison and J. C. Gee. N-D Ck B-spline scattered data approximation. The Insight Journal, 2005.
[6] N. J. Tustison and J. C. Gee. Generalized n-D Ck B-spline scattered data approximation with confidence values. In Proc. Third International Workshop on Medical Imaging and Augmented Reality, pages 76–83, 2006.
[7] U. Vovk, F. Pernus, and B. Likar. A review of methods for correction of intensity inhomogeneity in MRI. IEEE Transactions on Medical Imaging, 26(3):405–421, March 2007.
[8] A listing of several relevant algorithms compiled by Finn A. Nielsen at the Technical University of Denmark is provided at http://neuro.imm.dtu.dk/staff/fnielsen/bib/ by clicking on the folder "Nielsen2001BibSegmentation/".
[9] http://www.bic.mni.mcgill.ca/software/N3/
[10] A complete discussion of ‘N4’ is provided at hdl.handle.net/10380/3053.

Nick Tustison borders his moments of software-writing serenity at the Penn Image Computing and Science Lab (PICSL) with attempts at integrating Heidegger’s notion of “Dasein” with the open source software paradigm---oh, and battling ninjas, too.

EXPORTING CONTOURS TO DICOM-RTSTRUCT

The “radiotherapy structure set” (RTSTRUCT) object of the DICOM standard is used for the transfer of patient structures and related data between the devices found within and outside the radiotherapy department. It contains mainly the information for regions of interest (ROIs) and points of interest (e.g., dose reference points). In many cases, rather than manually drawing these ROIs on the CT images, one can indeed benefit from the wealth of automated segmentation algorithms available in ITK. But at present, it is not possible to export the ROIs obtained from ITK to RTSTRUCT format. In order to bridge this gap, we have developed a framework for exporting contour data to RTSTRUCT [1].

The mandatory modules contained by the RTSTRUCT are presented in Table 1. These modules are grouped based on the type of information entity that they represent. Here is a brief description of each of these modules:
1. “Patient Module” specifies the attributes that describe and identify the patient who is the subject of a diagnostic study. This module contains attributes of the patient that are needed for diagnostic interpretation of the image and are common for all studies performed on the patient.
2. “General Study Module” specifies the attributes that describe and identify the study performed upon the patient.
3. “RT Series Module” has been created to satisfy the requirements of the standard DICOM query/retrieve model.
4. “General Equipment Module” specifies the attributes that identify and describe the piece of equipment that produced a series of composite instances.
5. “Structure Set Module” defines a set of areas of significance. Each area can be associated with a frame of reference and zero or more images. Information which can be transferred with each ROI includes geometrical and display parameters, and generation technique.
6. “ROI Contour Module” is used to define the ROI as a set of contours. Each ROI contains a sequence of one or more contours, where a contour is either a single point (for a point ROI) or more than one point (representing an open or closed polygon).
7. “RT ROI Observations Module” specifies the identification and interpretation of an ROI specified in the Structure Set and ROI Contour modules.
8. “SOP (Service-Object Pair) Common Module” defines the attributes which are required for proper functioning and identification of the associated SOP Instances. They do not specify semantics about the real-world object represented by the IOD.

[2] contains a comprehensive documentation of the DICOM standard covering all the modules. Refer to [1] for a brief summary of the RTSTRUCT.

Information Entity | Mandatory Modules
Patient            | (1) Patient
Study              | (2) General Study
Series             | (3) RT Series
Equipment          | (4) General Equipment
Structure Set      | (5) Structure Set, (6) ROI Contour, (7) RT ROI Observations, (8) SOP Common

Table 1: Mandatory Modules of RTSTRUCT.

Implementation
Figure 1 illustrates the pipeline that we use for exporting the automated segmentation results to RTSTRUCT format. It mainly contains three steps: Automated Segmentation, Mask to Contour Conversion, and RTSTRUCT-Exporter.

Figure 1: Block diagram illustrating the pipeline for exporting the automated segmentation results to RTSTRUCT: the DICOM CT image is segmented with a segmentation tool (ITK or a similar one), the resulting masks pass through the Mask to Contour Converter, and the RTSTRUCT-Exporter writes the contours, together with additional information, to the RTSTRUCT file.

• Automated Segmentation: The input DICOM CT images are converted into a convenient image format (if required) and an automated segmentation is performed using ITK or similar tools. The output ROIs from this tool should be a mask. There can be multiple masks corresponding to different structures of interest, and the current program indeed allows for the export of multiple masks. It is also possible to export the ROIs obtained on images that are cropped along the z-axis; in such cases, the information of starting-slice-number and the number of slices used should be later passed to the RTSTRUCT-Exporter module. The output of this module is passed to the “mask to contour converter”.
• Mask to Contour Conversion: We first extract axial slices of the mask using the ExtractImageFilter of ITK. We then use the ContourExtractor2DImageFilter [3] from ITK for obtaining contours on each of these slices (see the sketch after this list). We finally create an output text file containing the information of the total number of contours, the coordinates of each contour-point along with the corresponding slice number, the number of contour points for each contour, and the type of geometry of each contour (open or closed).
• RTSTRUCT-Exporter: Exporting the contours to RTSTRUCT format requires the implementation of an RTSTRUCT writer. We implemented this in a class called “RTSTRUCTIO”. For creating instances of RTSTRUCTIO objects using an object factory, an “RTSTRUCTIOFactory” class is also implemented. Refer to [1] for a detailed description of the class design and key implementation issues.

The inputs to the RTSTRUCT-Exporter are:
• An axial slice of the DICOM CT image of the patient (for extracting the information that is common to both the CT image and the RTSTRUCT, as described in [1]).
• Output(s) of the “Mask to Contour Converter” (multiple contours can be exported, as described in [1]).
• A few additional inputs like the starting slice number with respect to the original image, the total number of slices to be considered, the ROI interpreted types, and the colors to be assigned to each ROI.
All of these parameters are passed to the RTSTRUCT-Exporter through a text file.
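To make the conversion step concrete, the sketch below extracts one axial slice of a mask and traces its contours. The slice index and contour value are illustrative, 'mask' stands for the 3D label mask produced by the segmentation step, and the bookkeeping that writes the text file is omitted:

#include "itkImage.h"
#include "itkExtractImageFilter.h"
#include "itkContourExtractor2DImageFilter.h"

typedef itk::Image<unsigned char, 3> MaskType;
typedef itk::Image<unsigned char, 2> SliceType;

// Collapse the z dimension to extract a single axial slice.
typedef itk::ExtractImageFilter<MaskType, SliceType> ExtractorType;
ExtractorType::Pointer extractor = ExtractorType::New();
MaskType::RegionType region = mask->GetLargestPossibleRegion();
region.SetSize( 2, 0 );   // collapse z
region.SetIndex( 2, 42 ); // slice number (illustrative)
extractor->SetExtractionRegion( region );
extractor->SetInput( mask );

// Trace subpixel-precision isocontours on that slice.
typedef itk::ContourExtractor2DImageFilter<SliceType> ContourType;
ContourType::Pointer contours = ContourType::New();
contours->SetInput( extractor->GetOutput() );
contours->SetContourValue( 0.5 ); // between background (0) and mask (1)
contours->Update();

// Each output is a polyline; closed polylines represent polygons.
for ( unsigned int i = 0; i < contours->GetNumberOfOutputs(); ++i )
  {
  const ContourType::OutputPathType::VertexListType* vertices =
    contours->GetOutput( i )->GetVertexList();
  // ... write vertices, slice number and open/closed flag ...
  }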

The inputs to the RTSTRUCT-Exporter are:
• An axial slice of the DICOM CT image of the patient (for extracting the information that is common to both the CT image and the RTSTRUCT, as described in [1]).
• The output(s) of the Mask to Contour Converter (multiple contours can be exported, as described in [1]).
• A few additional inputs, such as the starting slice number with respect to the original image, the total number of slices to be considered, the ROI interpreted types, and the colors to be assigned to each ROI.

All of these parameters are passed to the RTSTRUCT-Exporter through a text file.
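To make the Mask to Contour Conversion step concrete, the sketch below runs the two filters named above on a single axial slice of a mask. It is an illustration rather than the converter itself: the input file name and the fixed slice index are assumptions, and the converter's text-file output is reduced to a printout of each contour's size and geometry type. Type and method names follow the current ITK API, into which ContourExtractor2DImageFilter has since been integrated; the original Insight Journal version [3] may differ slightly.

#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkExtractImageFilter.h"
#include "itkContourExtractor2DImageFilter.h"
#include <iostream>

int main(int, char *[])
{
  typedef itk::Image<unsigned char, 3> MaskType;
  typedef itk::Image<unsigned char, 2> SliceType;

  typedef itk::ImageFileReader<MaskType> ReaderType;
  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName("bones_mask.mhd"); // mask from the previous step (assumed name)
  reader->Update();

  // Collapse the volume to one axial slice; the real converter loops over z.
  MaskType::RegionType volume = reader->GetOutput()->GetLargestPossibleRegion();
  MaskType::SizeType  size  = volume.GetSize();
  MaskType::IndexType start = volume.GetIndex();
  size[2]  = 0; // a zero-size third dimension requests a 2-D output
  start[2] = 0; // slice number; the converter records it for the exporter

  typedef itk::ExtractImageFilter<MaskType, SliceType> ExtractType;
  ExtractType::Pointer extract = ExtractType::New();
  extract->SetInput(reader->GetOutput());
  extract->SetExtractionRegion(MaskType::RegionType(start, size));
  extract->SetDirectionCollapseToIdentity(); // required in ITK 4 and later

  // Trace the 0.5 iso-contour of the binary mask on that slice.
  typedef itk::ContourExtractor2DImageFilter<SliceType> ContourFilterType;
  ContourFilterType::Pointer contours = ContourFilterType::New();
  contours->SetInput(extract->GetOutput());
  contours->SetContourValue(0.5);
  contours->Update();

  for (unsigned int i = 0; i < contours->GetNumberOfOutputs(); ++i)
    {
    const ContourFilterType::VertexListType *vertices =
      contours->GetOutput(i)->GetVertexList();
    // A contour is closed when its first and last vertices coincide.
    const bool closed = vertices->Size() > 1 &&
      vertices->ElementAt(0) == vertices->ElementAt(vertices->Size() - 1);
    std::cout << "Contour " << i << ": " << vertices->Size() << " points ("
              << (closed ? "closed" : "open") << ")" << std::endl;
    }
  return 0;
}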

Example

The DICOM CT image used in this article was acquired during routine clinical practice at the Divisions of Radiotherapy, Fribourg Hospital (HFR), Switzerland, on a GE Medical System (Model: LightSpeedRT16). The size of each slice is 512 × 512 pixels with a spacing of 1.27 mm × 1.27 mm; the inter-slice distance is 2.5 mm. There are 116 slices in total.

Since we are interested only in the first 83 slices of the patient's image, the original DICOM image is cropped in the Z-direction to contain only these slices, and a new image file (with .mhd extension) is created. The image is then thresholded in selected regions to remove the bed and other immobilization devices; Figure 2 shows the thresholded image. We created separate masks for the external contour and the bones through simple windowing of the image, as shown in Figure 2. These masks are shown in Figures 3 and 4.

Figure 2: Sagittal, Coronal and Axial views of the 3-D CT image of the patient. This is a cropped image containing only the first 83 slices of the original image.

Figure 3: Sagittal, Coronal and Axial views of the mask for the external contour of the 3-D CT image.

Figure 4: Sagittal, Coronal and Axial views of the mask for the bones of the 3-D CT image.

The contours of these masks are obtained using the Mask to Contour Converter. The contour data, along with a slice of the DICOM CT image and other information, is passed to the RTSTRUCT-Exporter using a parameter file. Figure 5 shows the resultant RTSTRUCT file superposed over the original DICOM CT image.

Figure 5: A screen-shot showing the contours of the external contour and bones in the RTSTRUCT file, superposed over the original DICOM CT image.

Conclusions & Future Work

An ITK implementation of the RTSTRUCT-Exporter has been presented, along with details of the pipeline used and a description of each module in the pipeline. The implementation is validated on a 3D CT image by exporting the ROIs of the external contour and bones to the RTSTRUCT format. The RTSTRUCT-Exporter is currently tested only on DICOM CT images acquired from a GE Medical System (Model: LightSpeedRT16); thorough testing on more images, acquired from various manufacturers and models, will make it more robust.

We would also like to mention the recent work of Dowling et al. [4], which presents a method for doing the reverse, i.e., importing contours from an RTSTRUCT. It would be interesting to integrate these two implementations.

Acknowledgments

This work is supported in part by the Swiss National Science Foundation under Grants 3252B0-107873 and 205321-124797, and by the Center for Biomedical Imaging (CIBM) of the Geneva-Lausanne Universities and the EPFL, as well as the foundations Leenaards and Louis-Jeantet. We thank Dr. A. S. Allal, Dr. Pierre-Alain Tercier and Dr. Marc Pachoud for providing us the data and helping us in testing. We thank Mathieu Malaterre for his valuable suggestions.

References

[1] S. Gorthi, M. Bach Cuadra, and J.-P. Thiran, "Exporting contours to DICOM-RT Structure Set," Insight Journal, 2009. [Online] hdl.handle.net/1926/1521
[2] "DICOM home page." [Online] http://medical.nema.org/
[3] Z. Pincus, "ContourExtractor2DImageFilter: A subpixel-precision image isocontour extraction filter," Insight Journal, 2006. [Online] hdl.handle.net/1926/165
[4] J. Dowling, M. Malaterre, P. B. Greer, and O. Salvado, "Importing contours to DICOM-RT Structure Sets," Insight Journal, 2009. [Online] hdl.handle.net/10380/3132

Subrahmanyam Gorthi is currently pursuing his PhD at the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland. His research interests include medical image registration, segmentation and variational methods in image analysis.

Dr. Meritxell Bach Cuadra is currently with the Signal Processing Core of the Biomedical Imaging Center (CIBM), responsible for signal processing research at the Lausanne University Hospital (CHUV). Her main research interests are related to magnetic resonance (MR) and diffusion MR imaging, atlases, registration, segmentation, and classification.

Dr. Jean-Philippe Thiran is currently an Assistant Professor at the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland. His research interests include image segmentation, prior knowledge integration in image analysis, partial differential equations and variational methods in image analysis, multimodal signal processing, and medical image analysis, including multimodal image registration, segmentation, computer-assisted surgery, and diffusion MRI.

Kitware News

RSNA 2009
Kitware supported three events at the Radiological Society of North America's (RSNA) annual meeting in Chicago. The first was an educational exhibit at the inaugural "Toward Quantitative Imaging: Reading Room of the Future". Our showcase consisted of a large poster describing Kitware's open-source Lesion Sizing Toolkit, a partnership with the Optical Society of America (OSA), and a computer which ran live examples. Our display was available in our exhibit area and on the OSA website as an interactive Science Publishing presentation (http://www.opticsinfobase.org/isp.cfm).

Stephen Aylward and Rick Avila taught a course on "Open-Source Tools for Medical Research and Applications". Course attendance was high and evidenced the growing interest in open-source solutions for medical imaging applications.

Rick also gave a scientific presentation on "Quantitative Estimation of Individual Lung Cancer Risk". The presentation provided evidence that the risk of developing lung cancer can be monitored well in advance of the onset of a malignancy. We now have preliminary study data showing good risk-stratification performance, and quantitative imaging and functional trends that have never before been reported.

In the Reading Room of the Future, Rick Avila, Senior Director of Healthcare Solutions, speaks about Kitware's open-source toolkit for quantitative lesion sizing. (Photo by Ray Whitehouse)

Developer's Training Week
Kitware's Developer's Training Week will be held May 3 - 6, in Albany, NY. The course will cover VTK, ParaView, ITK, and CMake. It is a hands-on experience appropriate both for new users wishing to quickly gain proficiency and for experienced developers requiring advanced customization skills; basic knowledge of C++ programming is recommended. Additional course information is available on our website or
by emailing [email protected]. In addition to the upcoming course offered at Kitware's site, we also offer customized courses at either our site or yours. If you have specific training needs that are not met by the standard course, or the Spring dates do not work for your organization, a customized course may be right for you. Please contact a Kitware representative to discuss training options for your organization.

Kitware Sponsors "This is Git" Talk at UNC
Kitware sponsored a "This is Git" talk for the University of North Carolina, Chapel Hill's undergraduate and graduate computer programming clubs. Kitware provided pizza and drinks to about 25 students, faculty, and staff at a talk by Jason Sewall, a UNC CS Ph.D. candidate. The talk lasted nearly two hours, with questions leading to live demonstrations and extended discussions.

ITK Dashboard Fest a Huge Success
Dashboard Fest 1.0 was a huge success thanks to your contributions. Our goal was to hit 200 experimental builds; by the end of the day on Friday, November 6, 2009, 1,033 experimental builds had been submitted to the Dashboard from 106 different computers. The following users submitted the largest number of builds:
• Gaëtan Lehmann (INRA)
• Kevin Hobbs (Ohio University)
• Oleksander Dzyubak (Mayo Clinic)
• Arnaud Gelas (Harvard)
• Alexandre Gouaillard (Harvard)
• Kishore Mosaliganti (Harvard)
• Hans Johnson (Iowa)
• Kent Williams (Iowa)
• Bradley Lowekamp (NLM/Lockheed Martin)
• Sean McBride (Rogue Research)
• Mathieu Coursolle (Rogue Research)
• Christian Haselgrove (NITRC)
• Steve Pieper (Brigham and Women's Hospital)
• Iván Macía (Vicomtech)
• Tom Vercauteren (Mauna Kea Technologies)

North Carolina Grows Again
Kitware's North Carolina office has expanded into another 600 sq. ft. of office space at its current location. The office will now occupy over 4,500 sq. ft. of modern office space in downtown Carrboro. The added space provides seating for eight new employees. Kitware's North Carolina office has grown from two employees to twelve in just over two years, and new leaders within the group, such as Julien Jomier and Brad Davis, ensure that the southern office will continue to expand rapidly.

Late Fall/Winter Conferences
If you're interested in meeting with a Kitware representative at one of these events, email us at [email protected].

SPIE Medical Imaging
February 13 - 18, at the Town and Country Resort and Convention Center in San Diego, CA. SPIE is the premier conference for medical scientists, physicists, and practitioners in the field of imaging. Stephen Aylward, Kitware's Director of Medical Imaging, is highly involved in this year's SPIE conference: he is serving as a Program Committee Member for the Computer-Aided Diagnosis Conference, Session Chair for the Breast Imaging Session, and Workshop Co-Chair for the Computer-Aided Diagnosis Workshop, and is an invited expert for the "Dessert with the Experts" student networking event. http://spie.org/medical-imaging.xml

Symposium on Interactive 3D Graphics and Games
February 18 - 21, at the Hyatt Regency in Bethesda, MD. I3D is the leading-edge conference for real-time 3D computer graphics and human interaction, and 2010 marks the 24th year since the first conference gathering. Dr. Stephen Aylward, Kitware's Director of Medical Imaging, will be in attendance. http://www.i3dsymposium.org

New Hires

Jacob Becker
Jacob joined Kitware's Computer Vision Group in October 2009 as an R&D Engineer. Jacob received his B.S. in Computer Science and Archaeology from the University of Wisconsin - La Crosse (UW-L) in 2001 and his M.S. in Computer Science from RPI in 2009. While at RPI he researched aligning a 2D image to a 3D model and filling holes in LiDAR data using evidence provided by a single aligned 2D image.

Aashish Chaudhary
Aashish joined Kitware's Scientific Computing Group in October 2009 as an R&D Engineer. Aashish received his B.S. (Honors) in Mechanical Engineering from Devi Ahilya University (India) in 2000 and his M.S. in Industrial Engineering with a minor in Computer Science from Iowa State University. His thesis work involved researching, designing and developing frameworks to integrate simulation with visualization in virtual reality environments. Aashish is also a contributor to the Minerva Open Source Project.

Theresa Vincent
Theresa joined Kitware in September 2009 as an Accounting Specialist. Prior to joining Kitware, Theresa worked as an Accounting Associate for a non-profit company in Clifton Park. Theresa received her Associate's Degree in Accounting from Schenectady County Community College in 2002.

Internship Opportunities
Kitware internships provide current college students with the opportunity to gain hands-on experience working with leaders in their fields on cutting-edge problems. Our business model is based on open source software: an exciting, rewarding work environment. Kitware interns assist in the development of foundational research and leading-edge technology across five business areas: supercomputing visualization, computer vision, medical imaging, data publishing and quality software process. We offer our interns a challenging work environment and the opportunity to attend advanced software training. To apply, send your resume to [email protected].

Employment Opportunities
Kitware has an immediate need for talented Software Developers, especially those with experience in Computer Vision, Scientific Computing and Biomedical Imaging. Qualified applicants will have the opportunity to work with leaders in computer vision, medical imaging, visualization, and 3D data publishing. We offer comprehensive benefits including: flex hours; six weeks paid time off; a computer hardware budget; 401(k); health and life insurance; short- and long-term disability; visa
processing; a generous compensation plan; profit sharing; and free drinks and snacks. Interested applicants should forward a cover letter and resume to [email protected].

Kitware's Software Developer's Quarterly is published by Kitware, Inc., Clifton Park, New York.

In addition to providing readers with updates on Kitware product development and news pertinent to the open source community, the Kitware Source delivers basic information on recent releases, upcoming changes and detailed technical articles related to Kitware's open-source projects. These include:
• The Visualization Toolkit (www.vtk.org)
• The Insight Segmentation and Registration Toolkit (www.itk.org)
• ParaView (www.paraview.org)
• The Image Guided Surgery Toolkit (www.igstk.org)
• CMake (www.cmake.org)
• CDash (www.cdash.org)
• MIDAS (www.kitware.com/midas)
• BatchMake (www.batchmake.org)
• VTKEdge (www.vtkedge.org)

Kitware would like to encourage our active developer community to contribute to the Source. Contributions may include a technical article describing an enhancement you've made to a Kitware open-source project, or successes and lessons learned while developing a product built upon one or more of Kitware's open-source projects. Authors of any accepted article will receive a free, five-volume set of Kitware books.

Contributors: Lisa Avila, Rick Avila, Utkarsh Ayachit, Stephen Aylward, Katie Cronen, Meritxell Bach Cuadra, David Doria, Arnaud Gelas, Subrahmanyam Gorthi, Alexandre Gouaillard, Luis Ibáñez, Julien Jomier, Steve Jordan, Sean Megason, Dan Mueller, Kishore Mosaliganti, Dave Partyka, Jean-Philippe Thiran, and Nick Tustison.

Design: Melissa Kingman, www.elevationda.com
Editor: Niki Russell

Copyright 2010 by Kitware, Inc. or original authors. The material in this newsletter may be reproduced and distributed in whole, without permission, provided the above copyright is kept intact. All other use requires the express permission of Kitware, Inc. Kitware, ParaView, and VolView are registered trademarks of Kitware, Inc. All other trademarks are property of their respective owners.

To contribute to Kitware's open source dialogue in future editions, or for more information on contributing to specific projects, please contact the editor at [email protected].