Issue 31 • October 2014

TABLE OF CONTENTS
Editor's Note
Recent Releases
Integration of 3D Slicer with SOFA
Introducing GeoJS
VTK's Second Google Summer of Code
Volume Rendering Improvements in VTK
Kitware News

EDITOR'S NOTE
At Kitware, we are pleased to share our latest developments in the creation and support of open-source software and state-of-the-art technology. Among our recent releases are versions of CMake, ITK, MAP-Tk, and ParaView. In addition, we published an InfoVis website depicting a network of participants in the Ice Bucket Challenge, won Best Industry-Related Paper at ICPR 2014, made the Albany Business Review's "The List: Software Developers in the Albany New York Area," presented a briefing at the 2014 Meeting of the Military Sensing Symposium (MSS) Specialty Group on Active E-O Systems, and unveiled our new CMake.org website design. For additional details and information, please visit www.kitware.com/news.

Furthermore, we attended events including MICCAI 2014, The Science of Multi-INT Workshop, and 2014 Strategies in Biophotonics. A list of our upcoming events can be found on www.kitware.com/events. If you would like to set up a time to meet with us at any of the events listed on our site to discuss employment opportunities, potential collaboration, or consulting services, please contact [email protected].

RECENT RELEASES

MAP-TK 0.4.1 RELEASED
The Motion-Imagery Aerial Photogrammetry Toolkit (MAP-Tk) is an open-source C++ collection of libraries and tools for making measurements from aerial video. Its initial capability focuses on estimating the camera flight trajectory and a sparse 3D point cloud of a scene. This project has similar goals as projects such as Bundler and VisualSFM; however, the focus is on efficiently processing aerial video instead of community photo collections. Special attention has been given to cases where the variation in depth of the 3D scene is small in comparison to the distance to the camera. In these cases, planar homographies can be used to assist feature tracking, stabilize the video, and aid in solving loop closure problems.

The MAP-Tk software architecture is highly modular and provides an algorithm abstraction layer that allows seamless interchange and run-time selection of algorithms from various other open-source projects such as OpenCV, VXL, VisCL, and PROJ4. The core is lightweight with minimal dependencies. The tools are written to depend only on the MAP-Tk core library. Additional capabilities are provided in add-on modules that use third-party libraries to implement various abstract algorithm interfaces defined in the core. MAP-Tk can be found at https://github.com/kitware/maptk.

CMAKE 3.0.2 RELEASED
CMake 3.0.2 is the second bug-fix release since the 3.0.0 release. For details on the major 3.0 release, please see the Kitware blog. The improvements since 3.0 include making sure ExternalProject now properly handles file download hash mismatches, correcting static library creation with Xcode 6, adjusting QtAutogen to allow for the use of multiple UI files in a single target, correcting build failures with the Cray compiler, and adding support for OpenRISC 1000.

For more information and to download the software, please visit the CMake download page.

PARAVIEW 4.2.0 RELEASED
ParaView 4.2.0 is now available to download. The complete list of over 200 issues resolved for this release can be found on the ParaView Bug Tracker. Some of the major highlights of this release include the expansion of the Properties panel and the introduction of ParaView Cinema. In addition, improvements have been made to NumPy integration, Python scripting and tracing, ParaView Catalyst, and ParaViewWeb. Additional color legend options have also been added.

For more information, please read http://www.kitware.com/blog/home/post/744.

ITK 4.6.1 RELEASED
On behalf of the Insight Toolkit Community, we are pleased to announce the release of ITK 4.6.1. It follows the release of version 4.6.0, which was a major milestone. The 4.6.1 release fixes DICOM, MetaIO, TIFF, and PNG IO issues; 32-bit WrapITK build errors; Python wrapping warnings; CMake configuration with COMPONENTS; performance and memory consumption of the ITKv4 implementation of the mutual information matching metric; and many other issues.

Congratulations and thank you to everyone who contributed to this release. We are particularly grateful for collaborations with the 3D Slicer, Debian, GDCM, and MINC communities in addressing these issues.

Questions and comments are welcome on the ITK mailing lists. To download ITK 4.6.1, please visit http://itk.org/ITK/resources/software.html.

INTEGRATION OF 3D SLICER WITH SOFA: AN OPEN-SOURCE PIPELINE FOR END-TO-END MEDICAL SIMULATION RESEARCH
Ricardo Ortiz (Kitware), Julien Finet (Kitware), Andinet Enquobahrie (Kitware)

This article presents an integrated, open-source pipeline for medical simulation research. The pipeline integrates 3D Slicer [1], a state-of-the-art image analysis application, with SOFA [2], an open-source framework targeted at real-time simulation. Using the cross-platform build tool CMake [3], we integrated these two powerful frameworks to develop an application for anatomical model posing and morphing.

INTRODUCTION
3D Slicer is a free, open-source C++ software application widely used in the medical computing research community. Slicer was developed, in part, by Kitware, and it utilizes our software processes. Additionally, Slicer is built upon VTK and ITK for visualization and image processing. Slicer facilitates the clinical translation of medical computing research ideas by providing developers with an easy-to-use framework for image analysis and visualization. Slicer also provides a template framework that allows for the creation of customized and sophisticated user interfaces.

SOFA is a C++ library for rapid prototyping of bio-mechanical simulation applications. Integration of 3D Slicer and SOFA enables a powerful pipeline to rapidly prototype end-to-end simulation applications. This article describes how this integration was accomplished with relative ease.

METHOD
We employ CMake's external projects module to download, configure, and compile the SOFA library as part of Slicer's build process. Broadly, a CMake external project allows you to pull a project's source code without packaging it as part of the main project. The ExternalProject_Add function makes it possible to set the directives to download the project from a URL, run its configure step, and then build and install it with just the addition of a few lines of code to the CMakeLists.txt file. SOFA's external project command is listed below.

include(ExternalProject)
set(TAG "b79944acb7b6d7cea074db7321068f5265c2bde9")
...
ExternalProject_Add(SOFA
  SOURCE_DIR SOFA
  BINARY_DIR SOFA-Build
  GIT_REPOSITORY "git://public.kitware.com/Bender/SOFA.git"
  GIT_TAG ${TAG}
  CMAKE_ARGS
    ${CMAKE_OSX_EXTERNAL_PROJECT_ARGS}
    -DCMAKE_BUILD_TYPE:STRING=${CMAKE_BUILD_TYPE}
    ${SOFA_ADITIONAL_ARGS}
  ...
  DEPENDS ${SOFA_DEPENDENCIES}
  )

DEMONSTRATION APPLICATION: BENDER
The computation pipeline consists of several Slicer modules working in unison and uses SOFA components and other libraries. This integrated pipeline was used to build an interactive software application for anatomical model repositioning, called Bender [4]. Bender allows users to change the pose of anatomical models that are represented as labeled voxel-based volumes. In order to use SOFA's physics engine, the voxelized anatomical model must be converted into a tetrahedral mesh. To do so, we used the open-source image-to-mesh meshing library called Cleaver [5]. Cleaver was also integrated into Slicer's build process. Figures 1-4 show Bender's repositioning workflow GUI and the various stages of SOFA's simulation. The following figures illustrate Bender's invocation of SOFA's finite element components.

Figure 1. Initialize force loads to drive the repositioning

Figure 2. Apply forces to mesh vertices

Figure 3. Finalize and send result back to Bender

Figure 4. Update the mesh positions with the new ones computed by SOFA

A loadable CLI module was developed to facilitate this interaction. As shown in Figure 5, the SimulatePose module provides an interface to load the required files and control the simulation within the module.

Figure 5. Simulate Pose Loadable Module

CONCLUSION
We presented an application that was developed by using an integrated open-source pipeline. 3D Slicer and SOFA are open-source frameworks widely used in their respective communities. The integration of 3D Slicer with SOFA provides the medical simulation community with support by two active and dynamic communities. This pipeline could potentially be used as a template platform to facilitate the development of complex medical simulations.

ACKNOWLEDGEMENTS
The Bender development effort has been funded, in part, by the AFRL "Efficient Model Posing and Morphing Software" SBIR FA8650-13-M-6444. In addition, research reported in this publication was partially supported by the Office Of The Director, National Institutes Of Health of the National Institutes of Health under Award Number R44OD018334. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

UPCOMING SLICER TALK
Stephen Aylward and Jean-Christophe Fillion-Robin will present a talk on "Open-source practices and 3D Slicer for medical research" on February 10, 2015, at the College of Charleston. The talk will highlight 3D Slicer, a free, open-source software package for image analysis and scientific visualization that is partially funded by NA-MIC and supported by a very large community. The presentation will discuss a variety of research areas and introduce the application using driving biological problems. The presentation will also cover how Slicer can be extended to facilitate research and speed up development. For more information, please contact [email protected].

REFERENCES
1. Kitware Inc., 2014. 3D Slicer: A multi-platform, free and open source package for visualization and medical image computing. Download from: http://www.slicer.org.
2. Jérémie Allard, Stéphane Cotin, François Faure, Pierre-Jean Bensoussan, François Poyer, Christian Duriez, Hervé Delingette, Laurent Grisoni, et al. SOFA - an open source framework for medical simulation. In Medicine Meets Virtual Reality, MMVR 15, 2007.
3. Kitware Inc., 2014. CMake: The cross-platform, open-source build system. http://cmake.org/.
4. Kitware Inc., 2014. Bender: A free, open source software for repositioning voxelized anatomical models. Download from: http://public.kitware.com/Wiki/Bender.
5. CIBC, 2014. Cleaver: A MultiMaterial Tetrahedral Meshing Library and Application. Scientific Computing and Imaging Institute (SCI). Download from: http://www.sci.utah.edu/cibc/software.html.

Ricardo Ortiz is an R&D Engineer on the Medical Computing team at Kitware. He earned his Ph.D. in applied mathematics and computational sciences from the University of Iowa and served as a Postdoctoral Fellow Associate in the mathematics department at the University of North Carolina at Chapel Hill.

Julien Finet is an R&D Engineer at Kitware. He is involved in several medical computing projects. Julien is notably a lead developer for the Slicer, CTK, and MSVTK projects.

Andinet Enquobahrie is the Assistant Director of Medical Computing at Kitware. Andinet is responsible for technical contribution and management of image-guided intervention and surgical simulation projects. His recent efforts have focused on the use of PET-CT imaging to improve the clinical effectiveness of lesion biopsy, laparoscopic surgical procedures, and tools for image-guided intervention application development and bioinformatics analysis.

INTRODUCING GEOJS

Aashish Chaudhary (Kitware), Chris Harris (Kitware), Jonathan Beezley (Kitware)

GeoJS is a new JavaScript library for visualizing geospatial data in a browser. It is completely open source and is hosted at https://github.com/OpenGeoscience/geojs. We started the project in response to the need for an open-source JavaScript library that can combine traditional geographic information systems (GIS) and scientific visualization on the Web. Many libraries, some of which are open source, support mapping or other GIS capabilities, but lack the features required to visualize scientific and other geospatial datasets. For instance, such libraries are not capable of rendering climate plots from NetCDF files, and some libraries are limited in regards to geoinformatics (infovis in a geospatial environment). While libraries such as d3.js [1] are extremely powerful for these kinds of plots, in order to integrate them into other GIS libraries, the construction of geoinformatics visualizations must be completed manually and separately, or the code must somehow be mixed in an unintuitive way. We therefore developed GeoJS with the following motivations:

• Create an open-source geovisualization and GIS library that combines scientific visualization with GIS and informatics.
• Develop an extensible library that can combine data from multiple sources and render from multiple backends.
• Build a library that works well with existing scientific visualization tools such as VTK.

In the following sections, we will discuss the GeoJS API, along with a few applications we have developed as part of the ClimatePipes project [2].

API OVERVIEW
GeoJS is supported by a hierarchy of classes, which define basic interfaces for the objects used in the API.

MAPS
The entry point for developer interaction with the API is the geo.map class. As with most object constructors in GeoJS, the map constructor takes an object containing optional parameters.

map = geo.map({
  "node": "#map",        // The DOM node to contain the map
  "center": [40, -100],  // The initial lat/lng center point
  "zoom": 2              // The initial zoom level
});

The map serves as the root of the scene tree and contains georeferencing functions that can be used in converting to and from pixel coordinates and geographic coordinates.

// Get the pixel coordinates of the point 40° N, 100° W
>>> map.gcsToDisplay({x: -100, y: 40})
{x: 100, y: 100}

// Get the geographic coordinates of the pixel at (100, 100)
>>> map.displayToGcs({x: 100, y: 100})
{x: -100, y: 40}

Finally, the map provides a uniform looping mechanism for drawing synchronized animations within its layers. Running an animation from a map containing animatable layers is a simple matter of calling map.animate().

LAYERS AND RENDERERS
A layer is an abstract representation of content visible to the user. Layers are like transparent sheets placed in a stack. Visible features on one layer will cover features on the layers below. In addition, only the top layer of the stack can directly receive mouse events. The event handling interface described below allows layers to respond to mouse events and map navigation in a uniform manner.

In terms of responding to map navigation events such as panning, layers come in two varieties: sticky or non-sticky. A sticky layer will automatically transform features contained within it to maintain its position relative to the map. This is useful for geographic features such as markers placed at a specific latitude/longitude. A non-sticky layer will not automatically transform. One might use this sort of layer for legends that remain fixed on the screen or for features that require customized map navigation behaviors.

Along with each layer is a renderer, which is responsible for actually drawing the features. Every layer has exactly one renderer and every renderer has exactly one layer. Renderers currently come in two varieties: geo.vglRenderer and geo.d3Renderer. The vgl renderer draws features in a WebGL context via the dependent vgl module [3], while the d3 renderer draws features inside an SVG element using the d3 library. Both renderers not only have the same top-level API, but they also contain hooks to obtain low-level contexts for advanced usage.

In order to draw and use a map object, a special layer called a reference layer must be attached to the map. The reference layer is responsible for specifying the coordinate system and translating mouse and keyboard events into map navigation events. Currently, only one reference layer class exists in GeoJS: geo.osmLayer. This layer fetches tiles from the OpenStreetMap tile server to render tiles on demand. This layer is created as follows:

osm = map.createLayer("osm");

FEATURE LAYERS
Feature layers are a specialization of a layer that allow the user to create features such as circles and polygons over the map. The feature layer object is instantiated directly from the map object with an optional parameter specifying the target renderer.

layer = map.createLayer("feature", {renderer: "d3Renderer"});

Feature layers contain an interface to create feature objects, which are sets of drawable shapes. Out of necessity, each feature type has its own API for setting coordinate positions and rendering styles. The general scheme, however, is roughly consistent with the following example.

layer.createFeature("point")
  .positions([{x: -100, y: 40}, {x: -110, y: 35}])
  .style({color: [1, 0, 0], size: [10]});

Figure 1. Points feature example in GeoJS

OBJECTS AND EVENTS
The lowest level class from which all other classes inherit is the core geo.object class. This class provides all objects with a time-keeping mechanism, as well as with the interface for the internal events system. Another important base class is geo.sceneObject. From the geo.sceneObject class, all drawable objects are derived.

Meanwhile, the sceneObject defines a tree structure through which events are propagated. An event in the scene tree first propagates up the tree from parent to parent until it reaches the root node. Once the event reaches the root, the node calls its own handlers and then triggers the event on all of its children. A parent node can block events from propagating up the scene tree, as well as prevent its children from receiving events that originated from a different branch. The scene tree event model is what allows events to propagate through layers and react together.

Figure 2. Examples of customizing point and graph features using the style API

TESTING
GeoJS is released with a broad range of unit tests, ensuring that changes do not break API calls and that the rendering style and the behavior stay consistent. There are several unit test frameworks, from Jasmine [4] tests run with PhantomJS [5] to end-to-end multi-browser tests run with Selenium [6]. The test frameworks allow the developers to add new tests easily, often just by adding a Jasmine spec in a new file. Each test is automatically integrated into the build and the dashboard. Each test also has built-in code coverage reporting. The master branch is run nightly on dashboard machines that report to CDash [7]. Continuous integration testing is performed for every push using Travis-CI [8], whose results are also summarized by CDash.

APPLICATIONS
The DOE ClimatePipes project uses GeoJS as the framework for its client-side visualization. The backend is written in Python and uses CherryPy [9] and MongoDB [10] as a data store. The backend sends data to the client side in GeoJSON [11] that can be parsed and rendered by GeoJS.

CLIMATEPIPES ARCHIVES
The archive application enables users to access climate and geospatial datasets hosted on the Earth System Grid Federation (ESGF [12]) and a local instance of MongoDB [10] using various keywords, as well as temporal and spatial constraints. Most of the climate datasets are available in the NetCDF format. A VTK pipeline is used to read in the dataset and convert it to the GeoJSON [11] format.

The JSON is then streamed to the client side for rendering. GeoJS also supports animation. Animation is vital, as many of these datasets have a temporal component.

Figure 3. Screenshot of ClimatePipes' Archive application

CLIMATEPIPES FLOODMAPS
This application is designed to illustrate the effects of sea level changes on coastal regions. The application builds on the elevation data (90 meter resolution) from the Shuttle Radar Topography Mission [13]. In order to achieve a responsive experience, the data is aggregated to several levels of detail. Thus, as the user zooms in, the application displays higher-resolution data. The application uses lower-resolution data when the user zooms out to maintain real-time performance.

The flood level is calculated using a naive selection of all points with an elevation that is less than the selected rise. A PCL [14] outlier filter is used to remove points that are not clustered along the coastline, such as inland bodies of water. The visualization in GeoJS is completed using point sprites that are colored based on the change in sea level, which results in a heatmap effect.

Figure 4. Screenshot of ClimatePipes Floodmaps application

FUTURE DEVELOPMENTS
GeoJS is in a continuous state of development with various projects at Kitware, where it is used to visualize data in a geospatial context. We have been working on adding new features and improving the API based on feedback from other developers and from our collaborators. Specifically, we are working on the selection API and on developing new features that use WebGL as the backend. We are also working on heatmaps and flow visualization features. We would like for developers outside of Kitware to become involved in improving GeoJS and, possibly, in integrating it with other open-source GIS tools.

We would like to thank Berk Geveci and Patrick Reynolds for their support and contributions to GeoJS. We would also like to acknowledge the Department of Energy, as this work is supported, in part, by the ClimatePipes project (DOE SBIR Phase II, DE-SC0006493).

REFERENCES
1. http://d3js.org
2. ClimatePipes: http://www.kitware.com/source/home/post/114
3. WebGL visualization library: https://github.com/OpenGeoscience/vgl
4. http://jasmine.github.io
5. http://phantomjs.org
6. http://docs.seleniumhq.org
7. http://my.cdash.org/index.php?project=geojs
8. https://travis-ci.org
9. "CherryPy — A Minimalist Python Web Framework." http://www.cherrypy.org
10. "MongoDB." http://www.mongodb.org
11. GeoJSON. http://geojson.org
12. ESGF. http://esgf.org
13. "Shuttle Radar Topography Mission - Jet Propulsion Laboratory." http://www2.jpl.nasa.gov/srtm
14. PCL - Point Cloud Library (PCL). http://pointclouds.org

Aashish Chaudhary is a Technical Leader on the Scientific Computing team at Kitware. Prior to joining Kitware, he developed a graphics engine and open-source tools for information and geo-visualization. His interests include software engineering, rendering, and visualization.

Chris Harris is an R&D Engineer at Kitware. His background is in software process and high performance messaging systems. He specializes in Web technologies and distrib- uted computing.

Jonathan Beezley is an R&D Engineer at Kitware and is one of the principal developers of GeoJS. His research interests include geospatial visualization, Web technologies, and computational statistics.

VTK'S SECOND GOOGLE SUMMER OF CODE

Marcus Hanwell (Kitware), Jeff Baumes (Kitware), Aashish Chaudhary (Kitware), Berk Geveci (Kitware)

This year, the Visualization Toolkit (VTK) was accepted to participate as a "mentoring organization" in Google Summer of Code (GSoC). This is VTK's second time taking part in the program, which fosters student participation in open-source communities, encouraging them to immerse themselves in an open-source project for the summer.

VTK first participated in GSoC in 2011, for which Tharindu De Silva's proposal "Implement Select Algorithms from IEEE VisWeek 2010 in VTK" and David Lonie's proposal "Chemistry Visualization" were selected. Tharindu focused on implementing a selection of the most popular algorithms from IEEE VisWeek 2010 in VTK, and David improved support for rendering structural chemical data, including standard molecule representations. The year after VTK's first GSoC, David was hired as an R&D engineer on Kitware's Scientific Computing team to continue work on VTK as well as the Open Chemistry project.

Marcus Hanwell, a Technical Leader at Kitware who took part in GSoC 2007 as a student, has served several times in GSoC as a mentor and as VTK's organization administrator. As a student, Marcus' project focused on developing the open-source Avogadro and Kalzium projects to provide 3D molecular editing capabilities in Kalzium, a KDE-based educational project.

This year, 190 organizations were selected out of 371 applications to participate in the program. Students were asked to submit proposals to work with the selected organizations on open-source projects. VTK offered students example project ideas, which included Supporting a Visualization Grammar, Biocomputing In Situ Visualization, and Improving Hardware Volume Rendering Support. Students were also encouraged to submit their own project ideas.

There were many strong proposals, and Kitware's mentors/administrators selected three of them. Throughout the program, the mentors guided students in developing their projects and in participating in the VTK community.

ENSEMBLE VECTOR FIELDS FOR VTK
Kitware was pleased to have Summer of Code student Brad Eric Hollister work on the project "Ensemble Vector Fields for VTK" with Berk Geveci.

The vtkEnsembleSource class, newly added to VTK, manages a collection of data sources in order to represent a dataset ensemble. It has the ability to provide meta-data about the ensemble in the form of a table. Using this new source object and Open MPI/NFS, Brad was able to generate streamline data for a thousand-member vector field ensemble (derived from a two-dimensional lock-exchange simulation with uncertain boundary conditions). Computations were performed on a commodity three-node PC cluster.

The remaining goals of the project were to use the streamline data for finite-time variance analysis (FTVA) and clustering on particle data. Streamline storage amounted to approximately two terabytes, making further analysis on the available compute cluster inefficient. While CPU parallelization over the ensemble was straightforward and provided execution-time benefits, storage on a single hard drive caused an I/O bottleneck when parallelizing computation over the spatial field for FTVA. Thus, further work is being conducted by Brad to successfully complete a full-field FTVA rendering (over multiple integration steps). This work entails writing streamline data to storage that minimizes file reads by analytical methods. The streamline data should be split over multiple I/O devices, either in a RAID or other configuration, for parallel I/O.
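As a rough illustration (not from the original article), the following C++ sketch shows how an ensemble might be assembled, assuming vtkEnsembleSource exposes AddMember() and SetMetaData() as described above; the reader class, file names, and member count are placeholders rather than details of Brad's project.

// Minimal sketch: building an ensemble from per-member readers.
// Assumes vtkEnsembleSource exposes AddMember() and SetMetaData();
// the XML reader and file names are placeholders.
#include <vtkEnsembleSource.h>
#include <vtkIntArray.h>
#include <vtkNew.h>
#include <vtkTable.h>
#include <vtkXMLImageDataReader.h>

#include <string>

int main()
{
  vtkNew<vtkEnsembleSource> ensemble;

  // Meta-data table: one row per ensemble member.
  vtkNew<vtkIntArray> memberIds;
  memberIds->SetName("MemberId");

  const int numMembers = 10; // e.g. 1000 in the lock-exchange study
  for (int i = 0; i < numMembers; ++i)
  {
    vtkNew<vtkXMLImageDataReader> reader;
    reader->SetFileName(("member_" + std::to_string(i) + ".vti").c_str());
    ensemble->AddMember(reader.GetPointer());
    memberIds->InsertNextValue(i);
  }

  vtkNew<vtkTable> metaData;
  metaData->AddColumn(memberIds.GetPointer());
  ensemble->SetMetaData(metaData.GetPointer());

  // Downstream filters (e.g. a stream tracer) can now be connected to
  // ensemble->GetOutputPort() and updated once per requested member.
  ensemble->Update();
  return 0;
}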
EXTENSIONS FOR GEOSPATIAL AND CLIMATE BASED VISUALIZATIONS IN VTK
For the project "Extensions for Geospatial and Climate Visualizations in VTK," Jatin Parekh worked with Aashish Chaudhary to extend geospatial and climate data visualization capabilities in VTK. While VTK provides basic climate data visualization functionality, such as geospatial transformations using PROJ4 and a few readers, it lacked features ideal for geovisualization or geographic information systems (GIS).

This project added new features to VTK, which are described in the following paragraphs. These features will enable developers to combine the scientific and informatics capabilities of VTK with geospatial context and information.

GeoJSON is a format for encoding a variety of geographic data structures. It has become a popular format, especially in the Web community. GeoJSON supports the following geometry types: Point, LineString, Polygon, MultiPoint, MultiLineString, and MultiPolygon. A GeometryCollection represents a list of geometries, while geometries that have additional properties are Feature objects. As with geometries, a FeatureCollection represents a list of features. Since many datasets are now available in GeoJSON, support for it was added in VTK using the JSONCPP library. This library is already used in VTK for some other parsing operations, and the result of this reader is shown in Figure 1.
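As a hedged illustration (not part of the original article), the sketch below reads a GeoJSON file with the new reader, assuming it is exposed as vtkGeoJSONReader with a standard SetFileName()/output-port interface; the input file name is hypothetical.

// Sketch: reading a GeoJSON file and rendering it as polygonal data.
// Assumes the GSoC reader is exposed as vtkGeoJSONReader; the input
// file name is a placeholder.
#include <vtkActor.h>
#include <vtkGeoJSONReader.h>
#include <vtkNew.h>
#include <vtkPolyDataMapper.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRenderer.h>

int main()
{
  vtkNew<vtkGeoJSONReader> reader;
  reader->SetFileName("countries.geojson"); // hypothetical input

  vtkNew<vtkPolyDataMapper> mapper;
  mapper->SetInputConnection(reader->GetOutputPort());

  vtkNew<vtkActor> actor;
  actor->SetMapper(mapper.GetPointer());

  vtkNew<vtkRenderer> renderer;
  renderer->AddActor(actor.GetPointer());

  vtkNew<vtkRenderWindow> window;
  window->AddRenderer(renderer.GetPointer());

  vtkNew<vtkRenderWindowInteractor> interactor;
  interactor->SetRenderWindow(window.GetPointer());

  window->Render();
  interactor->Start();
  return 0;
}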


Figure 1. Output of GeoJSON reader rendered in VTK

Similarly, the LAS format is a well-known format for airborne LIDAR data. "A LAS dataset stores references to one or more LAS files on disk, as well as to additional surface features. It is an industry-standard binary format for storing airborne LIDAR data. The LAS dataset allows for fast and simple examination of LAS files in their native format, providing detailed statistics and area coverage of the LIDAR data contained in the LAS files" [1].

For this project, the LibLAS library was utilized to add support for the LAS format in VTK. The output of the LAS reader rendered in VTK is shown in Figure 2.

Figure 2. Output of LAS reader rendered in VTK

Any GIS solution expects a layer interface with the base layer rendering map tiles from a configured tile server. Depending on the camera position, the appropriate tiles (images) can be selected from a tile server, if not already cached, and then rendered into VTK at the appropriate locations. For Jatin's work, he tested and utilized the tiles from the Bing Maps and OpenStreetMap Web map tile services (WMTS). Figure 3 shows a screenshot of this work. He intends to develop a full Map API that can be added to existing VTK applications without too much effort.

Figure 3. Bing Map tiles rendered in VTK

Jatin has also worked on contour labelling and adding support for the PostGIS/Postgres database in VTK.

SUPPORTING A VISUALIZATION GRAMMAR
Kitware was pleased to have Summer of Code student Marco Cecchetti work on a significant extension to 2D charting in VTK. For his project, "Supporting a Visualization Grammar," Marco Cecchetti worked with Jeff Baumes.

The aim of this project was to provide the ability to create plots and charts inside the VTK framework with a simple declarative language. In order to achieve this goal, the Vega visualization grammar JSON format was mapped to the VTK infrastructure. Vega can be thought of as a file format for visualizations and a supporting JavaScript rendering engine. It was developed by Jeffrey Heer at the University of Washington. To try it out and download it, please visit http://trifacta.github.io/vega/. The format rolls together data and its mapping to graphical elements to concisely describe very diverse visualizations. For example, the following JSON produces the simple bar chart shown in Figure 4:

{
  "width": 400,
  "height": 200,
  "padding": {"top": 10, "left": 30, "bottom": 30, "right": 10},
  "data": [
    {
      "name": "table",
      "values": [
        {"x": 1, "y": 28}, {"x": 2, "y": 55},
        {"x": 3, "y": 43}, {"x": 4, "y": 91},
        {"x": 5, "y": 81}, {"x": 6, "y": 53}
      ]
    }
  ],
  "scales": [
    {
      "name": "x", "type": "ordinal", "range": "width",
      "domain": {"data": "table", "field": "data.x"}
    },
    {
      "name": "y", "range": "height", "nice": true,
      "domain": {"data": "table", "field": "data.y"}
    }
  ],
  "axes": [
    {"type": "x", "scale": "x"},
    {"type": "y", "scale": "y"}
  ],
  "marks": [
    {
      "type": "rect",
      "from": {"data": "table"},
      "properties": {
        "enter": {
          "x": {"scale": "x", "field": "data.x"},
          "width": {"scale": "x", "band": true, "offset": -1},
          "y": {"scale": "y", "field": "data.y"},
          "y2": {"scale": "y", "value": 0}
        },
        "update": {
          "fill": {"value": "steelblue"}
        },
        "hover": {
          "fill": {"value": "red"}
        }
      }
    }
  ]
}

Figure 4. Chart produced by the JSON specification above

The first main component of the GSoC project was to add a utility to VTK that enables conversion from Vega JSON to the graphical primitives needed to render the visualization. The original Vega JavaScript code was leveraged in Qt's V8 engine to perform this conversion and output the scene. To achieve this outside the context of the normal browser environment, custom handlers were added to support timeouts and other Web-specific functions. The resulting utility, QVegaScene, takes a Vega JSON file as command-line input and outputs a low-level scene graph, also in JSON format.

The second component of the project consisted of rendering code for the resulting scene format. New classes were added to support the Vega graphical primitives inside VTK's rendering framework, including text, symbols, paths, and images. Subclasses of vtkContextItem were used to implement each of these graphical types. A utility, VegaChart, takes a scene produced by QVegaScene as a command-line argument and renders the resulting chart into a VTK render window.

Figure 5. Examples of fully VTK-rendered Vega specifications

To try out this experimental feature, check out Marco's branch on GitHub at https://github.com/mcecchetti/VTK/commits/feature/qt-vega. For more information on this Summer of Code project, see Marco's log at [2].

CONCLUSIONS
Participating in GSoC has been a valuable experience for Kitware, and the students' projects have contributed greatly to VTK. We would like to thank Brad, Jatin, and Marco for their hard work and contributions. The GSoC projects have not only resulted in new contributions to VTK, but they have offered us the opportunity to explore new directions with talented students. This gives the community a source of fresh ideas and exposes our students to a large open-source project.

We hope that Brad, Jatin, and Marco found the experience enriching, and we thank Google for selecting VTK as a mentoring organization.

We will be sending delegates from VTK to the mentor summit in Mountain View, CA, and look forward to exchanging ideas with other organizations later this year. We also look forward to taking part in future GSoCs. We wish our students every success in their future endeavors and will continue to look for more ways to engage with the wider open-source and scientific communities to grow the VTK project.

REFERENCES
1. http://resources.arcgis.com/en/help/main/10.1/index.html#//015w00000057000000?
2. https://stackedit.io/viewer#!provider=gist&gistId=7bb5b605a88ff18a1832&filename=logbook-gsoc2014

Marcus D. Hanwell is a Technical Leader on the Scientific Computing team at Kitware, where he leads the Open Chemistry effort. He has a background in open source, open science, physics, and chemistry. He has worked in open source for over a decade.

Jeff Baumes is a Technical Leader at Kitware, where he focuses on informatics and information visualization. Jeff has a background in computer science and mathematics. His Ph.D. thesis topic was "Discovering Groups in Communications Networks."

Aashish Chaudhary is a Technical Leader on the Scientific Computing team at Kitware. Prior to joining Kitware, he developed a graphics engine and open-source tools for information and geo-visualization. His interests include software engineering, rendering, and visualization.

Berk Geveci is the Senior Director of Scientific Computing at Kitware. He is one of the lead developers of the ParaView visualization application and the Visualization Toolkit (VTK). His research interests include large scale parallel computing, computational dynamics, finite elements, and visualization algorithms.

VOLUME RENDERING IMPROVEMENTS IN VTK

Lisa Avila (Kitware), Aashish Chaudhary (Kitware), Sankhesh Jhaveri (Kitware)

As reported back in July, we are in the process of a major overhaul of the rendering subsystem in VTK. The July article focused primarily on our efforts to move to OpenGL 2.1+ to support faster polygonal rendering. This article will focus on our rewrite of the vtkGPURayCastMapper class to provide a faster, more portable, and more easily extensible volume mapper for regular rectilinear grids.

VTK has a long history of volume rendering and, unfortunately, that history is evident in the large selection of classes available to render volumes. Each of these methods was state-of-the-art at the time, but given VTK's 20+ year history, many of these methods are now quite obsolete. One goal of this effort is to reduce the number of volume mappers to ideally just two: one that supports accelerated rendering on graphics hardware and another that works in parallel on the CPU. In addition, the vtkSmartVolumeMapper would help application developers by automatically choosing between these techniques based on system performance.

In this first phase, we have created a replacement for the vtkGPURayCastMapper. Currently, this is available for testing from the VTK git repository. General instructions on how to build VTK from the source are available at http://www.vtk.org/Wiki/VTK/Git. To build the new mapper, enable the Module_vtkRenderingVolumeOpenGLNew module in CMake, via -DModule_vtkRenderingVolumeOpenGLNew=ON, in ccmake, or in cmake-gui. Once built, it can be used via vtkSmartVolumeMapper or instantiated directly. When sufficient testing by the community has been performed, this class will replace the old vtkGPURayCastMapper. In addition, we are adding this new mapper to the OpenGL2 module. Availability of the new mapper with the OpenGL2 module will improve the management of textures in the mapper and, eventually, benefit both forms of rendering (geometry and volume) by sharing common code between them.

TECHNICAL DETAILS
The new vtkGPURayCastMapper uses a ray casting technique for volume rendering. Algorithmically, it is quite similar to the older version of this class (although with a fairly different OpenGL implementation, since that original class was first written over a decade ago). We chose to use ray casting due to the flexibility of this technique, which allows us to support all the features of the software ray cast mapper but with the acceleration of the GPU.

Figure 1. Sample dataset volume rendered using the new vtkGPURayCastMapper
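As a rough usage illustration (not part of the original article), the following sketch builds a small pipeline that requests the GPU ray casting path through vtkSmartVolumeMapper; the dataset file name and transfer function values are placeholders.

// Sketch: using the GPU ray cast path through vtkSmartVolumeMapper.
// The data file and transfer-function values are placeholders.
#include <vtkColorTransferFunction.h>
#include <vtkNew.h>
#include <vtkPiecewiseFunction.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRenderer.h>
#include <vtkSmartVolumeMapper.h>
#include <vtkVolume.h>
#include <vtkVolumeProperty.h>
#include <vtkXMLImageDataReader.h>

int main()
{
  vtkNew<vtkXMLImageDataReader> reader;
  reader->SetFileName("head.vti"); // placeholder dataset

  vtkNew<vtkSmartVolumeMapper> mapper;
  mapper->SetInputConnection(reader->GetOutputPort());
  mapper->SetRequestedRenderModeToGPU(); // prefer the GPU ray cast mapper

  vtkNew<vtkColorTransferFunction> color;
  color->AddRGBPoint(0.0, 0.0, 0.0, 0.0);
  color->AddRGBPoint(255.0, 1.0, 1.0, 1.0);

  vtkNew<vtkPiecewiseFunction> opacity;
  opacity->AddPoint(0.0, 0.0);
  opacity->AddPoint(255.0, 0.8);

  vtkNew<vtkVolumeProperty> property;
  property->SetColor(color.GetPointer());
  property->SetScalarOpacity(opacity.GetPointer());
  property->ShadeOn();

  vtkNew<vtkVolume> volume;
  volume->SetMapper(mapper.GetPointer());
  volume->SetProperty(property.GetPointer());

  vtkNew<vtkRenderer> renderer;
  renderer->AddVolume(volume.GetPointer());

  vtkNew<vtkRenderWindow> window;
  window->AddRenderer(renderer.GetPointer());

  vtkNew<vtkRenderWindowInteractor> interactor;
  interactor->SetRenderWindow(window.GetPointer());

  window->Render();
  interactor->Start();
  return 0;
}

Note that vtkSmartVolumeMapper falls back to a CPU mapper when GPU rendering is unavailable, which matches the two-mapper goal described above.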

Ray casting is inherently an image-order rendering technique, with one or more rays cast through the volume per image pixel. VTK is inherently an object-order rendering system, where all graphical primitives (points, lines, triangles, etc.) represented by the vtkProps in the scene are rendered by the GPU in one or more passes (with multiple passes needed to support advanced features such as depth peeling for transparency).

The image-order rendering process for the vtkVolume is initiated when the front-facing polygons of the volume's bounding box are rendered with a custom fragment program. This fragment program is used to cast a ray through the volume at each pixel, with the fragment location indicating the starting location for that ray. The volume and all the various rendering parameters are transferred to the GPU through the use of textures (3D for the volume, 1D for the various transfer functions) and uniform variables. Steps are taken along the ray until the ray exits the volume, and the resulting computed color and opacity are blended into the current pixel value. Note that volumes are rendered after all opaque geometry in the scene to allow the ray casting process to terminate at the depth value stored in the depth buffer for that pixel (and, hence, correctly intermix with opaque geometry).

The new vtkGPURayCastMapper supports the following features:

Cropping: Two planes along each coordinate axis of the volume are used to define 27 regions that can be independently turned on (visible) or off (invisible) to produce a variety of different cropping effects, as shown in Figure 2. Cropping is implemented by determining the cropping region of each sample location along the ray and including only those samples that fall within a visible region.
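A minimal sketch of configuring cropping is shown below, assuming the new mapper inherits the standard vtkVolumeMapper cropping API; the plane positions are placeholders.

// Sketch: cropping a volume to one visible sub-region.
// Assumes the mapper inherits the standard vtkVolumeMapper cropping API;
// the extent values are placeholders.
#include <vtkGPUVolumeRayCastMapper.h>

void ConfigureCropping(vtkGPUVolumeRayCastMapper* mapper)
{
  mapper->CroppingOn();
  // Two planes per axis (xmin, xmax, ymin, ymax, zmin, zmax) define
  // the 27 cropping regions described above.
  mapper->SetCroppingRegionPlanes(10.0, 90.0, 10.0, 90.0, 10.0, 90.0);
  // Keep only the central region (a sub-volume); other flag
  // combinations produce fence and cross effects.
  mapper->SetCroppingRegionFlagsToSubVolume();
}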

Figure 2. A sphere is cropped using two different configurations of cropping regions

Wide Support of Data Types: The vtkGPURayCastMapper supports most data types stored as either point or cell data. The mapper supports one through four independent components. It also supports two and four component data representing IA or RGBA.

Clipping: A set of infinite clipping planes can be defined to clip the volume to reveal inner detail, as shown in Figure 3. Clipping is implemented by determining the visibility of each sample along the ray according to whether that location is excluded by the clipping planes.

Figure 3. Top: An example of an oblique clipping plane. Bottom: A pair of parallel clipping planes clip the volume, rendered without (left) and with (right) shading

Blending Modes: This mapper supports composite blending, minimum intensity projection, maximum intensity projection, and additive blending. See Figure 4 for an example of composite blending, maximum intensity projection, and additive blending on the same data.
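A short sketch combining both features follows, assuming the standard AddClippingPlane() and blend mode setters of the VTK volume mappers; the plane position is a placeholder.

// Sketch: adding a clipping plane and selecting a blend mode.
// AddClippingPlane() and the blend-mode setters are standard volume
// mapper API; the plane position is a placeholder.
#include <vtkGPUVolumeRayCastMapper.h>
#include <vtkNew.h>
#include <vtkPlane.h>

void ConfigureClippingAndBlending(vtkGPUVolumeRayCastMapper* mapper)
{
  // Oblique clipping plane through the middle of the volume.
  vtkNew<vtkPlane> plane;
  plane->SetOrigin(50.0, 50.0, 50.0);
  plane->SetNormal(1.0, 1.0, 0.0);
  mapper->AddClippingPlane(plane.GetPointer());

  // Switch from the default composite blending to a MIP rendering.
  mapper->SetBlendModeToMaximumIntensity();
}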

Figure 4. Blending modes using the new vtkGPURayCastMapper (from left to right): Composite, Maximum Intensity Projection, Additive

Masking: Both binary and label masks are supported. With binary masks, the value in the masking volume indicates visibility of the voxel in the data volume. When a label map is in use, the value in the label map is used to select different rendering parameters for that sample. See Figure 5 for an example of label data masks.

Figure 5. Example of a label data map to mask the volume

Opacity Modulated by Gradient Magnitude: A transfer function mapping the magnitude of the gradient to an opacity modulation value can be used to essentially perform edge detection (de-emphasize homogeneous regions) during rendering. See Figure 6 for an example of rendering with and without the use of a gradient opacity transfer function.

Figure 6. Volume rendering without (left) and with (right) gradient magnitude opacity-modulation (using the same scalar opacity transfer function)
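The sketch below illustrates the masking and gradient modulation options described above, assuming the mapper follows the existing SetMaskInput()/SetMaskTypeToBinary() masking API and that the gradient opacity is set on vtkVolumeProperty; the transfer function values are placeholders.

// Sketch: attaching a binary mask and a gradient-magnitude opacity
// transfer function. SetMaskInput()/SetMaskTypeToBinary() are assumed to
// follow the existing GPU ray cast masking API.
#include <vtkGPUVolumeRayCastMapper.h>
#include <vtkImageData.h>
#include <vtkNew.h>
#include <vtkPiecewiseFunction.h>
#include <vtkVolumeProperty.h>

void ConfigureMaskAndGradientOpacity(vtkGPUVolumeRayCastMapper* mapper,
                                     vtkImageData* mask,
                                     vtkVolumeProperty* property)
{
  // Binary mask: non-zero voxels in 'mask' remain visible.
  mapper->SetMaskInput(mask);
  mapper->SetMaskTypeToBinary();

  // De-emphasize homogeneous regions by modulating opacity with the
  // gradient magnitude (values here are placeholders).
  vtkNew<vtkPiecewiseFunction> gradientOpacity;
  gradientOpacity->AddPoint(0.0, 0.0);
  gradientOpacity->AddPoint(90.0, 0.5);
  gradientOpacity->AddPoint(100.0, 1.0);
  property->SetGradientOpacity(gradientOpacity.GetPointer());
}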

FUTURE WORK
At this point, we have a replacement class for vtkGPURayCastMapper that is more widely supported, faster, more easily extensible, and supports the majority of the features of the old class. In the near future, our goal is to ensure that this mapper works as promised by integrating it into existing applications such as ParaView and Slicer. In addition, we are working toward adding this class to the new OpenGL2 rendering system in VTK, which requires a few changes in how we manage textures. Once these tasks are complete, we have some ideas on new features we would like to add to this mapper (outlined below). We would also like to solicit feedback from folks using the VTK volume mappers. What features do you need? Drop us a line at [email protected], and let us know.

Improved Lighting / Shading: One obvious improvement that we would like to make is to more accurately model the VTK lighting parameters. The old GPU ray cast mapper supported only one light (due to limitations in OpenGL at the time the class was written). The vtkFixedPointRayCastMapper does support multiple lights, but only with an approximate lighting model, since gradients are precomputed and quantized, and shading is performed for each potential gradient direction regardless of fragment location. This new vtkGPURayCastMapper will allow us to more accurately implement the VTK lighting model to produce high-quality images for publication. In addition, we can adapt our shading technique to the volume, reducing the impact of noisy gradient directions in fairly homogeneous regions of the data by de-emphasizing shading in these regions.


Volume Picking: Currently, picking of volumes in VTK is supported by a separate vtkVolumePicker class. The new vtkGPURayCastMapper can be extended to work with VTK's hardware picker to allow seamless picking in a scene that contains both volumetric and geometric objects. In addition, supporting the picking directly in the mapper will ensure that "what you see is what you pick," since the same blending code would be used both for rendering the volume and for detecting if the volume has been picked.

2D Transfer Functions: Currently, volume rendering in VTK uses two 1D transfer functions, mapping scalar value to opacity and gradient magnitude to opacity. For some application areas, better rendering results can be obtained by using a 2D table that maps these two parameters into an opacity value. Part of the challenge in adding a new feature such as this to volume rendering in VTK is simply the number of volume mappers that have to be updated to handle it (either correctly rendering according to these new parameters or at least gracefully implementing an approximation). Once we have reduced the number of volume mappers in VTK, adding new features such as this will become more manageable.

Support for Depth Peeling: Currently, VTK correctly intermixes volumes with opaque geometry. For translucent geometry, you can obtain a correct image only if all translucent props can be sorted in depth order. Therefore, no translucent geometry can be inside a volume, as it would be, for example, when depicting a cut plane location with a 3D widget that represents the cutting plane as a translucent polygon. Nor can a volume be contained within a translucent geometric object, as it would be if, for example, the outer skin of a CT data set was rendered as a polygonal isosurface with volume mappers used to render individual organs contained within the skin surface. We hope to extend the new vtkGPURayCastMapper to support the multipass depth peeling process, allowing for correctly rendered images with intersecting translucent objects.

Improved Rendering of Labeled Data: Currently, VTK supports binary masks and only a couple of very specific versions of label mapping. We know that our community needs more extensive label mapping functionality, especially for medical datasets. Labeled data requires careful attention to the interpolation method used for various parameters. (You may wish to use linear interpolation for the scalar value to look up opacity but, perhaps, select the nearest label to look up the color.) We plan to solicit feedback from the VTK community to understand the sources of labeled data and the application requirements for visualization of this data. We then hope to implement more comprehensive labeled data volume rendering for both the CPU and GPU mappers.

Mobile Support: The move to OpenGL 2.1 means that VTK will run on iOS and Android devices. Although older devices do not support 3D textures, newer devices do. Therefore, this new volume mapper should, theoretically, work. We hope to test and refine our new mapper so that VTK is ready to be used for mobile volume rendering applications.

ACKNOWLEDGEMENTS
We would like to recognize the National Institutes of Health for sponsoring this work under the grant NIH R01EB014955 "Accelerating Community-Driven Medical Innovation with VTK." We would like to thank Marcus Hanwell and Ken Martin, who are tirelessly modernizing VTK by bringing it to OpenGL 2.1 and mobile devices and who have been providing feedback on this volume rendering effort.

The Head and Torso datasets used in this article are available on the Web at http://www.osirix-viewer.com/datasets.

Lisa Avila is Vice President, Commercial Operations at Kitware. She is one of the primary architects of VTK's volume visualization functionality and has contributed to many volume rendering efforts in scientific and medical fields ranging from seismic data exploration to radiation treatment planning.

Aashish Chaudhary is a Technical Leader on the Scientific Computing team at Kitware. Prior to joining Kitware, he developed a graphics engine and open-source tools for information and geo-visualization. His interests include software engineering, rendering, and visualization.

Sankhesh Jhaveri is an R&D Engineer on the Scientific Visualization team at Kitware. He has a wide range of experience working on open-source, multi-platform, medical imaging, and visualization systems. At Kitware, Sankhesh has been actively involved in the development of VTK, ParaView, Slicer4, and Qt-based applications.

KITWARE NEWS

BEST INDUSTRY-RELATED PAPER WON BY KITWARE AT ICPR 2014
The IEEE/IAPR International Conference on Pattern Recognition (ICPR) is the premier international conference for the latest academic and industry research in pattern recognition on images, video, biomedical data, and other domains. ICPR 2014, the 22nd in the biennial series, was held from August 24 to August 28, 2014, in Stockholm, Sweden. The conference featured 792 papers, four of which were selected as finalists for Best Industry-Related Paper. Industry-related papers were identified by track chairs as those pertaining to an application, system, or industry problem and included papers authored solely by academic institutions, as well as by industry authors. Kitware's winning paper, co-authored by Sangmin Oh, Megha Pandey, Ilseo Kim, Anthony Hoogs, and Jeff Baumes, is titled "Personalized Economy of Images in Social Forms: An Analysis on Supply, Consumption, and Saliency."

Anthony Hoogs and one author from each of the other three finalists discussed the challenges and issues of conducting research in an industry setting during a plenary session panel discussion on August 26, 2014. The winner of the highly selective award was announced following the discussion.

Kitware is developing a suite of large-scale multimedia analysis tools to advance visual content understanding, content-based search, online privacy protection, and network modeling. As detailed in the paper, these tools incorporate the latest techniques in multimedia analysis to detect objects, scenes, activities, in-scene text, and audio signals embedded in unconstrained images and videos. Patterns in behavior were detected with respect to images on Reddit.com through a unique approach based on two behavioral modes: "supply" and "consumption." Supply is defined as posting multimedia content, and consumption includes commenting on or interacting with previously posted multimedia content. Users were characterized based on the types of images they consumed and supplied.

The results of this research offer new findings to the field of social multimedia analysis. For one, it was observed that the types of images that many users supplied differed from those they consumed. This challenges the previously held notion that users are likely to post and comment on similar images. The paper also shows how to quantify and track the content-based behavior of users over time, revealing changes in their patterns. In addition, a significant proportion (15%) of users supplied more images than they consumed.

AWARD RECEIVED TO DEVELOP RETINAL IMAGE MANAGEMENT SYSTEM (RIMS)
Kitware recently announced a $150,000 Department of Defense Small Business Innovation Research (SBIR) Phase I award for the development of a Retinal Image Management System (RIMS).

Retinal damage is a debilitating condition, which, if left untreated, can lead to blindness. Retinal damage is widespread and has many causes, ranging from accidental exposure to environmental dangers to complications of disease such as diabetes (diabetic retinopathy).

While current retinal imaging tools that utilize a single retinal image to detect damage provide basic functionality for the investigation of ocular pathophysiology, more detailed information can be derived from using multiple imaging modalities. Furthermore, the tools typically used in today's research and clinical practice do not allow for comparative, simultaneous visualization and analysis of the damaged area. This places the burden on the researcher or clinician to mentally merge information from disparate sources. RIMS is designed to ease this burden by providing a single image presentation that fuses all available imaging data, and by linking to other metadata such as visual acuity test results.

RIMS will provide significant improvement in the study of physiological functional changes of the retina resulting from light-induced damage, as well as the next generation retinal pathology system for general ophthalmic medical practice. Furthermore, RIMS will simplify patient education and boost patient awareness by relating patient history and providing easy-to-understand visualizations.

For the project, "Multimodal-Multidimensional image fusion for morphological and functional evaluation of the retina," Kitware will collaborate with DualAlign LLC, a recognized expert in ocular registration. Dr. Wesley Turner, a Technical Leader at Kitware, will serve as the Principal Investigator, and Dr. Chuck Stewart will lead the effort at DualAlign.

The goal of the Phase I effort is to design a system for the registration and analysis of multi-modality retinal images, and to create a Software Development Plan (SDP), which identifies I/O formats associated with 2D and 3D imaging modalities and complementary visual function tests; delineates desirable preprocessing capabilities for noise reduction in native images; details technical approaches for image registration and data fusion; defines the graphical user interface for data management; and outlines verification procedures.

This material is based upon work supported by the United States Air Force under Contract No. FA8650-14-M-6558. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force.

A montage of retinal thickness over a wide field obtained from the Zeiss Cirrus OCT and i2k Retina. Contributed by Joe Carroll, Wisconsin College of Medicine.

WINDOWS SUPPORT FOR CMAKE ANNOUNCED BY MICROSOFT
In a Windows Blog post by Adam Denning, Microsoft recently announced Windows support for CMake. Microsoft's fork of CMake, CMakeMS, provides developers with the ability to target Windows Store and Windows Phone apps. It is available on CodePlex. According to the blog, Microsoft plans to "incorporate feedback and integrate it soon in the public CMake repository" in collaboration with the CMake community and Kitware.

MEDICAL ADVANCES HIGHLIGHTED AT MICCAI
Kitware exhibited recent work in medical computing at the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2014 conference in Boston, MA. MICCAI 2014 is one of the premier conferences in the medical computing field, highlighting topics such as medical image computing, computer-assisted intervention, visualization, computer-aided diagnosis, and new imaging applications. Kitware actively participated in MICCAI 2014 and in the 7th Annual Image-Guided Therapy workshop, which was held in conjunction with the conference.

Kitware's involvement spanned presentations, tutorials, and panel participation. Collaborative research with the University of North Carolina on atlas-based segmentation and atlas building was presented in the paper "Low-Rank to the Rescue - Atlas-based Analyses in the Presence of Pathologies," which was co-authored by Xiaoxiao Liu, Marc Niethammer, Roland Kwitt, Matthew McCormick, and Stephen Aylward.

Stephen Aylward, Kitware's Senior Director of Medical Research, served as an invited speaker for the "Building image-guidance systems from open-source components" tutorial. The hands-on tutorial showcased best practices in prototyping image-guidance systems for minimally invasive interventions using open-source software tools. In particular, the tutorial highlighted 3D Slicer.

At Kitware, Dr. Aylward directs several research projects that focus on 3D Slicer, including the use of 3D Slicer to perform surface model registrations, which is demonstrated in the recently released video, "Fusing Surface Models and Medical Images, using the Structure Sensor and 3D Slicer."

Furthermore, as an advocate for scientists pursuing influential research in medical computing, Kitware once again sponsored the Young Scientist Publication Impact Award. This award, now in its fourth year, recognizes researchers who are making impactful progress in the field of medical image analysis while early in their careers.

Meanwhile, during the 7th Annual Image-Guided Therapy workshop, Ricardo Ortiz presented work on "Finite element based biomechanical analysis of cranial shapes for craniosynostosis surgical correction," which he co-authored with Andy Bauer, Andinet Enquobahrie, Nabile Safdar, Gary Rogers, and Marius Linguraru. The goal of this work is to shift the surgical correction procedure for pediatric patients with craniosynostosis from subjective visual assessments to a virtual surgery system for optimal treatment planning.

FRAMEWORK BEING DEVELOPED FOR IMAGE GUIDANCE FOR ORTHOGNATHIC SURGERY
Kitware is developing real-time image guidance to help address pressing challenges in the orthognathic surgical community. Orthognathic surgery is used to treat severe craniofacial anomalies and dentofacial deformities such as cleft lip and palate, under bites, open bites, and sleep apnea. It involves the surgical repositioning of the facial skeleton to restore the proper anatomic and functional relationship of the jaws. If left uncorrected, misalignment of the jaws can lead to psychological distress, as well as impaired masticatory, speech, and respiratory functions, which can affect a person's overall quality of life.

While current 3D computer-aided surgical simulation and 3D surgical splint modeling tools can help improve surgical outcomes for craniofacial deformities, they do not address the issue of surgical relapse. In addition, although they allow surgeons to visualize the positions of instruments relative to an immobile jaw, these tools do not offer real-time visualization of a movable jaw. In order to address these issues, Kitware is developing a more advanced visualization technique.

The team will develop an intra-operative visualization technique that incorporates freehand ultrasound imaging technology and 3-D image registration techniques to assist oral maxillofacial surgeons as part of a Phase I Small Business Innovation Research (SBIR) project funded by the National Institute Of Dental and Craniofacial Research.

For this project, Kitware is collaborating with Dr. Tung Nguyen, Director of the Dentofacial Deformities Clinic, and Dr. Beatriz Paniagua, Research Assistant Professor at the University of North Carolina School of Dentistry. The team will begin by creating algorithms for orthognathic surgery planning, navigation, and visualization. These algorithms will be integrated into the 3D Slicer (Slicer) framework to create a prototype navigation system. The team will then evaluate the effectiveness, feasibility, and reproducibility of the protocol in simulated operating room conditions.

The project will leverage and extend best-of-breed tools including Slicer, which incorporates the Visualization Toolkit (VTK) in its infrastructure. Slicer is an open-source, cross-platform toolkit for medical image processing and analysis that supports applications such as tractography, endoscopy, tumor volume estimation, and image-guided surgery. Slicer is in active use by the biomedical imaging research community as a vehicle to translate innovative algorithms into clinical research applications. As an engineering core member of the National Alliance for Medical Image Computing (NA-MIC), Kitware has been instrumental in developing new technologies for Slicer.

Research reported in this publication was supported by the National Institute of Dental & Craniofacial Research of the National Institutes of Health under Award Number R43DE024334. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

PROMISING SOCIAL MULTIMEDIA BREAKTHROUGHS ADDRESSED AT WORKSHOP
The Science of Multi-INT (SOMI) Workshop brought together hundreds of technologists and government program managers interested in the fusion and exploration of sensor data. Dr. Anthony Hoogs gave a presentation at the workshop entitled "Concept for Fusing Social Multimedia with Overhead Sensing for Situational Awareness in Denied Areas." Social media has become a rich source of intelligence, as it is readily available in many denied areas, widely used, and nearly instantaneous. Overhead sensing and social media are highly complementary, and fusing them would enable the corroboration or refutation of information about the same person, event, or place, or lead to new insights that would not be possible from either on its own. The presentation described promising breakthroughs in automated visual content extraction from social multimedia, along with a framework for fusing them with NTM content to solve hard A2AD problems.

BRIEFING DELIVERED AT MSS SPECIALTY GROUP SYMPOSIUM
Kitware participated in the 2014 Meeting of the Military Sensing Symposium (MSS) Specialty Group on Active E-O Systems, which was held from August 26 to August 28, 2014, in Springfield, VA. The focus of the symposium was on active Electro-Optical (EO) and Infrared (IR) systems for military, intelligence, and civil applications. It was held in tandem with the MSS Electro-Optical and Infrared Countermeasures Specialty Group and provided technical and strategic information to the combined community through presentations and detailed discussions.

Rusty Blue, a Technical Leader on the Computer Vision team at Kitware, presented a brief on "Fusing 3D Point Data and Video for GPS-Denied Navigation" in the "Active Systems for Surveillance and Reconnaissance" session on August 27, 2014. During the presentation, he addressed some of the work in which Kitware has been participating that has the potential, when combined, to aid the soldier on the ground in a GPS-denied environment. Such work includes Google Project Tango, for which Kitware is a partner.

ICE BUCKET CHALLENGE WEBSITE CREATED
Kitware announced the development of an open-source website that depicts a network of participants in the popular Ice Bucket Challenge, which has raised funds for The ALS Association. The website is located at http://infovis.kitware.com/ice-bucket/.

Members of Kitware's Scientific Computing team found that despite the unending stream of news reports of celebrities performing the ALS Ice Bucket Challenge, no single report captures the true scale and proliferation of the challenge or highlights the interesting connections between the participants.

Accordingly, Kitware created a simple open-source website on the Ice Bucket Challenge to show the breadth of participation in the fundraiser. The information visualization (InfoVis) on the website also addresses how and why particular people became involved in the challenge by linking participants to those who challenged them, as well as to those who they, in turn, challenged.

For the website, a simple data format was developed that is read by the site to list the participants in a grid, show details and related individuals, and link to posted videos of participants taking the challenge.
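The exact schema of that data file is not spelled out in this article, so the following Python sketch is illustrative only: it assumes a hypothetical participants JSON layout (all field names are invented here, not the project's actual format) in which each entry records who challenged a participant and whom they challenged, which is enough to derive the network edges an InfoVis layout could draw.

import json

# Hypothetical example data; the real schema lives in the project's GitHub repository.
sample = """
[
  {"id": "alice", "name": "Alice Example", "challenged_by": [],
   "challenged": ["bob"], "video": "https://example.com/alice", "thumbnail": "alice.jpg"},
  {"id": "bob", "name": "Bob Example", "challenged_by": ["alice"],
   "challenged": [], "video": "https://example.com/bob", "thumbnail": "bob.jpg"}
]
"""

participants = json.loads(sample)

# Derive directed "who challenged whom" edges for a network visualization.
edges = [(person["id"], target)
         for person in participants
         for target in person["challenged"]]
print(edges)  # [('alice', 'bob')]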

To further promote community involvement, Kitware is encouraging additions and corrections to the website from anyone who would like to participate. Those who would like to add or propose corrections to the website can submit information through the issues page. For the developer-inclined, Kitware urges others to fork its repository on GitHub and submit pull requests with additions to its JSON data file and image thumbnails.

CMAKE.ORG WEBSITE REDESIGNED
Kitware announced the rollout of the new cmake.org website. The new website design is intended to more easily provide dynamic and up-to-date content to the community, be mobile-friendly, and make information on how to use CMake more accessible. The website includes a "Get Involved" section, where the community can learn how to contribute to the open-source toolkit; menus for resources and developer resources; and a quick-access download button on the main page. CMake follows ParaView, Tangelo, and Open Chemistry in the transition of Kitware's open-source solutions websites to the new design.

GEODESIC REGRESSION PAPER PRESENTED AT ECCV 2014
The paper "Geodesic Regression on the Grassmannian," which was written as part of a collaboration between Kitware; the University of North Carolina, Chapel Hill; the University of Salzburg; and the University of California, San Diego, was accepted for presentation at the European Conference on Computer Vision (ECCV) 2014. ECCV is one of the leading conferences for computer vision research. It was held from September 6 to September 12, 2014, in Zürich, Switzerland.

The paper was co-authored by Yi Hong, Roland Kwitt, Nikhil Singh, Brad Davis, Nuno Vasconcelos, and Marc Niethammer. It details a theory for Grassmannian geodesic regression (GGR) that extends linear regression to the Grassmannian. This work addresses the challenge of regressing data points on the Grassmann manifold over a scalar-valued variable. In addition, GGR offers a compact representation of the complete geodesic path and brings to light the possibility of statistical analysis on Grassmannian geodesics.

This research was tested on several vision challenges to demonstrate its applicability, namely the prediction of traffic speed and crowd counts from dynamical system models of surveillance videos and the modeling of aging trends in human brain structures using an affine-invariant shape representation.
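To make the analogy with linear regression concrete, the fitting problem can be sketched as a least-squares fit of a geodesic on the Grassmann manifold G(p, n); the notation below is a generic statement of geodesic regression, not necessarily the exact formulation or regularization used in the paper.

% Schematic objective: fit a geodesic \gamma(t) on the Grassmannian G(p,n)
% to observed subspaces Y_i measured at scalar values t_i (generic notation,
% not copied from the paper).
\[
  \min_{\gamma(0),\,\dot{\gamma}(0)}
  \; E(\gamma) \;=\; \frac{1}{2} \sum_{i=1}^{N} d_g^{2}\bigl(\gamma(t_i),\, Y_i\bigr),
\]
which mirrors ordinary least-squares regression,
\[
  \min_{a,\,b} \; \frac{1}{2} \sum_{i=1}^{N} \bigl(y_i - (a\,t_i + b)\bigr)^{2},
\]
with the geodesic distance $d_g$ on $G(p,n)$ replacing the Euclidean residual.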

NEW EMPLOYEES
Shawn Waldon
Shawn joined Kitware's Clifton Park, NY, office as an R&D Engineer on the Scientific Computing team. He earned his B.S. in Computer Science and Mathematics from Appalachian State University and his M.S. in Computer Science from the University of North Carolina at Chapel Hill (UNC). Prior to joining Kitware, Shawn was a Research Assistant in UNC's Computer Science department.

Betsy McPhail
Betsy joined the Kitware team at the Clifton Park, NY, office as an R&D Engineer on the Software team. Betsy received a B.S. in Computer Engineering with a minor in mathematics from Union College and an M.Eng. in Computer and Systems Engineering from Rensselaer Polytechnic Institute. Prior to joining Kitware, Betsy was a Software Engineer at Vistec Lithography, Inc.

David Manthey
David joined the Kitware team at the Clifton Park, NY, office as an R&D engineer on the Scientific Computing team. David received a B.S. in Mechanical Engineering from Rensselaer Polytechnic Institute. Prior to joining Kitware, he was a Senior Programmer for ReQuest Serious Play, LLC; a Programmer/Product Designer/Electronics Technician for Hitchcock-Manthey, LLC; and a Head Programmer/Product Designer for CamSys, Inc.

Deepak Chittajallu
Deepak joined the Kitware team at the Carrboro, NC, office as an R&D engineer on the Medical Computing team. Deepak received a B.Tech. in Computer Science and Information Technology from Jawaharlal Nehru Technological University, as well as an M.S. and a Ph.D. in Computer Science from the University of Houston. Prior to joining Kitware, he was a Postdoctoral Research Fellow at Harvard's Laboratory of Computational Cell Biology.

UPCOMING EVENTS AND COURSES
IEEE Nuclear Science Symposium & Medical Imaging Conference
November 8 to November 15, 2014 in Seattle, WA
Matt McCormick will teach a scientific Python course. The one-hour course will introduce and refresh participants regarding Pythonic practices from the perspective of a researcher with a C or C++ background. The topics that will be covered in the course include creating a reproducible computational environment with Docker; learning about interactive analysis and literate programming with the IPython shell and the IPython Notebook; surveying the fundamental scientific Python packages numpy, matplotlib, scipy, sympy, and pandas; writing efficient, compiled C/Python hybrid code with Cython; and wrapping C and C++ libraries in Python with XDress. This is a hands-on course that requires a laptop and active participation!
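As a small, self-contained taste of the packages named above (this snippet is not taken from the course materials), the following could be run in an IPython session:

import numpy as np
import pandas as pd

rng = np.random.RandomState(0)               # fixed seed for reproducibility
samples = rng.normal(loc=10.0, scale=2.0, size=1000)

df = pd.DataFrame({"measurement": samples})  # tabular view of the array
print(df.describe())                         # quick summary statistics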

Supercomputing 2014
November 16 to November 21, 2014 in New Orleans, LA
SC14 is a premier conference in the scientific computing field. Kitware's participation in SC14 includes exhibiting a ParaView Showcase at its booth (#1354), presenting tutorials on Large Scale Visualization with ParaView and In Situ Data Analysis and Visualization with ParaView Catalyst, presenting a paper on extreme scale in situ visualization and analysis, taking part in the SC14 Student Job/Opportunity Fair, and authoring a paper on sustainable software ecosystems for the second Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE2).

Advanced VTK Course
November 17, 2014 in Lyon, France
This advanced training is tailored to people who have a basic knowledge of VTK but want to extend their expertise. The training mixes theory and application with a set of tutorials and exercises. Through the course, attendees will learn parallel processing using VTK, how to use composite and temporal data, and how to develop new filters and readers. Prerequisites for the course include a basic understanding of the VTK library and knowledge of C++.

Advanced ParaView Course
November 18, 2014 in Lyon, France
This training course is tailored for ParaView users and VTK developers who have a knowledge of the VTK library and who want to grasp ParaView deployment and initiate extension development.

Advanced Image Processing with ITK
November 19, 2014 in Lyon, France
This course will cover the basic features of ITK, from its design to the integration of an ITK algorithm in an application. The course is geared toward R&D engineers and researchers in the image processing field. Objectives of this course include understanding the basic functions of ITK, learning how to create and run registration filters, understanding how to create an application using ITK, and learning how to create and run segmentation filters.

Computer Vision with OpenCV
November 20, 2014 in Lyon, France
This course will cover the main features of OpenCV, an open-source library dedicated to computer vision, through its core functionalities and advanced image processing modules. The training mixes theory and application with a set of tutorials and exercises. The objectives of the course include learning how to implement your own image processing algorithm, understanding the main structures for image processing, and learning the possibilities of using OpenCV in computer vision applications.
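For a sense of the kind of exercise such a course might involve (a hedged illustration, not an excerpt from the actual training material), a small image processing step can be assembled from OpenCV's Python bindings as follows; the input path is a placeholder.

import cv2

image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
if image is None:
    raise SystemExit("Provide an input image to run this example.")

blurred = cv2.GaussianBlur(image, (5, 5), 1.5)          # suppress noise first
edges = cv2.Canny(blurred, 50, 150)                     # detect edges

# Blend the edge map back onto the original image and save the result.
overlay = cv2.addWeighted(image, 0.7, edges, 0.3, 0)
cv2.imwrite("edges_overlay.png", overlay)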
EMPLOYMENT OPPORTUNITIES
Kitware is seeking talented, motivated, and creative individuals to fill open positions. As one of the fastest growing companies in the country, Kitware has an immediate need for software developers and researchers. In particular, we are looking for scientific visualization developers who have C++ and JavaScript skills, as well as web development skills.

In addition, Kitware internships provide current college students with the opportunity to gain hands-on experience working with leaders in their fields on cutting-edge problems. We offer our interns a challenging work environment and the opportunity to attend advanced software training.

For more details, please visit our employment site at jobs.kitware.com. Interested applicants are encouraged to submit their resumes and cover letters through our online portal.
In addition to providing readers with updates on Kitware product development and news pertinent to the open-source community, the Kitware Source delivers basic information on recent releases, upcoming changes, and technical articles related to Kitware's open-source projects.

For an up-to-date list of Kitware's projects and to learn about areas into which the company is expanding, please visit the open source pages on the website at http://www.kitware.com/opensource/provensolutions.html.

A digital version of the Kitware Source is available in a blog format at http://www.kitware.com/source.

Kitware would like to encourage members of our active developer community to contribute to the Kitware Source. Contributions may include a technical article that describes an enhancement made to a Kitware open-source project or successes/lessons learned via developing a product built on one or more of Kitware's open-source projects.

The Kitware Source is published by Kitware, Inc., Clifton Park, New York.

Contributors: Lisa Avila, Jeff Baumes, Jonathan Beezley, Aashish Chaudhary, Andinet Enquobahrie, Julien Finet, Berk Geveci, Marcus Hanwell, Chris Harris, Sankhesh Jhaveri, and Ricardo Ortiz.

Graphic Design: Steve Jordan

Editor: Sandy McKenzie

This work is licensed under an Attribution 4.0 International (CC BY 4.0) License.

Kitware, ParaView, CMake, KiwiViewer, and VolView are registered trademarks of Kitware, Inc. All other trademarks are property of their respective owners.
