IGSTK: The Book


IGSTK: The Book
For release 4.2

Edited by Kevin Cleary, Patrick Cheng, Andinet Enquobahrie, and Ziv Yaniv

© 2009 Insight Software Consortium. All rights reserved. No part of this book may be reproduced, in any form or by any means, without the express written consent of the copyright holders. An electronic version of this document is available from http://www.igstk.org and may be used under the provisions of the IGSTK copyright found at http://www.igstk.org/copyright.htm

Contributors to this project include those listed on the cover page as well as:
Cover design: Dave Klemm, Educational Media, Georgetown University
Editor: Cynthia Kroger
Logo design: Julien Jomier
Printed by: Signature Book Printing, Gaithersburg, Maryland. http://www.signature-book.com

IGSTK: The Book
Edited by Kevin Cleary, Patrick Cheng, Andinet Enquobahrie, and Ziv Yaniv
Friday 29th May, 2009
Website: http://www.igstk.org
Email: [email protected]

"Progress is the life-style of man." - Victor Hugo, Les Misérables

About the Covers

The front and back covers show several images from the project. The front cover shows the following.
• Top left. State diagram for the spatial object component.
• Top right. Lung biopsy clinical trial using IGSTK at Georgetown University Hospital. The attending physician is Filip Banovac, MD.
• Bottom right. Architecture diagram showing tracker, spatial objects, spatial object representation, and viewers.
The back cover shows the four-quadrant display and image reslicing from the Navigator example application.

Abstract

The Image-Guided Surgery Toolkit (IGSTK) is an open-source C++ software library that provides the basic components needed to develop image-guided surgery applications. The focus of the toolkit is on robustness using a state machine architecture. IGSTK is implemented in C++. It is cross-platform, using a build environment known as CMake to manage the compilation process in a platform-independent way.

Because IGSTK is an open-source project, developers from around the world can use, debug, maintain, and extend the software. IGSTK uses a model of software development referred to as Extreme Programming. Extreme Programming collapses the usual software creation methodology into a simultaneous and iterative process of design-implement-test-release. The key features of Extreme Programming are communication and testing. Communication among the members of the IGSTK community is what helps manage the rapid evolution of the software. Testing is what keeps the software stable. In IGSTK, an extensive testing process (using a system known as CDash) is in place that measures the quality on a daily basis. The IGSTK Testing Dashboard is posted continuously, reflecting the quality of the software at any moment.

This book is a guide to using IGSTK and developing image-guided surgery applications with IGSTK.
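The state machine architecture mentioned above means that a component changes state only in response to explicit requests, and that a request which is not valid in the component's current state is rejected in a controlled way rather than being allowed to leave the component in an undefined condition. The following C++ sketch is a minimal illustration of this pattern only; it is not IGSTK code, and the class, states, and method names are invented for the example. In IGSTK itself, components declare their states, inputs, and transitions, and report request outcomes through events.

// Minimal sketch of the request/state-machine pattern (illustrative only,
// not the IGSTK API): a tracker-like component that ignores requests
// which are not valid in its current state instead of failing.
#include <iostream>
#include <string>

class TrackerSketch
{
public:
  enum class State { Idle, Initialized, Tracking };

  void RequestInitialize()
  {
    if (m_State == State::Idle)
    {
      m_State = State::Initialized;
      Report("initialized");
    }
    else
    {
      Report("initialize request ignored in current state");
    }
  }

  void RequestStartTracking()
  {
    if (m_State == State::Initialized)
    {
      m_State = State::Tracking;
      Report("tracking started");
    }
    else
    {
      Report("start-tracking request ignored in current state");
    }
  }

  void RequestStopTracking()
  {
    if (m_State == State::Tracking)
    {
      m_State = State::Initialized;
      Report("tracking stopped");
    }
    else
    {
      Report("stop-tracking request ignored in current state");
    }
  }

private:
  void Report(const std::string& message) const
  {
    // IGSTK communicates request outcomes through events; here we simply print.
    std::cout << message << std::endl;
  }

  State m_State = State::Idle;
};

int main()
{
  TrackerSketch tracker;
  tracker.RequestStartTracking(); // ignored: the tracker is not initialized yet
  tracker.RequestInitialize();
  tracker.RequestStartTracking(); // now valid
  tracker.RequestStopTracking();
  return 0;
}

The Request prefix mirrors the naming convention used throughout the toolkit: callers never set a component's state directly; they issue requests, and the state machine decides whether and how each request is honored.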
Contributors

The Image-Guided Surgery Toolkit (IGSTK) has been created by the efforts of many individuals and organizations. This book lists a few of these contributors in the following paragraphs. Not all contributors are credited here, so please check the CVS source logs for code contributions. The following is a brief description of the contributors to this software guide.

Kevin Cleary is an Associate Professor in the Department of Radiology's Imaging Science and Information Systems Center at Georgetown University Medical Center. His research focuses on image-guided surgery and medical robotics. He is the IGSTK principal investigator and manages the project. Contact him at [email protected].

Patrick Cheng is a Software Engineer in the Imaging Science and Information Systems Center at Georgetown University. His research interests include medical imaging, image-guided surgery, and open source software development. He is one of the main developers of IGSTK and his major contribution is developing applications based on IGSTK.

Andinet Enquobahrie is a Research and Development Engineer at Kitware Inc. Dr. Enquobahrie has extensive experience in the development of image visualization and analysis tools for computer-aided diagnosis and image-guided intervention applications. He has been actively developing various components of the IGSTK toolkit since joining Kitware in 2005. Currently, he is a project lead coordinating various aspects of the IGSTK project at Kitware.

Ziv Yaniv is an Assistant Professor in the Department of Radiology, Georgetown University, where he conducts research in image-guided interventions. He obtained his PhD in computer science from The Hebrew University of Jerusalem, Jerusalem, Israel, in 2004. From 2004 to 2006 he was a postdoctoral fellow at Georgetown University. His main areas of interest are image-guided interventions, medical image analysis, and computer vision. Dr. Yaniv is a member of the IEEE Engineering in Medicine and Biology and IEEE Computer societies.

Stephen Aylward is Chief Medical Scientist at Kitware, Inc. Prior to joining Kitware, Stephen was a tenured Associate Professor of Radiology and director of the Computer-Aided Diagnosis and Display Laboratory at UNC. Dr. Aylward's research has recently focused on developing model-to-image registration strategies for image-guided surgery, vascular network segmentation for disease diagnosis, and digital library technologies such as the Insight Journal and MIDAS.

M. Brian Blake is a Professor of Computer Science at the University of Notre Dame. His research focuses on service-oriented computing, component-based software engineering, and workflow modeling. He was the main contributor for the requirements development for IGSTK.

Kevin Gary is an Assistant Professor in the Division of Computing Studies at Arizona State University's Polytechnic Campus. His interests are in teaching and applied research in Software Engineering, particularly distributed and web-based software architectures. Prior to academia, he worked in industry on open source solutions for eLearning.

David Gobbi is an expert in medical image analysis and visualization. He received a Ph.D. in Medical Biophysics from the University of Western Ontario, and is the original contributor of the Tracker component of IGSTK.

Özgür Güler is a computer scientist working as a research assistant at the Medical University Innsbruck, Austria. His research focuses on image-based diagnosis and therapy, visualization of medical imagery, image-guided navigation, and surgery. He is currently undertaking a PhD, researching novel paradigms for quality assurance in 3D navigation.

Luis Ibáñez is a Senior Research Engineer at Kitware Inc. He is one of the main developers and maintainers of the NLM's Insight Toolkit (ITK). His main interests are medical image analysis and open source software as a mechanism for technology dissemination. He is also an advocate of open access publishing. He is one of the main contributors to the architectural design of IGSTK.

Julien Jomier is a Research and Development Engineer at Kitware Inc.
He is a developer of the Insight Toolkit and also the main contributor of the Spatial Objects and the Spatial Object Viewer toolkit. His main areas of research include image-guided surgery and multi-modality data fusion as well as computer-aided diagnosis.

Hee-su Kim currently works for a game development company in South Korea. He was a graduate student in the Department of Computer Science at Kyungpook National University in Korea. He has interests in computer graphics, medical imaging, and related computer science fields.

Frank Lindseth is a research scientist at SINTEF Medical Technology and has been working within the national center for 3D ultrasound in surgery (Trondheim, Norway) since 1995, when the center was established. He has been working with application development based on IGSTK (CustusX), integrating real-time imaging in the toolkit (the VideoImager component), and contributing to the IGSTK design discussions (e.g., SurgicalSceneGraph and ImageReslice).

Sebastian Ordas is a biomedical engineer at IDEUNO, Argentina. His research focuses on image-guided surgery and medical image analysis. His main contributions to IGSTK are the image reslicing component and application examples.

Junichi Tokuda is a research fellow in Radiology at Brigham and Women's Hospital and Harvard Medical School. His research interests include computer-assisted intervention, especially hardware and software integration for image-guided therapy. He is one of the original developers of the OpenIGTLink protocol.

Matt Turek is an R&D Engineer at Kitware Inc. Dr. Turek has extensive academic and industry experience in developing medical applications. He has been involved in the IGSTK project since joining Kitware in 2007. His key contributions are the coordinate system and events architecture.

Hui Zhang is an imaging research engineer at Accuray Inc. He previously worked at the Imaging Science and Information Systems Center at Georgetown University. His research focuses on image-guided surgery, image registration, treatment planning, and visualization.

Funding Sources

• IGSTK Phase I and II (STTR) was funded by NIBIB/NIH (Georgetown-Kitware) Grant R42EB000374.
• The current IGSTK work is funded by NIBIB/NIH grant R01EB00719 under program officer Zohara Cohen, PhD.
• Additional support was provided by U.S. Army grant W81XWH-04-1-007, administered by the Telemedicine and Advanced Technology Research Center (TATRC), Fort Detrick, Maryland. The content of this manuscript does not necessarily reflect the position or policy of the U.S. Government.
• Özgür Güler's work on the VideoImager