UNIVERSITY OF COLORADO AT COLORADO SPRINGS

APPLYING BLOB DETECTION IN

SLICER-3D

BY

CHAITHANYA KUMAR CHAVA


A project report submitted to the Graduate Faculty of the University of Colorado at Colorado Springs in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, Department of Computer Science, 2017.

This report for the Master of Science degree by Chaithanya Kumar Chava has been approved for the Department of Computer Science by

______ Advisor: Dr. Sudhanshu Semwal, Graduate Advisor for MS Computer Science, Focus GMI Program

______ Committee member: Dr. Edward Chow, Professor of Computer Science at the University of Colorado at Colorado Springs

______ Committee member: Dr. T.S. Kalkur, Professor of Electrical and Computer Engineering at the University of Colorado at Colorado Springs

______ Date


ACKNOWLEDGEMENTS

This project would not have been possible without the kind support, patience, and help of many individuals. First and foremost is my academic advisor, Professor Sudhanshu Semwal, for accepting me into his group. His guidance, constant supervision, and readiness to provide the information needed for the project helped me greatly in accomplishing my goals and completing this work.

Additionally, I would like to thank my committee members, Professor Edward Chow and Professor T.S. Kalkur, for their interest in my work. My sincere gratitude goes to them for their insightful suggestions and encouragement.

Besides my committee, I would like to thank my parents, my sister, and my brother-in-law for their continuous support and encouragement throughout my Master's.

I would like to thank all the people who contributed in some way to the work described in this project.


Abstract

The main aim of this project is to implement a blob detection technique in Slicer3D, an open-source medical imaging application. Implementing the blob detection algorithm may allow us to identify the infected region in an image. Slicer3D is used mainly for image processing algorithms and has been applied in areas such as cardiovascular imaging, neurosurgery, and prostate cancer applications. Slicer3D has MRI, X-ray, and other scanned images available in its sample-data repository. When an MRI scan of a brain tumor is available, one can run the blob detection algorithm by importing this image; the infected area of the image can then, perhaps, be isolated by our algorithm.

The requirements for this project are CMake, Git, SlikSVN, and related build tools. Slicer3D is a very useful piece of software for medical image analysis and has powerful plugin capabilities for adding algorithms. Both development and maintenance follow standard, well-documented operating procedures. Moreover, Slicer3D can be used on multiple operating systems.


Table of Contents

Abstract
1. Introduction
2. Setting up Slicer3D
3. Existing Techniques Survey
    3.1 Algorithms
        3.1.1 LOG (Laplacian of Gaussian)
        3.1.2 Blob Detection
    3.2 Goal of the Project
    3.3 Expected Output
4. Process and Implementation
    4.1 Process Flow
    4.2 Implementation
        4.2.1 Preparation
        4.2.2 Laplacian
        4.2.3 Implementation of Blob Detection
        4.2.4 Code Explanation
        4.2.5 Filters, Parameters and Their Work
5. Conversion of Images to Gray-Scaled Images
6. Accuracy
7. Future Works
USERS MANUAL
CODE
REFERENCES

FIGURES:

 Figure 1: Eco System
 Figure 2: Process Flow
 Figure 3: ExtOpenCv
 Figure 4: Sample Data
 Figure 5: Filter
 Figure 6: Laplacian Applied
 Figure 7: Blob Detected
 Figure 8: Test Laplacian
 Figure 9: Test Blob Detection


1. INTRODUCTION:

To advance the role of imaging as a biomarker of treatment, the National Cancer Institute (NCI) launched the Quantitative Imaging Network (QIN) initiative [1]. In this project we use Slicer3D, an open-source, extensible application for medical image computing and visualization. Many different image processing algorithms can be used in 3D Slicer; ours combines a Laplacian filter with Gaussian blob detection. Algorithms of this kind are already used to visualize images such as MRI and X-ray scans and to plan possible treatment. Because this type of image processing algorithm is not yet approved for use in an actual surgical/hospital environment, many researchers are working on such algorithms to make the results accurate and reliable enough to bring them into real-time practice.

3D Slicer is an image processing package whose usage has increased considerably over the last decade. 3D Slicer is cross-platform, but it still has certain requirements.

Figure 1: Eco System [1]

3D Slicer follows a modular and layered approach. At the lower level are the libraries that are not included in Slicer itself, and at the upper level are the libraries that provide the higher-level functionality. [1]


2. SETTING UP SLICER3D

First we need to download the Slicer source code. After downloading, we clone the GitHub repository using Git Bash [2] (create a folder first):

git clone git://github.com/Slicer/Slicer.git

Once the cloning completes, the code will be downloaded into the folder that was created.

Setting up the developer environment:

In Git Bash, open the downloaded Slicer folder and enter the following command:

./Utilities/SetupForDevelopment.sh

which will prompt you to enter your personal information (name and e-mail address).


Configure the Git and SVN bridge by using the following set of commands in Git Bash [2]:

cd Slicer

git svn init http://svn.slicer.org/Slicer4/trunk

git update-ref refs/remotes/git-svn refs/remotes/origin/master

git checkout master

git svn rebase

This rebuilds the code by connecting the bridge between SVN and Git. With this, all the requirements for starting to work with Slicer are set up.


3. EXISTING TECHNIQUES SURVEY:

3.1 ALGORITHMS:

Image processing algorithms are algorithms used to create, process, communicate, and display digital images. They cover several tasks, such as removing noise and reducing blur in images. For this project, we use the Laplacian of Gaussian and blob detection.

3.1.1 LOG (Laplacian of Gaussian): [12], [13], [14], [15]

LOG is a filter that is applied to an image. The process starts by taking an image and applying a Laplacian filter to it. Call the filtered image X and the original image O. The result L, obtained by adding the original image and the filtered image, is the output of the Laplacian of Gaussian. The Laplacian of Gaussian is applied to a noisy or blurred image to obtain an image with reduced blur or noise, which is achieved by sharpening the contrast of the image. For example, see the figure below.

O = original image, X = filtered image, L = O + X = result image
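As a concrete illustration of the L = O + X idea, the following is a minimal Python/OpenCV sketch (it is not taken from the project code, and the file name brain_slice.png is only a placeholder for any 8-bit grayscale image):

import cv2

# Placeholder input; any 8-bit grayscale image will do.
original = cv2.imread("brain_slice.png", cv2.IMREAD_GRAYSCALE)   # O: original image

# Smooth first so the Laplacian responds to structure rather than noise (the "Gaussian" in LOG).
blurred = cv2.GaussianBlur(original, (3, 3), 0)

# Laplacian of the smoothed image; a signed depth keeps the negative edge responses.
log = cv2.Laplacian(blurred, cv2.CV_16S, ksize=3)
filtered = cv2.convertScaleAbs(log)                               # X: filtered image

# L = O + X, with cv2.add saturating the result to the 8-bit range.
result = cv2.add(original, filtered)                              # L: result image

cv2.imwrite("log_result.png", result)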

3.1.2 BLOB DETECTION:

Blob detection methods are used to detect regions in an image that differ in their properties, such as color, shape, or size, from their surroundings. Blob detection is used because important changes in image information can be captured more accurately than with edge detectors alone. Blob detection is implemented by choosing a property and using an equivalent function that tests for it.
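A minimal sketch of this idea with OpenCV's SimpleBlobDetector is shown below (the project's full code appears in the CODE section; the image path here is only a placeholder):

import cv2
import numpy as np

# Placeholder path; an 8-bit grayscale image is assumed.
img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

# Configure a detector on one property (here: bright regions of a minimum size).
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255        # detect light blobs; use 0 for dark blobs
params.filterByArea = True
params.minArea = 10           # ignore regions smaller than 10 pixels

detector = cv2.SimpleBlobDetector_create(params)   # OpenCV 3.x API
keypoints = detector.detect(img)

# Draw the detected blobs as red circles whose size matches the blob size.
out = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 0, 255),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("sample_blobs.png", out)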

3.2 GOAL OF THE PROJECT:

The main aim of this project is to create blobs in an image that identify the changes in a property of the original image. The objectives of this project can be summed up as:

 To write Python code using modules and functions for blob detection in 3D Slicer.

 To show accurate blobs and implement a Laplacian filter.

 To develop code that implements an algorithm for recognizing infected parts of the human body.

 To filter blobs based on shape, color, and size.

 To integrate the plugin into Slicer3D.


3.3 EXPECTED OUTPUT:

 Several images showing the filtered or contrasted image produced by the Laplacian.

 Blobs identified in the image.

4. IMPLEMENTATION DETAILS:

4.1 PROCESS FLOW:

ORIGINAL IMAGE → (1) apply Laplacian → FILTERED IMAGE → (2) addition of both → RESULT IMAGE → (3) apply blob detection → IMAGE WITH BLOBS POINTING OUT THE CHANGE

Figure 2: Process Flow

STEPS:

 First, download the sample image in Slicer and apply the Laplacian filter to the image.

 This results in a contrasted image. Now add the original and the LOG-filtered image; this gives the image to which blob detection is applied.

 We use the Laplaced (result) image because it is more contrasted, which makes blob detection easier to apply.

 Finally, apply the blob detection algorithm (a compact end-to-end sketch in Python follows this list).
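The sketch below strings these steps together with Python/OpenCV. It is a condensed illustration of the flow rather than the project's exact module code, and MRHead_red.png is only a placeholder for a slice exported from Slicer.

import cv2
import numpy as np

# 1) Load the sample image (placeholder path) and apply the Laplacian filter.
original = cv2.imread("MRHead_red.png", cv2.IMREAD_GRAYSCALE)
filtered = cv2.convertScaleAbs(cv2.Laplacian(original, cv2.CV_16S))

# 2) Add the original and the LOG-filtered image to get the contrasted result image.
result = cv2.add(original, filtered)

# 3) Apply blob detection to the result image.
params = cv2.SimpleBlobDetector_Params()
params.minThreshold = 5
params.maxThreshold = 200
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(result)

# 4) Mark the blobs that point out the change.
marked = cv2.drawKeypoints(result, keypoints, np.array([]), (0, 0, 255),
                           cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("MRHead_red_blobs.png", marked)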

4.2 IMPLEMENTATION:

We start by installing all the modules required for the project. Install the Slicer3D application and the other requirements as explained in "Setting up Slicer3D".

4.2.1 PREPARATION:

First we need to create a file with a .py extension in the directory, named after your main class. All the code used in this project is stored in this file.

Figure 3: ExtOpenCv


4.2.2 LAPLACIAN:

To implement the Laplacian filter in Slicer3D, first run the Slicer3D application. Once Slicer is running, go to Extensions and select Add Extensions; there you can see the C# extension for Slicer. Install that extension, as we need both C# and Python. Now create a module in Slicer by searching for "Extension Wizard" and clicking Select Extension. After you click the button, Windows Explorer opens, and by selecting the folder we can add the extension. After successfully adding the extension, download sample data such as 'MRHead'; there you can see three different volumes: RED, YELLOW, and GREEN. At the top there is a drop-down menu listing all modules; select Examples and then your extension name. Set the output volume to a new volume and click the button 'Detect Blob Apply'. After that, three popup windows appear showing the three volumes with the Laplacian transformation applied. [15]

Figure 4: Sample Data


Figure 5: Filter

Figure 6: Laplacian applied
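For reference, the Laplacian step described above can also be applied to the whole volume through VTK from the module's Python code. The sketch below mirrors the commented-out VTK path in the CODE section and assumes that inputVolume and outputVolume are the vtkMRMLScalarVolumeNode objects picked in the module's selectors.

import vtk

# inputVolume / outputVolume are assumed to be the nodes chosen in the module UI.
laplacian = vtk.vtkImageLaplacian()
laplacian.SetInputData(inputVolume.GetImageData())
laplacian.SetDimensionality(3)
laplacian.Update()

# Keep the output volume aligned with the input volume in RAS space.
ijkToRAS = vtk.vtkMatrix4x4()
inputVolume.GetIJKToRASMatrix(ijkToRAS)
outputVolume.SetIJKToRASMatrix(ijkToRAS)
outputVolume.SetAndObserveImageData(laplacian.GetOutput())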

4.2.3 IMPLEMENTATION OF BLOB DETECTION:

For implementing blob detection, the process is: when adding extensions, select the code that contains the class DetectBlob and add it. Again, download sample data such as 'MRHead'; you will see three different volumes, RED, YELLOW, and GREEN. In the drop-down menu at the top, select Examples and then your extension name. Set the output volume to a new volume and click the button 'Detect Blob Apply'. Three popup windows will then appear, with blobs marking the detected changes. The figure below shows the final image after applying blob detection. Before blob detection is applied, it is necessary to follow all of the steps from the Laplacian section, as they are the same here too: we need to download the data and add the extension.

Figure 7: Blob detected

4.2.4 CODE EXPLANATION:

cv2 is the OpenCV library that supports image processing, as it includes powerful basic image processing operations. Since this library is good at detecting blobs, it was used as a reference to understand the structure of Slicer 3D extensions. The class DetectBlob(ScriptedLoadableModule) is the root class of the extension. The class DetectBlobWidget(ScriptedLoadableModuleWidget) is the widget class that represents the extension's window; its code also sets the path of the temporary image files.


The standard data structure of Slicer 3D is MRML. This data type is very complex to understand and process manually, so the images supported in Slicer 3D are separated into Red.png, Yellow.png, and Green.png, and the blob detection algorithm is then applied to these separated images. To separate the different images from the Slicer 3D image, the area of each of the Red, Yellow, and Green volumes is saved to a temporary file under the "pyFilePath" path. Each volume is captured and saved to a file using the "captureVolume" function. The blob detection function then handles image reading, color-space transformation, and the Laplacian operator from the OpenCV library; "cvimage" represents the temporary image saved by captureVolume. The Laplacian image expresses only the contours of the image, which is why it must be added back to the original.
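A condensed sketch of that flow, as it appears inside DetectBlobWidget.onApplyButton in the CODE section, is given below; the dictionary is only an illustrative regrouping of the same calls, not the literal code.

import os
import inspect

# Directory that holds this .py module; temporary slice images are written under it.
pyFilePath = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))

views = {
    "Red":    ("vtkMRMLSliceNodeRed",    pyFilePath + "//tmp//Red.png"),
    "Yellow": ("vtkMRMLSliceNodeYellow", pyFilePath + "//tmp//Yellow.png"),
    "Green":  ("vtkMRMLSliceNodeGreen",  pyFilePath + "//tmp//Green.png"),
}

# Inside the widget method, each slice view is saved to a temporary PNG and
# then run through the Laplacian + blob detection routine.
for area, (nodeName, imagePath) in views.items():
    self.captureVolume(nodeName, imagePath)
    self.blobdetcttion(imagePath, area)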

4.2.5 FILTERS, PARAMETERS AND THEIR WORK: [11]

The filters used in this project mostly work on image processing techniques. Blob detection works mainly through the parameters of the program: color, shape, thresholding, grouping, merging, and more. For example, grouping collects all connected pixels of a distinct color into one blob. The filters, in turn, are applied to detect regions of the image. Below are the filters used in my project:

 Filter by Area: by setting values for minArea and maxArea in this filter, blobs are filtered based on them. The filter works on pixel counts; for example, if maxArea is set to 100, only blobs with fewer than 100 pixels are kept.

 Filters under SHAPE


1) Filter by Circularity: a minimum circularity is set for this filter in this project; as a result, both smaller and bigger blobs can be seen in Figure 5.

2) Filter by Convexity (minConvexity): convexity is defined as the area of the blob divided by the area of its convex hull. A minimum value for this filter can also be selected.

3) Filter by Inertia: inertia takes different values depending on the shape; circles, lines, and squares all have different values, and a minimum value can be specified.

 Filter by Color: another parameter; if the blob color value is set to 0, all the darker blobs in the image are detected. Since we are working with MRI-scanned images, the bone matter appears as lighter, blob-like structures, so we set the blob color value to 255 so that the program recognizes pixels with that value.

Before we apply the above-mentioned filters, the parameters are set as follows (a configuration sketch follows this list):

 Thresholding is the first method applied to the image: the input image is converted into several binary images between the given minimum and maximum threshold values.

 Grouping collects the closest white or dark pixels in each binary image.
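Putting the filters and parameters above together, the configuration below mirrors the values used in the project's code (see the CODE section); the two color-filter lines follow the description above rather than the code, so they are an assumption here.

import cv2

params = cv2.SimpleBlobDetector_Params()

# Thresholding: the input is binarized at steps between these two values.
params.minThreshold = 5
params.maxThreshold = 200

# Filter by Area: drop blobs with fewer than minArea pixels.
params.filterByArea = True
params.minArea = 10

# Shape filters: circularity, convexity (blob area / convex hull area), and inertia.
params.filterByCircularity = True
params.minCircularity = 0.1
params.filterByConvexity = True
params.minConvexity = 0.87
params.filterByInertia = True
params.minInertiaRatio = 0.01

# Filter by color: 255 keeps the lighter (bone-like) blobs, 0 would keep the darker ones.
params.filterByColor = True
params.blobColor = 255

detector = cv2.SimpleBlobDetector_create(params)   # OpenCV 3.x API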

5 CONVERSION OF IMAGES TO GRAY SCALED IMAGES:

In this project, as most of the coding is based on OpenCV and Python, a conversion method that works more accurately for blob detection is used. The flag cv2.COLOR_BGR2GRAY is used to convert the BGR image to a gray-scale image. The BGR2GRAY flag is used because it is more effective than RGB on MRI-scanned images. After the conversion to a gray-scaled image, we apply the Laplacian filter to the image; this works most effectively on gray-scaled images. cv2.COLOR_BGR2HSV is another of the most reliable image conversion flags in use nowadays.
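A small example of these conversions is sketched below; Red.png stands in for any slice exported from Slicer3D.

import cv2

bgr = cv2.imread("Red.png")                      # OpenCV loads color images as BGR
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)     # gray-scale image used before the Laplacian
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)       # alternative color space mentioned above

laplacian = cv2.Laplacian(gray, cv2.CV_8U)       # Laplacian applied to the gray-scaled image
cv2.imwrite("Red_gray_laplacian.png", laplacian)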

6 ACCURACY OF DETECTION:

To validate whether the blob detection is accurate, some sample tests on images that are not from medical processing are shown below. For example, Figure 8 shows an image that has darker, blob-like structures in it.

Figure 8: Test Laplacian        Figure 9: Test Blob Detection

7 FUTURE WORKS:

In this project, the blob detection algorithm was successfully implemented, along with methods such as color detection. The blob detection methods used can identify and isolate similar areas based on the parameters explained earlier. Image segmentation using the RGB values of the image, and processing them, would give more accurate results.


USERS MANUAL:

1. The first step is to create a DetectBlob.py file with all the code written in it.

2. Next, open the Slicer 3D interface.

3. Now click the drop-down menu at the top and go to Developer Tools >> Extension Wizard >> Select Extension.

4. Windows Explorer will pop up; there you can choose your DetectBlob.py file.

5. Next click the button "Download Sample Data", or "Add Data" for your own data.

6. Now go back to the drop-down menu >> Examples >> select the DetectBlob module.

7. Set the output volume value to "Create New Volume".

8. Click "Detect Blob Apply".

9. For the Laplacian of Gaussian filter, make the changes in the code, save the file, and follow the same steps. [15]

10. Using the button "Save Data", the output images can be saved (optional).


CODE

Main classes: DetectBlob, DetectBlobWidget, DetectBlobTest, DetectBlobLogic.

Explanation of the code:

import os
import unittest
import vtk, qt, ctk, slicer
from slicer.ScriptedLoadableModule import *
import logging
import cv2  # this package is used to get the position and radius of blobs from the Laplacian-of-Gaussian image
import numpy as np
import inspect

#
# DetectBlob
#
class DetectBlob(ScriptedLoadableModule):
  """Uses ScriptedLoadableModule base class, available at:
  https://github.com/Slicer/Slicer/blob/master/Base/Python/slicer/ScriptedLoadableModule.py
  """
  def __init__(self, parent):
    ScriptedLoadableModule.__init__(self, parent)


    self.parent.title = "DetectBlob"  # TODO make this more human readable by adding spaces
    self.parent.categories = ["Examples"]
    self.parent.dependencies = []
    self.parent.helpText = """
This is an example of scripted loadable module bundled in an extension.
"""
    self.parent.acknowledgementText = """
This file was originally developed by Jean-Christophe Fillion-Robin, Kitware Inc.
and Steve Pieper, Isomics, Inc. and was partially funded by NIH grant 3P41RR013218-12S1.
"""  # replace with organization, grant and thanks.

#
# DetectBlobWidget
#
class DetectBlobWidget(ScriptedLoadableModuleWidget):  # Main process widget
  """Uses ScriptedLoadableModuleWidget base class, available at:
  https://github.com/Slicer/Slicer/blob/master/Base/Python/slicer/ScriptedLoadableModule.py
  """

  def setup(self):
    ScriptedLoadableModuleWidget.setup(self)

# Instantiate and connect widgets ...

#
# Parameters Area
#


parametersCollapsibleButton = ctk.ctkCollapsibleButton()
parametersCollapsibleButton.text = "Parameters"
self.layout.addWidget(parametersCollapsibleButton)

# Layout within the dummy collapsible button
parametersFormLayout = qt.QFormLayout(parametersCollapsibleButton)

#
# input volume selector
#

self.inputSelector = slicer.qMRMLNodeComboBox()

self.inputSelector.nodeTypes = ["vtkMRMLScalarVolumeNode"]

self.inputSelector.selectNodeUponCreation = True

self.inputSelector.addEnabled = False

self.inputSelector.removeEnabled = False

self.inputSelector.noneEnabled = False

self.inputSelector.showHidden = False

self.inputSelector.showChildNodeTypes = False

self.inputSelector.setMRMLScene( slicer.mrmlScene )

self.inputSelector.setToolTip( "Pick the input to the algorithm." )

parametersFormLayout.addRow("Input Volume: ", self.inputSelector)

#

# output volume selector


#

self.outputSelector = slicer.qMRMLNodeComboBox()

self.outputSelector.nodeTypes = ["vtkMRMLScalarVolumeNode"]

self.outputSelector.selectNodeUponCreation = True

self.outputSelector.addEnabled = True

self.outputSelector.removeEnabled = True

self.outputSelector.noneEnabled = True

self.outputSelector.showHidden = False

self.outputSelector.showChildNodeTypes = False

self.outputSelector.setMRMLScene( slicer.mrmlScene )

self.outputSelector.setToolTip( "Pick the output to the algorithm." )

parametersFormLayout.addRow("Output Volume: ", self.outputSelector)

#

# Apply Button

#

self.applyButton = qt.QPushButton("DetectBlob Apply")

self.applyButton.toolTip = "Run the algorithm."

self.applyButton.enabled = False

parametersFormLayout.addRow(self.applyButton)

# connections

self.applyButton.connect('clicked(bool)', self.onApplyButton)

self.inputSelector.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)


self.outputSelector.connect("currentNodeChanged(vtkMRMLNode*)", self.onSelect)

# Add vertical spacer

self.layout.addStretch(1)

# Refresh Apply button state

# self.onSelect()

def cleanup(self):

pass

def onSelect(self):

self.applyButton.enabled = self.inputSelector.currentNode() and self.outputSelector.currentNode()

# if self.applyButton.enabled :

# inputVolume = self.inputSelector.currentNode()

# seletionNode = slicer.app.applicationLogic().GetSelectionNode()

# seletionNode.SetReferenceActiveVolumeID(inputVolume.GetID())

# slicer.app.applicationLogic().PropagateVolumeSelection(0)

def onApplyButton(self):

inputVolume = self.inputSelector.currentNode()

outputVolume = self.outputSelector.currentNode()


if not (inputVolume and outputVolume):

qt.QMessageBox.critical(slicer.util.mainWindow(), 'Blob', 'Input and Output volume are required for blob detection')

return

#display content of red window into image.

# laplacian = vtk.vtkImageLaplacian()

# laplacian.SetInputData(inputVolume.GetImageData())

# laplacian.SetDimensionality(3)

# laplacian.Update()

# ijkToRAS = vtk.vtkMatrix4x4()

# inputVolume.GetIJKToRASMatrix(ijkToRAS)

# outputVolume.SetIJKToRASMatrix(ijkToRAS)

# outputVolume.SetAndObserveImageData(laplacian.GetOutput())

# ################################################

# vtkimages = inputVolume.GetImageData()

# print vtkimages.GetNumberOfScalarComponents()

# print vtkimages.GetScalarSize()

####################################file save

pyFilePath = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))

# script directory where our python coded file is present


layoutNodeR = 'vtkMRMLSliceNodeRed'

layoutNodeY= 'vtkMRMLSliceNodeYellow'

layoutNodeG = 'vtkMRMLSliceNodeGreen'

#temporary file's path

imagePathR = pyFilePath+"//tmp//Red.png"

imagePathY = pyFilePath+"//tmp//yellow.png"

imagePathG = pyFilePath+"//tmp//Green.png"

# Capture the Red, Yellow and Green layout images to the paths above
self.captureVolume(layoutNodeR, imagePathR)

self.captureVolume(layoutNodeY, imagePathY)

self.captureVolume(layoutNodeG, imagePathG)

#implement of blob detection algorithm

self.blobdetcttion(imagePathR, "Red")

self.blobdetcttion(imagePathY, "Yellow")

self.blobdetcttion(imagePathG, "Green")

########################################################################

########

#set each detectioned image into scene ...not work but save in scene tree

# lm = slicer.app.layoutManager()

# red = lm.sliceWidget('Red')

# redLogic = red.sliceLogic()

# sceneViewsLogic = slicer.modules.sceneviews.logic()

# renderImage = qt.QImage("g:\\3.jpg")


# imageData = vtk.vtkImageData()

# offset = redLogic.GetSliceOffset()

# slicer.qMRMLUtils().qImageToVtkImageData(renderImage, imageData)

# slicer.qMRMLUtils().vtkImageDataToQImage(imageData, renderImage)

# renderImage.save("g://22.png")

#

#

# sceneViewNode = slicer.vtkMRMLSceneViewNode()

# # view1 = lm.threeDWidget(0).threeDView()

# #

# # w2i1 = vtk.vtkWindowToImageFilter()

# # w2i1.SetInput(view1.renderWindow())

# #

# # w2i1.Update()

# # image1 = w2i1.GetOutput()

# # sceneViewNode.SetScreenShotType(1)

# sceneViewNode.SetScreenShot(imageData)

# sceneViewNode.UpdateScene(slicer.mrmlScene)

# slicer.mrmlScene.AddNode(sceneViewNode)

# sceneViewNode.SetSceneViewDescription("11111111111111")

# sceneViewNode.SetName("Red")

# sceneViewNode.SetScreenShotType(1)

# sceneViewNode.StoreScene()


########################################################################

#########

seletionNode = slicer.app.applicationLogic().GetSelectionNode()

seletionNode.SetReferenceActiveVolumeID(outputVolume.GetID())

slicer.app.applicationLogic().PropagateVolumeSelection(0)

################# This function detects blobs #####################

def blobdetcttion(self, imageDataPath,imageArea):

# reads temporary image file according to image path

cvimage = cv2.imread(imageDataPath)

# cv2.imshow("cvImage", cvimage)

# converts the colored image to gray (as described in Section 5), then back to
# 3 channels so that the per-channel addition below still works
cvimage = cv2.cvtColor(cvimage, cv2.COLOR_BGR2GRAY)
cvimage = cv2.cvtColor(cvimage, cv2.COLOR_GRAY2BGR)

# get Laplace image from original image

laplacian = cv2.Laplacian(cvimage, cv2.CV_8UC4)

# cv2.imshow(imageArea+"-Laplace", laplacian)

# print img.shape


# get the height and width of the Laplace image (shape is rows, cols, channels)
height, width, channels = laplacian.shape

for i in range(int(height)):

for j in range(int(width)):

originalpixelb = cvimage.item(i, j, 0)

originalpixelg = cvimage.item(i, j, 1)

originalpixelr = cvimage.item(i, j, 2)

laplacianpixelb = laplacian.item(i, j, 0)

laplacianpixelg = laplacian.item(i, j, 1)

laplacianpixelr = laplacian.item(i, j, 2)

laplacian.itemset((i, j, 0), laplacianpixelb + originalpixelb)

laplacian.itemset((i, j, 1), laplacianpixelg + originalpixelg)

laplacian.itemset((i, j, 2), laplacianpixelr + originalpixelr)

# cv2.imshow("Laplacian Images", laplacian)

img = cv2.GaussianBlur(laplacian, (3, 3), 0)

# cv2.imshow("Gaussian", img)

################simple blobdetcttor

params = cv2.SimpleBlobDetector_Params()

params.minThreshold = 5;

params.maxThreshold = 200;

# Filter by Area.


params.filterByArea = True

params.minArea = 10

# Filter by Circularity

params.filterByCircularity = True

params.minCircularity = 0.1

# Filter by Convexity

params.filterByConvexity = True

params.minConvexity = 0.87

# Filter by Inertia

params.filterByInertia = True

params.minInertiaRatio = 0.01

detector = cv2.SimpleBlobDetector_create(params)

keypoints = detector.detect(img)

im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 0, 255),

cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

cv2.imshow(imageArea, im_with_keypoints)

def captureImageFromVolume(self,layoutName,imagePath):#screenshot

widget = slicer.app.layoutManager().sliceWidget(layoutName)

view = widget.sliceView()

image = qt.QPixmap.grabWidget(view).toImage()

image.save(imagePath)


def captureVolume(self,nodeName,imagePath):#extract volume image

sliceNode = slicer.mrmlScene.GetNodeByID(nodeName)

appLogic = slicer.app.applicationLogic()

sliceLogic = appLogic.GetSliceLogic(sliceNode)

sliceLayerLogic = sliceLogic.GetBackgroundLayer()

realContent = sliceLayerLogic.GetImageData()

captureImage = qt.QImage(256, 256, qt.QImage.Format_RGB32)

slicer.qMRMLUtils().vtkImageDataToQImage(realContent, captureImage)

captureImage.save(imagePath)

#

# DetectBlobLogic

#

class DetectBlobLogic(ScriptedLoadableModuleLogic):

"""This class should implement all the actual

computation done by your module. The interface

should be such that other python code can import

this class and make use of the functionality without

requiring an instance of the Widget.

Uses ScriptedLoadableModuleLogic base class, available at:


https://github.com/Slicer/Slicer/blob/master/Base/Python/slicer/ScriptedLoadableModule.py

"""

def hasImageData(self,volumeNode):

"""This is an example logic method that

returns true if the passed in volume

node has valid image data

"""

if not volumeNode:

logging.debug('hasImageData failed: no volume node')

return False

if volumeNode.GetImageData() is None:

logging.debug('hasImageData failed: no image data in volume node')

return False

return True

def isValidInputOutputData(self, inputVolumeNode, outputVolumeNode):

"""Validates if the output is not the same as input

"""

if not inputVolumeNode:

logging.debug('isValidInputOutputData failed: no input volume node defined')


return False

if not outputVolumeNode:

logging.debug('isValidInputOutputData failed: no output volume node defined')

return False

if inputVolumeNode.GetID()==outputVolumeNode.GetID():

logging.debug('isValidInputOutputData failed: input and output volume is the same. Create a new volume for output to avoid this error.')

return False

return True

class DetectBlobTest(ScriptedLoadableModuleTest):

"""

This is the test case for your scripted module.

Uses ScriptedLoadableModuleTest base class, available at:
https://github.com/Slicer/Slicer/blob/master/Base/Python/slicer/ScriptedLoadableModule.py
"""

def setUp(self):

slicer.mrmlScene.Clear(0)

def runTest(self):

"""Run as few or as many tests as needed here.

"""

self.setUp()


REFERENCES:

THEORY LINKS

1. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network.
2. Referred for dynamic logic and mathematical functions: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3890092/
3. Programming in Slicer. Sonia Pujol, Ph.D., Surgical Planning Laboratory, Harvard Medical School.
4. A.M.R. Schilham, B. van Ginneken, M. Loog, "Multi-scale nodule detection in chest radiographs", in: Medical Image Computing and Computer-Assisted Intervention, Editor(s): R.E. Ellis, T.M. Peters, Springer, 2003, vol. 2878, Lecture Notes in Computer Science, pp. 602-609.
5. B.M. ter Haar Romeny, B. Titulaer, S.N. Kalitzin, G. Scheffer, F. Broekmans, J.J. Staal, E. te Velde, "Computer assisted human follicle analysis for fertility prospects with 3D ultrasound", in: Information Processing in Medical Imaging, Editor(s): A. Kuba, M. Sámal, A. Todd-Pokropek, Springer-Verlag, 1999, vol. 1613, LNCS, pp. 56-69.
6. https://www.slicer.org/slicerWiki/index.php/Documentation/Nightly/Developers/Build_Instructions
7. Chen-Ping Yu, Guilherme Ruppert, Robert Collins, Dan Nguyen, Alexandre Falcao, Yanxi Liu.
8. L. Bretzner and T. Lindeberg, "Feature tracking with automatic selection of spatial scales", Computer Vision and Image Understanding, 71(3):385-392, 1998.
9. G. Gerig, G. Szekely, G. Israel and M. Berger, "Detection and characterization of unsharp blobs by curve evolution", in Proc. of Information Processing in Medical Imaging, pp. 165-176, 1995.
10. https://www.learnopencv.com/blob-detection-using-opencv-python-c/
11. R. Haralick and L. Shapiro, Computer and Robot Vision, Vol. 1, Addison-Wesley Publishing Company, 1992, pp. 346-351.
12. B. Horn, Robot Vision, MIT Press, 1986, Chap. 8.
13. D. Marr, Vision, Freeman, 1982, Chap. 2, pp. 54-78.
14. D. Vernon, Machine Vision, Prentice-Hall, 1991, pp. 98-99, 214.
15. Ashley Whiteside and Sudhanshu Kumar Semwal, Isolating Bone and Gray Matter in MRI Images using 3D Slicer, internal report for Independent Study CS9600, GMI Program, UCCS, pp. 1-10 (Summer 2016).


API LINKS

1. https://www.slicer.org/doc/html/classes.html
2. http://viewvc.slicer.org/viewvc.cgi/Slicer4/trunk/
3. http://www.vtk.org/gitweb?p=VTK.git;a=blob;f=Examples/VolumeRendering/Python/VolumePicker.py
4. http://marc.info/?l=vtkusers&m=126477547606837
5. http://mwoehlke-kitware.github.io/Slicer/Base/slicer.html
6. https://git.framasoft.org/OpenAtWork/Slicer/blob/26ccdb766ea73e2fab3aee12bed663d2d50ea63e/Applications/SlicerApp/Testing/Python/SlicerMRBTest.py
7. https://chaelatten.wordpress.com/2016/03/22/using-simpleblobdetector/

STANDARD DATA SETS REFERENCE:

 CV Online: Image Databases; Biological and Medical Images; OASIS at the Washington University Alzheimer's Disease Research Center; Dr. Randy Buckner at the Howard Hughes Medical Institute at Harvard University; the Neuro Informatics Research Group (NRG) at the Washington University School of Medicine; and the Biomedical Informatics Research Network (BIRN).
