Using Convolutional Neural Network to Generate Neuro Image Template

A Thesis Presented in Partial Fulfillment of the Requirements for the Degree Master of Science in the Graduate School of The Ohio State University

By Songyue Qian, Graduate Program in the Department of Electrical and Computer Engineering, The Ohio State University, 2019

Master's Examination Committee: Prof. Bradley D. Clymer, PhD, Co-Advisor; Assistant Professor Barbaros Selnur Erdal, PhD, Co-Advisor

© Copyright by Songyue Qian, 2019

Abstract

Machine learning techniques implemented with Convolutional Neural Networks (CNNs) have become the state of the art for automatic neuro image classification because of their outstanding computing ability. The template model method, a technique that produces a stable pattern from the positions of different anatomical shapes in the brain, is considered another automatic method that can determine whether a patient is healthy or not. However, most machine learning techniques focus only on detecting specific diseases and overlook the patient's overall health, and the template model's architecture may overlook the pattern of the entire brain. Therefore, a better automatic classification technique that focuses on the patient's health status is in demand. The purpose of this research is to design an efficient CNN architecture that is sensitive to normal-case diagnosis rather than disease detection.

In this research, we propose the Hybrid CNN-Siamese Network (HCSNet), a CNN architecture for 2D neuro image classification. HCSNet combines Inception ResNet V2 and a Siamese network to achieve better performance than either of those two methods alone. To illustrate why this architecture performs well on neuro imaging, we provide a comprehensive overview of existing techniques for CNN analysis, along with experiments comparing different CNNs' performance on the same neuro image dataset [1].
We merge all the abnormal cases into one abnormal class, so that there are only two classes (normal/abnormal) in the dataset. Experimental results demonstrate the effectiveness of the proposed CNN architecture: HCSNet obtains 92% overall accuracy and 94.4% accuracy for detecting normal cases. We further discuss potential future work that could build on this CNN architecture.

Dedication

This is dedicated to my parents.

Acknowledgments

I would like to thank my two thesis advisors, Prof. Bradley D. Clymer of the Department of Electrical and Computer Engineering at The Ohio State University and Dr. Barbaros Selnur Erdal of the Department of Radiology at The Ohio State University. Dr. Clymer, thank you for your guidance and patience. Dr. Erdal, it is my honor to work with you; you are the best advisor. Thank you. I would also like to thank all of my friends who have helped me improve the writing. It would have been impossible for me to finish this thesis without your assistance and support. Thank you.

Songyue Qian

Table of Contents

Abstract
Dedication
Acknowledgments
List of Tables
List of Figures

1. Introduction
   1.1 Motivation
   1.2 Problem Statement
   1.3 Organization

2. Background
   2.1 Neuroimaging Template
   2.2 Neural Network Architecture
   2.3 Contribution

3. Methodology
   3.1 Overview of Convolutional Neural Networks (CNNs)
   3.2 Neural Network Training
       3.2.1 Backpropagation
       3.2.2 Training Set Settings
   3.3 Neural Network Layers
       3.3.1 Convolutional Layers
       3.3.2 Pooling Layers
       3.3.3 Activation Layers
       3.3.4 Dropout
       3.3.5 Normalization Layers
       3.3.6 Loss Layers
   3.4 Neural Network Design
       3.4.1 LeNet
       3.4.2 AlexNet
       3.4.3 VGGNet
       3.4.4 GoogLeNet
       3.4.5 ResNet
       3.4.6 Siamese Network
       3.4.7 Hybrid CNN-Siamese Network (HCSNet)
   3.5 Pre-processing
   3.6 Analysis Techniques
       3.6.1 Qualitative Analysis by Example
       3.6.2 Confusion Matrices
       3.6.3 Learning Curves
       3.6.4 Others

4. Experiment and Results
   4.1 Parameter Settings
       4.1.1 Implementation of the Scratch Network
       4.1.2 Implementation of the Pre-trained Network
   4.2 Results and Statistics for Experiment I
   4.3 Results and Statistics for Experiment II

5. Conclusion and Future Discussion
   5.1 Conclusion
   5.2 Future Work and Discussion
       5.2.1 Dataset Size
       5.2.2 Siamese Network Parameter Settings
       5.2.3 Segmentation Region
       5.2.4 3D Object Processing

Bibliography

List of Tables

3.1 Example of a confusion matrix
4.1 Dataset distribution in Experiment I
4.2 Dataset distribution in Experiment II
4.3 Results of LeNet after 5000 steps in set 1
4.4 Results of LeNet after 5000 steps in set 2
4.5 Results of AlexNet after 5000 steps in set 1
4.6 Results of AlexNet after 5000 steps in set 2
4.7 Results of VGG-16 after 5000 steps in set 1
4.8 Results of VGG-16 after 5000 steps in set 2
4.9 Results of Inception V3 after 5000 steps in set 1
4.10 Results of Inception V3 after 5000 steps in set 2
4.11 Results of Inception V4 after 5000 steps in set 1
4.12 Results of Inception V4 after 5000 steps in set 2
4.13 Results of Inception ResNet after 5000 steps in set 1
4.14 Results of Inception ResNet after 5000 steps in set 2
4.15 Results of the Siamese Network after 5000 steps in set 1
4.16 Results of the Siamese Network after 5000 steps in set 2
4.17 Results of HCSNet after 5000 steps in set 1
4.18 Results of HCSNet after 5000 steps in set 2
4.19 Results of Inception V3 in Experiment II
4.20 Results of Inception ResNet V2 in Experiment II
4.21 Results of Inception V3 and the Siamese Network in Experiment II
4.22 Results of HCSNet in Experiment II

List of Figures

3.1 Multiple neurons with single-direction nodes
3.2 Multiple neurons with single-direction nodes
3.3 Multiple neurons with single-direction nodes
3.4 Convolutional process
3.5 Convolutional matrix
3.6 Convolutional matrix with padding=2
3.7 Max and average pooling
3.8 Sigmoid activation function
3.9 Derivative of the sigmoid activation function
3.10 tanh activation function
3.11 Derivative of the tanh activation function
3.12 ReLU activation function
3.13 Derivative of the ReLU activation function
3.14 Leaky ReLU activation
3.15 Before and after applying dropout
3.16 Dropout processing equation [2]
3.17 Architecture of LeNet-5 [3]
3.18 Layer structure of AlexNet [4]
3.19 Parameter numbers and settings of AlexNet
3.20 Architecture of VGGNet [5]
3.21 Inception module of GoogLeNet
3.22 Inception V2 module
3.23 Inception V3 module
3.24 Residual unit of ResNet
3.25 Comparison of VGG and ResNet structures
3.26 Residual units of two- and three-layer ResNet
3.27 Siamese architecture [6]
3.28 Workflow of HCSNet
4.1 Validation accuracy vs. number of steps for different networks in set 1
4.2 Validation accuracy vs. number of steps for different networks in set 2
4.3 Validation accuracy vs. epoch, Inception V3, Experiment II
4.4 Loss vs. epoch, Inception V3, Experiment II
4.5 Validation accuracy vs. epoch, Inception ResNet V2, Experiment II
4.6 Loss vs. epoch, Inception ResNet V2, Experiment II

Chapter 1: Introduction

1.1 Motivation

While advances in medical imaging constantly lead to better image quality and higher segmentation accuracy, manual segmentation remains a time-consuming and laborious process. Hence, automatic segmentation is an in-demand technique, mostly carried out by quantitative research algorithms.
In general, most quantitative research on neuroimaging focuses on aligning one or several anatomical templates to the target image (via a linear or nonlinear registration process) and transferring segmentation labels from the templates to the image. However, such methods may not be capable of capturing the full anatomical variability of the targets, because the generated model may only focus on the major components of the brain structure. One of these quantitative research algorithms generates a template.
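The register-then-transfer idea described above can be sketched with a toy example. This is a minimal illustration, not the thesis's actual pipeline: it assumes the registration step has already produced a pure-translation alignment, and it uses `scipy.ndimage.affine_transform` with nearest-neighbour interpolation (order=0) so that transferred segmentation labels stay discrete instead of being blended.

```python
import numpy as np
from scipy.ndimage import affine_transform

# Toy 2D template label map: a 2x2 region labeled "1" inside an 8x8 image.
template_labels = np.zeros((8, 8), dtype=np.int32)
template_labels[2:4, 2:4] = 1

# Hypothetical result of a linear registration step: the target anatomy is
# shifted by (+1 row, +2 cols) relative to the template, with no rotation
# or scaling (identity linear part).
linear_part = np.eye(2)
# affine_transform maps each OUTPUT coordinate o to the INPUT coordinate
# linear_part @ o + offset, so a forward shift of (+1, +2) needs offset (-1, -2).
offset = np.array([-1.0, -2.0])

# order=0 -> nearest-neighbour interpolation, which keeps label values
# integer-valued (essential when transferring segmentation labels).
target_labels = affine_transform(
    template_labels, linear_part, offset=offset, order=0,
    output_shape=template_labels.shape,
)

# The labeled region now sits at rows 3-4, cols 4-5 of the target image,
# i.e. the template segmentation has been carried onto the target grid.
```

A nonlinear registration would replace the single affine matrix with a dense displacement field, but the label-transfer step is the same: resample the template's labels through the estimated transform with nearest-neighbour interpolation.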