Supervised Learning-Based Multimodal MRI Brain Image Analysis

Mohammadreza Soltaninejad

A thesis submitted in partial fulfilment of the requirements of the University of Lincoln for the degree of Doctor of Philosophy

November 2017

Acknowledgement

I wish to acknowledge MyHealthAvatar, which funded me during my research and study. I would like to express my special thanks to my supervisor, Dr Xujiong Ye, for her kind guidance and support. I would also like to express my sincere thanks to my second supervisor, Dr Tryphon Lambrou, for his valuable support and guidance. I would also like to extend my gratitude to Prof Nigel Allinson, head of the Laboratory of Vision Engineering. I express my gratitude to Dr Guang Yang for making the clinical MRI data available. I am also grateful to Dr Adnan Qureshi for providing the clinical data and clinical assessments. Special thanks to Prof Franklyn Howe, Dr Timothy Jones and Dr Tomas Barrick for their support and kind comments on my research. I express my gratitude to my colleague Dr Lei Zhang for his guidance and support during my study. Thanks to my mother and father, Farkhondeh and Rabi, my beloved brother, Mahdi, and my dearest sisters, Zahra and Mansureh, for their love and support.

Abstract

Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most widely used acquisition modalities in brain tumour analysis and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing field of research. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumours (tumour core and oedema) from multimodal MRI images.

In this thesis, firstly, the whole brain tumour is segmented from fluid attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features, including intensity-based, Gabor texton, fractal analysis and curvature features, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomised trees (ERT) classifier then labels each superpixel as tumour or non-tumour.

Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI dataset. A random forests (RF) classifier then classifies each supervoxel as tumour core, oedema or healthy brain tissue. The information from the advanced protocol of diffusion tensor imaging (DTI), i.e. its isotropic (p) and anisotropic (q) components, is also incorporated alongside the conventional MRI to improve segmentation accuracy.
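To make the region-wise classification idea concrete, the following is a minimal sketch, not the thesis implementation: it uses scikit-image's SLIC for superpixels, simple per-region intensity statistics as stand-ins for the richer intensity, Gabor texton, fractal and curvature features described above, and scikit-learn's ExtraTreesClassifier for the ERT step. The synthetic "FLAIR" slice, the 0.55 brightness threshold and the region labels are all hypothetical illustration data.

```python
# Sketch of region-wise tumour classification with superpixels and ERT.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
flair = rng.normal(0.3, 0.05, size=(128, 128))   # synthetic stand-in for a FLAIR slice
flair[40:70, 50:90] += 0.4                       # bright region mimicking a tumour

# Partition the slice into compact, boundary-adherent regions (superpixels).
superpixels = slic(flair, n_segments=200, compactness=0.1, channel_axis=None)

def region_features(img, sp):
    # Per-superpixel statistics; simplified stand-ins for the intensity,
    # Gabor texton, fractal and curvature features used in the thesis.
    return np.asarray([[img[sp == r].mean(), img[sp == r].std(),
                        img[sp == r].min(), img[sp == r].max()]
                       for r in np.unique(sp)])

X = region_features(flair, superpixels)
# Hypothetical region-level ground truth: 1 = tumour if most pixels are bright.
y = np.asarray([int((flair[superpixels == r] > 0.55).mean() > 0.5)
                for r in np.unique(superpixels)])

# ERT grows an ensemble of trees with randomised split thresholds, which
# reduces variance compared with a standard random forest.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
tumour_regions = np.unique(superpixels)[clf.predict(X) == 1]
tumour_mask = np.isin(superpixels, tumour_regions)   # region-wise tumour mask
```

The same pattern extends to the 3D supervoxel case: SLIC over the stacked multimodal volumes in place of a single slice, and a multi-class RF (tumour core, oedema, healthy) in place of the binary ERT.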
Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode both global information and local dependencies in the feature representation. The score map with pixel-wise predictions, learned from the multimodal MRI training dataset using the FCN, is used as a feature map. The machine-learned features, along with the hand-designed texton features, are then fed to a random forests classifier to label each MRI image voxel as normal brain tissue or one of the tumour parts.

The methods are evaluated on two datasets: 1) a clinical dataset, and 2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modality (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91, respectively, for the clinical data, and 88.09%, 6% and 0.88, respectively, for the BRATS dataset. The corresponding results for the tumour (including tumour core and oedema) with the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that applying the RF classifier to multimodal MRI images, using machine-learned features based on the FCN together with hand-designed features based on textons, provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against the ground truth on the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with the state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively.

The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons have demonstrated their advantage of providing significant information for distinguishing various patterns in both 2D and 3D spaces. The segmentation accuracy has also been largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented which complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from a shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from a deep network (with trainable filters) learn the intrinsic features. Combining global and local information through these two types of networks improves the segmentation accuracy.
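The feature-fusion step can be illustrated with a short sketch. This is an assumption-laden stand-in, not the thesis pipeline: the FCN score map is simulated with random values rather than learned from multimodal MRI, the texton side uses a tiny Gaussian filter bank with k-means clustering of the responses, and the voxel labels are hypothetical.

```python
# Sketch of fusing machine-learned (FCN score map) and hand-designed (texton)
# features into a random forests voxel classifier.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
h, w, n_classes = 64, 64, 4
image = rng.normal(size=(h, w))                  # stand-in MRI slice

# Hand-designed side: filter-bank responses are clustered into texton IDs,
# giving each pixel a discrete texture label.
responses = np.stack([gaussian_filter(image, s) for s in (1, 2, 4)], axis=-1)
textons = KMeans(n_clusters=8, n_init=4, random_state=0).fit_predict(
    responses.reshape(-1, 3))                    # one texton ID per pixel

# Machine-learned side: stand-in for the FCN's per-class score map
# (in the thesis this is learned from the multimodal MRI training data).
fcn_scores = rng.random((h * w, n_classes))

X = np.column_stack([fcn_scores, textons])       # fused per-voxel feature vector
y = rng.integers(0, n_classes, h * w)            # hypothetical voxel labels

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
segmentation = rf.predict(X).reshape(h, w)       # per-voxel class map
```

The design point is the concatenation in X: the deep network contributes globally-informed class evidence per voxel, while the texton IDs contribute local texture context, and the RF learns how to weigh the two.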
Table of Contents

Acknowledgement
Abstract
Table of Contents
List of Figures
List of Tables
List of Abbreviations

Chapter 1: Introduction
  1.1 Problem Statement
  1.2 Motivations
  1.3 Aims and Objectives
  1.4 Contributions
  1.5 Thesis Structure

Chapter 2: Clinical Background
  2.1 Introduction
  2.2 Brain Tissues
  2.3 Conventional MRI
    2.3.1 The Physics behind MRI
    2.3.2 Resonance
    2.3.3 MR Signal Generation
    2.3.4 Relaxation
    2.3.5 Tissue Contrast
    2.3.6 Conventional MRI Protocols
    2.3.7 Limitations of Conventional MRI
