J Math Imaging Vis (2018) 60:355–381 https://doi.org/10.1007/s10851-017-0759-8

Spatially Constrained Student's t-Distribution Based Mixture Model for Robust Image Segmentation

Abhirup Banerjee1 · Pradipta Maji1

Received: 8 January 2016 / Accepted: 5 September 2017 / Published online: 21 September 2017 © Springer Science+Business Media, LLC 2017

Abstract  The finite Gaussian mixture model is one of the most popular frameworks to model classes for probabilistic model-based image segmentation. However, the tails of the Gaussian distribution are often shorter than that required to model an image class. Also, the estimates of the class parameters in this model are affected by the pixels that are atypical of the components of the fitted Gaussian mixture model. In this regard, the paper presents a novel way to model the image as a mixture of a finite number of Student's t-distributions for the image segmentation problem. The Student's t-distribution provides a longer tailed alternative to the Gaussian distribution and gives reduced weight to the outlier observations during the parameter estimation step in the finite mixture model. Incorporating the merits of Student's t-distribution into the hidden Markov random field framework, a novel image segmentation algorithm is proposed for robust and automatic image segmentation, and the performance is demonstrated on a set of HEp-2 cell and natural images. Integrating the bias field correction step within the proposed framework, a novel simultaneous segmentation and bias field correction algorithm has also been proposed for segmentation of magnetic resonance (MR) images. The efficacy of the proposed approach, along with a comparison with related algorithms, is demonstrated on a set of real and simulated brain MR images both qualitatively and quantitatively.

Keywords  Segmentation · Student's t-distribution · Expectation–maximization · Hidden Markov random field

Corresponding author: Pradipta Maji ([email protected]); Abhirup Banerjee ([email protected])
1 Biomedical Imaging and Bioinformatics Lab, Machine Intelligence Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata, West Bengal 700108, India

1 Introduction

In image processing, segmentation refers to the process of partitioning an image space into some non-overlapping meaningful homogeneous regions. It is an indispensable step for many image processing problems, particularly for medical images. Segmentation of brain images into three main tissue classes, namely white matter (WM), gray matter (GM), and cerebro-spinal fluid (CSF), is important for many diagnostic studies. For example, in multiple sclerosis, accurate quantification of WM lesions is necessary for drug treatment assessment, while in schizophrenia and epilepsy, volumetric analysis of GM, WM, and CSF is required to characterize the morphological differences between subjects. In a similar way, automatic segmentation of human epithelial type 2 (HEp-2) cells from indirect immunofluorescence (IIF) images is a necessary step of the IIF antinuclear antibody test for the diagnosis of connective tissue diseases.

Many image processing techniques for automatic image segmentation exist throughout the literature [10,53]. Among them, the thresholding methods [39,52,59] segment scalar images by using one or more thresholds. These methods are usually very simple, fast, and work reasonably well for images with very good contrast between distinctive subregions. But they do not consider the spatial characteristics of an image, which makes them sensitive to noise and outliers. Li et al. [33] tried to remove this noise sensitivity of the thresholding algorithms by incorporating local intensity information into the framework. Lee et al. [29] also solved the similar problem by finding a boundary between two subregions using the path connection algorithm and changing the threshold adaptively. Region growing is another popular technique for image segmentation [45], which is generally applied for delineation of small, simple structures. However, it also gets affected by the presence of noise and outliers. Mangin et al. [44] solved this problem by applying a homotopic region growing algorithm, which is able to preserve the topology between initial and extracted regions. Artificial neural network-based algorithms have also been investigated for the image segmentation problem [12], applied either in supervised fashion [24] or unsupervised fashion [10,57]. Variational models [32,60] such as the snake model [47] and geodesic active contour model [56] have been applied to segment complex-shaped image structures in an efficient and automated manner. Among other techniques, self-organizing maps [34], wavelet transform [6,42,43], k-means [1], fuzzy connectedness [17,58,74], optimum-path forest [7], support vector machine [18], level set [37], and graph-cut [14,16,30] based approaches are applied in the segmentation of various image structures.

One of the major problems in image segmentation is uncertainty. Imprecision in computations and vagueness in class definitions often cause this uncertainty. In this background, the fuzzy c-means (FCM) algorithm is considered one of the most popular techniques for modeling uncertainty in image segmentation problems [11,24,31,67]. The probabilistic model [4,5] is another popular framework to model image classes for segmentation. This model generally applies the expectation–maximization (EM) algorithm that labels the pixels according to their probability values, calculated based on the intensity distribution of the image. Using a suitable assumption about the intensity distribution, the probabilistic approaches attempt to estimate the associated class label, given only the intensity of each pixel. Such an estimation problem is necessarily formulated using maximum a posteriori (MAP) or maximum likelihood (ML) principles. In this regard, the finite mixture (FM) model, more specifically, the finite Gaussian mixture (FGM) model, is one of the most popular models for image segmentation [22,35,38,51,65]. Wells et al. [65] applied the EM framework in the FGM model to achieve optimal segmentation performance, while simultaneously reducing the shading artifacts in brain MR images. Liang et al. [35] developed a statistical method to simultaneously classify the pixels and estimate the parameters of the intensity distribution of each class in multispectral images. Their EM framework assumed the distribution of image intensities to be a mixture of a finite number of multivariate normal distributions and the prior distribution of each class to be a Markov random field (MRF). In another work, Greenspan et al. [22] modeled each image class by a large number of Gaussian components in the same FGM framework. Nguyen et al. [51] proposed an extension of the standard Gaussian mixture model. Their proposed framework assumed the prior distribution of each class to be different for each pixel and dependent on its neighboring pixels. Liu and Zhang [38] integrated the level set method with the FGM framework for image segmentation in the presence of intensity inhomogeneity and noise.

However, none of the works reported above, based on either FCM or FM, considers spatial information for segmentation. In this regard, spatial information of neighboring pixels has been incorporated into the FCM framework to make it robust to the effect of noise and outliers [2,13,15,21,28,36,55,64,66,70,71], while the Markov random field (MRF) model is integrated within the FM model-based probabilistic framework [19,25,50,73]. Incorporating spatial information of the pixels with their intensity distribution, Zhang et al. [73] introduced the hidden Markov random field (HMRF) model and proposed a joint EM-HMRF framework to achieve robust segmentation in noisy environments. In another work, Held et al. [25] developed an MRF segmentation algorithm, based on the adaptive segmentation algorithm of Wells et al. [65]. Their algorithm incorporated three features, namely nonparametric distributions of tissue intensities, neighborhood correlations, and signal inhomogeneities, that are of special importance for an image. Diplaros et al. [19] introduced a generative model with the assumption that the hidden class labels of the pixels are generated by prior distributions that share similar parameters for neighboring pixels. The same EM algorithm was applied with a smoothing step interleaved between the E- and the M-steps that couples the posteriors of neighboring pixels in each iteration. In another work, Nguyen and Wu [50] proposed a new way to incorporate spatial relationships among neighboring pixels using a simple metric in the joint FGM-MRF framework.

Although the aforementioned FM- or MRF-based segmentation methods provide different ways to achieve robust image segmentation, all of them assume the underlying intensity distribution of each image class to be a Gaussian distribution. The Gaussian distribution is a unimodal distribution that attains its highest probability density value at a single point in its range of variation, namely the mean, and the probability density decreases symmetrically as one traverses away from the mean. This assumption is particularly useful to model an image class, as an image class, in general, also has a tendency to have a high concentration of intensity values around the mean, and the concentration decreases as one deviates further from the mean. However, the tails of the Gaussian distribution are often shorter than that required to model an image class. There generally exist outliers, that is, observations with large deviation from the class mean, in an image class, which are very difficult to model and have the ability to degrade the performance of the segmentation algorithm if a Gaussian distribution is fitted to model the image classes. Also, the estimates of the class parameters in this model get corrupted by the pixels that are atypical of the components of the fitted Gaussian mixture model.

The Student's t-distribution, in this regard, provides a longer tailed alternative to the Gaussian distribution. Being a more generalized probability distribution, it enables the FM model to provide a more robust fitting of datasets as compared to the FGM model. Sfikas et al. [61] modeled the intensity distribution in an image as a Student's t-mixture model (SMM) for segmentation. However, they did not take into consideration the spatial information of neighboring pixels. Nguyen and Wu [49] developed a finite Student's t-mixture model with spatial constraints (SMM-SC) for medical image segmentation, where the spatial information is incorporated as a linear smoothing filter based on the Dirichlet distribution and Dirichlet law. Since the Student's t-distribution is directly applied to model the intensity distribution in each tissue class, the estimates of the parameters in this model do not have a closed-form solution and are derived iteratively using the gradient descent algorithm. Although the performance of the algorithm was demonstrated for brain MR image segmentation tasks, the authors did not consider the problem of the bias field artifact in brain MR images. Xiong et al. [68] also developed a similar framework for image segmentation. However, in their framework, the spatial information for each pixel is incorporated by modeling the prior distribution using a function that represents the weight of each pixel belonging to a specific class. In another work, Xiong et al. [69] used a different function to represent the weight of a pixel belonging to a specific class in the same SMM framework.

Nguyen and Wu [48], in another work, presented a non-symmetric mixture model for image segmentation without incorporating the spatial information of neighboring pixels. Each tissue class, in this framework, is modeled using two Student's t-distributions to model the non-symmetric intensity distributions. Similar to their previous approach, the algorithm was applied for brain MR image segmentation, without considering the bias field artifacts. Zhang et al. [72] introduced a weighted Student's t-mixture model (WSMM) for image segmentation, where the local spatial information is incorporated by modeling the prior probability distribution as a linear combination of the posterior probabilities. Instead of using the Student's t-probability density function (pdf) in the FM model, the authors applied a function f(x) = γ x^α of the Student's t-pdf, which is the Student's t-pdf with power α and coefficient γ. Sfikas et al. [62], in another work, developed two different models imposing MRF smoothness priors on the contextual mixing proportions of a spatially varying FGM model. Unlike the aforementioned works, here the intensity distribution in each class is modeled using a Gaussian distribution, while the spatial information is modeled in two different ways using MRF priors with line processes. The first model of spatial information applies a Bernoulli prior on normally distributed local differences of the contextual mixing proportions, while the second model applies a Gamma prior on the Student's t-distributed local differences of the contextual mixing proportions.

In this regard, the paper presents a new way to model the intensity distribution of the image as a mixture of a finite number of Student's t-distributions, which can effectively model the outlier observations of each class and reduce the effect of outlier components during parameter estimation. Representing the Student's t-distribution as an infinite mixture of Gaussian distributions with Gamma priors, the proposed model provides a robust fitting of datasets, since observations that are atypical of a component are given reduced weight during the parameter estimation step. Integrating the merits of Student's t-distribution with the HMRF model, a novel image segmentation algorithm, termed as tHMRF, has been proposed. The effectiveness of the proposed algorithm has been demonstrated, and compared with related algorithms, on a set of HEp-2 cell and natural images. Modeling the log-transformed intensity values of the image as a finite mixture of Student's t-distributions, a simultaneous segmentation and bias field correction algorithm, termed as tEM, has been proposed for MR images. The bias field correction step has been judiciously integrated within the proposed EM framework for accurate estimation of the bias field to achieve optimum segmentation results. The HMRF framework is incorporated into the proposed method to include spatial information of the pixels to achieve robustness even in the presence of noise and outliers. Finally, the effectiveness of the proposed tEM algorithm, along with a comparison with related algorithms, is demonstrated on a set of real and simulated brain MR images both qualitatively and quantitatively.

The structure of the rest of this paper is as follows: Section 2 introduces the basic concepts of the FM model and HMRF framework. The proposed segmentation algorithm is introduced in Sect. 3, along with the theory of the Student's t-distribution. Section 4 presents the novel simultaneous segmentation and bias field correction algorithm, incorporating the bias field correction step into the proposed framework. Section 5 demonstrates the performance of the proposed segmentation algorithm, along with a comparison with related methods, for automatic segmentation of HEp-2 cells and natural images. A few case studies and a comparison with other methods are also presented in Sect. 5 for simultaneous segmentation and bias field correction of brain MR images. Concluding remarks are given in Sect. 6.

2 Basics of Finite Mixture and HMRF Models

This section presents the basic concepts of the finite mixture (FM) model and the HMRF framework. The proposed algorithm for image segmentation is developed based on these concepts.

2.1 Basics of Finite Mixture Model

Let S = \{1, 2, \ldots, N\} be the set of indices and X and Y be two random fields, whose state spaces are L = \{1, 2, \ldots, L\} and D = \{1, 2, \ldots, D\}, respectively. Let x and y be a configuration of X and Y, respectively. \mathcal{X} and \mathcal{Y} denote, respectively, the sets of all possible configurations. Given X_i = l, Y_i follows a conditional probability distribution

p(y_i \mid l) = f(y_i; \theta_l), \quad \forall l \in L   (1)

where \theta_l is the set of parameters of the class having label l. We also assume that (X, Y) is pairwise independent, that is,

p(x, y) = \prod_{i \in S} p(x_i, y_i).   (2)

In case of the FM model, for every l \in L and i \in S,

\omega_l = p(X_i = l)   (3)

is independent of the individual sites i \in S. So, the joint probability distribution of two configurations x and y, dependent on the model parameters \theta = \{\omega_l, \theta_l; l \in L\}, is

p(x, y \mid \theta) = \prod_{i \in S} p(x_i, y_i \mid \theta) = \prod_{i \in S} \{\omega_{x_i} f(y_i; \theta_{x_i})\}.   (4)

Hence, the marginal distribution of Y_i = y, dependent on the parameter set \theta, is given by

p(y \mid \theta) = \sum_{l \in L} p(l, y \mid \theta) = \sum_{l \in L} \omega_l f(y; \theta_l).   (5)

This model is called the FM model. In case f(y; \theta_l) = \phi(y; \theta_l), where \theta_l = (\mu_l, \sigma_l^2) and

\phi(y; \theta_l) = \frac{1}{\sqrt{2\pi}\,\sigma_l} \exp\left(-\frac{(y - \mu_l)^2}{2\sigma_l^2}\right),   (6)

the model is called the finite Gaussian mixture (FGM) model. On the other hand, if \theta_l = (\mu_l, \sigma_l, \nu_l) and

\phi(y; \theta_l) = \frac{\Gamma\left(\frac{\nu_l + 1}{2}\right)}{\sigma_l \sqrt{\pi\nu_l}\, \Gamma\left(\frac{\nu_l}{2}\right) \left[1 + \frac{(y - \mu_l)^2}{\nu_l \sigma_l^2}\right]^{(\nu_l + 1)/2}},   (7)

the model is called the finite Student's t-mixture (FtM) model.
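To make the two component densities in (6) and (7) concrete, the following short Python sketch (an illustration, not part of the original paper; the two-class parameter values are made up) evaluates an FGM and an FtM mixture density of the form (5), using SciPy's standard normal and Student's t densities.

```python
import numpy as np
from scipy.stats import norm, t

# Hypothetical two-class mixture parameters (illustrative only).
weights = np.array([0.4, 0.6])        # omega_l, summing to 1
means   = np.array([50.0, 120.0])     # mu_l
scales  = np.array([10.0, 15.0])      # sigma_l
dofs    = np.array([4.0, 8.0])        # nu_l, used only by the FtM

y = np.linspace(0.0, 255.0, 256)

# Eq. (5) with Gaussian components, Eq. (6).
fgm = sum(w * norm.pdf(y, loc=m, scale=s)
          for w, m, s in zip(weights, means, scales))

# Eq. (5) with Student's t components, Eq. (7): heavier tails for small nu_l.
ftm = sum(w * t.pdf(y, df=v, loc=m, scale=s)
          for w, v, m, s in zip(weights, dofs, means, scales))
```

With the same means and scales, the FtM curve places noticeably more mass far from the class means, which is exactly the property exploited in Sect. 3.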

2.2 Basics of HMRF Model

In a Markov random field (MRF), the sites in S are related to one another by a neighborhood system, which is defined as N = \{N_i; i \in S\}, where N_i is the set of sites neighboring i. A random field X is said to be an MRF with respect to a neighborhood system N iff

(i) p(x) > 0, \forall x \in \mathcal{X}; and (ii) p(x_i \mid x_{S - \{i\}}) = p(x_i \mid x_{N_i}).

According to the Hammersley–Clifford theorem [8], an MRF can be described using a Gibbs distribution. Hence,

p(x) = \frac{1}{Z} \exp\left(-U(x)\right)   (8)

where Z is a normalizing constant, called the partition function, and U(x) is an energy function of the form

U(x) = \sum_{c \in C} V_c(x),   (9)

which is a sum of clique potentials V_c(x) over all possible cliques C. A clique c is defined as a subset of sites in S in which each pair of distinct sites is neighbors.

Given this scenario, that is, X is an MRF over the finite state space L, if

1. the state of X is unobservable,
2. given any particular configuration x \in \mathcal{X}, every Y_i follows a known conditional probability distribution p(y_i \mid x_i) of the form f(y_i; \theta_{x_i}), and
3. for any x \in \mathcal{X}, the random variables Y_i are conditionally independent,

the model is called the HMRF model. Following the above assumptions, the marginal distribution of Y_i = y_i, dependent on the parameter set \theta = \{\theta_l; l \in L\} and X_i's neighborhood configuration X_{N_i} = \{X_j; j \in N_i\}, is modified as

p(y_i \mid x_{N_i}, \theta) = \sum_{l \in L} p(l, y_i \mid x_{N_i}, \theta) = \sum_{l \in L} p(l \mid x_{N_i})\, f(y_i; \theta_l).   (10)

Here, x_{N_i} is a realization of X_{N_i}.

3 tHMRF: Proposed Segmentation Algorithm

This section presents a new segmentation algorithm, termed as tHMRF, integrating judiciously the merits of Student's t-distribution into the HMRF framework.

3.1 Student's t-Distribution

According to the finite Gaussian mixture model, the probability density function (pdf) of a random variable Y is given by

f(y; \theta) = \sum_{l \in L} \omega_l\, \phi(y; \mu_l, \sigma_l^2),   (11)

where \phi(y; \mu_l, \sigma_l^2) = \frac{1}{\sqrt{2\pi}\,\sigma_l} \exp\left(-\frac{(y - \mu_l)^2}{2\sigma_l^2}\right).   (12)

In case of modeling outliers or a class having tails longer than normal, the data can be modeled using a two-component Gaussian mixture as [54,61]

(1 - \epsilon)\, \phi(y; \mu, \sigma^2) + \epsilon\, \phi(y; \mu, c\sigma^2),   (13)

where c is large and \epsilon is small, representing the small proportion of observations that have a relatively large variance. The mixture model of (13) can then be rewritten as

\int_{\mathbb{R}} \phi\left(y; \mu, \frac{\sigma^2}{u}\right) dH(u),   (14)

where H is the probability distribution that places mass (1 - \epsilon) at the point u = 1 and mass \epsilon at the point u = 1/c. Suppose H is now replaced by the pdf of a gamma random variable with parameters \alpha = \nu/2 and \beta = \nu/2, that is, by the random variable U distributed as

U \sim \mathrm{gamma}\left(\frac{\nu}{2}, \frac{\nu}{2}\right),   (15)

where the gamma(\alpha, \beta) density function f(u; \alpha, \beta) is given by

f(u; \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} e^{-\beta u} u^{\alpha - 1} I_{(0,\infty)}(u); \quad (\alpha, \beta > 0)   (16)

and the indicator function I_{(0,\infty)}(u) is 1 for u > 0 and is zero elsewhere. The model in (14) is then reduced to

f(y; \mu, \sigma, \nu) = \frac{\Gamma\left(\frac{\nu + 1}{2}\right)}{\sigma \sqrt{\pi\nu}\, \Gamma\left(\frac{\nu}{2}\right) \left[1 + \frac{d(y; \mu, \sigma)}{\nu}\right]^{(\nu + 1)/2}},   (17)

where d(y; \mu, \sigma) = \frac{(y - \mu)^2}{\sigma^2} denotes the Mahalanobis squared distance between scalars y and \mu with variance \sigma^2.

The expression in (17) is the pdf of the Student's t-distribution with location parameter \mu, scale parameter \sigma, and \nu degrees of freedom. In case \nu > 1, \mu becomes the mean of Y, and if \nu > 2, \frac{\nu\sigma^2}{\nu - 2} becomes the variance. As \nu tends to infinity, U converges to one with probability one, and so Y becomes marginally Gaussian with mean \mu and variance \sigma^2. Thus, the Student's t-distribution provides a heavy-tailed alternative to the Gaussian distribution with mean \mu and variance equal to a scalar multiple of \sigma^2 (if \nu > 2). Figure 1 represents the probability density curves for Gaussian and Student's t-distributions with \mu = 0.0, \sigma^2 = 5.0, and varying \nu. Hence, from the above discussion and Fig. 1, it is clear that the Student's t-distribution is a more generalized probability distribution, which, in the special case \nu \to \infty, reduces to the Gaussian distribution.

Fig. 1  Probability density curves for Gaussian and Student's t-distributions with \mu = 0.0, \sigma^2 = 5.0, and varying \nu (\nu = 100, 10, 5, 2, 1)

3.2 Proposed Segmentation Algorithm

This section illustrates the formulation of the proposed Student's t-distribution and HMRF model-based segmentation algorithm, termed as tHMRF. In the proposed method, the intensity distribution of the image is modeled as a mixture of a finite number of Student's t-distributions. As the pixels in an image are spatially connected, the neighborhood information of each pixel is introduced into the finite mixture model to achieve robust and accurate segmentation performance. Let y_i be the intensity value of the ith pixel, where i \in S, and x_i denote the corresponding class label, x_i \in L = \{1, 2, \ldots, L\}. Hence, the image can be represented as a finite mixture of Student's t-distributions as follows:

p(y_i \mid \theta) = \sum_{l \in L} p(y_i \mid X_i = l)\, p(X_i = l \mid x_{N_i}), \quad \forall i \in S   (18)

where

p(y_i \mid l) = \frac{\Gamma\left(\frac{\nu_l + 1}{2}\right)}{\sigma_l \sqrt{\pi\nu_l}\, \Gamma\left(\frac{\nu_l}{2}\right) \left[1 + \frac{d(y_i; \mu_l, \sigma_l)}{\nu_l}\right]^{(\nu_l + 1)/2}}   (19)

and p(l \mid x_{N_i}) denotes the prior probability that the ith pixel belongs to the lth tissue class, given the class labels of its neighboring pixels N_i. In this regard, the HMRF model of Zhang et al. [73] is applied to incorporate the spatial information of each pixel into the proposed probabilistic framework via the prior probability distribution p(X_i = l \mid x_{N_i}).

Defining the clique potential as V_c(x) = -\delta(x_i - x_j), the probability distribution of the class labels x, according to (8), is given by

p(x) = \frac{1}{Z} \exp\left(\sum_{i \in S} \hat{u}_i(x_i)\right),   (20)

where Z is the normalizing constant, called the partition function, and \hat{u}_i(x_i) is the current number of neighbors of pixel i having class label x_i. Hence,

p(x_i \mid x_{N_i}) = \frac{\exp\left(\hat{u}_i(x_i)\right)}{\sum_{m \in L} \exp\left(\hat{u}_i(m)\right)}.   (21)

Assuming the pixel intensities are statistically independent, the probability density of the entire image can be written as

p(y \mid \theta) = \prod_{i \in S} p(y_i \mid \theta) = \prod_{i \in S} \sum_{l \in L} p(y_i \mid l)\, p(l \mid x_{N_i}).   (22)

As the estimation of the parameters \theta = \{\mu_l, \sigma_l, \nu_l; l \in L\} from the above expression using either the ML or MAP principle is computationally infeasible, the EM algorithm is used to solve the problem. The standard EM algorithm has two parts: first it tries to estimate a set of latent variables based on the given data in its E-step, and then, in the M-step, it tries to find the optimum estimate of the parameters of the distribution based on the original variables and the new set of latent variables. Iteratively optimizing these two steps, the EM algorithm converges to its local optimum solution. The latent variables, in this problem, are defined as

\delta_{il} = 1 if X_i = l, and 0 otherwise.

Here, the variable \delta_{il} acts as the indicator variable that checks whether the ith pixel belongs to the lth class. Using the characteristics of the Student's t-distribution, discussed in Sect. 3.1, it is further assumed that the observed data y_i, i \in S, augmented by the \delta_{il}, i \in S, l \in L, are still incomplete, and a set of additional missing data U_i, i \in S, is introduced, where U_i is defined so that, given \delta_{il} = 1,

Y_i \mid U_i = u_i, \delta_{il} = 1 \sim N\left(\mu_l, \frac{\sigma_l^2}{u_i}\right)   (23)

independently for i \in S, and

U_i \mid \delta_{il} = 1 \sim \mathrm{gamma}\left(\frac{\nu_l}{2}, \frac{\nu_l}{2}\right).   (24)

Given \delta_i = \{\delta_{i1}, \ldots, \delta_{iL}\}, i \in S, the U_i, i \in S, are independently distributed according to (24).

So, in the expectation step or E-step of the proposed algorithm, all the latent variables of the model, that is, \delta_{il}, i \in S, l \in L, and U_i, i \in S, are estimated, given the observed variables and the current estimates of the parameters:

\tau_{il} = E(\delta_{il} \mid y_i, x_{N_i}, \theta) = p(\delta_{il} = 1 \mid y_i, x_{N_i}, \theta) = p(X_i = l \mid y_i, x_{N_i}, \theta)\; (= p(l \mid y_i))
      = \frac{p(y_i \mid X_i = l, \theta_l)\, p(X_i = l \mid x_{N_i})}{p(y_i \mid \theta)} = \frac{p(y_i \mid l)\, p(l \mid x_{N_i})}{\sum_{m \in L} p(y_i \mid m)\, p(m \mid x_{N_i})}.   (25)

The expression of \tau_{il} in (25) denotes the posterior probability that the ith pixel belongs to the class \Omega_l with class label l, l \in L. Evidently, it calculates the belongingness of the ith pixel to \Omega_l. Hence, it can be considered as the membership value of pixel i to class \Omega_l, and the corresponding expression as the membership function, which calculates the membership of a pixel to a specific class.

Since the gamma distribution is the conjugate prior distribution for U_i, simple algebraic calculations with the help of (23) and (24) derive the conditional distribution of U_i given Y_i = y_i and \delta_{il} = 1, which is

U_i \mid Y_i = y_i, \delta_{il} = 1 \sim \mathrm{gamma}(\alpha_{il}, \beta_{il}),   (26)

where \alpha_{il} = \frac{\nu_l + 1}{2} and \beta_{il} = \frac{\nu_l + d(y_i; \mu_l, \sigma_l)}{2}.

From (26), the latent variable U_i is estimated, given the observed variables and the current estimate of the parameters:

u_{il} = E(U_i \mid y_i, \delta_{il} = 1, \theta) = \frac{\nu_l + 1}{\nu_l + d(y_i; \mu_l, \sigma_l)}.   (27)

Also,

(lu)_{il} = E(\log U_i \mid y_i, \delta_{il} = 1, \theta) = \log u_{il} + \psi\left(\frac{\nu_l + 1}{2}\right) - \log\left(\frac{\nu_l + 1}{2}\right),   (28)

where \psi(s) = \frac{1}{\Gamma(s)} \frac{\partial}{\partial s}\Gamma(s) is the digamma function. Hence, the E-step of the proposed tHMRF segmentation algorithm consists of estimating the latent variables \tau_{il}, u_{il}, and (lu)_{il}, where i \in S and l \in L.

The optimal labeling of the pixels of the image is estimated, according to the MAP criterion, as follows:

\hat{x} = \arg\max_{x} \{p(y \mid x)\, p(x)\} = \arg\min_{x} \left\{-\log p(y \mid x) + U(x)\right\} = \arg\min_{x} \left\{-\sum_{i \in S} \log p(y_i \mid x_i) + U(x)\right\}
       = \arg\min_{x} \left\{\sum_{i \in S} \left[\frac{(\nu_{x_i} + 1)}{2}\log\left(1 + \frac{d(y_i; \mu_{x_i}, \sigma_{x_i})}{\nu_{x_i}}\right) - \log\Gamma\left(\frac{\nu_{x_i} + 1}{2}\right) + \log\Gamma\left(\frac{\nu_{x_i}}{2}\right) + \log\sigma_{x_i} + \frac{1}{2}\log(\pi\nu_{x_i})\right] + U(x)\right\}.   (29)
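The E-step quantities in (25), (27), and (28) can be computed for all pixels at once. The following Python sketch is my own illustration (not the authors' implementation); the label image `labels`, the parameter arrays, and a toroidal 4-neighborhood obtained with `np.roll` are simplifying assumptions.

```python
import numpy as np
from scipy.special import gammaln, digamma

def log_t_pdf(y, mu, sigma, nu):
    """Log of the Student's t density in Eq. (19), element-wise."""
    d = ((y - mu) / sigma) ** 2                      # Mahalanobis squared distance
    return (gammaln((nu + 1) / 2) - gammaln(nu / 2)
            - np.log(sigma) - 0.5 * np.log(np.pi * nu)
            - 0.5 * (nu + 1) * np.log1p(d / nu))

def e_step(y, labels, mu, sigma, nu):
    """y: (H, W) intensities; labels: (H, W) current labels; mu, sigma, nu: (L,)."""
    L = mu.size
    # Neighbour counts u_hat_i(l): 4-neighbours of pixel i carrying label l
    # (wrap-around boundary used here purely for brevity).
    counts = np.zeros(labels.shape + (L,))
    for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        rolled = np.roll(labels, shift, axis=(0, 1))
        for l in range(L):
            counts[..., l] += (rolled == l)
    log_prior = counts - np.log(np.exp(counts).sum(-1, keepdims=True))   # Eq. (21)

    log_lik = np.stack([log_t_pdf(y, mu[l], sigma[l], nu[l]) for l in range(L)], axis=-1)
    log_post = log_prior + log_lik
    log_post -= log_post.max(-1, keepdims=True)
    tau = np.exp(log_post)
    tau /= tau.sum(-1, keepdims=True)                                    # Eq. (25)

    d = ((y[..., None] - mu) / sigma) ** 2
    u = (nu + 1) / (nu + d)                                              # Eq. (27)
    lu = np.log(u) + digamma((nu + 1) / 2) - np.log((nu + 1) / 2)        # Eq. (28)
    return tau, u, lu
```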

Using an iterative optimization technique, termed as the iterated conditional modes (ICM) algorithm [9], this optimization problem is reduced to

\hat{x}_i = \arg\min_{x_i} \left\{\frac{(\nu_{x_i} + 1)}{2}\log\left(1 + \frac{d(y_i; \mu_{x_i}, \sigma_{x_i})}{\nu_{x_i}}\right) - \log\Gamma\left(\frac{\nu_{x_i} + 1}{2}\right) + \log\sigma_{x_i} + \frac{1}{2}\log\nu_{x_i} + \log\Gamma\left(\frac{\nu_{x_i}}{2}\right) - \hat{u}_i(x_i)\right\}, \quad i \in S.   (30)

Now, in the maximization step or M-step of the EM algorithm, the Q-function, or the expected complete-data log-likelihood E[\log p(y, x, u \mid \theta)], is generated, depending on the observed variables and additional latent variables, as follows:

Q(\theta \mid \theta^{(t)}) = E[\log p(y, x, u \mid x_N, \theta) \mid y, \theta^{(t)}] = \sum_{i \in S} \sum_{l \in L} \tau_{il} \left\{\log p(l \mid x_{N_i}) + Q_{2,il}(\theta \mid \theta^{(t)}) + Q_{3,il}(\theta \mid \theta^{(t)})\right\},   (31)

where

Q_{2,il}(\theta \mid \theta^{(t)}) = E[\log p(u_i \mid X_i = l, \theta_l) \mid y_i, \theta^{(t)}] = \frac{\nu_l}{2}\log\frac{\nu_l}{2} - \log\Gamma\left(\frac{\nu_l}{2}\right) - \frac{\nu_l}{2} u_{il}^{(t)} + \left(\frac{\nu_l}{2} - 1\right)(lu)_{il}^{(t)}

and

Q_{3,il}(\theta \mid \theta^{(t)}) = E[\log p(y_i \mid u_i, X_i = l, \theta_l) \mid y_i, \theta^{(t)}] = \frac{1}{2}(lu)_{il}^{(t)} - \log\sigma_l - \frac{1}{2}\log(2\pi) - \frac{1}{2} u_{il}^{(t)} \frac{(y_i - \mu_l)^2}{\sigma_l^2}.

Optimizing the Q-function with respect to the parameters \mu_l and \sigma_l, we get the estimates of the parameters as follows:

\frac{\partial}{\partial\mu_l} Q(\theta \mid \theta^{(t)}) = 0 \Rightarrow \sum_{i \in S} \tau_{il}^{(t)} \frac{\partial}{\partial\mu_l} Q_{3,il}(\theta \mid \theta^{(t)}) = 0 \Rightarrow \hat{\mu}_l^{(t+1)} = \frac{\sum_{i \in S} \tau_{il}^{(t)} u_{il}^{(t)} y_i}{\sum_{i \in S} \tau_{il}^{(t)} u_{il}^{(t)}},   (32)

\frac{\partial}{\partial\sigma_l} Q(\theta \mid \theta^{(t)}) = 0 \Rightarrow \sum_{i \in S} \tau_{il}^{(t)} \frac{\partial}{\partial\sigma_l} Q_{3,il}(\theta \mid \theta^{(t)}) = 0 \Rightarrow (\hat{\sigma}_l^2)^{(t+1)} = \frac{\sum_{i \in S} \tau_{il}^{(t)} u_{il}^{(t)} (y_i - \hat{\mu}_l^{(t+1)})^2}{\sum_{i \in S} \tau_{il}^{(t)}}.   (33)

Following the proposal of Kent et al. [27], the denominator \sum_{i \in S} \tau_{il}^{(t)} in (33) is replaced by \sum_{i \in S} \tau_{il}^{(t)} u_{il}^{(t)} for faster convergence of the EM algorithm. Hence, the estimate of (\hat{\sigma}_l^2)^{(t+1)} is modified to:

(\hat{\sigma}_l^2)^{(t+1)} = \frac{\sum_{i \in S} \tau_{il}^{(t)} u_{il}^{(t)} (y_i - \hat{\mu}_l^{(t+1)})^2}{\sum_{i \in S} \tau_{il}^{(t)} u_{il}^{(t)}}.   (34)

Also,

\frac{\partial}{\partial\nu_l} Q(\theta \mid \theta^{(t)}) = 0 \Rightarrow \sum_{i \in S} \tau_{il}^{(t)} \frac{\partial}{\partial\nu_l} Q_{2,il}(\theta \mid \theta^{(t)}) = 0
\Rightarrow \sum_{i \in S} \tau_{il}^{(t)} \left\{1 + \log\frac{\nu_l}{2} - \psi\left(\frac{\nu_l}{2}\right) - u_{il}^{(t)} + (lu)_{il}^{(t)}\right\} = 0
\Rightarrow 1 + \log\frac{\nu_l}{2} - \psi\left(\frac{\nu_l}{2}\right) + \frac{1}{n_l^{(t)}} \sum_{i \in S} \tau_{il}^{(t)} \left((lu)_{il}^{(t)} - u_{il}^{(t)}\right) = 0, \quad \text{where } n_l^{(t)} = \sum_{i \in S} \tau_{il}^{(t)}.   (35)

From (35), it is clearly visible that \nu_l cannot be solved explicitly. So, a numerical method has to be used to solve the above problem and to find the optimum \hat{\nu}_l^{(t+1)}. Here, the Newton–Raphson method is used to solve (35). The estimate of \nu_l is obtained as follows: start with an initial estimate of \nu_l, that is, (\nu_l)_0. The process is repeated as

(\nu_l)_{n+1} = (\nu_l)_n - \frac{g((\nu_l)_n)}{g'((\nu_l)_n)},   (36)

until it converges to an optimum solution, where

g(k) = 1 + \log\frac{k}{2} - \psi\left(\frac{k}{2}\right) + \frac{1}{n_l^{(t)}} \sum_{i \in S} \tau_{il}^{(t)} \left((lu)_{il}^{(t)} - u_{il}^{(t)}\right)   (37)

and g'(k) = \frac{1}{k} - \frac{1}{2}\psi'\left(\frac{k}{2}\right),   (38)

where \psi'(s) = \frac{\partial}{\partial s}\psi(s) is the trigamma function. The pseudocode of the proposed tHMRF algorithm is presented in Algorithm 1.

Algorithm 1: tHMRF: Student's t-distribution and HMRF Model Based Image Segmentation
Input : Input image, number of image classes
Output: Segmented image
1   Initial segmentation and parameter estimation;
2   do
3       Estimate the class labels using (30);
        E-step:
4       Estimate the membership values τ using (25);
5       Estimate latent variables u and lu using (27) and (28), respectively;
        M-step:
6       Update parameters μ and σ² using (32) and (34), respectively;
7       Update ν using (36);
8       t ← t + 1;
9   while the algorithm does not converge and the maximum number of iterations has not been reached;
10  Construct the segmented image
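The degrees-of-freedom update in (35)–(38) is the only step of Algorithm 1 without a closed form. Below is a minimal Python sketch of this Newton–Raphson iteration (function and variable names are mine, not from the paper); it assumes the E-step quantities for one class l are already available as arrays over all pixels.

```python
import numpy as np
from scipy.special import digamma, polygamma

def update_nu(tau_l, u_l, lu_l, nu_init=4.0, tol=1e-6, max_iter=50):
    """Newton-Raphson solution of Eq. (35) for one class, following Eqs. (36)-(38)."""
    n_l = tau_l.sum()
    # Data-dependent constant of g(k): (1/n_l) * sum_i tau_il * ((lu)_il - u_il).
    c = (tau_l * (lu_l - u_l)).sum() / n_l

    nu = nu_init
    for _ in range(max_iter):
        g = 1.0 + np.log(nu / 2.0) - digamma(nu / 2.0) + c       # Eq. (37)
        g_prime = 1.0 / nu - 0.5 * polygamma(1, nu / 2.0)        # Eq. (38), trigamma
        step = g / g_prime
        nu = max(nu - step, 1e-3)        # keep the estimate positive
        if abs(step) < tol:
            break
    return nu
```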


4 tEM: Proposed Simultaneous Segmentation and Bias Field Correction Algorithm

This section presents another algorithm, termed as tEM, for simultaneous segmentation and bias field correction of MR images, integrating judiciously the merits of Student's t-distribution and the HMRF framework.

4.1 Proposed Framework

The model of the bias field assumes that the bias field is a multiplicative component. If the intensity of the ith pixel of the inhomogeneity-free MR image is u_i, and the corresponding intensity inhomogeneity component and noise are b_i and n_i, respectively, then the intensity v_i of the ith pixel of the acquired MR image is obtained as follows:

v_i = u_i b_i + n_i, \quad \forall i \in S   (39)

where S = \{1, 2, \ldots, N\}, N being the number of pixels in the MR image. In general, one can estimate the bias field from the noisy MR image and may apply post-filtering to remove noise from the bias-corrected image [23]. As the bias field is multiplicative, a logarithmic transformation has to be applied on the observed intensity data to make it additive, that is,

y_i = \tilde{y}_i + \beta_i   (40)

where y_i = \log v_i, \tilde{y}_i = \log u_i, and \beta_i = \log b_i. The class label of the ith pixel is denoted by x_i, where x_i \in L = \{1, 2, \ldots, L\}.

The intensity distribution of the observed intensity values (log-transformed) can be modeled as a mixture of a finite number of probability distributions as follows:

p(y_i \mid \beta_i, \theta) = \sum_{l \in L} p(y_i \mid \beta_i, \Omega_l)\, p(\Omega_l), \quad \forall i \in S   (41)

where \Omega_l is the class having label l. In this regard, it should be noted that p(y_i \mid \Omega_l) and p(y_i \mid X_i = l) have the same meaning.

In the proposed simultaneous segmentation and bias field correction algorithm, the log-transformed intensity distribution of the brain MR image is modeled as a mixture of a finite number of Student's t-distributions and one uniform distribution. The CSF, pathologies, and other non-brain tissue classes are unified together into a single class, named "other," with uniform distribution, as the variance of these classes is very large [23]. In effect, the brain MR image is represented as follows:

p(y_i \mid \beta_i) = \sum_{l: \Omega_l \sim t} p(y_i \mid \beta_i, \Omega_l)\, p(\Omega_l) + \lambda\, p(\Omega_{other});   (42)

where

p(y_i \mid \beta_i, \Omega_l) = \frac{\Gamma\left(\frac{\nu_l + 1}{2}\right)}{\sigma_l \sqrt{\pi\nu_l}\, \Gamma\left(\frac{\nu_l}{2}\right)\left[1 + \frac{d(y_i - \beta_i; \mu_l, \sigma_l)}{\nu_l}\right]^{(\nu_l + 1)/2}},   (43)

\Omega_l \sim t, y_i \in \mathbb{R}, and \lambda is the density of the uniform distribution.

4.2 Bias Field Estimation

Assuming the pixel intensities are statistically independent [23,65,73], the probability density of the entire image can be written as

p(y \mid \beta) = \prod_{i \in S} p(y_i \mid \beta_i)   (44)

where y = (y_1, y_2, \ldots, y_N)^T and \beta = (\beta_1, \beta_2, \ldots, \beta_N)^T. The bias field is modeled by an N-dimensional zero mean Gaussian prior probability density [23,65,73]:

p(\beta) = G_{\Psi_\beta}(\beta)   (45)

where G_{\Psi_\beta}(x) = \frac{1}{\sqrt{(2\pi)^N |\Psi_\beta|}} \exp\left(-\frac{1}{2} x^T \Psi_\beta^{-1} x\right).

Using Bayes' theorem, the posterior probability of the bias field given the observed intensity data is obtained as

p(\beta \mid y) = \frac{p(y \mid \beta)\, p(\beta)}{p(y)}.   (46)

According to the MAP principle, the desired estimate of \beta should be the one having the largest posterior probability [65]:

\hat{\beta} = \arg\max_{\beta} p(\beta \mid y).   (47)

A zero-gradient condition on the logarithm of the posterior probability transforms the optimization problem of (47) into finding the solution of

\left[\frac{\partial}{\partial\beta_i} \ln p(\beta \mid y)\right]_{\beta = \hat{\beta}} = 0 \quad \forall i   (48)

which, after further algebraic calculations, reduces to:

\left[\frac{\partial}{\partial\beta_i}\left(\sum_j \ln p(y_j \mid \beta_j) + \ln p(\beta)\right)\right]_{\beta = \hat{\beta}} = 0 \quad \forall i.   (49)

As only the ith term of the summation depends on \beta_i, (49) can again be written as

\left[\frac{\frac{\partial}{\partial\beta_i} p(y_i \mid \beta_i)}{p(y_i \mid \beta_i)} + \frac{\frac{\partial}{\partial\beta_i} p(\beta)}{p(\beta)}\right]_{\beta = \hat{\beta}} = 0 \quad \forall i
\Rightarrow \left[\frac{\sum_{l: \Omega_l \sim t} p(\Omega_l)\, \frac{\partial}{\partial\beta_i} p(y_i \mid \beta_i, \Omega_l)}{p(y_i \mid \beta_i)} + \frac{\frac{\partial}{\partial\beta_i} p(\beta)}{p(\beta)}\right]_{\beta = \hat{\beta}} = 0 \quad \forall i.   (50)

Following the characteristics of the Student's t-distribution, discussed in Sect. 3.1, a set of additional missing data u_i, i \in S, is introduced, where u_i is defined so that

Y_i \mid U_i = u_i, \beta_i, \Omega_l \sim N\left(\mu_l, \frac{\sigma_l^2}{u_i}\right)   (51)

independently for i \in S, and

U_i \mid \Omega_l \sim \mathrm{gamma}\left(\frac{\nu_l}{2}, \frac{\nu_l}{2}\right)   (52)

independently for i \in S. These assumptions lead to:

\frac{\partial}{\partial\beta_i} p(y_i \mid \beta_i, \Omega_l) = \frac{(y_i - \beta_i - \mu_l)}{\sigma_l^2}\, p(y_i \mid \beta_i, \Omega_l)\, u_{il}   (53)

where u_{il} = E(U_i \mid y_i, \beta_i, \Omega_l).

Since the gamma distribution is the conjugate prior distribution for U_i, algebraic operations on (51) and (52) lead to:

U_i \mid Y_i = y_i, \beta_i, \Omega_l \sim \mathrm{gamma}\left(\frac{\nu_l + 1}{2}, \frac{\nu_l + d(y_i - \beta_i; \mu_l, \sigma_l)}{2}\right).   (54)

From (54), the latent variable u_{il} is estimated as:

u_{il} = \frac{\nu_l + 1}{\nu_l + d(y_i - \beta_i; \mu_l, \sigma_l)}.   (55)

Also,

(lu)_{il} = E(\log U_i \mid y_i, \beta_i, \Omega_l, \theta) = \log u_{il} + \psi\left(\frac{\nu_l + 1}{2}\right) - \log\left(\frac{\nu_l + 1}{2}\right).   (56)

Replacing (53) into (50), the expression reduces to:

\left[\frac{\sum_{l: \Omega_l \sim t} \frac{(y_i - \beta_i - \mu_l)}{\sigma_l^2}\, p(\Omega_l)\, p(y_i \mid \beta_i, \Omega_l)\, u_{il}}{p(y_i \mid \beta_i)} + \frac{\frac{\partial}{\partial\beta_i} p(\beta)}{p(\beta)}\right]_{\beta = \hat{\beta}} = 0 \quad \forall i
\Rightarrow \left[\sum_{l: \Omega_l \sim t} \frac{(y_i - \beta_i - \mu_l)}{\sigma_l^2}\, \tau_{il}\, u_{il} + \frac{\frac{\partial}{\partial\beta_i} p(\beta)}{p(\beta)}\right]_{\beta = \hat{\beta}} = 0 \quad \forall i   (57)

where

\tau_{il} = \frac{p(\Omega_l)\, p(y_i \mid \beta_i, \Omega_l)}{p(y_i \mid \beta_i)}.   (58)

The expression \tau_{il} in (58) denotes the posterior probability that pixel i belongs to the lth tissue class \Omega_l. So, it evidently calculates the belongingness of the pixel i to \Omega_l, which is similar to the membership value of pixel i to tissue class \Omega_l. So, the expression can be considered as the membership function that calculates the membership of a pixel to a tissue class. Hence, the expectation step of the proposed tEM algorithm incorporates estimation of all latent variables of the model, namely \tau_{il}, u_{il}, and (lu)_{il}, where i \in S and l \in L.

Now, (57) can be written compactly as

\left[R_i - \Psi_{ii}^{-1}\beta_i + \frac{\frac{\partial}{\partial\beta_i} p(\beta)}{p(\beta)}\right]_{\beta = \hat{\beta}} = 0 \quad \forall i   (59)

where R_i = \sum_{l: \Omega_l \sim t} \tau_{il}\, u_{il}\, \frac{(y_i - \mu_l)}{\sigma_l^2}   (60)

is the mean residual and the mean inverse covariance is

\Psi_{ip}^{-1} = \sum_{l: \Omega_l \sim t} \frac{\tau_{il}\, u_{il}}{\sigma_l^2} \text{ if } i = p, \text{ and } 0 \text{ otherwise.}   (61)

Further operations on (59) lead to

\left[R - \Psi^{-1}\beta + \frac{\nabla_\beta\, p(\beta)}{p(\beta)}\right]_{\beta = \hat{\beta}} = 0   (62)

\Rightarrow R - \Psi^{-1}\hat{\beta} - \Psi_\beta^{-1}\hat{\beta} = 0 \Rightarrow \hat{\beta} = H R; \quad \text{where } H = \left[\Psi^{-1} + \Psi_\beta^{-1}\right]^{-1}.   (63)

Since estimating \Psi_\beta^{-1}, and hence H, is computationally infeasible, H is estimated by a linear low-pass filter [65]. Hence, the bias field at the ith pixel is estimated by

\hat{\beta}_i = \frac{[F R]_i}{[F S]_i}   (64)

where S = \Psi^{-1}\mathbf{1} and F is a low-pass filter.
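A compact Python sketch of the bias field update in (60), (61), and (64) is given below. It is my own illustration under stated assumptions: a Gaussian low-pass filter stands in for the generic filter F, the array names are made up, and the memberships and latent scales are assumed to have been computed with the current bias estimate via (55) and (58).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_bias(y, tau, u, mu, sigma2, lp_sigma=20.0):
    """y: (H, W) log-intensities; tau, u: (H, W, L) memberships and latent scales;
    mu, sigma2: (L,) class means and variances; lp_sigma: width of the low-pass filter."""
    # Mean residual R_i, Eq. (60).
    R = np.sum(tau * u * (y[..., None] - mu) / sigma2, axis=-1)

    # Diagonal of the mean inverse covariance, Eq. (61); this is S = Psi^{-1} 1.
    S = np.sum(tau * u / sigma2, axis=-1)

    # Eq. (64): the filter H is approximated by low-pass filtering numerator and denominator.
    num = gaussian_filter(R, sigma=lp_sigma)
    den = gaussian_filter(S, sigma=lp_sigma)
    return num / np.maximum(den, 1e-12)
```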

4.3 HMRF Model for Parameter Estimation and Segmentation

As the pixels in an image are spatially connected, the spatial information of the neighboring pixels of an image is incorporated into the proposed simultaneous segmentation and bias field correction framework, to attain optimum segmentation as well as optimal estimates of the parameters of the tissue intensity distribution. In this regard, the HMRF framework of Zhang et al. [73] is incorporated into the proposed framework.

Similar to (29), the optimal labeling of the pixels of the image can also be estimated here according to the MAP criterion:

\hat{x} = \arg\max_{x}\{p(y \mid x, \beta)\, p(x)\}.   (65)

Assuming pixel intensities are conditionally independent given their class labels and bias fields, we get

p(y \mid x, \beta) = \prod_{i \in S} p(y_i \mid x_i, \beta_i) = \prod_{i \in S} \frac{\Gamma\left(\frac{\nu_{x_i} + 1}{2}\right)}{\sigma_{x_i}\sqrt{\pi\nu_{x_i}}\, \Gamma\left(\frac{\nu_{x_i}}{2}\right)\left[1 + \frac{d(y_i - \beta_i; \mu_{x_i}, \sigma_{x_i})}{\nu_{x_i}}\right]^{(\nu_{x_i} + 1)/2}} = \frac{1}{Z} \exp\left(-U(y \mid x, \beta)\right), \quad [Z = (\sqrt{\pi})^N]   (66)

where

U(y \mid x, \beta) = \sum_{i \in S} U(y_i \mid x_i, \beta_i) = \sum_{i \in S} \left\{\frac{(\nu_{x_i} + 1)}{2}\log\left(1 + \frac{d(y_i - \beta_i; \mu_{x_i}, \sigma_{x_i})}{\nu_{x_i}}\right) - \log\Gamma\left(\frac{\nu_{x_i} + 1}{2}\right) + \log\sigma_{x_i} + \frac{1}{2}\log\nu_{x_i} + \log\Gamma\left(\frac{\nu_{x_i}}{2}\right)\right\}.

Using (66), (65) can be reduced to

\hat{x} = \arg\min_{x}\{U(y \mid x, \beta) + U(x)\} = \arg\min_{x} U(x \mid y, \beta).

This optimization problem is then solved with the help of the ICM algorithm [9]:

\hat{x}_i = \arg\min_{x_i} U(x_i \mid y, x_{S - \{i\}}, \beta_i), \quad \forall i \in S
        = \arg\min_{x_i}\left\{\frac{(\nu_{x_i} + 1)}{2}\log\left(1 + \frac{d(y_i - \beta_i; \mu_{x_i}, \sigma_{x_i})}{\nu_{x_i}}\right) + \log\sigma_{x_i} + \frac{1}{2}\log\nu_{x_i} - \log\Gamma\left(\frac{\nu_{x_i} + 1}{2}\right) + \log\Gamma\left(\frac{\nu_{x_i}}{2}\right) - \hat{u}_i(x_i)\right\}.   (67)

The class labels \hat{x} and latent variables \tau, u, and lu are estimated using the current estimate of the parameter \theta to construct the complete dataset \{\hat{x}, y, u\}, and then the parameter \theta is estimated by maximizing the Q-function, or the complete-data log-likelihood E[\log p(y, x, u \mid x_N, \theta, \beta)]. In the current problem, the Q-function is derived as:

Q(\theta \mid \theta^{(t)}) = \sum_{x} p(x \mid y, \theta^{(t)}) \log p(y, x, u \mid x_N, \theta^{(t)}, \beta) = \sum_{i \in S}\sum_{l \in L} p^{(t)}(l \mid y_i)\left\{\log p(l \mid x_{N_i}) + Q_{2,il}(\theta \mid \theta^{(t)}) + Q_{3,il}(\theta \mid \theta^{(t)})\right\},   (68)

where

Q_{2,il}(\theta \mid \theta^{(t)}) = E[\log p(u_i \mid x_i = l, \theta_l) \mid y_i, \theta_l^{(t)}] = \frac{\nu_l}{2}\log\frac{\nu_l}{2} - \log\Gamma\left(\frac{\nu_l}{2}\right) - \frac{\nu_l}{2}u_{il}^{(t)} + \left(\frac{\nu_l}{2} - 1\right)(lu)_{il}^{(t)},

Q_{3,il}(\theta \mid \theta^{(t)}) = E[\log p(y_i \mid u_i, \beta_i, x_i = l, \theta_l) \mid y_i, \theta_l^{(t)}] = \frac{1}{2}(lu)_{il}^{(t)} - \log\sigma_l - \frac{1}{2}\log(2\pi) - \frac{1}{2}u_{il}^{(t)}\frac{(y_i - \beta_i - \mu_l)^2}{\sigma_l^2},

and

p(l \mid y_i) = \frac{p(y_i \mid l, \beta_i)\, p(l \mid x_{N_i})}{\sum_{m \in L} p(y_i \mid m, \beta_i)\, p(m \mid x_{N_i})}.   (69)

Optimizing the Q-function with respect to the parameters, the estimates of \mu_l and \sigma_l are obtained as follows:

\hat{\mu}_l^{(t+1)} = \frac{\sum_{i \in S} p^{(t)}(l \mid y_i)\, u_{il}^{(t)}\, (y_i - \beta_i)}{\sum_{i \in S} p^{(t)}(l \mid y_i)\, u_{il}^{(t)}},   (70)

(\hat{\sigma}_l^2)^{(t+1)} = \frac{\sum_{i \in S} p^{(t)}(l \mid y_i)\, u_{il}^{(t)}\, (y_i - \beta_i - \hat{\mu}_l^{(t+1)})^2}{\sum_{i \in S} p^{(t)}(l \mid y_i)}.   (71)

Following the proposal of Kent et al. [27], the denominator \sum_{i \in S} p^{(t)}(l \mid y_i) of (71) is replaced by \sum_{i \in S} p^{(t)}(l \mid y_i)\, u_{il}^{(t)} for faster convergence of the EM algorithm. Hence, the estimate of (\hat{\sigma}_l^2)^{(t+1)} is modified to:

(\hat{\sigma}_l^2)^{(t+1)} = \frac{\sum_{i \in S} p^{(t)}(l \mid y_i)\, u_{il}^{(t)}\, (y_i - \beta_i - \hat{\mu}_l^{(t+1)})^2}{\sum_{i \in S} p^{(t)}(l \mid y_i)\, u_{il}^{(t)}}.   (72)

Optimization of the Q-function with respect to \nu_l leads to:

1 + \log\frac{\nu_l}{2} - \psi\left(\frac{\nu_l}{2}\right) + \frac{1}{n_l^{(t)}}\sum_{i \in S} p^{(t)}(l \mid y_i)\left((lu)_{il}^{(t)} - u_{il}^{(t)}\right) = 0,   (73)

where n_l^{(t)} = \sum_{i \in S} p^{(t)}(l \mid y_i).
Since the explicit solution of \nu_l from (73) cannot be found, here also the Newton–Raphson method is used to obtain the numerical solution of \nu_l. The same process as (36) is applied with the following change in the functions:

g(k) = 1 + \log\frac{k}{2} - \psi\left(\frac{k}{2}\right) + \frac{1}{n_l^{(t)}}\sum_{i \in S} p^{(t)}(l \mid y_i)\left((lu)_{il}^{(t)} - u_{il}^{(t)}\right)   (74)

and g'(k) = \frac{1}{k} - \frac{1}{2}\psi'\left(\frac{k}{2}\right).   (75)

The pseudocode of the proposed tEM algorithm is presented in Algorithm 2.

Algorithm 2: tEM: t-distribution Based Simultaneous Segmentation and Bias Field Correction
Input : Input image, number of image classes
Output: Segmented image, bias field corrected image
1   Initial segmentation and parameter estimation;
2   do
3       Estimate the class labels using (67);
        E-step:
4       Estimate the membership values τ of each pixel into different tissue classes using (58);
5       Estimate latent variables u and lu using (55) and (56), respectively;
        M-step:
6       Estimate the logarithm of the bias field component at each pixel using (64);
7       Update parameters μ and σ² using (70) and (72), respectively;
8       Update ν using (36);
9       t ← t + 1;
10  while the algorithm does not converge and the maximum number of iterations has not been reached;
11  Construct the segmented image

5 Experimental Results

This section analyses the performance of the proposed tHMRF and tEM algorithms on a set of IIF and natural images and a set of real and simulated brain MR images, respectively.

5.1 Datasets and Algorithms Compared

The effectiveness of the proposed algorithms is demonstrated on some benchmark datasets, along with a comparison with related algorithms.

5.1.1 Segmentation of HEp-2 Cell and Natural Images

The performance of the proposed segmentation algorithm (tHMRF), based on Student's t-distribution and hidden Markov random field (HMRF) model, is studied and compared with that of several probabilistic model-based segmentation algorithms: Gaussian distribution and HMRF (GHMRF) [73], finite mixture of Student's t-distributions (FtM) [54], and finite Gaussian mixture (FGM) model [35]; several c-means algorithms: hard c-means (HCM), fuzzy c-means (FCM) [24], rough-fuzzy c-means (RFCM) [40], and robust rough-fuzzy c-means (rRFCM) [41]; and several Student's t-distribution-based segmentation algorithms: Student's t-mixture model with spatial constraints (SMM-SC) [49], asymmetric Student's t-mixture model (AsymSMM) [48], weighted Student's t-mixture model (WSMM) [72], directional spatially varying Student's t-distribution mixture model (DSVStMM) [68], spatially directional information-based Student's t-distribution mixture model (SDIStMM) [69], Student's t-mixture model (SMM) [61], and line process-based Gaussian mixture model (LPGMM) [62]. The comparative performance analysis is studied with respect to three segmentation evaluation metrics, namely Dice coefficient, sensitivity, and specificity. A good segmentation algorithm should make the values of these three indices as high as possible, and ideally, the values should be equal to 1. The evaluation indices computed over different benchmark datasets are graphically presented using box-and-whisker plots (box plots). The significance analysis of the quantitative evaluation measures is demonstrated in terms of p-values computed through the paired-t test and the Wilcoxon signed-rank test (both one-tailed). The level of significance for both statistical tests is considered as 0.05.
To analyze the performance of the proposed and existing algorithms, the experimentation is performed over all benchmark indirect immunofluorescence (IIF) images (image number 1–28) obtained from the "MIVIA HEp-2 Images Dataset" [20]. The images were acquired with the help of a fluorescence microscope coupled with a mercury vapor lamp and with a digital camera. The images have a resolution of 1388 × 1038 pixels, a color depth of 24 bits, and they are stored in an uncompressed format. A single channel is sufficient to convey all the information [20].

To analyze the performance of the proposed and existing algorithms on natural images, the experimentation is performed over a subset of the "Berkeley Image Segmentation Dataset" [46], which is comprised of a set of natural images along with their segmentation maps provided by different individuals. The images have a resolution of 321 × 481 pixels and a color depth of 24 bits. The following images are considered for the analysis: 22090 (no. of classes 4), 24063 (no. of classes 4), 78019 (no. of classes 7), 105053 (no. of classes 3), 108073 (no. of classes 4), 124084 (no. of classes 4), 135069 (no. of classes 2), 253036 (no. of classes 4), 46076 (no. of classes 6), 302003 (no. of classes 3), 61086 (no. of classes 5), and 106025 (no. of classes 4).
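The three evaluation indices used throughout this section can be computed directly from a binary segmentation mask and its ground truth. The sketch below is an illustration (not from the paper) of the standard per-class definitions of the Dice coefficient, sensitivity, and specificity.

```python
import numpy as np

def evaluation_indices(seg, gt):
    """seg, gt: boolean arrays marking the pixels assigned to one class."""
    tp = np.logical_and(seg, gt).sum()
    tn = np.logical_and(~seg, ~gt).sum()
    fp = np.logical_and(seg, ~gt).sum()
    fn = np.logical_and(~seg, gt).sum()

    dice = 2.0 * tp / (2.0 * tp + fp + fn)   # overlap with the ground truth
    sensitivity = tp / (tp + fn)             # true positive rate
    specificity = tn / (tn + fp)             # true negative rate
    return dice, sensitivity, specificity
```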

5.1.2 Simultaneous Segmentation and Bias Field Correction of Brain MR Images

The performance of the proposed simultaneous segmentation and bias field correction algorithm, termed as tEM, based on Student's t-distribution, the EM algorithm, and the HMRF model, is studied and compared with that of existing simultaneous segmentation and bias field correction algorithms, namely adaptive segmentation (ASeg) [65], modified EM (mEM) [23], and HMRF-EM [73]; existing bias field correction algorithms, namely RC2 [3] and N3 [63]; fuzzy c-means clustering with local information and kernel metric (KWFLICM) [21]; the FSL 5.0 [26] analysis tool for MRI; the finite Gaussian mixture (FGM) model-based segmentation algorithm [35]; and the Gaussian distribution and HMRF-based segmentation algorithm (GHMRF) [73]. To evaluate the performance of different bias field correction algorithms, two quantitative indices, namely index of class separability (IoCS) [3] and index of joint variation (IoJV) [3], are used. A good bias field correction method should make the value of IoCS as high as possible and that of IoJV as low as possible. On the other hand, three quantitative indices, namely Dice coefficient, sensitivity, and specificity, are used to evaluate the performance of different segmentation algorithms.

To analyze the performance of different algorithms, the experimentation is executed over all benchmark simulated MR images obtained from "BrainWeb: Simulated Brain Database" (http://www.bic.mni.mcgill.ca/brainweb/) and all real MR images of "IBSR: Internet Brain Segmentation Repository" (volume number 1–18) (http://www.cma.mgh.harvard.edu/ibsr/). The volumes of the BrainWeb database are generated using an MRI simulator over a normal brain anatomical model with varying noise levels and intensity non-uniformity components, and the ground truth segmented image is obtained from the brain anatomical model. For IBSR images, the manual segmentation by an expert supervisor is provided for each volume, which serves as the gold standard for segmentation. All brain MR image volumes of the BrainWeb and IBSR databases are of size 181 × 217 × 181 and 256 × 128 × 256, respectively.

5.2 Importance of Student's t-Distribution

In general, the FM-based statistical frameworks assume that each class is normally distributed. However, in the proposed segmentation algorithm, the Student's t-distribution is incorporated to define an image class.

5.2.1 Segmentation of HEp-2 Cell Images

In order to establish the importance of Student's t-distribution over Gaussian distribution for image segmentation based on the HMRF framework, experimentation is carried out on benchmark IIF images of the MIVIA database. Figure 2 demonstrates the statistical distribution of the segmentation results of the proposed tHMRF algorithm and the GHMRF algorithm using box plots, while Table 1 presents the statistical significance analysis of both algorithms, along with mean and standard deviation of the segmentation results, with respect to Dice coefficient, sensitivity, and specificity.


Fig. 2 Box plot depicting the importance of Student’s t-distribution and HMRF framework for HEp-2 cell segmentation

Table 1  Comparative performance analysis of tHMRF, GHMRF, and FtM for HEp-2 cell segmentation

            Dice coefficient                              Sensitivity                                   Specificity
Algorithm   Mean      Std. dev.  Paired-t   Wilcoxon     Mean      Std. dev.  Paired-t   Wilcoxon     Mean      Std. dev.  Paired-t   Wilcoxon
tHMRF       0.837381  0.083767   –          –            0.840281  0.156247   –          –            0.854868  0.120998   –          –
GHMRF       0.835609  0.085161   0.3168     0.2990       0.852295  0.158183   0.9858     0.9993       0.832534  0.150895   7.4E−03    6.5E−06
FtM         0.810541  0.105745   2.9E−03    3.0E−03      0.859361  0.188352   0.9088     0.9927       0.771666  0.180144   5.2E−05    1.4E−06



Fig. 3  Box plot depicting the importance of Student's t-distribution and HMRF framework for natural image segmentation

From all the results reported in Fig. 2 and Table 1 for HEp-2 cell segmentation, it can be seen that the proposed tHMRF algorithm achieves significantly better segmentation results compared to GHMRF with respect to the specificity index, while better but not significant (marked in italics) segmentation results with respect to the Dice coefficient, for both statistical tests. On the other hand, the segmentation performance of GHMRF is significantly better (marked in bold) compared to tHMRF with respect to the sensitivity index.

5.2.2 Segmentation of Natural Images

To establish the importance of Student's t-distribution over Gaussian distribution for HMRF model-based image segmentation, experimentation is carried out on benchmark natural images of the Berkeley image segmentation database. Figure 3 demonstrates the statistical distribution of the segmentation results of the proposed tHMRF algorithm and the GHMRF algorithm using box plots, while Table 2 presents the statistical significance analysis of both algorithms. From all the results reported in Fig. 3 and Table 2 for natural image segmentation, it is observed that the proposed tHMRF algorithm achieves significantly better segmentation results compared to GHMRF with respect to both Dice coefficient and specificity. With respect to the sensitivity index, the performance of tHMRF is significantly better than GHMRF, when compared using p-values computed through the Wilcoxon signed-rank test, while better but not statistically significant, when compared using p-values computed through the paired-t test.

5.2.3 Simultaneous Segmentation and Bias Field Correction of Brain MR Images

To establish the importance of Student's t-distribution over Gaussian distribution for simultaneous segmentation and bias field correction, experimentation is carried out on several MR images. The HMRF-EM algorithm of Zhang et al. [73] applies Gaussian distribution to model the brain tissue classes in the joint EM-HMRF framework. Hence, the importance of Student's t-distribution over Gaussian distribution in the joint EM-HMRF framework is demonstrated by comparing the tEM with the HMRF-EM algorithm. The results are reported in Fig. 4 and Table 3 for the proposed tEM algorithm and the HMRF-EM algorithm with respect to different quantitative indices. From Table 3, it is clearly observed that the tEM provides significantly better segmentation results compared to the HMRF-EM algorithm with respect to all three segmentation evaluation indices, namely Dice coefficient, sensitivity, and specificity. In case of restoration, the proposed tEM algorithm provides significantly better bias field correction than the HMRF-EM algorithm with respect to the IoCS index, considering 0.05 as the level of significance, for both the Wilcoxon signed-rank test and the paired t-test. With respect to the IoJV index, the performance of the tEM algorithm is better compared to the HMRF-EM, but not significantly (marked in italics).

Hence, all the results reported in Figs. 2, 3, and 4 and Tables 1, 2, and 3 establish the importance of using Student's t-distribution in the proposed segmentation algorithms. The better performance of the t-distribution over Gaussian distribution is achieved due to the fact that the t-distribution is a more generalized probability distribution, which is able to model the properties of the intensity distribution of an image much more accurately than the Gaussian distribution.

Table 2  Comparative performance analysis of tHMRF, GHMRF, and FtM for natural image segmentation

            Dice coefficient                              Sensitivity                                   Specificity
Algorithm   Mean      Std. dev.  Paired-t   Wilcoxon     Mean      Std. dev.  Paired-t   Wilcoxon     Mean      Std. dev.  Paired-t   Wilcoxon
tHMRF       0.869843  0.061432   –          –            0.893905  0.071816   –          –            0.904738  0.062049   –          –
GHMRF       0.864338  0.061984   2.8E−03    2.4E−04      0.889925  0.073970   0.0889     0.0320       0.893801  0.065739   2.0E−03    7.3E−04
FtM         0.849812  0.060366   2.5E−06    2.4E−04      0.887896  0.076145   3.5E−03    4.9E−04      0.892906  0.066853   6.1E−04    1.2E−03



Fig. 4 Box plot depicting the importance of Student’s t-distribution and HMRF framework for brain MR image analysis

Table 3  Comparative performance analysis of tEM, HMRF-EM, and tFM for segmentation and bias field correction in brain MR images

            Dice coefficient                              Sensitivity                                   Specificity
Algorithm   Mean      Std. dev.  Paired-t   Wilcoxon     Mean      Std. dev.  Paired-t   Wilcoxon     Mean      Std. dev.  Paired-t   Wilcoxon
tEM         0.851388  0.081129   –          –            0.893843  0.059205   –          –            0.979726  0.007310   –          –
HMRF-EM     0.845413  0.078952   1.7E−03    2.5E−03      0.888874  0.058351   4.0E−03    6.6E−03      0.978204  0.006993   2.0E−03    7.2E−04
tFM         0.848307  0.077652   0.1003     0.1651       0.887911  0.057397   9.5E−03    8.4E−03      0.978608  0.007277   0.0311     0.0933

            IoCS                                          IoJV
Algorithm   Mean      Std. dev.  Paired-t   Wilcoxon     Mean      Std. dev.  Paired-t   Wilcoxon
tEM         2.386287  0.514882   –          –            0.658200  0.235792   –          –
HMRF-EM     2.319288  0.538007   1.1E−05    3.7E−06      0.660664  0.238610   0.1766     0.1345
tFM         2.382155  0.430383   0.4381     0.3441       0.679224  0.221603   1.0E−04    1.0E−04

5.3 Relevance of HMRF Framework To compare the performance of HMRF framework over FM model for segmentation, experimentation is carried out on The FM model works well only on images with low levels of benchmark IIF images of MIVIA dataset. Figure 2 and Table noise as the spatial information is not taken into account in it. 1 report the comparative performance analysis of the tHMRF But it produces unreliable results for heavy noisy images. In and FtM algorithms with respect to three segmentation order to address this problem, the HMRF framework is incor- evaluation indices, namely Dice coefficient, sensitivity, and 123 J Math Imaging Vis (2018) 60:355–381 369 specificity. From all the results reported in Fig. 2 and Table algorithms such as FGM, HCM, FCM, RFCM, and rRFCM. 1, it can be observed that the proposed tHMRF algorithm Results are reported in Fig. 5 and Table 4 for IIF images of achieves significantly better segmentation results compared MIVIA dataset, with respect to three indices, namely Dice to FtM with respect to both Dice coefficient and specificity coefficient, sensitivity, and specificity. The paired-t test and index, while the performance of FtM is better (marked in Wilcoxon signed-rank test are performed for significance bold) compared to tHMRF with respect to sensitivity. analysis. From the results reported in Fig. 5 and Table 4,it can be observed that the tHMRF provides significantly bet- 5.3.2 Segmentation of Natural Images ter segmentation results compared to existing segmentation algorithms with respect to Dice coefficient, at 95% confi- To compare the performance of HMRF framework over dence level, irrespective of the statistical tests used. With FM model for natural image segmentation, experimenta- respect to sensitivity, the FGM achieves better segmenta- tion is carried out on some benchmark images of Berkeley tion performance (marked in bold) compared to tHMRF, image segmentation dataset. Figure 3 and Table 2 report while the performance of tHMRF is significantly better com- the comparative performance analysis of the tHMRF and pared to HCM, FCM, RFCM, and rRFCM. With respect to FtM algorithms with respect to Dice coefficient, sensitiv- specificity, the proposed tHMRF attains significantly bet- ity, and specificity. From the results, it can be inferred that ter segmentation performance compared to FGM, while the the proposed tHMRF algorithm achieves significantly bet- performance of HCM, FCM, RFCM, and rRFCM is better ter segmentation results compared to FtM with respect to all compared to tHMRF. three segmentation evaluation indices. Figures 6, 7, 8, and 9 depict the qualitative segmentation performance of different algorithms on HEp-2 cell images. 5.3.3 Simultaneous Segmentation and Bias Field The original images and corresponding ground truth images Correction of Brain MR Images are also reported. The segmented outputs generated by dif- ferent methods establish the fact that the proposed tHMRF To compare the performance of HMRF framework over FM method generates more promising outputs than the exist- model for segmentation, experimentation is also carried out ing algorithms. The better performance of the tHMRF is on brain MR images of BrainWeb and IBSR databases. 
Fig- achieved due to the fact that the Student’s t-distribution pro- ure 4 and Table 3 report the comparative segmentation and vides better representation of image classes, which makes bias field correction performance analysis of the tEM and the tHMRF perform well in image segmentation than Gaus- tFM algorithms with respect to three segmentation evaluation sian distribution. Also, the integration of HMRF framework indices, namely Dice coefficient, sensitivity, and specificity, incorporates spatial information of the neighboring pixels and two bias field correction evaluation indices, namely IoCS into the proposed segmentation method, which enables the and IoJV. algorithm to achieve robust segmentation. With respect to sensitivity values, the performance of the proposed tEM method is significantly better than tFM, while 5.5 Segmentation Performance on Natural Images the tEM achieves better but not significant (marked in italics) segmentation results than tFM, with respect to Dice coeffi- This section presents the comparative performance analysis cient. With respect to specificity values, the proposed tEM of the proposed tHMRF algorithm and several other clus- method attains statistically significant segmentation results tering algorithms such as FGM, HCM, FCM, RFCM, and than tFM, when compared using p-value computed through rRFCM, for natural image segmentation. Results are reported paired-t test, while the performance of tEM is better than in Fig. 10 and Table 5, with respect to three indices, namely tFM, but not significantly, when compared using p-value Dice coefficient, sensitivity, and specificity. From the results computed through Wilcoxon signed-rank test. In case of bias reported in Fig. 10 and Table 5, it is visible that the tHMRF field correction, the tEM algorithm achieves better but not provides significantly better segmentation results compared statistically significant (marked in italics) bias correction to all existing segmentation algorithms at 95 level, irrespec- results than tFM with respect to IoCS value. With respect tive of the quantitative evaluation indices and statistical tests to IoJV value, the performance of the proposed tEM is sig- used. Figures 11 and 12 depict the comparative segmentation nificantly better than tFM, irrespective of the statistical tests performance of different algorithms on two natural images of used. Berkeley image segmentation database, along with the orig- inal images and corresponding ground truth images. From 5.4 Segmentation Performance on IIF Images the segmented images generated by different algorithms, it is clear that the proposed tHMRF method generates more This section presents the comparative performance analy- promising segmented images compared to the existing meth- sis of the proposed tHMRF algorithm and several clustering ods. 123 370 J Math Imaging Vis (2018) 60:355–381
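The Dice coefficient, sensitivity, and specificity used as evaluation indices throughout Sect. 5 can be computed from a binary predicted mask and the corresponding ground truth as in the following minimal sketch; the function and variable names are illustrative and are not taken from the paper.

```python
import numpy as np

def segmentation_indices(pred, gt):
    """Dice coefficient, sensitivity, and specificity for binary masks.

    Assumes both the object and the background are present in the
    ground truth, so no denominator is zero.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)       # true positives
    tn = np.sum(~pred & ~gt)     # true negatives
    fp = np.sum(pred & ~gt)      # false positives
    fn = np.sum(~pred & gt)      # false negatives
    dice = 2.0 * tp / (2.0 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```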

[Box plots: Dice coefficient, sensitivity, and specificity of tHMRF, FGM, HCM, FCM, RFCM, and rRFCM]

Fig. 5 Box plot depicting the performance of tHMRF and different clustering algorithms on HEp-2 cell images

Table 4  Comparative performance analysis of tHMRF and different clustering algorithms for HEp-2 cell segmentation

Algorithm  Dice coefficient                          Sensitivity                               Specificity
           Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon
tHMRF      0.837381  0.083767   –         –          0.840281  0.156247   –         –          0.854868  0.120998   –         –
FGM        0.805942  0.096939   5.2E−05   3.2E−05    0.852072  0.190960   0.8112    0.9550     0.768795  0.186057   9.7E−05   1.0E−05
HCM        0.746904  0.171415   9.6E−04   1.1E−04    0.660157  0.220709   9.2E−08   3.7E−09    0.948188  0.051342   1         1
FCM        0.768697  0.148792   2.7E−03   3.6E−04    0.692588  0.208033   1.2E−06   1.1E−08    0.936552  0.069779   0.9997    1
RFCM       0.750141  0.161845   7.8E−04   1.5E−04    0.664243  0.215826   1.2E−07   3.7E−09    0.945181  0.064463   0.9999    1
rRFCM      0.783106  0.146356   0.0105    2.0E−03    0.719267  0.206597   1.6E−05   1.9E−08    0.927338  0.074256   0.9993    1
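The p-values reported in Table 4, and in the subsequent tables, come from paired tests over the per-image index values of each competing algorithm against the proposed one. A minimal sketch of such an analysis is given below; the use of one-sided alternatives in favour of the proposed method is our assumption for illustration, not a detail stated in this section.

```python
from scipy import stats

def paired_significance(proposed_scores, baseline_scores):
    """p-values of the paired-t and Wilcoxon signed-rank tests.

    Both inputs are per-image scores (e.g. Dice values) of the two
    methods on the same images; a one-sided alternative (proposed >
    baseline) is assumed here for illustration.
    """
    _, p_ttest = stats.ttest_rel(proposed_scores, baseline_scores,
                                 alternative="greater")
    _, p_wilcoxon = stats.wilcoxon(proposed_scores, baseline_scores,
                                   alternative="greater")
    return p_ttest, p_wilcoxon
```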

(a) Input (b) GT (c) tHMRF (d) GHMRF (e) FtM

(f) FGM (g) HCM (h) FCM (i) RFCM (j) rRFCM

Fig. 6 Input image of HEp-2 cells (image no. 09), ground truth, and segmented images obtained using different algorithms

5.6 Bias Field Correction of MR Images

To find out the effectiveness of the proposed tEM algorithm for bias field correction over other existing algorithms such as RC2 [3], N3 [63], FSL [26], ASeg [65], and mEM [23], experimentation is carried out on all images of the BrainWeb and IBSR databases, and the corresponding results are reported in Fig. 13 and Table 6 with respect to two quantitative indices. Both the paired-t test and the Wilcoxon signed-rank test are performed for significance analysis.

From the results reported in Fig. 13 and Table 6, it can be seen that the proposed tEM algorithm provides significantly better restoration than the FSL and mEM with respect to the IoCS index. Also, the performance of the tEM algorithm is better, but not statistically significant, compared to the N3 with respect to IoCS values. On the other hand, the performance of the proposed tEM algorithm is better, but not significantly, than that of the RC2 algorithm with respect to IoCS values when compared using the p-value computed through the paired-t test, while the performance of RC2 is better but not significant (marked in bold) when compared using the p-value computed through the Wilcoxon signed-rank test. However, the ASeg algorithm provides better segmentation performance compared to the proposed tEM algorithm with respect to IoCS values, irrespective of the statistical tests used.


(a) Input (b) GT (c) tHMRF (d) GHMRF (e) FtM

(f) FGM (g) HCM (h) FCM (i) RFCM (j) rRFCM

Fig. 7 Input image of HEp-2 cells (image no. 14), ground truth, and segmented images obtained using different algorithms

(a) Input (b) GT (c) tHMRF (d) GHMRF (e) FtM

(f) FGM (g) HCM (h) FCM (i) RFCM (j) rRFCM

Fig. 8 Input image of HEp-2 cells (image no. 18), ground truth, and segmented images obtained using different algorithms

(a) Input (b) GT (c) tHMRF (d) GHMRF (e) FtM

(f) FGM (g) HCM (h) FCM (i) RFCM (j) rRFCM

Fig. 9 Input image of HEp-2 cells (image no. 21), ground truth, and segmented images obtained using different algorithms


[Box plots: Dice coefficient, sensitivity, and specificity of tHMRF, FGM, HCM, FCM, RFCM, and rRFCM]

Fig. 10 Box plot depicting the performance of tHMRF and different clustering algorithms on natural images
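Box plots such as those in Figs. 5 and 10 summarize, for each algorithm, the distribution of a per-image index over the whole dataset. A minimal way to produce such a plot is sketched below; the data structure and names are illustrative only.

```python
import matplotlib.pyplot as plt

def plot_index_boxes(scores_per_algorithm, index_name):
    """scores_per_algorithm maps an algorithm name to its per-image scores."""
    names = list(scores_per_algorithm)
    data = [scores_per_algorithm[name] for name in names]
    plt.boxplot(data)                               # one box per algorithm
    plt.xticks(range(1, len(names) + 1), names, rotation=45)
    plt.ylabel(index_name)
    plt.tight_layout()
    plt.show()
```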

Table 5  Comparative performance analysis of tHMRF and different clustering algorithms for natural image segmentation

Algorithm  Dice coefficient                          Sensitivity                               Specificity
           Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon
tHMRF      0.869843  0.061432   –         –          0.893905  0.071816   –         –          0.904738  0.062049   –         –
FGM        0.839993  0.064692   1.9E−03   1.7E−03    0.887088  0.073705   6.6E−03   2.4E−03    0.891800  0.064189   7.3E−04   2.4E−04
HCM        0.831208  0.059810   2.5E−04   1.2E−03    0.885314  0.072856   1.8E−04   4.9E−04    0.895860  0.061089   7.2E−05   2.4E−04
FCM        0.842352  0.064057   3.3E−03   1.2E−03    0.885433  0.072887   1.0E−03   4.9E−04    0.897873  0.061587   7.4E−04   2.4E−04
RFCM       0.828032  0.064878   1.6E−03   2.4E−04    0.886961  0.072888   4.1E−03   7.3E−04    0.898447  0.062013   1.2E−03   2.4E−04
rRFCM      0.848573  0.067077   0.0108    8.1E−03    0.887425  0.073501   7.9E−03   1.7E−03    0.900115  0.062393   4.5E−04   2.4E−04

(a) Input (b) GT (c) tHMRF (d) GHMRF (e) FtM

(f) FGM (g) HCM (h) FCM (i) RFCM (j) rRFCM

Fig. 11 Input image of Berkeley image segmentation database (image no. 22090), ground truth, and segmented images obtained using different algorithms

With respect to the IoJV index, the proposed tEM algorithm attains significantly better restoration performance than the FSL algorithm, and better but not statistically significant performance than the mEM algorithm. On the other hand, it achieves significantly better results than the RC2 with respect to IoJV values when compared using the p-value computed through the paired-t test, while the performance is better, but not significant, when compared using the p-value computed through the Wilcoxon signed-rank test. However, the N3 and ASeg provide better (marked in bold) restoration performance than the tEM for the IoJV index, irrespective of the statistical tests applied.

Figures 14, 15, and 16 compare the reconstructed images produced by the tEM, HMRF-EM, tFM, RC2, N3, FSL, ASeg, and mEM, for different bias fields, noise levels, and volumes.


(a) Input (b) GT (c) tHMRF (d) GHMRF (e) FtM

(f) FGM (g) HCM (h) FCM (i) RFCM (j) rRFCM

Fig. 12 Input image of Berkeley image segmentation database (image no. 135069), ground truth, and segmented images obtained using different algorithms

[Box plots: IoCS and IoJV of tEM, RC2, N3, FSL, ASeg, and mEM]

Fig. 13 Box plot depicting the performance of different bias field correction algorithms on brain MR images

Table 6  Comparative performance analysis of different bias correction algorithms on brain MR images

Algorithm  IoCS                                       IoJV
           Mean       Std. dev.  Paired-t   Wilcoxon  Mean       Std. dev.  Paired-t   Wilcoxon
tEM        2.386287   0.514882   –          –         0.658200   0.235792   –          –
RC2        2.327793   0.528691   0.1687     0.5463    0.686928   0.233706   0.0305     0.0633
N3         2.339543   0.458204   0.0764     0.0576    0.654089   0.229192   0.7157     0.9628
FSL        1.705142   0.333216   2.5E−12    1.5E−11   0.727979   0.233788   3.2E−08    1.6E−09
ASeg       2.422572   0.476535   0.9651     0.8188    0.652563   0.243083   0.9104     0.9587
mEM        2.302948   0.531196   0.0135     5.3E−03   0.676211   0.280335   0.0597     0.2853

All the results reported in Figs. 14, 15, and 16 establish the fact that the proposed tEM algorithm estimates the bias field more accurately and restores images better than the existing methods do. Hence, all the results reported in Table 6 and Figs. 13, 14, 15, and 16 establish that the proposed tEM algorithm performs better bias correction on both the BrainWeb and IBSR databases than the existing simultaneous segmentation and bias correction algorithms, irrespective of the noise levels and bias field.

5.7 Segmentation Performance on MR Images

This section compares the segmentation performance of the proposed tEM algorithm with that of several existing algorithms, namely FSL [26], ASeg [65], mEM [23], KWFLICM [21], FGM [35], and GHMRF [73]. Results are reported in Fig. 17 and Table 7 with respect to three quantitative indices, namely Dice coefficient, sensitivity, and specificity.


(a) Input (b) tEM (c) HMRF-EM (d) tFM (e) RC2 (f) N3 (g) FSL (h) ASeg (i) mEM

Fig. 14 Input images of BrainWeb with 20% bias and restored images by different algorithms (top to bottom: noise 3%, 5%, and 7%)

(a) Input (b) tEM (c) HMRF-EM (d) tFM (e) RC2 (f) N3 (g) FSL (h) ASeg (i) mEM

Fig. 15 Input images of BrainWeb with 40% bias and restored images by different algorithms (top to bottom: noise 3%, 5%, and 7%)

From all the results reported in Fig. 17 and Table 7, it can be observed that the tEM provides significantly better segmentation results compared to FSL, ASeg, mEM, KWFLICM, and FGM, irrespective of the quantitative indices and statistical tests used. On the other hand, the performance of the proposed tEM is significantly better compared to the GHMRF with respect to sensitivity and specificity, while better but not statistically significant (marked in italics) with respect to the Dice coefficient, irrespective of the statistical tests applied.

Figures 18, 19, and 20 depict the comparative segmentation performance of different algorithms, along with the corresponding ground truth images. The segmented outputs generated by different methods establish the fact that the proposed tEM method generates more promising outputs than the existing algorithms do. The better performance of the tEM is achieved due to the fact that the Student's t-distribution provides a better representation of brain tissue classes than the Gaussian distribution, which makes the tEM perform well in brain MR image segmentation. Also, the integration of HMRF with the Student's t-distribution incorporates the spatial information of the neighboring pixels into the framework, which enables the algorithm to achieve robust and accurate segmentation results, even in the presence of heavy noise and outliers.
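The down-weighting of outliers invoked here follows from the standard Gaussian scale-mixture representation of the Student's t-distribution (see, e.g., Peel and McLachlan [54]); the univariate form below is a generic illustration of the mechanism and does not reproduce the paper's own equation numbering.

\[
U \mid \Omega_l \sim \mathrm{Gamma}\!\left(\tfrac{\nu_l}{2},\tfrac{\nu_l}{2}\right), \qquad
Y \mid U = u, \Omega_l \sim \mathcal{N}\!\left(\mu_l, \tfrac{\sigma_l^2}{u}\right)
\;\Longrightarrow\;
Y \mid \Omega_l \sim t_{\nu_l}\!\left(\mu_l, \sigma_l^2\right),
\qquad
E\!\left(U \mid y, \Omega_l\right) = \frac{\nu_l + 1}{\nu_l + (y - \mu_l)^2/\sigma_l^2},
\]

so an observation far from the class mean receives a small weight in the parameter updates, which is the robustness argument used throughout this section.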


(a) Input (b) tEM (c) HMRF-EM (d) tFM (e) RC2 (f) N3 (g) FSL (h) ASeg (i) mEM

Fig. 16 Input images of IBSR and restored images by different algorithms (top to bottom: volume no. 2, 5, 8, 10, and 14)

[Box plots: Dice coefficient, sensitivity, and specificity of tEM, FSL, ASeg, mEM, KWFLICM, FGM, and GHMRF]

Fig. 17 Box plot depicting the performance of different segmentation algorithms on brain MR images

Table 7  Comparative performance analysis of different segmentation algorithms on brain MR images

Algorithm  Dice coefficient                          Sensitivity                               Specificity
           Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon
tEM        0.851388  0.081129   –         –          0.893843  0.059205   –         –          0.979726  0.007310   –         –
FSL        0.830975  0.072109   2.1E−08   6.3E−08    0.884582  0.054352   3.7E−04   7.4E−05    0.976818  0.006379   4.9E−05   8.0E−06
ASeg       0.663345  0.190230   1.2E−08   1.5E−11    0.769097  0.199141   8.1E−05   1.6E−08    0.934725  0.029774   4.1E−12   1.5E−11
mEM        0.604897  0.121859   8.1E−14   1.5E−11    0.664417  0.128373   1.2E−13   1.5E−11    0.931670  0.025061   1.9E−14   1.5E−11
KWFLICM    0.816302  0.114600   1.2E−03   2.2E−03    0.836768  0.103975   2.8E−05   3.7E−09    0.973961  0.014014   6.9E−04   5.0E−04
FGM        0.841015  0.070310   0.0153    0.0130     0.866903  0.053153   2.6E−10   2.0E−10    0.974795  0.006424   1.1E−06   1.9E−07
GHMRF      0.846711  0.074879   0.1310    0.0540     0.884302  0.054940   1.5E−03   5.8E−05    0.977571  0.006814   5.6E−03   4.1E−04

5.8 Performance of Several t-Distribution-Based Segmentation Algorithms

The performance of the proposed segmentation algorithm is also compared with several Student's t-distribution-based segmentation algorithms, namely SMM-SC [49], AsymSMM [48], WSMM [72], DSVStMM [68], SDIStMM [69], SMM [61], and LPGMM [62], over several IIF, natural, and brain MR images.

5.8.1 Segmentation of HEp-2 Cell Images

(a) GT (b) tEM (c) HMRF-EM (d) tFM (e) FSL (f) ASeg (g) mEM (h) KWFLICM (i) FGM (j) GHMRF

Fig. 18 Ground truth and segmented images obtained using different algorithms on BrainWeb database with 20% bias (top to bottom: noise 3%, 5%, and 7%)

(a) GT (b) tEM (c) HMRF-EM (d) tFM (e) FSL (f) ASeg (g) mEM (h) KWFLICM (i) FGM (j) GHMRF

Fig. 19 Ground truth and segmented images obtained using different algorithms on BrainWeb database with 40% bias (top to bottom: noise 3%, 5%, and 7%)

(a) GT (b) tEM (c) HMRF-EM (d) tFM (e) FSL (f) ASeg (g) mEM (h) KWFLICM (i) FGM (j) GHMRF

Fig. 20 Ground truth and segmented images obtained using different algorithms on IBSR database (top to bottom: volume no. 2, 5, 8, 10, and 14)


[Box plots: Dice coefficient, sensitivity, and specificity of tHMRF, SMM-SC, AsymSMM, WSMM, DSVStMM, SDIStMM, SMM, and LPGMM]

Fig. 21 Box plot depicting the performance of different t-distribution-based segmentation algorithms on HEp-2 cell IIF images

Table 8  Performance of different t-distribution-based segmentation algorithms on HEp-2 cell IIF images

Algorithm  Dice coefficient                          Sensitivity                               Specificity
           Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon
tHMRF      0.837381  0.083767   –         –          0.840281  0.156247   –         –          0.854868  0.120998   –         –
SMM-SC     0.831370  0.089517   0.0791    0.0116     0.852496  0.178914   0.8925    0.9851     0.828286  0.152255   3.3E−03   1.6E−07
AsymSMM    0.815169  0.093724   8.2E−04   3.2E−04    0.854202  0.158139   0.9940    0.9987     0.788039  0.158622   3.1E−05   1.1E−06
WSMM       0.819151  0.088586   1.0E−03   4.0E−04    0.853766  0.157482   0.9877    0.9949     0.791738  0.155651   2.7E−05   9.4E−07
DSVStMM    0.826807  0.091927   0.0519    0.0168     0.853249  0.158100   0.9914    0.9994     0.823516  0.153570   1.5E−03   2.6E−08
SDIStMM    0.826226  0.091800   0.0392    0.0149     0.853051  0.156887   0.9851    0.9936     0.824131  0.153861   1.8E−03   2.6E−08
SMM        0.810541  0.105745   2.9E−03   3.0E−03    0.859361  0.188352   0.9088    0.9927     0.771666  0.180144   5.2E−05   1.4E−06
LPGMM      0.833490  0.087238   0.1582    0.0331     0.846820  0.176003   0.7576    0.9249     0.830410  0.151533   4.8E−03   1.7E−06

[Box plots: Dice coefficient, sensitivity, and specificity of tHMRF, SMM-SC, AsymSMM, WSMM, DSVStMM, SDIStMM, SMM, and LPGMM]

Fig. 22 Box plot depicting the performance of different t-distribution-based segmentation algorithms on natural images

From all the results reported in Fig. 21 and Table 8 for HEp-2 cell segmentation, it can be seen that the proposed tHMRF algorithm achieves significantly better HEp-2 cell segmentation results compared to several t-distribution-based segmentation algorithms, namely SMM-SC, AsymSMM, WSMM, DSVStMM, SDIStMM, SMM, and LPGMM, with respect to the specificity index for both statistical tests, while the performances of the existing t-distribution-based segmentation algorithms are better compared to tHMRF (marked in bold) with respect to sensitivity. On the other hand, the tHMRF achieves significantly better segmentation results compared to AsymSMM, WSMM, SDIStMM, and SMM with respect to the Dice coefficient. The performance of tHMRF is also better than SMM-SC, DSVStMM, and LPGMM with respect to the Dice coefficient when compared using p-values computed through the Wilcoxon signed-rank test, while its performance is better but not statistically significant when compared using p-values computed through the paired-t test.

5.8.2 Segmentation of Natural Images

From all the results reported in Fig. 22 and Table 9 for natural image segmentation, it is observed that the proposed tHMRF algorithm achieves significantly better segmentation results compared to several existing t-distribution-based segmentation algorithms, irrespective of the quantitative evaluation indices and statistical tests used.

Table 9  Performance of different t-distribution-based segmentation algorithms on natural images

Algorithm  Dice coefficient                          Sensitivity                               Specificity
           Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon
tHMRF      0.869843  0.061432   –         –          0.893905  0.071816   –         –          0.904738  0.062049   –         –
SMM-SC     0.861787  0.059844   2.0E−03   2.4E−03    0.888197  0.071363   7.4E−04   2.4E−04    0.900981  0.062258   3.0E−05   2.4E−04
AsymSMM    0.849864  0.060792   8.2E−03   3.4E−03    0.864884  0.073025   3.9E−04   2.4E−04    0.898937  0.063228   1.5E−03   1.7E−03
WSMM       0.850257  0.060657   9.5E−03   2.4E−03    0.871024  0.072219   1.5E−04   2.4E−04    0.900408  0.063012   7.5E−03   2.4E−04
DSVStMM    0.860130  0.059869   2.2E−03   3.4E−03    0.888312  0.072814   7.9E−03   0.0105     0.897860  0.062417   1.1E−05   2.4E−04
SDIStMM    0.860053  0.059879   2.6E−03   3.4E−03    0.886818  0.073276   8.3E−04   1.2E−03    0.897641  0.062567   5.8E−06   2.4E−04
SMM        0.849812  0.060366   2.5E−06   2.4E−04    0.887896  0.076145   3.5E−03   4.9E−04    0.892906  0.066853   6.1E−04   1.2E−03
LPGMM      0.863955  0.061137   1.5E−03   2.4E−03    0.888778  0.071872   1.3E−04   2.4E−04    0.901314  0.062440   4.1E−04   2.4E−04

[Box plots: Dice coefficient, sensitivity, and specificity of tEM, SMM-SC, AsymSMM, WSMM, DSVStMM, SDIStMM, SMM, and LPGMM]

Fig. 23 Box plot depicting the performance of different t-distribution-based segmentation algorithms on brain MR images

Table 10  Performance of different t-distribution-based segmentation algorithms on brain MR images

Algorithm  Dice coefficient                          Sensitivity                               Specificity
           Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon   Mean      Std. dev.  Paired-t  Wilcoxon
tEM        0.851388  0.081129   –         –          0.893843  0.059205   –         –          0.979726  0.007310   –         –
SMM-SC     0.841535  0.082658   2.7E−09   1.6E−09    0.888741  0.059190   3.0E−09   1.5E−11    0.975176  0.007699   1.6E−15   1.5E−11
AsymSMM    0.832257  0.082027   5.4E−07   4.0E−08    0.859384  0.059471   2.4E−10   1.5E−11    0.972796  0.009919   1.9E−08   8.0E−10
WSMM       0.834090  0.082617   5.7E−06   1.1E−07    0.866949  0.056955   6.4E−12   1.5E−11    0.974768  0.009905   1.2E−05   4.5E−09
DSVStMM    0.842337  0.080105   1.1E−08   4.7E−08    0.887552  0.058522   2.6E−07   7.3E−07    0.972087  0.007716   2.2E−16   1.5E−11
SDIStMM    0.842303  0.080071   2.3E−08   5.4E−08    0.886146  0.058203   1.5E−10   6.3E−10    0.971900  0.007685   2.2E−16   1.5E−11
SMM        0.825053  0.080236   1.2E−11   1.5E−11    0.854051  0.056180   1.3E−15   1.5E−11    0.971702  0.007956   8.7E−13   1.5E−11
LPGMM      0.844257  0.080603   8.9E−09   9.3E−09    0.887379  0.058875   3.6E−05   1.3E−09    0.975382  0.007627   1.6E−10   1.5E−11

5.8.3 Segmentation of Brain MR Images

From all the results reported in Fig. 23 and Table 10 for segmentation of brain MR images, it is visible that the proposed tEM algorithm achieves significantly better segmentation results compared to several existing t-distribution-based segmentation algorithms, irrespective of the quantitative evaluation indices and statistical tests used. One of the main reasons behind the improved performance of tEM is that the proposed tEM algorithm incorporates the bias field correction procedure inside the probabilistic framework for simultaneous segmentation and bias field reduction from MR images, while the other existing algorithms do not consider the removal of this degrading artifact.
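In the proposed framework, the bias field enters additively in the class-conditional mean, as can be read off the integrand of equation (53) reproduced in the Appendix. Schematically, in our own notation and only as an illustration,

\[
y_i \mid u_i, x_i = l \;\sim\; \mathcal{N}\!\left(\mu_l + \beta_i,\; \sigma_l^2 / u_i\right),
\]

so the bias term \(\beta_i\) is estimated jointly with the class parameters during the EM iterations rather than being removed in a separate preprocessing step.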

6 Conclusion

The contributions of the paper are as follows:

1. development of a new image segmentation algorithm, termed as tHMRF, integrating judiciously the merits of Student's t-distribution into the HMRF model;
2. development of a new simultaneous segmentation and bias field correction algorithm, termed as tEM, for MR image segmentation; and
3. demonstrating the efficacy of the proposed tHMRF and tEM algorithms, along with a comparison with other related algorithms, on a set of IIF, natural, and real and simulated brain MR images.

The objective of the proposed research work is to find a better model for the intensity distribution of an image than the finite Gaussian mixture model. To achieve this goal, the paper incorporates the Student's t-distribution to model an image class. Having a longer tail than the Gaussian distribution, the Student's t-distribution provides a more robust fitting of the image classes, as the outlier observations are given reduced weight during the parameter estimation step.

The major contribution of the paper lies in developing a methodology for the segmentation of images. It integrates the merits of the Student's t-distribution into the HMRF framework to achieve robust and accurate image segmentation, even in a noisy environment. Integrating the bias field correction step, a novel simultaneous segmentation and bias field correction algorithm has also been proposed for the segmentation of MR images. Finally, the effectiveness of the proposed tHMRF and tEM algorithms is demonstrated both qualitatively and quantitatively, along with a comparison with other related algorithms, on a set of IIF images, natural images, and brain MR images. Although the performance of the proposed segmentation algorithm has been demonstrated on IIF images and natural images, the algorithm can also be applied to other segmentation problems. Similarly, the simultaneous segmentation and bias field correction algorithm can also be applied to segment other, non-brain MR images.

Acknowledgements This work is partially supported by the Department of Science and Technology, Government of India, New Delhi (grant no. SB/S3/EECE/050/2015), and the Department of Electronics and Information Technology, Government of India (grant no. PhD-MLA/4(90)/2015-16).

Appendix

Defining the clique potential as \(V_c(\mathbf{x}) = -\delta(x_i - x_j)\), equation (20) is derived as

\[
\begin{aligned}
p(\mathbf{x}) &= \frac{1}{Z}\exp\{-U(\mathbf{x})\}
              = \frac{1}{Z}\exp\Big\{-\sum_{c \in C} V_c(\mathbf{x})\Big\}
              = \frac{1}{Z}\exp\Big\{\sum_{c \in C} \delta(x_i - x_j)\Big\} \\
              &= \frac{1}{Z}\exp\Big\{\sum_{i \in S}\sum_{j \in N_i} \delta(x_i - x_j)\Big\}
              = \frac{1}{Z}\exp\Big\{\sum_{i \in S} \hat{u}_i(x_i)\Big\}.
\end{aligned}
\]

Equation (53) is derived as

\[
\begin{aligned}
\frac{\partial}{\partial \beta_i} p(y_i \mid \beta_i, \Omega_l)
 &= \frac{\partial}{\partial \beta_i}\int_{-\infty}^{\infty} p(y_i \mid u_i, \beta_i, \Omega_l)\, p(u_i \mid \Omega_l)\, \mathrm{d}u_i \\
 &= \int_{-\infty}^{\infty} \frac{\partial}{\partial \beta_i} p(y_i \mid u_i, \beta_i, \Omega_l)\, p(u_i \mid \Omega_l)\, \mathrm{d}u_i \\
 &= \int_{-\infty}^{\infty} \frac{u_i (y_i - \beta_i - \mu_l)}{\sigma_l^2}\, p(y_i \mid u_i, \beta_i, \Omega_l)\, p(u_i \mid \Omega_l)\, \mathrm{d}u_i \\
 &= \frac{(y_i - \beta_i - \mu_l)}{\sigma_l^2}\int_{-\infty}^{\infty} u_i\, p(y_i, u_i \mid \beta_i, \Omega_l)\, \mathrm{d}u_i \\
 &= \frac{(y_i - \beta_i - \mu_l)}{\sigma_l^2}\, p(y_i \mid \beta_i, \Omega_l)\int_{-\infty}^{\infty} u_i\, p(u_i \mid y_i, \beta_i, \Omega_l)\, \mathrm{d}u_i \\
 &= \frac{(y_i - \beta_i - \mu_l)}{\sigma_l^2}\, p(y_i \mid \beta_i, \Omega_l)\, E(U_i \mid y_i, \beta_i, \Omega_l)
  = \frac{(y_i - \beta_i - \mu_l)}{\sigma_l^2}\, p(y_i \mid \beta_i, \Omega_l)\, u_{il}.
\end{aligned}
\]

References

1. Abras, G.N., Ballarin, V.L.: A weighted k-means algorithm applied to brain tissue classification. J. Comput. Sci. Technol. 5(3), 121–126 (2005)
2. Ahmed, M.N., Yamany, S.M., Mohamed, N., Farag, A.A., Moriarty, T.: A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans. Med. Imaging 21(3), 193–199 (2002)
3. Banerjee, A., Maji, P.: Rough sets for bias field correction in MR images using contraharmonic mean and quantitative index. IEEE Trans. Med. Imaging 32(11), 2140–2151 (2013)
4. Banerjee, A., Maji, P.: Rough sets and stomped normal distribution for simultaneous segmentation and bias field correction in brain MR images. IEEE Trans. Image Process. 24(12), 5764–5776 (2015)
5. Banerjee, A., Maji, P.: Rough-probabilistic clustering and hidden Markov random field model for segmentation of HEp-2 cell and brain MR images. Appl. Soft Comput. 46, 558–576 (2016)
6. Bello, M.G.: A combined Markov random field and wave-packet transform-based approach for image segmentation. IEEE Trans. Image Process. 3(6), 834–846 (1994)

7. Bergo, F.P.G., Falcão, A.X., Miranda, P.A.V., Rocha, L.M.: Auto- 29. Lee, C., Huh, S., Ketter, T., Unser, M.: Unsupervised connectivity- matic image segmentation by tree pruning. J. Math. Imaging Vis. based thresholding segmentation of midsagittal brain MR images. 29(2), 141–162 (2007) Comput. Biol. Med. 28(3), 309–338 (1998) 8. Besag, J.: Spatial interaction and the statistical analysis of lattice 30. Lempitsky, V., Blake, A., Rother, C.: Branch-and-mincut: global systems. J. R. Stat. Soc. Ser. B 36(2), 192–326 (1974) optimization for image segmentation with high-level priors. J. 9. Besag, J.: On the statistical analysis of dirty pictures. J. R. Stat. Math. Imaging Vis. 44(3), 315–329 (2012) Soc. Ser. B 48(3), 259–302 (1986) 31. Li, C.L., Goldgof, D.B., Hall, L.O.: Knowledge-based classifica- 10. Bezdek, J.C., Hall, L.O., Clarke, L.P.: Review of MR image seg- tion and tissue labeling of MR images of human brain. IEEE Trans. mentation techniques using . Med. Phys. 20(4), Med. Imaging 12(4), 740–750 (1993) 1033–1048 (1993) 32. Li, F., Shen, C., Pi, L.: A new diffusion-based variational model for 11. Brandt, M.E., Bohan, T.P., Kramer, L.A., Fletcher, J.M.: Estimation image denoising and segmentation. J. Math. Imaging Vis. 26(1), of CSF, white and gray matter volumes in hydrocephalic chil- 115–125 (2006) dren using fuzzy clustering of MR images. Comput. Med. Imaging 33. Li, H.D., Kallergi, M., Clarke, L.P., Jain, V.K., Clark, R.A.: Markov Graph. 18, 25–34 (1994) random field for tumor detection in digital mammography. IEEE 12. Cagnoni, S., Coppini, G., Rucci, M., Caramella, D., Valli, G.: Neu- Trans. Med. Imaging 14(3), 565–576 (1995) ral network segmentation of magnetic resonance spin echo images 34. Li, Y., Chi, Z.: MR brain image segmentation based on self- of the brain. J. Biomed. Eng. 15(5), 355–362 (1993) organizing map network. Int. J. Inf. Technol. 11(8), 45–53 (2005) 13. Cai, W., Chen, S., Zhang, D.: Fast and robust fuzzy c-means 35. Liang, Z., MacFall, J.R., Harrington, D.P.: Parameter estimation clustering algorithms incorporating local information for image and tissue segmentation from multispectral MR images. IEEE segmentation. Pattern Recognit. 40, 825–838 (2007) Trans. Med. Imaging 13(3), 441–449 (1994) 14. Chang, J.C., Chou, T.: Iterative graph cuts for image segmentation 36. Liew, A.W.C., Yan, H.: An adaptive spatial fuzzy clustering algo- with a nonlinear statistical shape prior. J. Math. Imaging Vis. 49(1), rithm for 3-D MR image segmentation. IEEE Trans. Med. Imaging 87–97 (2014) 22(9), 1063–1075 (2003) 15. Chen, S., Zhang, D.: Robust image segmentation using FCM with 37. Liu, C., Dong, F., Zhu, S., Kong, D., Liu, K.: New variational spatial constraints based on new kernel-induced distance measure. formulations for level set evolution without reinitialization with IEEE Trans. Syst. Man Cybern. Part B Cybern. 34(4), 1907–1916 applications to image segmentation. J. Math. Imaging Vis. 41(3), (2004) 194–209 (2011) 16. Chen, X., Udupa, J.K., Bagci, U., Zhuge, Y.,Yao, J.: Medical image 38. Liu, J., Zhang, H.: Image segmentation using a local GMM in segmentation by combining graph cuts and oriented active appear- a variational framework. J. Math. Imaging Vis. 46(2), 161–176 ance models. IEEE Trans. Image Process. 21(4), 2035–2046 (2012) (2013) 17. Ciesielski, K.C., Udupa, J.K., Falcão, A.X., Miranda, P.A.V.:Fuzzy 39. 
Maji, P., Kundu, M.K., Chanda, B.: Second order fuzzy measure connectedness image segmentation in graph cut formulation: a and weighted co-occurrence matrix for segmentation of brain MR linear-time algorithm and a comparative analysis. J. Math. Imaging images. Fundamenta Informaticae 88(1–2), 161–176 (2008) Vis. 44(3), 375–398 (2012) 40. Maji, P., Pal, S.K.: Rough set based generalized fuzzy c-means 18. Cyganek, B.: One-class support vector ensembles for image seg- algorithm and quantitative indices. IEEE Trans. Syst. Man Cybern. mentation and classification. J. Math. Imaging Vis. 42(2), 103–117 Part B Cybern. 37(6), 1529–1540 (2007) (2012) 41. Maji, P., Paul, S.: Rough-fuzzy clustering for grouping functionally 19. Diplaros, A., Vlassis, N., Gevers, T.: A spatially constrained gen- similar genes from microarray data. IEEE/ACM Trans. Comput. erative model and an EM algorithm for image segmentation. IEEE Biol. Bioinform. 10(2), 286–299 (2013) Trans. Neural Netw. 18(3), 798–808 (2007) 42. Maji, P., Roy, S.: Rough-fuzzy clustering and multiresolution 20. Foggia, P., Percannella, G., Soda, P., Vento, M.: Benchmarking image analysis for text-graphics segmentation. Appl. Soft Com- HEp-2 cells classification methods. IEEE Trans. Med. Imaging put. 30, 705–721 (2015) 32(10), 1878–1889 (2013) 43. Maji, P., Roy, S.: Rough-fuzzy clustering and unsupervised feature 21. Gong, M., Liang, Y., Shi, J., Ma, W., Ma, J.: Fuzzy c-means selection for wavelet based MR image segmentation. PLoS One clustering with local information and kernel metric for image seg- 10(4), e0123,677 (2015). doi:10.1371/journal.pone.0123677 mentation. IEEE Trans. Image Process. 22(2), 573–584 (2013) 44. Mangin, J.F., Frouin, V., Bloch, I., Rgis, J., Lpez-Krahe, J.: From 22. Greenspan, H., Ruf, A., Goldberger, J.: Constrained Gaussian mix- 3D magnetic resonance images to structural representations of ture model framework for automatic segmentation of MR brain the cortex topography using topology preserving deformations. J. images. IEEE Trans. Med. Imaging 25(9), 1233–1245 (2006) Math. Imaging Vis. 5(4), 297–318 (1995) 23. Guillemaud, R., Brady, M.: Estimating the bias field of MR images. 45. Manousakes, I.N., Undrill, P.E., Cameron, G.G.: Split and merge IEEE Trans. Med. Imaging 16(3), 238–251 (1997) segmentation of magnetic resonance medical images: performance 24. Hall, L.O., Bensaid, A.M., Clarke, L.P., Velthuizen, R.P., Silbiger, evaluation and extension to three dimensions. Comput. Biomed. M.S., Bezdek, J.C.: A comparison of neural network and fuzzy Res. 31(6), 393–412 (1998) clustering techniques in segmenting magnetic resonance images of 46. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human the brain. IEEE Trans. Neural Netw. 3(5), 672–682 (1992) segmented natural images and its application to evaluating seg- 25. Held, K., Kops, E.R., Krause, B.J., Wells III, W.M., Kikinis, R., mentation algorithms and measuring ecological . In: Muller-Gartner, H.W.: Markov random field segmentation of brain Proceedings of the 8th IEEE International Conference on Com- MR images. IEEE Trans. Med. Imaging 16(6), 878–886 (1997) puter Vision, pp. 416–423 (2001) 26. Jenkinson, M., Beckmann, C.F., Behrens, T.E., Woolrich, M.W., 47. McInerney, T., Terzopoulos, D.: T-snakes: topology adaptive Smith, S.M.: FSL. Neuroimage 62(2), 782–790 (2012) snakes. Med. Image Anal. 4(2), 73–91 (2000) 27. Kent, J.T., Tyler, D.E., Vard, Y.: A curious likelihood identity for 48. 
Nguyen, T.M., Wu, Q.M.J.: A robust non-symmetric mixture mod- the multivariate t-distribution. Comm. Stat. Simul. Comput. 23(2), els for image segmentation. In: Proceedings of the 19th IEEE 441–453 (1994) International Conference on Image Processing (2012) 28. Krinidis, S., Chatzis, V.: A robust fuzzy local information c-means 49. Nguyen, T.M., Wu, Q.M.J.: Robust student’s-t mixture model with clustering algorithm. IEEE Trans. Image Process. 19(5), 1328– spatial constraints and its application in medical image segmenta- 1337 (2010) tion. IEEE Trans. Med. Imaging 31(1), 103–116 (2012) 123 J Math Imaging Vis (2018) 60:355–381 381

50. Nguyen, T.M., Wu, Q.M.J.: Fast and robust spatially constrained 72. Zhang, H., Wu, Q.M.J., Nguyen, T.M.: Image segmentation by a Gaussian mixture model for image segmentation. IEEE Trans. Cir- new weighted student’s t-mixture model. IET Image Process. 7(3), cuits Syst. Video Technol. 23(4), 621–635 (2013) 240–251 (2013) 51. Nguyen, T.M., Wu, Q.M.J., Ahuja, S.: An extension of the standard 73. Zhang, Y., Brady, M., Smith, S.: Segmentation of brain MR images mixture model for image segmentation. IEEE Trans. Neural Netw. through a hidden Markov random field model and the expectation- 21(8), 1326–1338 (2010) maximization algorithm. IEEE Trans. Med. Imaging 20(1), 45–57 52. Otsu, N.: A threshold selection method from gray-level histogram. (2001) IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979) 74. Zhuge, Y., Udupa, J.K., Saha, P.K.: Vectorial scale-based fuzzy- 53. Pal, N.R., Pal, S.K.: A review on image segmentation techniques. connected image segmentation. Comput. Vis. Image Underst. 101, Pattern Recognit. 26(9), 1277–1294 (1993) 177–193 (2006) 54. Peel, D., McLachlan, G.J.: Robust mixture modelling using the t-distribution. Stat. Comput. 10, 339–348 (2000) 55. Pham, D.L., Prince, J.L.: Adaptive fuzzy segmentation of mag- netic resonance images. IEEE Trans. Med. Imaging 18(9), 737–752 Abhirup Banerjee received (1999) the B.Sc. degree in Statis- 56. Pi, L., Fan, J., Shen, C.: Color image segmentation for objects of tics from University of Cal- interest with modified geodesic active contour method. J. Math. cutta, India in 2009 and Master Imaging Vis. 27(1), 51–57 (2007) degree in Statistics and Ph.D. 57. Reddick, W.E., Glass, J.O., Cook, E.N., Elkin, T.D., Deaton, R.J.: degree in Computer Science both Automated segmentation and classification of multispectral mag- from Indian Statistical Institute, netic resonance images of brain using artificial neural networks. India, in 2011 and 2017, respec- IEEE Trans. Med. Imaging 16(6), 911–918 (1997) tively. His research interests 58. Saha, P.K., Udupa, J.K.: Relative fuzzy connectedness among include biomedical image anal- multiple objects: theory, algorithms, and applications in image seg- ysis, image processing, statisti- mentation. Comput. Vis. Image Underst. 82, 42–56 (2001) cal pattern recognition, machine 59. Sahoo, P.K., Soltani, S., Wong, A.K.C., Chen, Y.C.: A survey of learning, and so forth. He has thresholding techniques. Comput. Vis. Graph. Image Process. 41, published around 10 papers in 233–260 (1988) international journals and con- 60. Sawatzky, A., Tenbrinck, D., Jiang, X., Burger, M.: A variational ferences. He is also a reviewer of many international journals. Dr. framework for region-based segmentation incorporating physical Banerjee has received the ISCA Young Scientist Award from the Indian noise models. J. Math. Imaging Vis. 47(3), 179–209 (2013) Science Congress Association in the year 2016–2017 and the Second 61. Sfikas, G., Nikou, C., Galatsanos, N.: Robust image segmentation Prize in the Fourth IDRBT Doctoral Colloquium in 2014. with mixtures of students’s t-distribution. In: Proceedings of the IEEE International Conference on Image Processing, pp. 273–276 (2007) 62. Sfikas, G., Nikou, C., Galatsanos, N., Heinrich, C.: Spatially vary- ing mixtures incorporating line processes for image segmentation. J. Math. Imaging Vis. 36(2), 91–110 (2010) Pradipta Maji received the 63. Sled, J.G., Zijdenbos, A.P., Evans, A.C.: A nonparametric method B.Sc. 
degree in Physics, the for automatic correction of intensity nonuniformity in MRI data. MSc degree in Electronics sci- IEEE Trans. Med. Imaging 17(1), 87–97 (1998) ence, and the Ph.D. degree in 64. Wang, X.Y., Bua, J.: A fast and robust image segmentation using the area of Computer Science FCM with spatial information. Digit. Signal Process. 20, 1173– from Jadavpur University, India, 1182 (2010) in 1998, 2000, and 2005, respec- 65. Wells III, W.M., Grimson, W.E.L., Kikins, R., Jolezs, F.A.: Adap- tively. Currently, he is an asso- tive segmentation of MRI data. IEEE Trans. Med. Imaging 15(8), ciate professor in the Machine 429–442 (1996) Intelligence Unit, Indian Statisti- 66. Xia, Y., Feng, D., Wang, T., Zhao, R., Zhang, Y.: Image segmen- cal Institute, Kolkata, India. His tation by clustering of spatial patterns. Pattern Recognit. Lett. 28, research interests include pat- 1548–1555 (2007) tern recognition, machine learn- 67. Xiao, K., Ho, S.H., Hassanien, A.E.: Automatic unsupervised seg- ing, soft computing, computa- mentation methods for MRI based on modified fuzzy c-means. tional biology and bioinformat- Fundamenta Informaticae 87(3–4), 465–481 (2008) ics, medical image processing, and so forth. He has published more than 68. Xiong, T., Yi, Z., Zhang, L.: Grayscale image segmentation by 100 papers in international journals and conferences. He is an author of spatially variant mixture model with student’s t-distribution. Mul- a book published by Wiley-IEEE Computer Society Press and another timed. Tools Appl. 72(1), 167–189 (2014) book published by Springer-Verlag, London. Dr. Maji has received the 69. Xiong, T., Zhang, L., Yi, Z.: Robust t-distribution mixture modeling 2008 Microsoft Young Faculty Award from Microsoft Research Lab- via spatially directional information. Neural Comput. Appl. 24(6), oratory India Pvt., the 2009 Young Scientist Award from the National 1269–1283 (2014) Academy of Sciences, India, and the 2011 Young Scientist Award from 70. Yang, Z., Chung, F.L., Shitong, W.: Robust fuzzy clustering-based the Indian National Science Academy, and has been selected as the 2009 image segmentation. Appl. Soft Comput. 9, 80–84 (2009) Young Associate of the Indian Academy of Sciences, India. 71. Zhang, H., Wu, Q.M.J., Nguyen, T.M.: A robust fuzzy algorithm based on students t-distribution and mean template for image seg- mentation application. IEEE Signal Process. Lett. 20(2), 117–120 (2013)
