Estimating the Granularity Coefficient of a Potts-Markov Random Field Within an MCMC Algorithm

Published in: IEEE Transactions on Image Processing, vol. 22, no. 6, pp. 2385–2397, 2013. DOI: 10.1109/TIP.2013.2249076. ISSN 1057-7149.

Marcelo Pereyra, Nicolas Dobigeon, Hadj Batatia and Jean-Yves Tourneret

Abstract—This paper addresses the problem of estimating the Potts parameter β jointly with the unknown parameters of a Bayesian model within a Markov chain Monte Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem because performing inference on β requires computing the intractable normalizing constant of the Potts model. In the proposed MCMC method, the estimation of β is conducted using a likelihood-free Metropolis–Hastings algorithm. Experimental results obtained for synthetic data show that estimating β jointly with the other unknown parameters leads to estimation results that are as good as those obtained with the actual value of β. On the other hand, choosing an incorrect value of β can degrade estimation performance significantly. To illustrate the interest of this method, the proposed algorithm is successfully applied to real bidimensional SAR and tridimensional ultrasound images.

Index Terms—Bayesian estimation, Gibbs sampler, intractable normalizing constants, mixture model, Potts-Markov field.

Manuscript received April 12, 2012; revised October 25, 2012; accepted January 26, 2013. Date of publication February 26, 2013; date of current version April 17, 2013. This work was supported in part by the CAMM4D Project, funded by the French FUI and the Midi-Pyrénées region, by the SuSTaIN Program (EPSRC Grant EP/D063485/1, Department of Mathematics, University of Bristol), and by the Hypanema ANR Project ANR-12-BS03-003. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Rafael Molina.

M. Pereyra is with the School of Mathematics, University of Bristol, University Walk BS8 1TW, U.K. (e-mail: [email protected]). N. Dobigeon, H. Batatia, and J.-Y. Tourneret are with the University of Toulouse, IRIT/INP-ENSEEIHT/TéSA, Toulouse 31071, France (e-mail: [email protected]; [email protected]; [email protected]).

I. INTRODUCTION

Modeling spatial correlation in images is fundamental in many image processing applications. Markov random fields (MRFs) have been recognized as efficient tools for capturing these spatial correlations [1]–[8]. One particular MRF often used for Bayesian classification and segmentation is the Potts model, which generalizes the binary Ising model to arbitrary discrete vectors. The amount of spatial correlation introduced by this model is controlled by the so-called granularity coefficient β. In most applications, this important parameter is set heuristically by cross-validation.

This paper studies the problem of estimating the Potts coefficient β jointly with the other unknown parameters of a standard Bayesian image classification or segmentation problem. More precisely, we consider Bayesian models defined by a conditional observation model with unknown parameters and a discrete hidden label vector z whose prior distribution is a Potts model with hyperparameter β (this Bayesian model is defined in Section II). From a methodological perspective, inference on β is challenging because the distribution f(z, β) depends on the normalizing constant of the Potts model (hereafter denoted as C(β)), which is generally intractable. This problem has received some attention in the recent image processing literature, as solving it would lead to fully unsupervised algorithms [9]–[13].

In this work we focus on the estimation of β within a Markov chain Monte Carlo (MCMC) algorithm that handles 2D or 3D data sets [14]–[18]. MCMC methods are powerful tools for handling Bayesian inference problems for which the minimum mean square error (MMSE) or the maximum a posteriori (MAP) estimators are difficult to derive analytically. MCMC methods generate samples that are asymptotically distributed according to the joint posterior of the unknown model parameters; these samples are then used to approximate the Bayesian estimators. However, standard MCMC methods cannot be applied directly to Bayesian problems based on the Potts model. Indeed, inference on β requires computing the normalizing constant C(β), which is generally intractable. Specific MCMC algorithms have been designed to estimate Markov field parameters in [19], [20] and more recently in [9], [10]. A variational Bayes algorithm based on an approximation of C(β) has also been recently proposed in [11]. Maximum likelihood estimation of β within expectation-maximization (EM) algorithms has been studied in [12], [13], [21]. The strategies used in these works for avoiding the computation of C(β) are summarized below.

A. Pseudo-Likelihood Estimators

One possibility to avoid evaluating C(β) is to eliminate it from the posterior distribution of interest. More precisely, one can define a prior distribution f(β) such that the normalizing constant cancels out from the posterior (i.e., f(β) ∝ C(β) 1_{R+}(β), where 1_{R+}(·) is the indicator function on R+), resulting in the so-called pseudo-likelihood estimators [22]. Although analytically convenient, this approach can result in poor estimation unless β is small [23].

B. Approximation of C(β)

Another possibility is to approximate the normalizing constant C(β). Existing approximations can be classified into three categories: those based on analytical developments, those based on sampling strategies, and combinations of both. A survey of the state-of-the-art approximation methods up to 2004 has been presented in [20]. The methods considered in [20] are the mean field, the tree-structured mean field and the Bethe energy (loopy Metropolis) approximations, as well as two sampling strategies based on Langevin MCMC algorithms. It is reported in [20] that mean field type approximations, which have been successfully used within EM [24], [25] and stochastic EM algorithms [26], generally perform poorly in MCMC algorithms. More recently, exact recursive expressions have been proposed to compute C(β) analytically [11]. However, to our knowledge, these recursive methods have only been successfully applied to small problems (i.e., to MRFs of size smaller than 40×40) with reduced spatial correlation β < 0.5.

Another sampling-based approximation consists in estimating C(β) by Monte Carlo integration [27, Ch. 3], at the expense of very substantial computation and possibly biased estimations (bias arises from the estimation error of C(β)). Better results can be obtained by using importance sampling or path sampling methods [28]. These methods have been applied to the estimation of β within an MCMC image processing algorithm in [19]. Although more precise than Monte Carlo integration, approximating C(β) by importance sampling or path sampling still requires substantial computation and is generally unfeasible for large fields. This has motivated recent works that reduce computation by combining importance sampling with analytical approximations. More precisely, approximation methods that combine importance sampling with extrapolation schemes have been proposed for the Ising model (i.e., a 2-state Potts model) in [9] and for the 3-state Potts model in [10]. However, we have found that this extrapolation technique introduces significant bias [29].

C. Auxiliary Variables

Auxiliary variable methods cancel C(β) out of the Metropolis–Hastings acceptance ratio by augmenting the model with carefully chosen auxiliary variables [30]; however, the perfect sampling steps they require involve substantial computation and are generally too costly for image processing applications. An alternative auxiliary variable method based on a one-sample estimator of the ratio C(β)/C(β*) has been proposed in [33] and has recently been improved by using several auxiliary vectors and sequential Monte Carlo samplers in [34] (the ratio C(β)/C(β*) arises in the MCMC algorithm defined in Section III-C). More details on the application of [33] to the estimation of the Potts coefficient β are provided in a separate technical report [29].

D. Likelihood-Free Methods

Finally, it is possible to avoid computing the normalizing constant C(β) by using likelihood-free MCMC methods [35]. These methods circumvent explicit evaluation of intractable likelihoods within an MH algorithm by using a simulation-based approximation. More precisely, akin to the auxiliary variable method [30], an auxiliary vector w distributed according to the likelihood f(z|β) is introduced. MH algorithms that do not require evaluating f(z|β) (nor C(β)) can then be considered to generate samples that are asymptotically distributed according to the exact posterior distribution f(β|z) [35]. Although generally unfeasible¹, these exact methods have given rise to the approximate Bayesian computation (ABC) framework [36], which studies likelihood-free methods to generate samples from approximate posterior densities fε(β|z) ≈ f(β|z) at a reasonable computational cost. To our knowledge these promising techniques, which are increasingly regarded as "the most satisfactory approach to intractable likelihood problems" [36], have not yet been applied to image processing problems.

The main contribution of this paper is to propose an ABC MCMC algorithm for the joint estimation of the label vector z, the granularity coefficient β and the other unknown parameters of a Bayesian segmentation problem based on a Potts model. The estimation of β is included within an MCMC algorithm.
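To make the intractability of C(β) concrete: on a first-order (4-neighbour) lattice, the Potts prior takes the form f(z | β) = C(β)⁻¹ exp(β Σ δ(z_s = z_s')), where the sum runs over neighbouring pixel pairs, so C(β) is a sum of K^n terms for an n-pixel, K-class image. The following minimal sketch (the 4-neighbour lattice is an assumption for illustration; the paper's exact model is defined in its Section II) enumerates C(β) by brute force, which is only feasible for toy grids:

```python
import itertools
import math
import numpy as np

def potts_log_unnormalized(z, beta):
    """Unnormalized Potts log-density: beta times the number of
    agreeing 4-neighbour pairs in the label image z."""
    agree = np.sum(z[:, :-1] == z[:, 1:]) + np.sum(z[:-1, :] == z[1:, :])
    return beta * float(agree)

def potts_normalizing_constant(shape, K, beta):
    """C(beta) by exhaustive summation over all K**(h*w) label maps.
    Only feasible for tiny grids -- which is precisely why C(beta)
    is intractable for real images."""
    h, w = shape
    total = 0.0
    for labels in itertools.product(range(K), repeat=h * w):
        z = np.array(labels).reshape(h, w)
        total += math.exp(potts_log_unnormalized(z, beta))
    return total

# A 3x3 Ising field (K = 2) already requires summing 2**9 = 512 terms;
# a modest 64x64 image with K = 3 would require 3**4096 terms.
C = potts_normalizing_constant((3, 3), K=2, beta=1.0)
```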
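The pseudo-likelihood strategy described above can be illustrated with Besag's classical estimator, which replaces the intractable joint f(z | β) by a product of tractable site-wise conditionals P(z_s | z_neighbours, β) and maximizes the result in β. A hedged sketch (4-neighbour lattice and a simple grid search assumed; function names are illustrative and not the paper's implementation):

```python
import numpy as np

def log_pseudo_likelihood(z, beta, K):
    """Besag's log pseudo-likelihood for a K-state Potts field on a
    4-neighbour lattice. z is an (h, w) array of integer labels in
    0..K-1. Each factor P(z_s | neighbours, beta) is tractable
    because its normalization runs over K states only."""
    h, w = z.shape
    # n[k, i, j] = number of neighbours of site (i, j) with label k
    n = np.zeros((K, h, w))
    for k in range(K):
        m = (z == k).astype(float)
        n[k, :, 1:] += m[:, :-1]   # left neighbour
        n[k, :, :-1] += m[:, 1:]   # right neighbour
        n[k, 1:, :] += m[:-1, :]   # top neighbour
        n[k, :-1, :] += m[1:, :]   # bottom neighbour
    site_counts = n[z, np.arange(h)[:, None], np.arange(w)]  # n_{z_s}(s)
    log_norm = np.log(np.sum(np.exp(beta * n), axis=0))
    return float(np.sum(beta * site_counts - log_norm))

def mple(z, K, betas):
    """Maximum pseudo-likelihood estimate of beta over a candidate grid."""
    scores = [log_pseudo_likelihood(z, b, K) for b in betas]
    return betas[int(np.argmax(scores))]
```

Because the pseudo-likelihood ignores the global coupling encoded by C(β), this estimator is convenient but can be inaccurate unless β is small, as noted above.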
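To make the likelihood-free mechanism concrete, the sketch below implements a uniform-kernel ABC Metropolis–Hastings sampler for β alone, under several assumptions not taken from the paper: a known label field z, a flat prior on β > 0, a Gaussian random-walk proposal, a single-site Gibbs sampler to draw the auxiliary field w, and the neighbour-agreement count (the Potts sufficient statistic) as summary statistic. It is a simplified stand-in, not the paper's algorithm, which is defined in its Section III and estimates z and β jointly:

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbour_agreements(z):
    """Potts sufficient statistic: number of agreeing 4-neighbour pairs."""
    return int(np.sum(z[:, :-1] == z[:, 1:]) + np.sum(z[:-1, :] == z[1:, :]))

def gibbs_sample_potts(shape, K, beta, sweeps=20):
    """Draw an (approximate) sample w ~ f(.|beta) by single-site Gibbs
    sweeps over a 4-neighbour lattice, starting from an iid field."""
    h, w_ = shape
    z = rng.integers(K, size=shape)
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w_):
                counts = np.zeros(K)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w_:
                        counts[z[ni, nj]] += 1
                p = np.exp(beta * counts)
                z[i, j] = rng.choice(K, p=p / p.sum())
    return z

def abc_mh_beta(z_obs, K, n_iter=200, step=0.2, eps=None):
    """Likelihood-free MH for beta: a proposal beta' is kept only if a
    pseudo-data field w ~ f(.|beta') reproduces the observed summary
    statistic up to tolerance eps (flat prior on beta > 0 assumed)."""
    eta_obs = neighbour_agreements(z_obs)
    eps = eps if eps is not None else 0.05 * eta_obs
    beta, chain = 1.0, []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal()
        if prop > 0:
            w = gibbs_sample_potts(z_obs.shape, K, prop)
            if abs(neighbour_agreements(w) - eta_obs) <= eps:
                beta = prop  # symmetric proposal, flat prior
        chain.append(beta)
    return np.array(chain)
```

With a flat prior and a symmetric proposal, the ABC-MH acceptance ratio reduces to the indicator that the simulated summary statistic falls within the tolerance ε; no evaluation of f(z | β) or C(β) is ever needed, which is the key property exploited by the likelihood-free approach.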
