1 Introduction to Markov Random Fields

Andrew Blake and Pushmeet Kohli

This book sets out to demonstrate the power of the Markov random field (MRF) in vision. It treats the MRF both as a tool for modeling image data and, coupled with a set of recently developed algorithms, as a means of making inferences about images. The inferences concern underlying image and scene structure to solve problems such as image reconstruction, image segmentation, 3D vision, and object labeling. This chapter is designed to present some of the main concepts used in MRFs, both as a taster and as a gateway to the more detailed chapters that follow, as well as a stand-alone introduction to MRFs.

The unifying ideas in using MRFs for vision are the following:

• Images are dissected into an assembly of nodes that may correspond to pixels or agglomerations of pixels.
• Hidden variables associated with the nodes are introduced into a model designed to "explain" the values (colors) of all the pixels.
• A joint probabilistic model is built over the pixel values and the hidden variables.
• The direct statistical dependencies between hidden variables are expressed by explicitly grouping hidden variables; these groups are often pairs depicted as edges in a graph.

These properties of MRFs are illustrated in figure 1.1. The graphs corresponding to such MRF problems are predominantly gridlike, but may also be irregular, as in figure 1.1(c). Exactly how graph connectivity is interpreted in terms of probabilistic conditional dependency is discussed a little later. The notation for image graphs is that the graph G = (V, E) consists of vertices V = (1, 2, ..., i, ..., N) corresponding, for example, to the pixels of the image, and a set of edges E where a typical edge is (i, j), i, j ∈ V; edges are considered to be undirected, so that (i, j) and (j, i) refer to the same edge. In the superpixel graph of figure 1.1(c), the nodes are superpixels, and a pair of superpixels forms an edge in E if the two superpixels share a common boundary.

[Figure 1.1: Graphs for Markov models in vision. (a) Simple 4-connected grid of image pixels. (b) Grids with greater connectivity can be useful, for example to achieve better geometrical detail (see discussion later), as here with the 8-connected pixel grid. (c) Irregular grids are also useful; here a more compact graph is constructed in which the nodes are superpixels: clusters of adjacent pixels with similar colors.]

The motivation for constructing such a graph is to connect the hidden variables associated with the nodes. For example, for the task of segmenting an image into foreground and background, each node i (pixel or superpixel) has an associated random variable Xi that may take the value 0 or 1, corresponding to foreground or background, respectively. Matter tends to be coherent, so neighboring sites are likely to have the same label. So where (i, j) ∈ E, some kind of probabilistic bias needs to be associated with the edge (i, j) such that Xi and Xj tend to have the same label, both 0 or both 1. In fact, any pixels that are nearby, not merely adjacent, are likely to have the same label. On the other hand, explicitly linking every pair of correlated pixels in a typical image would lead to a densely connected graph, and that in turn would result in computationally expensive algorithms.
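To make the neighborhood structure concrete, here is a minimal sketch, not from the book, of building the vertex set V and the 4-connected edge set E of figure 1.1(a); the image dimensions and variable names are illustrative:

```python
# Build V and E for a small 4-connected pixel grid (row-major indexing).
H, W = 4, 5                       # illustrative image height and width
V = list(range(H * W))            # one vertex per pixel
E = []                            # undirected edges, stored once with i < j
for r in range(H):
    for c in range(W):
        i = r * W + c
        if c + 1 < W:             # edge to the right-hand neighbor
            E.append((i, i + 1))
        if r + 1 < H:             # edge to the neighbor below
            E.append((i, i + W))

# Each vertex i would carry a hidden label X_i in {0, 1}; the pairs in E
# are exactly the ones a pairwise MRF couples directly.
print(len(V), "vertices,", len(E), "edges")   # 20 vertices, 31 edges
```

Note how sparse this graph is: the edge count grows linearly with the number of pixels, in contrast to the quadratic growth of a fully connected graph.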
Markov models explicitly represent only the associations between relatively few pairs of pixels: those pixels that are defined as neighbors because they share an edge in E. The great attraction of Markov models is that they leverage a knock-on effect: explicit short-range linkages give rise to implied long-range correlations. Thus correlations over long ranges, on the order of the diameters of typical objects, can be obtained without undue computational cost. The goal of this chapter is to investigate probabilistic models that exploit this powerful Markov property.

1.1 Markov Chains: The Simplest Markov Models

In a Markov chain a sequence of random variables X = (X1, X2, ...) has a joint distribution specified by the conditionals P(Xi | Xi−1, Xi−2, ..., X1). The classic tutorial example [381, sec. 6.2] is the weather, so that Xi ∈ L = {sunny, rainy}. The weather on day i can be influenced by the weather many days previous, but in the simplest form of Markov chain, today's weather is linked explicitly only to yesterday's weather. It is also linked implicitly, as a knock-on effect, to the weather on all previous days. This is a first-order Markov assumption, that

$$P(X_i \mid X_{i-1}, X_{i-2}, \ldots, X_1) = P(X_i \mid X_{i-1}). \tag{1.1}$$

This is illustrated in figure 1.2.

[Figure 1.2: A simple first-order Markov chain for weather forecasting. (a) A directed graph is used to represent the conditional dependencies of a Markov chain. (b) In more detail, the state transition diagram completely specifies the probabilistic process of the evolving weather states. (c) A Markov chain can alternatively be expressed as an undirected graphical model; see text for details.]

The set of conditional probabilities P(Xi | Xi−1) is in fact a 2 × 2 matrix. For example:

                     Yesterday (Xi−1)
                     Rain       Sun
Today (Xi)   Rain    0.4        0.8
             Sun     0.6        0.2

An interesting and commonly used special case is the stationary Markov chain, in which the matrix

$$M_i(x, x') = P(X_i = x \mid X_{i-1} = x') \tag{1.2}$$

is independent of time i, so that $M_i(\cdot, \cdot) = M_{i-1}(\cdot, \cdot)$. In the weather example this corresponds to the assumption that the statistical dependency of weather is a fixed relationship, the same on any day.

We will not dwell on the simple example of the Markov chain, but a few comments may be useful. First, the first-order explicit structure implicitly carries longer-range dependencies, too. For instance, the conditional dependency across three successive days is obtained by multiplying together the matrices for two successive pairs of days:

$$P(X_i = x \mid X_{i-2} = x'') = \sum_{x' \in L} M_i(x, x')\, M_{i-1}(x', x''). \tag{1.3}$$

Thus the Markov chain shares the elegance of Markov models generally, which will recur later with models for images: long-range dependencies can be captured for the "price" of explicitly representing just the immediate dependencies between neighbors. Second, higher-order Markov chains, where the explicit dependencies go back farther than immediate neighbors, can also be useful. A famous example is "predictive text," in which probable letters in a typical word are characterized in terms of the two preceding letters; taking just the one preceding letter does not give enough practical predictive power. Predictive text, then, is a second-order Markov chain.
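As a small worked example, assuming the stationary case so that a single matrix M applies on every day, the following sketch, not from the book, encodes the weather table above and uses equation (1.3) to compute the dependency across two days by matrix multiplication:

```python
import numpy as np

# Transition matrix M(x, x') = P(X_i = x | X_{i-1} = x'), states (rain, sun):
# rows index "today", columns index "yesterday", so each column sums to 1.
M = np.array([[0.4, 0.8],    # P(rain | rain yesterday), P(rain | sun yesterday)
              [0.6, 0.2]])   # P(sun  | rain yesterday), P(sun  | sun yesterday)

# Equation (1.3): P(X_i = x | X_{i-2} = x'') = sum_{x'} M(x, x') M(x', x'').
M2 = M @ M
print(M2[0, 0])   # P(rain today | rain two days ago) = 0.4*0.4 + 0.8*0.6 = 0.64
```

The same product extends to any horizon: the k-step dependency is the k-th matrix power, which is exactly how short-range explicit links carry long-range implicit correlations.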
The directed graph in figure 1.2(a) is a graphical representation of the fact that, for a Markov chain, the joint density can be decomposed as a product of conditional densities:

$$P(x) = P(x_N \mid x_{N-1}) \cdots P(x_i \mid x_{i-1}) \cdots P(x_2 \mid x_1)\, P(x_1), \tag{1.4}$$

where for simplicity, in a popular abuse of notation, P(x) denotes P(X = x) and, similarly, P(xi | xi−1) denotes P(Xi = xi | Xi−1 = xi−1). This convention is used frequently throughout the book. An alternative formalism that is commonly used is the undirected graphical model. Markov chains can also be represented in this way (figure 1.2(c)), corresponding to a factorized decomposition:

$$P(x) = \Phi_{N,N-1}(x_N, x_{N-1}) \cdots \Phi_{i,i-1}(x_i, x_{i-1}) \cdots \Phi_{2,1}(x_2, x_1), \tag{1.5}$$

where $\Phi_{i,i-1}$ is a factor of the joint density. It is easy to see, in this simple case of the Markov chain, how the directed form (1.4) can be reexpressed in the undirected form (1.5). However, it is not the case in general, and in particular in 2D images, that models expressed in one form can easily be expressed in the other. Many of the probabilistic models used in computer vision are most naturally expressed using the undirected formalism, so it is the undirected graphical models that dominate in this book. For details on directed graphical models see [216, 46].

1.2 The Hidden Markov Model (HMM)

Markov models are particularly useful as prior models for state variables Xi that are to be inferred from a corresponding set of measurements or observations z = (z1, z2, ..., zi, ..., zN). The observations z are themselves considered to be instantiations of a random variable Z representing the full space of observations that can arise. This is the classical situation in speech analysis [381, sec. 6.2], where zi represents the spectral content of a fragment of an audio signal, and Xi represents a state in the time course of a particular word or phoneme. It leads naturally to an inference problem in which the posterior distribution for the possible states X, given the observations z, is computed via Bayes's formula as

$$P(X = x \mid Z = z) \propto P(Z = z \mid X = x)\, P(X = x). \tag{1.6}$$

Here P(X = x) is the prior distribution over states, that is, what is known about states X in the absence of any observations. As before, (1.6) is abbreviated, for convenience, to P(x | z) ∝ P(z | x) P(x).
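The following toy sketch, not from the book, makes (1.6) concrete for the weather chain. The observation model P(zi | xi) below (whether an umbrella is seen each day) is invented for illustration, and the posterior is computed by brute-force enumeration, which is feasible only because the state space is tiny; practical HMM inference uses dynamic programming rather than enumeration.

```python
import itertools
import numpy as np

M = np.array([[0.4, 0.8],       # transition matrix from section 1.1;
              [0.6, 0.2]])      # states: 0 = rain, 1 = sun
prior1 = np.array([0.5, 0.5])   # P(x_1), assumed uniform for illustration
obs = np.array([[0.2, 0.9],     # P(z=0 | rain), P(z=0 | sun)   (hypothetical)
                [0.8, 0.1]])    # P(z=1 | rain), P(z=1 | sun)   (hypothetical)
z = [1, 1, 0]                   # umbrella seen on days 1 and 2, not on day 3

# Score every state sequence x by P(z | x) P(x), then normalize (Bayes).
scores = {}
for x in itertools.product([0, 1], repeat=len(z)):
    p = prior1[x[0]] * obs[z[0], x[0]]
    for i in range(1, len(z)):
        p *= M[x[i], x[i - 1]] * obs[z[i], x[i]]
    scores[x] = p
total = sum(scores.values())
for x in sorted(scores, key=scores.get, reverse=True):
    print(x, round(scores[x] / total, 4))   # posterior P(x | z)
```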
