
An Introduction to PottsUtils

Dai Feng
dai [email protected]

Package PottsUtils comprises several functions related to Potts models defined on undirected graphs. The main purpose of the package is to make available several functions that generate samples from these models. To facilitate that, there are other utilities, and furthermore there is one function that is a by-product of the simulation functions. Altogether, there are three sets of functions. The first produces basic properties of a graph to facilitate the simulation functions (they may be used for other purposes as well). The second provides various simulation functions. The third currently includes only one function, which computes the normalizing constant based on simulation results.

This introduction is intended to help users better understand the functions, the documentation (Rd files), and the source code. For more technical details (mathematical proofs among others), we refer users to the references herein. Hereafter, we first introduce some basic concepts and definitions related to the Potts models. Second, the algorithms used in the simulation functions are presented in a concise way. Third, a function to obtain the normalizing constant is introduced. Fourth, we discuss some computational issues. Finally, some future work is outlined.

1 Notation and Terminology

In this section we introduce the concepts and definitions involved in the discussion of the Potts models. The notation used is similar to that given in Winkler (2003). Based on it, several related functions in the package are introduced.

We consider the Potts model defined on a finite undirected graph. A graph describes a set of connections between objects. Each object is called a node or vertex. There are observations that characterize the properties of the vertices. Let $V$ be a finite set, the set of vertices; $V = \{1, 2, \ldots, N\}$, where $N$ is the total number of vertices. For every vertex $i \in V$, let $z_i$ take values in a finite set of categories $\mathcal{Z} = \{1, 2, \ldots, k\}$, where $k$ is the number of categories. In the package we use different colors to represent different categories, and vertices from the same category are of the same color. The product $Z = \mathcal{Z}^N$ is the space of configurations $z = (z_i,\ i \in V)$. A probability measure $P$ on $Z$ that is strictly positive for every $z \in Z$ is called a stochastic or random field. Note that $P$ has to be strictly positive on $Z$ to satisfy the assumptions of the Hammersley-Clifford theorem; see Besag (1974) and Winkler (2003) for details.

Figure 1: Two neighbors in 1D

A collection $\partial = (\partial(v) : v \in V)$ of subsets of $V$ is called a neighborhood system if (i) $i \notin \partial(i)$ and (ii) $i \in \partial(j)$ if and only if $j \in \partial(i)$. The sites $j \in \partial(i)$ are called neighbors of $i$. We use $i \sim j$ to denote that $i$ and $j$ are neighbors of each other. There is an edge between $i$ and $j$ if and only if they are neighbors. Define a graph $G = \{V, E\}$, where $E$ is the set of edges. For a finite undirected graph, the number of vertices is finite and edge $(i, j)$ is equivalent to edge $(j, i)$. The function getEdges() can be used to obtain the edges of a graph. When neighbors $i$ and $j$ are from the same category (of the same color), there is a bond between them.

The random field $P$ is a Markov random field (MRF) with respect to the neighborhood system $\partial$ if for all $z \in Z$,

\[
P(z_i \mid z_j,\ j \neq i) = P(z_i \mid z_j,\ j \in \partial(i))
\]

Probability measures of the form

\[
P(z) = \frac{\exp\{-H(z)\}}{\sum_{x \in Z} \exp\{-H(x)\}}
\]

are called Gibbs fields (or measures). $H$ is called the energy function or Hamiltonian, and $\sum_{x \in Z} \exp\{-H(x)\}$ is called the partition function or normalizing constant.
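As a small, self-contained illustration of these definitions, the following base R sketch enumerates every configuration of a three-vertex chain with $k = 2$ colors, evaluates a simple energy function, and normalizes by the brute-force partition function. It is only a sketch of the definitions above, not code from PottsUtils, and all object names are made up for illustration.

    ## Brute-force Gibbs field on the 3-vertex chain 1 - 2 - 3 with k = 2 colors
    ## (illustration only; enumeration is feasible only for very small graphs)
    edges <- rbind(c(1, 2), c(2, 3))      # neighboring pairs i ~ j
    N <- 3; k <- 2; beta <- 0.8

    ## Energy function H(z) = -beta * sum_{i~j} I(z_i = z_j)
    H <- function(z) -beta * sum(z[edges[, 1]] == z[edges[, 2]])

    ## Enumerate all k^N configurations and form the Gibbs probabilities
    configs <- as.matrix(expand.grid(rep(list(1:k), N)))
    unnorm  <- apply(configs, 1, function(z) exp(-H(z)))
    probs   <- unnorm / sum(unnorm)       # sum(unnorm) is the partition function
    sum(probs)                            # equals 1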
For a detailed account of MRFs, Gibbs measures, and related issues, we refer to Winkler (2003).

When using Markov random field models, the first question is how to define the neighbors of all vertices. For a 1D lattice, the simplest way to define neighbors is that every vertex (except those on the boundaries) has the two adjacent vertices as its neighbors; see Figure 1 for an illustration. Of course, a vertex could have more than two neighbors. For a 2D lattice, there are two common ways to define neighbors. One is that the neighbors of a vertex comprise its available N, S, E, and W adjacencies. The other is that, besides those four, there are four diagonal adjacencies to its north-west, north-east, south-west, and south-east. See Figure 2 for illustrations. Probability measures defined on the former are called first-order Markov random fields and on the latter second-order Markov random fields. For a 3D lattice, besides defining six neighbors in the x, y, and z directions, one can add twelve diagonal neighbors in the x-y, x-z, and y-z planes, and another eight on the 3D diagonals. This leads to a six-neighbor structure, an eighteen-neighbor structure, and a twenty-six-neighbor structure. For illustrations, see Figure 3. The package provides a function called getNeighbors() to generate all neighbors of a graph.

Figure 2: Four and eight neighbors in 2D

Figure 3: Illustration of neighbor structures in 3D: (a) six neighbors, (b) eighteen neighbors, (c) twenty-six neighbors

After defining neighbors, the second question is how to model the spatial relationship among neighbors. One choice is to use a model from the Potts model family, a set of MRF models with the Gibbs measure defined as follows:

\[
p(z) = C(\beta)^{-1} \exp\left\{ \sum_{i=1}^{N} \alpha_i(z_i) + \beta \sum_{i \sim j} w_{ij} f(z_i, z_j) \right\} \qquad (1)
\]

where $C(\beta)$ is a normalizing constant and $i \sim j$ indicates neighboring vertices. We need to define the neighborhood structure and then assign relationships among neighboring vertices. The parameter $\beta$, called the inverse temperature, determines the level of spatial homogeneity between neighboring vertices in the graph. A zero $\beta$ would imply that neighboring vertices are independent; we use positive $\beta$ values. The $w_{ij}$ are weights and we assume $w_{ij} > 0$. The term $\sum_{i=1}^{N} \alpha_i(z_i)$ is called the external field; the $\alpha_i(z_i)$ are functions of $z_i$. When $\beta = 0$, the external field completely characterizes the distribution of the independent $z_i$, $i = 1, 2, \ldots, N$.

When $f(z_i, z_j) = I(z_i = z_j)$, model (1) becomes

\[
p(z) = C(\beta)^{-1} \exp\left\{ \sum_{i=1}^{N} \alpha_i(z_i) + \beta \sum_{i \sim j} I(z_i = z_j) \right\} \qquad (2)
\]

For $k = 2$, this model is called the Ising model (Ising, 1925); for $k > 2$ it is the Potts (1953) model. The Ising model was originally proposed to describe the physical properties of magnets. Due to its flexibility and simplicity, the Ising model and its various versions have been widely used in other fields, such as brain models in cognitive science (Feng, 2008), information and machine learning theory (MacKay (2003) and references therein), economics (Bourgine and Nadal (2004) and references therein), sociology (Kohring, 1996), and game theory (Hauert and Szabó, 2005).

The most commonly used Potts model is the one without an external field and with $w_{ij} \equiv 1$,

\[
p(z) = C(\beta)^{-1} \exp\left\{ \beta \sum_{i \sim j} I(z_i = z_j) \right\} \qquad (3)
\]

We refer to this as the simple Potts model.

Let $\alpha_i(z_i) \equiv 0$ and $f(z_i, z_j) = w_{ij} I(z_i = z_j)$. Then (1) reduces to

\[
p(z) = C(\beta)^{-1} \exp\left\{ \beta \sum_{i \sim j} w_{ij} I(z_i = z_j) \right\} \qquad (4)
\]

where $w_{ij}$ is the weight between vertices $i$ and $j$. For example, we might take $w_{ij} = 1/d(z_i, z_j)$, where $d(z_i, z_j)$ is a distance function, say Euclidean distance, between two vertices. This model is referred to as the compound Potts model.
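Looking back at the neighborhood definitions and the simple Potts model (3), the base R sketch below builds the four-neighbor (first-order) edge list of a small 2D lattice and evaluates the unnormalized log-probability of a configuration under model (3). It is purely illustrative and is not the implementation behind getEdges() or getNeighbors(); those functions should be used in practice.

    ## Four-neighbor (first-order) edge list of an nr x nc lattice, illustration only
    nr <- 3; nc <- 3
    idx <- matrix(seq_len(nr * nc), nrow = nr)   # vertex labels, column-major

    ## Horizontal (E-W) and vertical (N-S) neighbor pairs
    horiz <- cbind(as.vector(idx[, -nc]), as.vector(idx[, -1]))
    vert  <- cbind(as.vector(idx[-nr, ]), as.vector(idx[-1, ]))
    edges <- rbind(horiz, vert)                  # each row is one undirected edge

    ## Unnormalized log-probability under model (3): beta * sum_{i~j} I(z_i = z_j)
    beta <- 0.6
    z <- sample(1:2, nr * nc, replace = TRUE)    # a random two-color configuration
    logp_unnorm <- beta * sum(z[edges[, 1]] == z[edges[, 2]])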
In model (1), let $\alpha_i(z_i) = 0$ and define $f(z_i, z_j)$ as

\[
f(z_i, z_j) =
\begin{cases}
a_1 & \text{if } z_i = z_j \\
a_2 & \text{if } |z_i - z_j| = 1 \\
a_3 & \text{otherwise}
\end{cases}
\qquad (5)
\]

where $a_1 \geq a_2 \geq a_3$. We call this model the repulsion Potts model. This model assumes an ordering of the colors and that neighboring vertices are most likely to be of the same color; if they are different, then it is more likely that they are similar than totally different. See Feng (2008) for more details.

2 Simulation of the Potts Models

It is very hard to find algorithms (such as inversion of the CDF, rejection sampling, adaptive rejection sampling, or ratio-of-uniforms sampling) to generate i.i.d. samples from the Potts models, so Markov chain methods have to be used for the simulation. To generate samples from model (1), single-site updating, for example Gibbs sampling, is easy but may mix slowly. The Swendsen and Wang (1987) algorithm (SW) is widely used to generate random samples from the simple Potts model. Wolff's algorithm (Wolff, 1989) has been advocated as an alternative to SW. A Gibbs sampler that takes advantage of the conditional independence structure to update the variables $z_i$, $i = 1, 2, \ldots, N$, could make the simulation much faster than a single-site updating scheme (Feng, 2008). When there is an external field, the partial decoupling method might outperform Gibbs sampling.

2.1 Swendsen-Wang Algorithm

The SW algorithm was originally proposed for the simulation of the simple Potts model. Drawing auxiliary variables $u_{ij} \mid z$ for neighboring vertices $(i, j)$ from independent uniform distributions on $[0, \exp\{\beta I(z_i = z_j)\}]$ yields the joint density

\[
p(z, u) \propto \prod_{i \sim j} I_{[0,\ \exp\{\beta I(z_i = z_j)\}]}(u_{ij}) \qquad (6)
\]

The conditional distribution of $z$ given $u$ is also uniform on the possible configurations. If $u_{ij} \geq 1$, then there is a bond between vertices $i$ and $j$ (when $u_{ij} \geq 1$, necessarily $z_i = z_j$); otherwise there is no further constraint.
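To make the bond rule concrete, here is a minimal base R sketch of a single Swendsen-Wang update for the simple Potts model without an external field. It reuses the illustrative edges, z, and beta objects from the previous sketch, and it is a standalone illustration rather than the sampler shipped with PottsUtils.

    ## One Swendsen-Wang update (illustration only, not the PottsUtils sampler)
    sw_step <- function(z, edges, beta, k) {
      N <- length(z)

      ## Draw u_ij ~ Uniform(0, exp{beta * I(z_i = z_j)}); a bond forms iff
      ## u_ij >= 1, which is only possible when z_i = z_j
      same   <- z[edges[, 1]] == z[edges[, 2]]
      u      <- runif(nrow(edges), 0, exp(beta * same))
      bonded <- edges[u >= 1, , drop = FALSE]

      ## Connected components of the bond graph via a minimal union-find
      parent <- seq_len(N)
      find <- function(i) { while (parent[i] != i) i <- parent[i]; i }
      for (e in seq_len(nrow(bonded))) {
        a <- find(bonded[e, 1]); b <- find(bonded[e, 2])
        if (a != b) parent[a] <- b
      }
      comp <- sapply(seq_len(N), find)

      ## Given the bonds, z is uniform over configurations that keep each cluster
      ## a single color, so recolor every cluster uniformly at random among k colors
      labels <- unique(comp)
      newcol <- sample.int(k, length(labels), replace = TRUE)
      newcol[match(comp, labels)]
    }

    ## One update of the two-color configuration on the 3 x 3 lattice built above
    z <- sw_step(z, edges, beta = 0.6, k = 2)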