Gibbs Sampling, Exponential Families and Orthogonal Polynomials


Statistical Science, 2008, Vol. 23, No. 2, 151–178. DOI: 10.1214/07-STS252. © Institute of Mathematical Statistics, 2008.

Persi Diaconis, Kshitij Khare and Laurent Saloff-Coste

Abstract. We give families of examples where sharp rates of convergence to stationarity of the widely used Gibbs sampler are available. The examples involve standard exponential families and their conjugate priors. In each case, the transition operator is explicitly diagonalizable with classical orthogonal polynomials as eigenfunctions.

Key words and phrases: Gibbs sampler, running time analyses, exponential families, conjugate priors, location families, orthogonal polynomials, singular value decomposition.

Persi Diaconis is Professor, Department of Statistics, Sequoia Hall, 390 Serra Mall, Stanford University, Stanford, California 94305-4065, USA (e-mail: [email protected]). Kshitij Khare is Graduate Student, Department of Statistics, Sequoia Hall, 390 Serra Mall, Stanford University, Stanford, California 94305-4065, USA (e-mail: [email protected]). Laurent Saloff-Coste is Professor, Department of Mathematics, Cornell University, Malott Hall, Ithaca, New York 14853-4201, USA (e-mail: [email protected]). The paper is discussed in 10.1214/08-STS252A, 10.1214/08-STS252B, 10.1214/08-STS252C and 10.1214/08-STS252D; rejoinder at 10.1214/08-STS252REJ.

1. INTRODUCTION

The Gibbs sampler, also known as Glauber dynamics or the heat-bath algorithm, is a mainstay of scientific computing. It provides a way to draw samples from a multivariate probability density f(x_1, x_2, ..., x_p), perhaps only known up to a normalizing constant, by a sequence of one-dimensional sampling problems. From (X_1, ..., X_p) proceed to (X_1′, X_2, ..., X_p), then (X_1′, X_2′, X_3, ..., X_p), ..., (X_1′, X_2′, ..., X_p′), where at the ith stage the ith coordinate is sampled from f with the other coordinates fixed. This is one pass. Continuing gives a Markov chain X, X′, X′′, ..., which has f as stationary density under mild conditions discussed in [4, 102].

The algorithm was introduced in 1963 by Glauber [49] to do simulations for Ising models, and independently by Turcin [103]. It is still a standard tool of statistical physics, both for practical simulation (e.g., [88]) and as a natural dynamics (e.g., [10]). The basic Dobrushin uniqueness theorem showing existence of Gibbs measures was proved based on this dynamics (e.g., [54]). It was introduced as a base for image analysis by Geman and Geman [46]. Statisticians began to employ the method for routine Bayesian computations following the works of Tanner and Wong [101] and Gelfand and Smith [45]. Textbook accounts, with many examples from biology and the social sciences, along with extensive references, are in [47, 48, 78].

In any practical application of the Gibbs sampler, it is important to know how long to run the Markov chain until it has forgotten the original starting state. Indeed, some Markov chains are run on enormous state spaces (think of shuffling cards or image analysis). Do they take an enormous number of steps to reach stationarity? One way of dealing with these problems is to throw away an initial segment of the run. This leaves the question of how much to throw away. We call these questions running time analyses below.
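To make the one-pass mechanism described above concrete, here is a minimal Python sketch (our illustration, not code from the paper) of a systematic-scan Gibbs sampler. The function sample_conditional(i, x) is a hypothetical user-supplied draw from the conditional density of the ith coordinate given the others; the bivariate normal example at the end is just one standard case where these conditionals are available in closed form.

    import numpy as np

    def gibbs_pass(x, sample_conditional):
        """One systematic-scan pass: update each coordinate in turn from its
        conditional density, given the current values of the other coordinates."""
        x = np.array(x, dtype=float, copy=True)
        for i in range(len(x)):
            # coordinates 0, ..., i-1 already hold their updated values
            x[i] = sample_conditional(i, x)
        return x

    def gibbs_chain(x0, sample_conditional, n_passes):
        """Iterate full passes to produce the Markov chain X, X', X'', ..."""
        states = [np.asarray(x0, dtype=float)]
        for _ in range(n_passes):
            states.append(gibbs_pass(states[-1], sample_conditional))
        return np.array(states)

    # Example: standard bivariate normal with correlation rho, whose conditionals
    # are X_i | X_other = t ~ N(rho * t, 1 - rho^2).
    rng = np.random.default_rng(0)
    rho = 0.9

    def normal_conditional(i, x):
        return rng.normal(rho * x[1 - i], np.sqrt(1.0 - rho**2))

    chain = gibbs_chain([5.0, 5.0], normal_conditional, n_passes=1000)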
Despite heroic efforts by the applied probability community, useful running time analyses for Gibbs sampler chains are still a major research effort. An overview of available tools and results is given at the end of this introduction. The main purpose of the present paper is to give families of two-component examples where a sharp analysis is available. These may be used to compare and benchmark more robust techniques such as the Harris recurrence techniques in [64], or the spectral techniques in [2] and [107]. They may also serve as a base for the comparison techniques in [2, 26, 32].

Here is an example of our results. The following example was studied as a simple expository example in [16] and [78], page 132. Let

    f_θ(x) = \binom{n}{x} θ^x (1 − θ)^{n−x},   π(dθ) = uniform on (0, 1),   x ∈ {0, 1, 2, ..., n}.

These define the bivariate Beta/Binomial density (uniform prior)

    f(x, θ) = \binom{n}{x} θ^x (1 − θ)^{n−x}

with marginal density

    m(x) = ∫_0^1 f(x, θ) dθ = \frac{1}{n + 1},   x ∈ {0, 1, 2, ..., n}.

The posterior density [with respect to the prior π(dθ)] is given by

    π(θ | x) = \frac{f_θ(x)}{m(x)} = (n + 1) \binom{n}{x} θ^x (1 − θ)^{n−x},   θ ∈ (0, 1).

The Gibbs sampler for f(x, θ) proceeds as follows:

• From x, draw θ′ from Beta(x + 1, n − x + 1).
• From θ′, draw x′ from Binomial(n, θ′).

The output is (x′, θ′). Let K(x, θ; x′, θ′) be the transition density for this chain. While K has f(x, θ) as stationary density, the (K, f) pair is not reversible (see below). This blocks straightforward use of spectral methods. Liu et al. [77] observed that the "x-chain" with kernel

    k(x, x′) = ∫_0^1 f_θ(x′) π(θ | x) dθ = ∫_0^1 \frac{f_θ(x) f_θ(x′)}{m(x)} dθ

is reversible with stationary density m(x) [i.e., m(x) k(x, x′) = m(x′) k(x′, x)]. For the Beta/Binomial example,

(1.1)    k(x, x′) = \frac{n + 1}{2n + 1} · \frac{\binom{n}{x} \binom{n}{x′}}{\binom{2n}{x + x′}},   0 ≤ x, x′ ≤ n.

A simulation of the Beta/Binomial "x-chain" (1.1) with n = 100 is given in Figure 1. The initial position is 100 and we track the position of the Markov chain for the first 200 steps.

[Fig. 1. Simulation of the Beta/Binomial "x-chain" with n = 100.]

The proposition below gives an explicit diagonalization of the x-chain and sharp bounds for the bivariate chain [K^ℓ_{n,θ} denotes the density of the distribution of the bivariate chain after ℓ steps starting from (n, θ)]. It shows that order n steps are necessary and sufficient for convergence. That is, for sampling from the Markov chain K to simulate from the probability distribution f, a small (integer) multiple of n steps suffices, while n/2 steps do not. The following bounds make this quantitative. The proof is given in Section 4. For example, when n = 100, after 200 steps the total variation distance (see below) to stationarity is less than 0.0192, while after 50 steps the total variation distance is greater than 0.1858, so the chain is far from equilibrium.

We simulate 3000 independent replicates of the Beta/Binomial "x-chain" (1.1) with n = 100, starting at the initial value 100. We provide histograms of the position of the "x-chain" after 50 steps and 200 steps in Figure 2.

[Fig. 2. Histograms of the observations obtained from simulating 3000 independent replicates of the Beta/Binomial "x-chain" after 50 steps and 200 steps.]
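The simulations just described are easy to reproduce. Below is a minimal Python sketch (ours, not the authors' code) of the Beta/Binomial Gibbs sampler viewed through the x-chain, with n = 100 and the chain started at x = 100, mirroring the setup behind Figures 1 and 2; the seed and helper names are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)   # arbitrary seed, for reproducibility only

    def x_chain_step(x, n):
        """One Gibbs step viewed on the x-coordinate:
        draw theta ~ Beta(x+1, n-x+1), then x' ~ Binomial(n, theta)."""
        theta = rng.beta(x + 1, n - x + 1)
        return rng.binomial(n, theta)

    def run_x_chain(x0, n, steps):
        """Return the trajectory x_0, x_1, ..., x_steps of the x-chain."""
        path = [x0]
        for _ in range(steps):
            path.append(x_chain_step(path[-1], n))
        return np.array(path)

    n = 100

    # Figure 1 setup: one trajectory started at x = 100, tracked for 200 steps.
    trajectory = run_x_chain(100, n, 200)

    # Figure 2 setup: 3000 independent replicates, position after 50 and 200 steps.
    after_50 = np.array([run_x_chain(100, n, 50)[-1] for _ in range(3000)])
    after_200 = np.array([run_x_chain(100, n, 200)[-1] for _ in range(3000)])
    # Under stationarity the position is uniform on {0, ..., n}; histograms of
    # after_50 and after_200 can be compared against that flat benchmark.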
Under the stationary distribution (which is uniform on {0, 1, ..., 100}), one would expect roughly 150 observations in each block of the histogram. As expected, these histograms show that the empirical distribution of the position after 200 steps is close to the stationary distribution, while the empirical distribution of the position after 50 steps is quite different.

If f, g are probability densities with respect to a σ-finite measure µ, then the total variation distance between f and g is defined as

(1.2)    ‖f − g‖_{TV} = \frac{1}{2} ∫ |f(x) − g(x)| µ(dx).

Proposition 1.1. For the Beta/Binomial example with uniform prior, we have:

(a) The chain (1.1) has eigenvalues

    β_0 = 1,   β_j = \frac{n(n − 1) ⋯ (n − j + 1)}{(n + 2)(n + 3) ⋯ (n + j + 1)},   1 ≤ j ≤ n.

In particular, β_1 = 1 − 2/(n + 2). The eigenfunctions are the discrete Chebyshev polynomials [orthogonal polynomials for m(x) = 1/(n + 1) on {0, ..., n}].

(b) For the bivariate chain K, for all θ, n and ℓ, …

The calculations work because the operator with density k(x, x′) takes polynomials to polynomials. Our main results give classes of examples with the same explicit behavior. These include:

• f_θ(x) is one of the exponential families singled out by Morris [86, 87] (binomial, Poisson, negative binomial, normal, gamma, hyperbolic), with π(θ) the conjugate prior.
• f_θ(x) = g(x − θ) is a location family with π(θ) conjugate and g belonging to one of the six exponential families above.

Section 2 gives background. In Section 2.1 the Gibbs sampler is set up more carefully in both systematic and random scan versions. Relevant Markov chain tools are collected in Section 2.2. Exponential families and conjugate priors are reviewed in Section 2.3. The six families are described more carefully in Section 2.4, which calculates needed moments. A brief overview of orthogonal polynomials is in Section 2.5.

Section 3 is the heart of the paper. It breaks the operator with kernel k(x, x′) into two pieces: T : L²(m) → L²(π) defined by

    Tg(θ) = ∫ f_θ(x) g(x) m(dx)

and its adjoint T*. Then k is the kernel of T*T.
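As an illustrative numerical check (our addition, not part of the paper's argument), one can build the kernel matrix (1.1) for a moderate n, confirm reversibility with respect to m(x) = 1/(n + 1), and compare the numerically computed spectrum with the eigenvalues β_j of Proposition 1.1(a). A Python sketch, assuming numpy and scipy are available:

    import numpy as np
    from scipy.special import comb

    def beta_binomial_kernel(n):
        """Kernel (1.1): k(x, x') = (n+1)/(2n+1) * C(n,x) C(n,x') / C(2n, x+x')."""
        K = np.empty((n + 1, n + 1))
        for x in range(n + 1):
            for xp in range(n + 1):
                K[x, xp] = (n + 1) / (2 * n + 1) * comb(n, x) * comb(n, xp) / comb(2 * n, x + xp)
        return K

    def beta_eig(n, j):
        """Proposition 1.1(a): beta_j = n(n-1)...(n-j+1) / ((n+2)(n+3)...(n+j+1))."""
        num = np.prod(np.arange(n, n - j, -1, dtype=float))
        den = np.prod(np.arange(n + 2, n + j + 2, dtype=float))
        return num / den

    n = 20
    K = beta_binomial_kernel(n)
    m = np.full(n + 1, 1.0 / (n + 1))                      # stationary density m(x) = 1/(n+1)

    assert np.allclose(K.sum(axis=1), 1.0)                  # each row is a probability distribution
    assert np.allclose(m[:, None] * K, (m[:, None] * K).T)  # reversibility: m(x)k(x,x') = m(x')k(x',x)

    beta = np.array([beta_eig(n, j) for j in range(n + 1)])
    numeric = np.sort(np.linalg.eigvals(K).real)[::-1]
    print("max |numeric eigenvalue - beta_j| =", np.max(np.abs(numeric - np.sort(beta)[::-1])))
    print("beta_1 =", beta[1], "  1 - 2/(n+2) =", 1 - 2 / (n + 2))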
