
Adaptive Path Sampling in Metastable Posterior Distributions

Yuling Yao*†, Collin Cademartori*†, Aki Vehtari‡, Andrew Gelman§

*Joint first authors. †Department of Statistics, Columbia University. ‡Helsinki Institute of Information Technology; Aalto University, Department of Computer Science. §Department of Statistics and Political Science, Columbia University.

arXiv:2009.00471v1 [stat.CO], 1 Sep 2020

Abstract

The normalizing constant plays an important role in Bayesian computation, and there is a large literature on methods for computing or approximating normalizing constants that cannot be evaluated in closed form. When the normalizing constant varies by orders of magnitude, methods based on importance sampling can require many rounds of tuning. We present an improved approach using adaptive path sampling, iteratively reducing the gap between the base and the target distributions. Using this adaptive strategy, we develop two sampling schemes for metastable distributions. They are automated in Stan and require little tuning. For a multimodal posterior density, we equip simulated tempering with a continuous temperature. For a funnel-shaped entropic barrier, we adaptively add mass to the bottleneck region to form an implicit divide-and-conquer. Both approaches empirically perform better than existing methods for sampling from metastable distributions, with higher accuracy and computational efficiency.

Keywords: importance sampling, Markov chain Monte Carlo, normalizing constant, path sampling, posterior metastability, simulated tempering.

1. The normalizing constant and posterior metastability

In Bayesian computation, the posterior distribution is often available only as an unnormalized density $q(\theta)$. The unknown and often analytically intractable integral $\int q(\theta)\, d\theta$ is called the normalizing constant of $q$. Many statistical problems involve estimating normalizing constants, or the ratios of them among several densities. For example, the marginal likelihood of a statistical model with likelihood $p(y \mid \theta)$ and prior $p(\theta)$ is the normalizing constant of $p(y \mid \theta)\, p(\theta)$: $p(y) = \int p(\theta, y)\, d\theta$. The Bayes factor of two models $\{p(y \mid \theta_1),\, p(\theta_1)\}$ and $\{p(y \mid \theta_2),\, p(\theta_2)\}$ requires the ratio of the normalizing constants of the densities $p(y, \theta_1)$ and $p(y, \theta_2)$.

Moreover, we are often interested in the normalizing constant as a function of parameters. In a posterior density $p(\theta_1, \theta_2, \dots, \theta_d \mid y)$, the marginal density of the coordinate $\theta_1$ is proportional to $\int \cdots \int p(\theta_1, \theta_2, \dots, \theta_d \mid y)\, d\theta_2 \cdots d\theta_d$, the normalizing constant of the posterior density with respect to all remaining parameters. Accurate estimation of the normalizing constant means we have well explored the region containing most of the posterior mass, which in turn implies that we can accurately estimate posterior expectations of many other functionals.

In simulated tempering and annealing, we augment the distribution $q(\theta)$ with an inverse temperature $\lambda$ and sample from $p(\theta, \lambda) \propto q(\theta)^{\lambda}$. Successful tempering requires fully exploring the space of $\lambda$, which in turn requires evaluating the normalizing constant as a function of $\lambda$: $z(\lambda) = \int q(\theta)^{\lambda}\, d\theta$. Similar tasks arise in model selection and averaging over a series of statistical models indexed by a continuous tuning parameter $\lambda$: $p(\theta, y \mid \lambda)$. In cross validation, we attach to each data point $y_i$ a $\lambda_i$ and augment the model $p(y \mid \theta)\, p(\theta)$ to be $q(\lambda, y, \theta) = \prod_i p(y_i \mid \theta)^{\lambda_i}\, p(\theta)$, such that the pointwise leave-one-out log predictive density $\log p(y_i \mid y_{-i})$ becomes the log normalizing constant $\log \int p(y_i \mid \theta)\, p(\theta \mid y, \lambda_i = 0, \lambda_j = 1\ \forall j \neq i)\, d\theta$.
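To make the first example above concrete, here is a minimal sketch (our own illustration, not part of the paper) of the marginal likelihood as a normalizing constant: a conjugate normal model for which $p(y) = \int p(y \mid \theta)\, p(\theta)\, d\theta$ has a closed form, so a naive Monte Carlo estimate can be checked against it. The model, variable names, and sample sizes below are assumptions made only for this example.

```python
import numpy as np

# Toy conjugate model (our choice for illustration): theta ~ N(0, 1), y_i | theta ~ N(theta, 1).
rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=20)

def log_lik(theta, y):
    # log p(y | theta) for the normal model with unit observation noise, vectorized over theta
    return -0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1) - 0.5 * len(y) * np.log(2 * np.pi)

# Naive Monte Carlo estimate of p(y) = ∫ p(y|theta) p(theta) dtheta, using prior draws as the proposal.
S = 100_000
theta_prior = rng.normal(0.0, 1.0, size=S)
log_p_y_mc = np.log(np.mean(np.exp(log_lik(theta_prior, y))))

# Closed-form marginal likelihood for this conjugate model: y ~ N(0, I + 11').
n = len(y)
Sigma = np.eye(n) + np.ones((n, n))
_, logdet = np.linalg.slogdet(Sigma)
log_p_y_exact = -0.5 * y @ np.linalg.solve(Sigma, y) - 0.5 * logdet - 0.5 * n * np.log(2 * np.pi)

print(log_p_y_mc, log_p_y_exact)  # the two values should roughly agree
```

In this toy problem the naive estimate works because the prior and posterior overlap substantially; the difficulty the paper addresses is exactly the regime where they do not.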
In all of these problems we are given an unnormalized density $q(\theta, \lambda)$, where $\theta \in \Theta$ is a multidimensional sampling parameter and $\lambda \in \Lambda$ is a free parameter, and we need to evaluate the integral at any $\lambda$,
$$z: \Lambda \to \mathbb{R}, \qquad z(\lambda) = \int_{\Theta} q(\theta, \lambda)\, d\theta, \quad \forall \lambda \in \Lambda. \tag{1}$$
$z(\cdot)$ is a function of $\lambda$. For convenience, throughout the paper we call $z(\cdot)$ the normalizing constant without spelling out that it is a function, and we use the notations $q$ and $p$ to distinguish unnormalized and normalized densities. In most applications, it is enough to determine $z(\lambda)$ up to a multiplicative factor that is free of $\lambda$, or equivalently to compute the ratio of this integral at any $\lambda$ to its value at a fixed reference point $\lambda_0$:
$$\tilde z(\lambda) = z(\lambda) / z(\lambda_0), \quad \forall \lambda \in \Lambda. \tag{2}$$

1.1. Easy to find an estimate, prone to extrapolation

Two accessible but conceptually orthogonal approaches stand out for the computation of (1) and (2). Viewing (1) as an expectation with respect to the conditional density $\theta \mid \lambda \propto q(\theta, \lambda)$, we can numerically integrate (1) using quadrature, the simplest scheme being linear interpolation, and the log ratio in (2) can be computed from a first-order Taylor series expansion,
$$\log\frac{z(\lambda)}{z(\lambda_0)} \approx (\lambda-\lambda_0)\,\frac{d}{d\lambda}\log z(\lambda)\Big|_{\lambda=\lambda_0} \approx (\lambda-\lambda_0)\,\frac{1}{z(\lambda_0)}\int_{\Theta}\left(\frac{d}{d\lambda} q(\theta,\lambda)\Big|_{\lambda=\lambda_0}\right) d\theta. \tag{3}$$
In contrast, we can sample from the conditional density $\theta \mid \lambda_0 \propto q(\theta, \lambda_0)$ and compute the ratio (2) by importance sampling,
$$\frac{z(\lambda)}{z(\lambda_0)} \approx \frac{1}{S}\sum_{s=1}^{S}\frac{q(\theta_s,\lambda)}{q(\theta_s,\lambda_0)}, \qquad \theta_{s=1,\dots,S} \sim q(\theta,\lambda_0). \tag{4}$$

How should we choose between estimates (3) and (4)? There is no definite answer. For example, in variational Bayes with model parameter $\theta$ and variational parameter $\lambda$, the gradient part of (3) is the score function estimator based on the log-derivative trick, while the gradient of (4) under a location-scale family in $q(\theta_s \mid \lambda)$ is the Monte Carlo gradient estimator using the reparametrization trick, and it is in general unknown which is better.

On the other hand, both (3) and (4) impose severe scalability limitations on the dimensions of $\lambda$ and $\theta$. Essentially they extrapolate, either from the density conditional on $\lambda_0$ or from simulation draws from $q(\theta \mid \lambda_0)$, to make inferences about the density conditional on $\lambda$, hence depending crucially on how close the two conditional distributions $\theta \mid \lambda$ and $\theta \mid \lambda_0$ are. Even under a normal approximation, the Kullback-Leibler (KL) divergence between these densities scales linearly with the dimension of $\theta$. Thus it takes at least $\mathcal{O}(\exp(\dim \theta))$ posterior draws to make (4) practically accurate. The reliability of (3) is further contingent on how flat the Hessian of $\log z$ is at $\lambda_0$, which is even harder to estimate. In practice, these methods can fail silently, especially when implemented as a black-box step in a larger algorithm without diagnostics.
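As a minimal, self-contained sketch (our own illustration, not from the paper), the two estimates (3) and (4) can be compared on the one-dimensional family $q(\theta, \lambda) = \exp(-\lambda \theta^2 / 2)$, for which $z(\lambda) = \sqrt{2\pi/\lambda}$ is known exactly. The toy family and all names below are assumptions made only for this example.

```python
import numpy as np

# Toy family (our assumption for illustration): q(theta, lambda) = exp(-lambda * theta^2 / 2),
# so z(lambda) = sqrt(2*pi/lambda) and log z(lambda)/z(lambda0) = 0.5*(log lambda0 - log lambda).
rng = np.random.default_rng(0)
lam0, lam = 1.0, 3.0
S = 50_000
theta = rng.normal(0.0, 1.0 / np.sqrt(lam0), size=S)  # draws from theta | lambda0

# Estimate (3): first-order Taylor expansion of log z around lambda0, using
# d/dlambda log z(lambda)|_{lambda0} = E_{theta|lambda0}[ d/dlambda log q(theta, lambda) ] = E[-theta^2/2].
taylor = (lam - lam0) * np.mean(-theta**2 / 2)

# Estimate (4): importance sampling of the ratio z(lambda)/z(lambda0) with draws from theta | lambda0.
importance = np.log(np.mean(np.exp(-(lam - lam0) * theta**2 / 2)))

exact = 0.5 * (np.log(lam0) - np.log(lam))
print(taylor, importance, exact)
# The Taylor estimate is biased once lambda is far from lambda0, and in higher dimensions the
# importance weights in (4) degenerate as well; this is the gap that path sampling over a
# fine ladder of lambda values is meant to close.
```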
1.2. Free energy and sampling metastability

From the Bayesian computation perspective, the log normalizing constant can be interpreted as the analog of the "free energy" in physics (up to a multiplicative constant), in line with the interpretation of $-\log q(\theta)$ as the "potential energy" in Hamiltonian Monte Carlo sampling.

Figure 1: An illustration of metastability in a bivariate distribution $p(\theta_1, \theta_2)$. (A) With a mixture of two Gaussian distributions, the energetic barrier prevents rapid mixing between modes. (B) With a lower inverse temperature $\lambda$, the energetic barrier becomes flatter in the conditional distribution $p(\theta \mid \lambda)$. (C) With an entropic barrier, the left and right parts of the distribution are connected only through a narrow tunnel, where the Markov chain behaves like a random walk. (D) Adding more density on the neck increases the transition probability, while leaving $p(\theta_2 \mid \theta_1)$ invariant.

A potential energy is called metastable if the corresponding probability measure has regions of high probability that are separated by low-probability connections. The following are two types of metastability, both of which make it difficult for Markov chain Monte Carlo algorithms to move between regions. This pathology is often identifiable by a large $\widehat{R}$ between chains and a low effective sample size in generic sampling (Vehtari et al., 2020).

1. In an energetic barrier, the density is isolated in multiple modes, and the transition probability between modes is low. In particular, the Hamiltonian, the sum of the potential and kinetic energy, is preserved along every Hamiltonian Monte Carlo trajectory. Hence, a transition across modes is unlikely unless the kinetic energy is exceptionally large.

2. In an entropic barrier, or funnel, the typical set is connected only through narrow and possibly twisted ridges. This barrier is amplified as the dimension of $\theta$ increases, to the extent that a random walk in a high-dimensional space can hardly find the correct direction.

As the name suggests, we have the freedom to adaptively tune the free energy of the sampling distribution to remove the metastability. The basic strategy is to augment the original metastable density $q(\theta)$ with an auxiliary variable $\lambda$, obtaining some density on the extended space, $q(\theta, \lambda)$. For energetic barriers, we can take $\lambda$ to be an inverse temperature, which defines a power transformation of the posterior density; the energetic barrier is flattened by a lower inverse temperature. Temperature-based approaches cannot eliminate entropic barriers in the same way, but the transition can be boosted by adding probability mass to the neck region. Figure 1 gives a graphical illustration.

While the normalizing constant is not itself statistically meaningful when sampling from the augmented density $q(\theta, \lambda)$, computing it serves as an essential intermediate step in constructing the otherwise intractable joint density. Unlike in usual Monte Carlo methods, where the target distribution is given and static, in an augmented system we have the flexibility either to sample $\theta$ from the conditional distribution $\theta \mid \lambda$ at a discrete sequence of $\lambda$ values, or to sample from a joint distribution of $(\theta, \lambda)$, treating $\lambda$ as continuous. While continuous joint sampling offers finer-grained expressiveness for approximating the normalizing constant, it makes it harder to access the conditional distribution, which is ultimately what we need when $\lambda$ is an augmented parameter.
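To make the role of $z(\lambda)$ in the augmented system concrete, here is a minimal simulated-tempering sketch of our own (a generic fixed-ladder version, not the paper's adaptive algorithm): a random-walk Metropolis sampler on the joint space $(\theta, \lambda)$ for a two-mode mixture, where dividing by a plug-in estimate of $z(\lambda)$ keeps the marginal over the temperature ladder roughly uniform. The target, the ladder, and the quadrature estimate of $z(\lambda)$ are all assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_q(theta):
    # Unnormalized two-mode target (our toy choice): mixture of N(-4, 1) and N(4, 1).
    return np.logaddexp(-0.5 * (theta + 4.0) ** 2, -0.5 * (theta - 4.0) ** 2)

lambdas = np.linspace(0.1, 1.0, 10)  # discrete inverse-temperature ladder

# Plug-in estimate of z(lambda) by brute-force quadrature (possible only in this 1-D toy;
# estimating it adaptively is the subject of the paper).
grid = np.linspace(-20.0, 20.0, 4001)
dx = grid[1] - grid[0]
log_z = np.array([np.log(np.sum(np.exp(lam * log_q(grid))) * dx) for lam in lambdas])

def log_joint(theta, k):
    # Augmented density q(theta)^lambda_k / z(lambda_k): dividing by z(lambda_k) keeps the
    # marginal over the temperature index roughly uniform so the chain can traverse the ladder.
    return lambdas[k] * log_q(theta) - log_z[k]

theta, k = 0.0, len(lambdas) - 1
samples = []
for it in range(50_000):
    # Random-walk Metropolis update of theta at the current temperature.
    prop = theta + rng.normal(0.0, 1.0)
    if np.log(rng.uniform()) < log_joint(prop, k) - log_joint(theta, k):
        theta = prop
    # Propose moving one rung up or down the temperature ladder.
    k_prop = k + rng.choice([-1, 1])
    if 0 <= k_prop < len(lambdas):
        if np.log(rng.uniform()) < log_joint(theta, k_prop) - log_joint(theta, k):
            k = k_prop
    if k == len(lambdas) - 1:  # keep draws only at lambda = 1, the original target
        samples.append(theta)

samples = np.array(samples)
print(np.mean(samples > 0))  # should be near 0.5 if both modes are visited
```

With a poor or missing estimate of $z(\lambda)$ the chain tends to get stuck at the rung with the largest normalizing constant, which is why approximating $z(\cdot)$ is an essential intermediate step even though it is not itself the quantity of interest.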