On the Convergence of Hamiltonian Monte Carlo with Stochastic Gradients

Difan Zou (1)    Quanquan Gu (1)

(1) Department of Computer Science, UCLA. Correspondence to: Quanquan Gu <[email protected]>.

Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s).

Abstract

Hamiltonian Monte Carlo (HMC), built on Hamilton's equation, has witnessed great success in sampling from high-dimensional posterior distributions. However, it also suffers from computational inefficiency, especially for large training datasets. One common idea to overcome this computational bottleneck is to use stochastic gradients, which query only a mini-batch of training data in each iteration. However, unlike the extensive studies on the convergence analysis of HMC using full gradients, few works focus on establishing convergence guarantees for stochastic gradient HMC algorithms. In this paper, we propose a general framework for proving the convergence rate of HMC with stochastic gradient estimators, for sampling from strongly log-concave and log-smooth target distributions. We show that convergence to the target distribution in 2-Wasserstein distance can be guaranteed as long as the stochastic gradient estimator is unbiased and its variance is upper bounded along the algorithm trajectory. We further apply the proposed framework to analyze the convergence rates of HMC with four standard stochastic gradient estimators: mini-batch stochastic gradient (SG), stochastic variance reduced gradient (SVRG), stochastic average gradient (SAGA), and control variate gradient (CVG). The theoretical results explain the inefficiency of mini-batch SG, and suggest that SVRG and SAGA perform better in tasks with high-precision requirements, while CVG performs better on large datasets. Experimental results verify our theoretical findings.

1. Introduction

Markov Chain Monte Carlo (MCMC) methods have witnessed great success in many machine learning applications such as Bayesian inference, reinforcement learning, and computer vision. In the past decades, many MCMC algorithms, such as random walk Metropolis (Mengersen et al., 1996), ball walk (Lovász & Simonovits, 1990), hit-and-run (Smith, 1984), Langevin dynamics (LD) based algorithms (Langevin, 1908; Parisi, 1981), and Hamiltonian Monte Carlo (HMC) (Duane et al., 1987), have been invented and studied. Among them, HMC has been recognized as the most effective MCMC algorithm due to its rapid mixing rate and small discretization error. In practice, HMC has been deployed as the default sampler in many open-source packages such as Stan (Carpenter et al., 2017) and TensorFlow (Abadi et al., 2016). Specifically, HMC simulates the trajectory of a particle in the Hamiltonian system, which is described by the following Hamilton's equation:

    dq(t)/dt =  ∂H(q(t), p(t))/∂p,
    dp(t)/dt = −∂H(q(t), p(t))/∂q,                  (1.1)

where q and p are the position and momentum variables, and H(q, p) is the so-called Hamiltonian function, typically defined as the sum of the potential energy f(q) and the kinetic energy ||p||^2/2. In each step, HMC solves (1.1) using the sample generated in the last step as the initial position q(0) and an independently generated Gaussian random vector as the initial momentum p(0), then outputs the solution at a certain time τ, i.e., q(τ), as the next sample. It is well known that if Hamilton's equation can be exactly solved and the potential energy function f(x) admits certain good properties, the sample sequence generated by HMC asymptotically converges to the target distribution π ∝ exp(−f(x)) (Lee et al., 2018; Chen & Vempala, 2019). However, it is generally intractable to solve (1.1) exactly, and numerical integrators are needed to solve it approximately. One of the most popular HMC algorithms adopts the leapfrog integrator for solving (1.1), followed by a Metropolis-Hastings (MH) correction step (Neal et al., 2011). Aside from the algorithmic development, the convergence rate of HMC has also been extensively studied in the recent literature (Bou-Rabee et al., 2018; Lee et al., 2018; Mangoubi & Smith, 2017; Durmus et al., 2017; Mangoubi & Vishnoi, 2018; Chen & Vempala, 2019; Chen et al., 2019), which demonstrates its superior performance compared with other MCMC methods.

However, as data sizes grow rapidly nowadays, the standard HMC algorithm suffers from huge computational cost. For a Bayesian inference problem, the energy function f(x) (a.k.a. the negative log-posterior in Bayesian learning problems) is formulated as the sum of the negative log-likelihood functions over all observations, i.e., f(x) = \sum_{i=1}^n f_i(x). When the number of observations n becomes extremely large, the standard HMC algorithm may fail, as it requires querying the entire dataset to compute the full gradient ∇f(x). To overcome this computational burden, a common idea is to leverage a stochastic gradient in each update, i.e., to compute the gradient approximately using a mini-batch of training data, which gives rise to stochastic gradient HMC (SG-HMC) (Chen et al., 2014).[1] This idea has also triggered a line of work on improving the scalability of other gradient-based MCMC methods (Welling & Teh, 2011; Chen et al., 2015; Ma et al., 2015; Baker et al., 2018). Despite the efficiency improvements for large-scale Bayesian inference problems, SG-HMC has many drawbacks. The variance of stochastic gradients may lead to inaccurate solutions of Hamilton's equation (1.1). Additionally, it is no longer tractable to perform the MH correction step, since (1) the proposal distribution of HMC does not have an explicit formula and is not time-reversible, and (2) one cannot exactly query the entire training dataset to compute the MH acceptance probability. These two shortcomings prevent SG-HMC from achieving sampling as accurate as standard HMC (Betancourt, 2015; Bardenet et al., 2017; Dang et al., 2019), and hinder the application of SG-HMC in many sampling tasks with high-precision requirements.

Although the pros and cons of SG-HMC have been discussed in the aforementioned works, most of them are empirical studies. Unlike stochastic gradient Langevin dynamics (SGLD) and stochastic gradient underdamped Langevin dynamics (SG-ULD)[2], which have been extensively studied in theory, little work has been done to provide a theoretical understanding of SG-HMC. It remains elusive whether SG-HMC can be guaranteed to converge and how it performs in different regimes. Moreover, many other stochastic gradient estimators have emerged that exhibit smaller variance than the standard mini-batch stochastic gradient estimator, such as the stochastic variance reduced gradient (SVRG) (Johnson & Zhang, 2013), stochastic average gradient (SAGA) (Defazio et al., 2014), and control variate gradient (CVG) (Baker et al., 2018). These stochastic gradient estimators have been successfully incorporated into Langevin-based algorithms for faster sampling (Dubey et al., 2016; Chatterji et al., 2018; Baker et al., 2018; Brosse et al., 2018; Zou et al., 2018a; Li et al., 2018). It is unclear whether these estimators can be adapted to the HMC algorithm to overcome the drawbacks of SG-HMC.

In this paper, we propose a general framework for proving the convergence rate of HMC with stochastic gradients for sampling from strongly log-concave and log-smooth distributions. At the core of our analysis is a sharp characterization of the error of the solution to Hamilton's equation (1.1) obtained using stochastic gradients, which is of order O(√η), where η is the step size of the numerical integrator. Under the proposed proof framework, the convergence rate of HMC with a variety of stochastic gradient estimators can be derived. We summarize the main contributions of our paper as follows:

• We develop a general framework for characterizing the convergence rate of the HMC algorithm when using stochastic gradients. In particular, we prove that as long as the stochastic gradient is unbiased and its variance along the algorithm trajectory is upper bounded (a uniform upper bound is not required), the stochastic gradient HMC algorithm provably converges to the target distribution π ∝ exp(−f(x)) in 2-Wasserstein distance with a sampling error up to O(√η).

• We apply four commonly used stochastic gradient estimators to the HMC algorithm for sampling from a target distribution of the form π ∝ exp(−\sum_{i=1}^n f_i(x)), which gives rise to four variants of stochastic gradient HMC algorithms: SG-HMC, SVRG-HMC, SAGA-HMC, and CVG-HMC. We establish their convergence guarantees under the proposed framework. Our analysis suggests that in order to achieve ε/√n sampling error in 2-Wasserstein distance, the gradient complexities[3] of SG-HMC, CVG-HMC, SVRG-HMC, and SAGA-HMC are Õ(n/ε²), Õ(1/ε² + 1/ε), Õ(n^{2/3}/ε^{2/3} + 1/ε), and Õ(n^{2/3}/ε^{2/3} + 1/ε), respectively. This explains the inefficiency of SG-HMC observed in prior work, and reveals the promise of CVG-HMC, SVRG-HMC, and SAGA-HMC for large-scale sampling problems.

• We carry out numerical experiments on both synthetic and real-world datasets. The results show that all stochastic gradient HMC algorithms converge, but SG-HMC has a significantly larger bias compared with the other algorithms. Additionally, SVRG-HMC performs best when the sample size is small, while CVG-HMC be-

[1] In fact, in addition to making use of stochastic gradients, Chen et al. (2014) also introduce a friction term and an additional Brownian term to mitigate the bias and variance brought by stochastic gradients.

[2] In some existing works this algorithm is also referred to as SGHMC (Zou et al., 2018a; Gao et al., 2018b;a).
