On Variational Bounds of Mutual Information

Ben Poole (1), Sherjil Ozair (1,2), Aäron van den Oord (3), Alexander A. Alemi (1), George Tucker (1)
(1) Google Brain, (2) MILA, (3) DeepMind. Correspondence to: Ben Poole <[email protected]>.
Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s). arXiv:1905.06922v1 [cs.LG] 16 May 2019.

Abstract

Estimating and optimizing Mutual Information (MI) is core to many problems in machine learning; however, bounding MI in high dimensions is challenging. To establish tractable and scalable objectives, recent work has turned to variational bounds parameterized by neural networks, but the relationships and tradeoffs between these bounds remain unclear. In this work, we unify these recent developments in a single framework. We find that the existing variational lower bounds degrade when the MI is large, exhibiting either high bias or high variance. To address this problem, we introduce a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance. On high-dimensional, controlled problems, we empirically characterize the bias and variance of the bounds and their gradients and demonstrate the effectiveness of our new bounds for estimation and representation learning.

Figure 1. Schematic of variational bounds of mutual information presented in this paper. Nodes are colored based on their tractability for estimation and optimization: green bounds can be used for both, yellow for optimization but not estimation, and red for neither. Children are derived from their parents by introducing new approximations or assumptions.

1. Introduction

Estimating the relationship between pairs of variables is a fundamental problem in science and engineering. Quantifying the degree of the relationship requires a metric that captures a notion of dependency. Here, we focus on mutual information (MI), denoted I(X; Y), which is a reparameterization-invariant measure of dependency:

    I(X;Y) = \mathbb{E}_{p(x,y)}\!\left[\log \frac{p(x|y)}{p(x)}\right] = \mathbb{E}_{p(x,y)}\!\left[\log \frac{p(y|x)}{p(y)}\right].

Mutual information estimators are used in computational neuroscience (Palmer et al., 2015), Bayesian optimal experimental design (Ryan et al., 2016; Foster et al., 2018), understanding neural networks (Tishby et al., 2000; Tishby & Zaslavsky, 2015; Gabrié et al., 2018), and more. In practice, estimating MI is challenging as we typically have access to samples but not the underlying distributions (Paninski, 2003; McAllester & Stratos, 2018). Existing sample-based estimators are brittle, with the hyperparameters of the estimator impacting the scientific conclusions (Saxe et al., 2018).
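For a pair of correlated Gaussians, the density ratio, and hence the MI, is known in closed form, which makes such distributions a convenient sanity check for sample-based estimators. A minimal sketch, assuming a one-dimensional correlated-Gaussian pair as the test distribution:

```python
# Sketch: MI of a correlated-Gaussian pair, comparing the closed form
# I(X; Y) = -0.5 * log(1 - rho^2) with a Monte-Carlo average of log p(y|x) - log p(y).
import numpy as np

def gauss_logpdf(v, mean, var):
    return -0.5 * np.log(2.0 * np.pi * var) - (v - mean) ** 2 / (2.0 * var)

rho = 0.7
closed_form_mi = -0.5 * np.log(1.0 - rho ** 2)

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n)   # (x, y) ~ p(x, y)

# For this pair, p(y|x) = N(rho * x, 1 - rho^2) and p(y) = N(0, 1).
log_ratio = gauss_logpdf(y, rho * x, 1.0 - rho ** 2) - gauss_logpdf(y, 0.0, 1.0)
print(f"closed form : {closed_form_mi:.4f} nats")
print(f"Monte Carlo : {log_ratio.mean():.4f} nats")
```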
Beyond estimation, many methods use upper bounds on MI to limit the capacity or contents of representations. For example, in the information bottleneck method (Tishby et al., 2000; Alemi et al., 2016), the representation is optimized to solve a downstream task while being constrained to contain as little information as possible about the input. These techniques have proven useful in a variety of domains, from restricting the capacity of discriminators in GANs (Peng et al., 2018) to preventing representations from containing information about protected attributes (Moyer et al., 2018).

Lastly, there is a growing set of methods in representation learning that maximize the mutual information between a learned representation and an aspect of the data. Specifically, given samples from a data distribution, x ~ p(x), the goal is to learn a stochastic representation of the data p_θ(y|x) that has maximal MI with X subject to constraints on the mapping (e.g. Bell & Sejnowski, 1995; Krause et al., 2010; Hu et al., 2017; van den Oord et al., 2018; Hjelm et al., 2018; Alemi et al., 2017). To maximize MI, we can compute gradients of a lower bound on MI with respect to the parameters θ of the stochastic encoder p_θ(y|x), which may not require directly estimating MI.

While many parametric and non-parametric (Nemenman et al., 2004; Kraskov et al., 2004; Reshef et al., 2011; Gao et al., 2015) techniques have been proposed to address MI estimation and optimization problems, few of them scale up to the dataset size and dimensionality encountered in modern machine learning problems.

To overcome these scaling difficulties, recent work combines variational bounds (Blei et al., 2017; Donsker & Varadhan, 1983; Barber & Agakov, 2003; Nguyen et al., 2010; Foster et al., 2018) with deep learning (Alemi et al., 2016; 2017; van den Oord et al., 2018; Hjelm et al., 2018; Belghazi et al., 2018) to enable differentiable and tractable estimation of mutual information. These papers introduce flexible parametric distributions or critics parameterized by neural networks that are used to approximate unknown densities (p(y), p(y|x)) or density ratios (p(x|y)/p(x) = p(y|x)/p(y)).

In spite of their effectiveness, the properties of existing variational estimators of MI are not well understood. In this paper, we introduce several results that begin to demystify these approaches and present novel bounds with improved properties (see Fig. 1 for a schematic):

• We provide a review of existing estimators, discussing their relationships and tradeoffs, including the first proof that the noise contrastive loss in van den Oord et al. (2018) is a lower bound on MI, and that the heuristic "bias corrected gradients" in Belghazi et al. (2018) can be justified as unbiased estimates of the gradients of a different lower bound on MI.

• We derive a new continuum of multi-sample lower bounds that can flexibly trade off bias and variance, generalizing the bounds of Nguyen et al. (2010) and van den Oord et al. (2018).

• We show how to leverage known conditional structure, yielding simple lower and upper bounds that sandwich MI in the representation learning context when p_θ(y|x) is tractable.

• We systematically evaluate the bias and variance of MI estimators and their gradients on controlled high-dimensional problems.

• We demonstrate the utility of our variational upper and lower bounds in the context of decoder-free disentangled representation learning on dSprites (Matthey et al., 2017).

2. Variational bounds of MI

Here, we review existing variational bounds on MI in a unified framework, and present several new bounds that trade off bias and variance and naturally leverage known conditional densities when they are available. A schematic of the bounds we consider is presented in Fig. 1. We begin by reviewing the classic upper and lower bounds of Barber & Agakov (2003) and then show how to derive the lower bounds of Donsker & Varadhan (1983), Nguyen et al. (2010), and Belghazi et al. (2018) from an unnormalized variational distribution. Generalizing the unnormalized bounds to the multi-sample setting yields the bound proposed in van den Oord et al. (2018), and provides the basis for our interpolated bound.

2.1. Normalized upper and lower bounds

Upper bounding MI is challenging, but is possible when the conditional distribution p(y|x) is known (e.g. in deep representation learning where y is the stochastic representation). We can build a tractable variational upper bound by introducing a variational approximation q(y) to the intractable marginal p(y) = ∫ p(y|x) p(x) dx. By multiplying and dividing the integrand in MI by q(y) and dropping a negative KL term, we get a tractable variational upper bound (Barber & Agakov, 2003):

    I(X;Y) \equiv \mathbb{E}_{p(x,y)}\!\left[\log \frac{p(y|x)}{p(y)}\right]
           = \mathbb{E}_{p(x,y)}\!\left[\log \frac{p(y|x)\,q(y)}{q(y)\,p(y)}\right]
           = \mathbb{E}_{p(x,y)}\!\left[\log \frac{p(y|x)}{q(y)}\right] - \mathrm{KL}\!\left(p(y)\,\|\,q(y)\right)
           \le \mathbb{E}_{p(x)}\!\left[\mathrm{KL}\!\left(p(y|x)\,\|\,q(y)\right)\right] \triangleq R,    (1)

which is often referred to as the rate in generative models (Alemi et al., 2017). This bound is tight when q(y) = p(y), and requires that computing log q(y) is tractable. This variational upper bound is often used as a regularizer to limit the capacity of a stochastic representation (e.g. Rezende et al., 2014; Kingma & Welling, 2013; Burgess et al., 2018). In Alemi et al. (2016), this upper bound is used to prevent the representation from carrying information about the input that is irrelevant for the downstream classification task.
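As a minimal sketch of the rate in Eq. 1, assume a one-dimensional linear-Gaussian encoder p(y|x) = N(μ(x), σ²) and a standard normal variational marginal q(y) = N(0, 1); the particular encoder is an illustrative assumption, and the per-example KL term is then available in closed form:

```python
# Sketch of Eq. 1: R = E_{p(x)}[ KL(p(y|x) || q(y)) ] upper-bounds I(X; Y).
# Here p(y|x) = N(0.8 * x, 0.5^2) and q(y) = N(0, 1); both choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    """Illustrative stochastic encoder p(y|x) = N(mu(x), sigma^2)."""
    return 0.8 * x, np.full_like(x, 0.5)

def kl_to_std_normal(mu, sigma):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) )."""
    return 0.5 * (mu ** 2 + sigma ** 2 - 1.0) - np.log(sigma)

x = rng.normal(size=100_000)                  # samples from p(x)
mu, sigma = encoder(x)
rate = kl_to_std_normal(mu, sigma).mean()     # Monte-Carlo average over p(x)

# For this linear-Gaussian encoder the true MI is known, so we can verify R >= I(X; Y).
true_mi = 0.5 * np.log(1.0 + 0.8 ** 2 / 0.5 ** 2)
print(f"rate R (upper bound): {rate:.4f} nats")
print(f"true I(X; Y)        : {true_mi:.4f} nats")
```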
Unlike the upper bound, most variational lower bounds on mutual information do not require direct knowledge of any conditional densities. To establish an initial lower bound on mutual information, we factor MI in the opposite direction from the upper bound, and replace the intractable conditional distribution p(x|y) with a tractable optimization problem over a variational distribution q(x|y). As shown in Barber & Agakov (2003), this yields a lower bound on MI due to the non-negativity of the KL divergence:

    I(X;Y) = \mathbb{E}_{p(x,y)}\!\left[\log \frac{q(x|y)}{p(x)}\right] + \mathbb{E}_{p(y)}\!\left[\mathrm{KL}\!\left(p(x|y)\,\|\,q(x|y)\right)\right]
           \ge \mathbb{E}_{p(x,y)}\!\left[\log q(x|y)\right] + h(X) \triangleq I_{\mathrm{BA}},    (2)

where h(X) is the differential entropy of X. The bound is tight when q(x|y) = p(x|y), in which case the first term equals the conditional entropy h(X|Y).

Unfortunately, evaluating this objective is generally intractable as the differential entropy of X is often unknown. If h(X) is known, this provides a tractable estimate of a lower bound on MI. Otherwise, one can still compare the amount of information different variables (e.g., Y_1 and Y_2) carry about X.

...objective, but produces an upper bound on Eq. 4 (which is itself a lower bound on mutual information). Thus evaluating I_DV using a Monte-Carlo approximation of the expectations, as in MINE (Belghazi et al., 2018), produces estimates that are neither an upper nor a lower bound on MI. Recent work has studied the convergence and asymptotic consistency of such nested Monte-Carlo estimators, but does not address the problem of building bounds that hold with finite samples (Rainforth et al., 2018; Mathieu et al., 2018).
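A minimal numerical sketch of this failure mode, assuming the standard Donsker–Varadhan form I_DV(f) = E_{p(x,y)}[f(x,y)] − log E_{p(x)p(y)}[e^{f(x,y)}] and a closed-form optimal critic for a correlated-Gaussian pair (so that I_DV(f*) equals the true MI); the setup below is illustrative only:

```python
# Sketch: with a fixed critic, the naive Monte-Carlo evaluation of the Donsker-Varadhan
# objective is biased upward, so it is not a reliable lower bound with finite samples.
# Assumed form: I_DV(f) = E_{p(x,y)}[f(x,y)] - log E_{p(x)p(y)}[exp(f(x,y))].
import numpy as np

rho = 0.9                                   # correlation of the illustrative Gaussian pair
true_mi = -0.5 * np.log(1.0 - rho ** 2)     # exact MI for correlated standard Gaussians

def optimal_critic(x, y):
    """log p(x,y) - log p(x) - log p(y) for a standard bivariate Gaussian with correlation rho."""
    joint = -0.5 * (x ** 2 - 2 * rho * x * y + y ** 2) / (1.0 - rho ** 2) - 0.5 * np.log(1.0 - rho ** 2)
    marginals = -0.5 * (x ** 2 + y ** 2)
    return joint - marginals

rng = np.random.default_rng(0)
batch_size, n_batches = 16, 5000
estimates = []
for _ in range(n_batches):
    # Joint samples (x, y) ~ p(x, y) and independent samples y_marginal ~ p(y).
    x = rng.normal(size=batch_size)
    y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.normal(size=batch_size)
    y_marginal = rng.normal(size=batch_size)
    first_term = optimal_critic(x, y).mean()
    # log-mean-exp underestimates log E[exp(f)] on average (Jensen's inequality),
    # so subtracting it pushes the estimate above I_DV(f*) = I(X; Y).
    second_term = np.log(np.mean(np.exp(optimal_critic(x, y_marginal))))
    estimates.append(first_term - second_term)

print(f"true MI                      : {true_mi:.3f} nats")
print(f"mean naive DV estimate (K=16): {np.mean(estimates):.3f} nats")
```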
