Variational Bayesian Quantization

Yibo Yang *1, Robert Bamler *1, Stephan Mandt 1

*Equal contribution. 1 Department of Computer Science, University of California, Irvine. Correspondence to: Yibo Yang <[email protected]>, Robert Bamler <[email protected]>. Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).

Abstract

We propose a novel algorithm for quantizing continuous latent representations in trained models. Our approach applies to deep probabilistic models, such as variational autoencoders (VAEs), and enables both data and model compression. Unlike current end-to-end neural compression methods that cater the model to a fixed quantization scheme, our algorithm separates model design and training from quantization. Consequently, our algorithm enables "plug-and-play" compression with variable rate-distortion trade-off, using a single trained model. Our algorithm can be seen as a novel extension of arithmetic coding to the continuous domain, and uses adaptive quantization accuracy based on estimates of posterior uncertainty. Our experimental results demonstrate the importance of taking posterior uncertainties into account, and show that image compression with the proposed algorithm outperforms JPEG over a wide range of bit rates using only a single standard VAE. Further experiments on Bayesian neural word embeddings demonstrate the versatility of the proposed method.

1. Introduction

Latent-variable models have become a mainstay of modern machine learning. Scalable approximate Bayesian inference methods, in particular Black Box Variational Inference (Ranganath et al., 2014; Rezende et al., 2014), have spurred the development of increasingly large and expressive probabilistic models, including deep generative probabilistic models such as variational autoencoders (Kingma & Welling, 2014b) and Bayesian neural networks (MacKay, 1992; Blundell et al., 2015). One natural application of deep latent variable modeling is data compression, and recent work has focused on end-to-end procedures that optimize a model for a particular compression objective. Here, we study a related but different problem: given a trained model, what is the best way to encode the information contained in its continuous latent variables?

As we show, our proposed solution provides a new "plug-and-play" approach to lossy compression of both data instances (represented by local latent variables, e.g., in a VAE) as well as model parameters (represented by global latent variables that serve as parameters of a Bayesian statistical model). Our approach separates the compression task from model design and training, thus implementing variable-bitrate compression as an independent post-processing step in a wide class of existing latent variable models.

At the heart of our proposed method lies a novel quantization scheme that optimizes a rate-distortion trade-off by exploiting posterior uncertainty estimates. Quantization is central to lossy compression, as continuous-valued data like natural images, videos, and distributed representations ultimately need to be discretized to a finite number of bits for digital storage or transmission. Lossy compression algorithms therefore typically find a discrete approximation of some semantic representation of the data, which is then encoded with a lossless compression method.
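To make this generic "quantize, then entropy-code" recipe concrete, here is a minimal Python sketch of scalar quantization on a fixed uniform grid, followed by an idealized measurement of the lossless code length under a standard normal prior. This is a toy baseline, not the paper's algorithm; the step size of 0.5 and the Gaussian prior are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Toy baseline: discretize continuous latents on a FIXED uniform grid,
# then measure the ideal lossless code length of the resulting symbols
# under a standard normal prior. (Not the paper's VBQ algorithm.)

rng = np.random.default_rng(0)
z = rng.standard_normal(1000)        # continuous latents with prior N(0, 1)

step = 0.5                           # fixed quantization step size
z_hat = step * np.round(z / step)    # uniform scalar quantization

# Ideal code length per symbol: -log2 of the prior probability mass of
# the quantization bin, which a lossless entropy coder approaches.
bin_mass = norm.cdf(z_hat + step / 2) - norm.cdf(z_hat - step / 2)
bits = -np.log2(bin_mass)

print(f"distortion (MSE): {np.mean((z - z_hat) ** 2):.4f}")
print(f"rate: {bits.mean():.2f} bits per latent dimension")
```

Shrinking `step` lowers distortion but raises the rate; a fixed grid pins the codec to one point on this trade-off, which is the limitation the following paragraphs discuss.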
In classical lossy compression methods such as JPEG or MP3, the semantic representation is carefully designed to support compression at variable bitrates. By contrast, state-of-the-art deep-learning-based approaches to lossy data compression (Ballé et al., 2017; 2018; Rippel & Bourdev, 2017; Mentzer et al., 2018; Lombardo et al., 2019) are trained to minimize a distortion metric at a fixed bitrate. To support variable-bitrate compression in this approach, one has to train several models for different bitrates.

While training several models may be viable in many cases, a bigger issue is the increase in decoder size, as the decoder has to store the parameters of not one but several deep neural networks, one for each bitrate setting. In applications like video streaming under fluctuating connectivity, the decoder further has to load a new deep learning model into memory every time a change in bandwidth requires adjusting the bitrate.

By contrast, we propose a quantization method for latent variable models that decouples training from compression, and that enables variable-bitrate compression with a single model. We generalize a classical entropy coding algorithm, Arithmetic Coding (Witten et al., 1987; MacKay, 2003), from the discrete to the continuous domain. Our proposed algorithm, Variational Bayesian Quantization, exploits posterior uncertainty estimates to automatically reduce the quantization accuracy of latent variables for which the model is uncertain anyway. This strategy is analogous to the way humans communicate quantitative information. For example, Wikipedia lists the population of Rome in 2017 with the specific number 2,879,728. By contrast, its population in the year 500 AD is estimated by the round number 100,000 because the high uncertainty would make a more precise number meaningless. Our ablation studies show that this posterior-informed quantization scheme is crucial to obtaining competitive performance (a toy sketch of the idea follows the contribution list below).

In detail, our contributions are as follows:

• A new discretization scheme. We present a novel approach to discretizing latent variables in a variational inference framework. Our approach generalizes arithmetic coding from discrete to continuous distributions and takes posterior uncertainty into account.

• Single-model compression at variable bitrates. The decoupling of modeling and compression allows us to adjust the trade-off between bitrate and distortion in post-processing. This is in contrast to existing approaches to both data and model compression, which often require specially trained models for each bitrate.

• Automatic self-pruning. Deep latent variable models often exhibit posterior collapse, i.e., the variational posterior collapses to the model prior. In our approach, latent dimensions with collapsed posteriors require close to zero bits and thus don't require manual pruning (see the numerical sketch below).

• Competitive experimental performance. We show that our method outperforms JPEG over a wide range of bitrates using only a single model. We also show that we can successfully compress word embeddings with minimal loss, as evaluated on a semantic reasoning task.
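The following sketch illustrates the posterior-informed idea behind the Rome example and the first contribution above: each latent dimension is rounded on a grid whose step size grows with its posterior standard deviation, so uncertain dimensions cost fewer bits under the prior. This is only a simplified stand-in for the actual Variational Bayesian Quantization algorithm of Section 3, which generalizes arithmetic coding rather than using an explicit per-dimension grid; the trade-off knob `alpha` and the simulated posterior statistics are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def adaptive_quantize(mu, sigma, alpha=0.5):
    """Round each posterior mean on a grid whose step size is
    proportional to its posterior standard deviation."""
    step = alpha * sigma                   # coarser grid where uncertain
    return step * np.round(mu / step), step

rng = np.random.default_rng(0)
mu = rng.standard_normal(1000)             # posterior means
sigma = rng.uniform(0.05, 1.0, size=1000)  # posterior std deviations

z_hat, step = adaptive_quantize(mu, sigma)

# Ideal code length under the N(0, 1) prior: a latent quantized on a
# coarse grid falls into a wide bin with large prior mass -> few bits.
bits = -np.log2(norm.cdf(z_hat + step / 2) - norm.cdf(z_hat - step / 2))
print(f"certain dims   (sigma < 0.2): {bits[sigma < 0.2].mean():.2f} bits")
print(f"uncertain dims (sigma > 0.8): {bits[sigma > 0.8].mean():.2f} bits")
```

Note that a real codec must also tell the decoder which grid was used; the paper's method avoids transmitting this by deriving its code points from the prior's CDF, which the encoder and decoder share.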
The paper is structured as follows: Section 2 reviews related work in neural compression; Section 3 proposes our Variational Bayesian Quantization algorithm. We give empirical results in Section 4, and conclude in Section 5. Section 6 provides additional theoretical insight into our method.

2. Related Work

… (Berger, 1972; Chou et al., 1989). Optimal vector quantization, although theoretically well-motivated (Gallager, 1968), is not tractable in high-dimensional spaces (Gersho & Gray, 2012) and not scalable in practice. Therefore, most classical lossy compression algorithms map data to a suitably designed semantic representation, in such a way that coordinate-wise scalar quantization can be fruitfully applied. Recent machine-learning-based data compression methods learn such representations from data instead of hand-designing them, but similar to classical methods, most such ML methods directly take quantization into account in the generative model design or training. Various approaches have been proposed to approximate the non-differentiable quantization operation during training, such as stochastic binarization (Toderici et al., 2016; 2017), additive uniform noise (Ballé et al., 2017; 2018; Habibian et al., 2019), or other differentiable approximations (Agustsson et al., 2017; Theis et al., 2017; Mentzer et al., 2018; Rippel & Bourdev, 2017); many such schemes result in quantization with a uniformly spaced grid, with the exception of Agustsson et al. (2017), which optimizes the quantization grid points. Yang et al. (2020) considers optimal quantization at compression time, but assumes the fixed quantization scheme of Ballé et al. (2017) during training.

We depart from such approaches by treating quantization as a post-processing step decoupled from model design and training. Crucial to our approach is a new quantization scheme that automatically adapts to different length scales in the representation space based on posterior uncertainty estimates. To the best of our knowledge, the only prior work that uses posterior uncertainty for compression is in the context of bits-back coding (Honkela & Valpola, 2004; Townsend et al., 2019), but these works focus on lossless compression, with the recent exception of Yang et al. (2020).

Most existing neural image compression methods require training a separate machine learning model for each desired bitrate setting (Ballé et al., 2017; 2018; Mentzer et al., 2018; Theis et al., 2017; Lombardo et al., 2019). In fact, Alemi et al. (2018) showed that any particular fitted VAE model only targets one specific point on the rate-distortion curve. Our approach has the same benefit of variable-bitrate single-model compression as methods based on recurrent VAEs (Gregor et al., 2016; Toderici et al., 2016; 2017; Johnston et al., 2018); but unlike these methods, which use dedicated model architectures for progressive image reconstruction, we instead focus more broadly on quantizing latent representations in a given generative model.
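Finally, the numerical sketch referenced in the "automatic self-pruning" contribution above: the following hypothetical check suggests why a collapsed posterior costs almost nothing to encode under an uncertainty-adaptive grid, since a dimension whose posterior matches the prior falls into a bin carrying nearly all of the prior mass. The helper `bits_for_dim`, the trade-off value `alpha=4.0`, and the example numbers are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from scipy.stats import norm

def bits_for_dim(mu, sigma, alpha):
    """Ideal code length (bits) of one latent under a N(0, 1) prior when
    the grid step size is proportional to the posterior std deviation."""
    step = alpha * sigma
    z_hat = step * np.round(mu / step)            # posterior-informed rounding
    bin_mass = norm.cdf(z_hat + step / 2) - norm.cdf(z_hat - step / 2)
    return -np.log2(bin_mass)

# Informative dimension: sharp posterior far from the prior mean.
print(bits_for_dim(mu=1.7, sigma=0.05, alpha=4.0))   # ~5.5 bits
# Collapsed dimension: posterior roughly equal to the prior N(0, 1).
# The coarse step (alpha * sigma = 4) rounds z_hat to 0, and the bin
# [-2, 2] carries ~95% of the prior mass, so the cost is ~0.07 bits.
print(bits_for_dim(mu=0.02, sigma=1.0, alpha=4.0))
```

In this picture, no manual pruning is needed: collapsed dimensions simply contribute a vanishing share of the bitstream.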
