Distributed MCMC Inference in Dirichlet Process Mixture Models Using Julia

Or Dinari* (Department of Computer Science, Ben-Gurion University, Beer-Sheva, Israel)
Angel Yu* (CSAIL, MIT, Cambridge, MA, USA)
Oren Freifeld (Department of Computer Science, Ben-Gurion University, Beer-Sheva, Israel)
John W. Fisher III (CSAIL, MIT, Cambridge, MA, USA)

* Indicates that both these authors contributed equally. This work was partially supported by NSF award 1622501 and by the Lynn and William Frankel Center for Computer Science at BGU. Or Dinari was partially supported by Trax.

Abstract — Due to the increasing availability of large data sets, the need for general-purpose massively-parallel analysis tools becomes ever greater. In unsupervised learning, Bayesian nonparametric mixture models, exemplified by the Dirichlet-Process Mixture Model (DPMM), provide a principled Bayesian approach to adapt model complexity to the data. Despite their potential, however, DPMMs have yet to become a popular tool. This is partly due to the lack of friendly software tools that can handle large datasets efficiently. Here we show how, using Julia, one can achieve an efficient and easily-modifiable implementation of distributed inference in DPMMs. Particularly, we show how a recent parallel MCMC inference algorithm – originally implemented in C++ for a single multi-core machine – can be distributed efficiently across multiple multi-core machines using a distributed-memory model. This leads to speedups, alleviates memory and storage limitations, and lets us learn DPMMs from significantly larger datasets and of higher dimensionality. It also turned out that, even on a single machine, the proposed Julia implementation handles higher dimensions more gracefully (at least for Gaussians) than the original C++ implementation. Finally, we use the proposed implementation to learn a model of image patches and apply the learned model to image denoising. While we speculate that a highly-optimized distributed implementation in, say, C++ could have been faster than the proposed implementation in Julia, from our perspective as machine-learning researchers (as opposed to HPC researchers), the latter also offers a practical and monetary value due to the ease of development and the level of abstraction. Our code is publicly available at https://github.com/dinarior/dpmm_subclusters.jl

Fig. 1: The architecture of the proposed multi-machine multi-core implementation. The master holds the model parameters, samples them, and proposes splits and merges; each worker machine holds its own shard of the data and its labels, sends per-cluster sufficient statistics to the master, and samples the label assignments. In this figure, for concreteness, we show parallelization over 4 machines.

I. INTRODUCTION

In unsupervised learning, Bayesian nonparametric mixture models, exemplified by the Dirichlet-Process Mixture Model (DPMM), provide a principled approach for Bayesian modeling while adapting the model complexity to the data. This contrasts with finite mixture models, whose complexity is determined manually or via model-selection methods. A DPMM example is the Dirichlet-Process Gaussian Mixture Model (DP-GMM), an ∞-dimensional extension of the Bayesian variant of the (finite) Gaussian Mixture Model (GMM).
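For concreteness, the DP-GMM admits the following standard generative description via the stick-breaking construction of the Dirichlet process; the notation below, and the choice of a Normal-Inverse-Wishart (NIW) prior on the Gaussian parameters, are ours rather than quoted from this paper:

    \beta_k \sim \mathrm{Beta}(1, \alpha), \qquad \pi_k = \beta_k \prod_{j<k} (1 - \beta_j), \qquad k = 1, 2, \ldots
    (\mu_k, \Sigma_k) \sim \mathrm{NIW}(m_0, \kappa_0, \nu_0, \Psi_0), \qquad k = 1, 2, \ldots
    z_i \sim \mathrm{Cat}(\pi_1, \pi_2, \ldots), \qquad x_i \mid z_i \sim \mathcal{N}(\mu_{z_i}, \Sigma_{z_i}), \qquad i = 1, \ldots, N

Unlike a finite GMM, the number of components that actually receive data is inferred rather than fixed in advance; the concentration parameter \alpha controls the prior tendency to open new clusters.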
Despite their potential, however, and although many researchers have used them successfully in many applications over the last decade, DPMMs have not enjoyed wide popularity among practitioners, largely due to computational bottlenecks in current implementations. We argue that useful implementations must be able to leverage parallel and distributed computing resources in order for DPMMs to be a practical choice for analysis. One of the missing pieces, we argue, is the availability of friendly software tools that can efficiently handle DPMM inference in large datasets; i.e., we need to be able to leverage parallel- and distributed-computing resources for DPMM inference. This is due not only to the potential speedups but also to memory and storage requirements. This is especially true for distributed mobile robotic sensing applications, where robotic agents have limited computational and communication resources.

As an analogy, consider how advances in GPU computing contributed to the success of deep learning (at least in supervised learning). That said, here we are more interested in distributing computations across multiple CPU cores and multiple machines. This is not only because not all computation types are appropriate for GPU hardware, or because CPU resources are more readily available, but also because it is useful during algorithm development to easily distribute computations as an abstraction.

While DPMMs are theoretically ideal for handling large, unlabeled datasets, current implementations of DPMM inference do not scale well with increases in the size of the data sets and/or their dimensionality (exceptions are a few recent implementations designed for streaming, typically low-dimensional, data). This is partly because most existing implementations are serial; i.e., they do not harness the power of distributed computing. This does not mean that distributed algorithms for DPMM inference do not exist. There is, however, a large practical gap between designing such an algorithm and having it implemented efficiently in a way that fully utilizes all the available computing resources. Our work is an attempt to close this gap in a way which is useful for typical large-scale applications (such as, e.g., computer vision).

More generally than the topic of DPMMs, distributed implementations, at least in the traditional programming languages used for such implementations, tend to be hard to debug, read, and modify. This clashes with the usual workflow of algorithm development. Julia, a recent high-level/high-performance programming language for technical computing, substantially addresses this issue. It is "easy to write (and read)", having syntax similar to other technical computing environments (e.g., MATLAB or Python's NumPy). It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library.
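To make concrete what this distributed parallel execution looks like in Julia, and how it maps onto the master-worker scheme of Fig. 1, the following is a minimal sketch built on Julia's standard Distributed library. It is an illustration only, not the released package code: the helper names (init_shard, local_suff_stats, sample_assignments!) are ours, the data are random placeholders, the number of clusters is held fixed within the round, the parameter update shown is plain averaging rather than posterior sampling, and no split/merge proposals are shown.

    # Minimal sketch (not the package code) of one round of the
    # master-worker scheme in Fig. 1, using Julia's standard Distributed library.
    using Distributed
    addprocs(4)   # 4 local worker processes; on a cluster, machine specs would be passed instead

    @everywhere using LinearAlgebra

    # Each worker keeps its own shard of the data and its own local labels
    # (distributed-memory model: the master never touches the raw points).
    @everywhere function init_shard(d::Int, n::Int, k::Int)
        global X = randn(d, n)      # placeholder data shard
        global z = rand(1:k, n)     # local label assignments
        return nothing
    end

    # Workers summarize a shard by per-cluster sufficient statistics;
    # for Gaussians, counts, sums, and sums of outer products suffice.
    @everywhere function local_suff_stats(k::Int)
        d = size(X, 1)
        N  = zeros(Int, k)
        S1 = zeros(d, k)
        S2 = zeros(d, d, k)
        for i in 1:size(X, 2)
            c = z[i]
            N[c] += 1
            S1[:, c] .+= X[:, i]
            S2[:, :, c] .+= X[:, i] * X[:, i]'
        end
        return (N, S1, S2)
    end

    # Workers re-sample their local labels given parameters broadcast by the
    # master (nearest-mean here, as a stand-in for the actual Gibbs step).
    @everywhere function sample_assignments!(means::Matrix{Float64})
        for i in 1:size(X, 2)
            z[i] = argmin([norm(X[:, i] - means[:, c]) for c in 1:size(means, 2)])
        end
        return nothing
    end

    d, n_per_worker, k = 2, 10_000, 6
    for w in workers()
        remotecall_fetch(init_shard, w, d, n_per_worker, k)
    end

    # --- one round of the loop sketched in Fig. 1 ---
    stats = [remotecall_fetch(local_suff_stats, w, k) for w in workers()]  # workers -> master
    N  = sum(s[1] for s in stats)      # global counts
    S1 = sum(s[2] for s in stats)      # global sums
    means = S1 ./ max.(N', 1)          # placeholder parameter update
    # (the actual sampler draws cluster parameters from their posterior and
    #  proposes cluster splits/merges at this point)
    for w in workers()                 # master -> workers
        remotecall_fetch(sample_assignments!, w, means)
    end

The point of the sketch is the communication pattern: the raw data never leave the workers; only compact sufficient statistics travel to the master, and only the (small) cluster parameters travel back, which is what makes the distributed-memory design practical for large datasets.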
In this work we show how to implement, using Julia, efficient distributed DPMM inference. Particularly, we demonstrate how a recent parallel MCMC inference algorithm [5] – originally implemented in C++ for a single multi-core machine in a highly-specialized fashion using a shared-memory model – can be implemented such that it is distributed efficiently across multiple multi-core machines using a distributed-memory model. Even on a single machine, the proposed implementation handles higher-dimensional data more gracefully, in both memory and speed, than [5]. This is important as computer vision usually deals with higher-dimensional data than standard machine-learning problems. Finally, we use our implementation to learn image-patch models and apply them to image denoising.

We emphasize that the goal of this paper is not to claim that Julia is faster/better than language X. For example, we speculate that a highly-optimized distributed implementation in, say, C++ could have been faster than the proposed implementation in Julia. However, from our perspective as machine-learning researchers (as opposed to HPC researchers), distributed implementations in Julia, ours included, also offer a practical and monetary value due to the ease of development and the level of abstraction. That is, even if other implementations might be faster, and even though the monetary cost of several machines is of course greater than that of a single machine, the rapid development and ease of modification of the model in Julia offer significant value in terms of our time as machine-learning algorithm developers, while still letting us massively parallelize computations.

Fig. 2: 2D synthetic data. 10^6 points drawn from a 6-component GMM. The correctly-inferred clusters are shown, in different colors, as ellipses at 5 standard deviations (the correct value of K = 6 was inferred and was not assumed known).

II. RELATED WORK

Bayesian nonparametric mixture models in general, and DPMMs in particular, have been used in computer vision by several authors, e.g., [4], [6], [8]–[10], [13], [16]–[19], [21], [22], [24], [25], [27], [28], [30], [32]–[35], [37]. While this list may seem long, note that it represents almost an entire decade, yet it is almost exhaustive.

A few DPMM-inference software packages, in several different languages (e.g., R [15], Python [12], and Matlab [20]), are publicly available; however, their implementations are serial. A few works use the GPU to distribute DPMM inference in low-dimensional streaming-data applications using, e.g., either small-variance asymptotics [30] or variational methods [3]. Our work differs from those in that we use a multi-machine, multi-core, pure-CPU distributed implementation, in that we rely on MCMC inference, and in that we can handle large, high-dimensional data in batch mode.

Julia has been little explored for Bayesian nonparametric mixture models, though there are counterexamples, e.g., [36]. Likewise, Lin [23] uses Julia for inference in Bayesian nonparametric mixture models. In both [36] and [23] the implementations are serial. While a few more such examples exist, it seems that the strong (and remarkably painless) support Julia offers for parallel and distributed computing has yet to be utilized in this context.

In this work we use Julia to implement the parallel MCMC sampler for DPMMs proposed by Chang and Fisher [5] for both Gaussian and
