Scalable Nonparametric Bayesian Inference on Point Processes with Gaussian Processes


Yves-Laurent Kom Samo [email protected]
Stephen Roberts [email protected]
Department of Engineering Science and Oxford-Man Institute, University of Oxford

Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015. JMLR: W&CP volume 37. Copyright 2015 by the author(s).

Abstract

In this paper we propose an efficient, scalable non-parametric Gaussian process model for inference on Poisson point processes. Our model does not resort to gridding the domain or to introducing latent thinning points. Unlike competing models that scale as $O(n^3)$ over $n$ data points, our model has complexity $O(nk^2)$, where $k \ll n$. We propose an MCMC sampler and show that the resulting model is faster, more accurate, and generates less correlated samples than competing approaches on both synthetic and real-life data. Finally, we show that our model easily handles data sizes not considered thus far by alternate approaches.

1. Introduction

Point processes are a standard model when the objects of study are the number and repartition of otherwise identical points on a domain, usually time or space. The Poisson point process is probably the most commonly used point process. It is fully characterised by an intensity function that is inferred from the data. Gaussian processes have been successfully used to form a prior over the (log-)intensity function for applications such as astronomy (Gregory & Loredo, 1992), forestry (Heikkinen & Arjas, 1999), finance (Basu & Dassios, 2002), and neuroscience (Cunningham et al., 2008b). We offer extensions to existing work as follows: we develop an exact non-parametric Bayesian model that enables inference on Poisson processes. Our method scales linearly with the number of data points and does not resort to gridding the domain. We derive an MCMC sampler for core components of the model and show that our approach offers a faster and more accurate solution, as well as producing less correlated samples, compared to other approaches on both real-life and synthetic data.

2. Related Work

Non-parametric inference on point processes has been extensively studied in the literature. Rathbun & Cressie (1994) and Moeller et al. (1998) used a finite-dimensional piecewise constant log-Gaussian prior for the intensity function. Such approximations are limited in that the choice of the grid on which to represent the intensity function is arbitrary, and one has to trade off precision against computational complexity and numerical accuracy, the complexity being cubic in the precision and exponential in the dimension of the input space. Kottas (2006) and Kottas & Sanso (2007) used a Dirichlet process mixture of Beta distributions as prior for the normalised intensity function of a Poisson process. Cunningham et al. (2008a) proposed a model using Gaussian processes evaluated on a fixed grid for the estimation of intensity functions of renewal processes with log-concave renewal distributions. They turned hyperparameter inference into an iterative series of convex optimization problems, in which ordinarily cubic-complexity operations such as Cholesky decompositions are evaluated in $O(n \log n)$ by leveraging the uniformity of the grid and the log-concavity of the renewal distribution. Adams et al. (2009) proposed an exact Markov Chain Monte Carlo (MCMC) inference scheme for the posterior intensity function of a Poisson process with a sigmoid Gaussian prior intensity, or equivalently a Cox process (Cox, 1955) with sigmoid Gaussian stochastic intensity. The authors simplified the likelihood of a Cox process by introducing latent thinning points. The proposed scheme has a complexity exponential in the dimension of the input space and cubic in the number of data and thinning points, and it performs particularly poorly when the data are sparse. Gunter et al. (2014) extended this model to structured point processes. Rao & Teh (2011) used uniformization to produce exact samples from a non-stationary renewal process whose hazard function is modulated by a Gaussian process, and consequently proposed an MCMC sampler to sample from the posterior intensity of a unidimensional point process. Although the authors have illustrated that their model is faster than that of Adams et al. (2009) on some synthetic and real-life data, their method still scales cubically in the number of thinned and data points, and it is not applicable to data in dimension higher than 1, such as spatial point processes.

3. Model

3.1. Setup

We are tasked with making non-parametric Bayesian inference on the intensity function of a Poisson point process assumed to have generated a dataset $D = \{s_1, \ldots, s_n\}$. To simplify the discourse without loss of generality, we will assume that data points take values in $\mathbb{R}^d$.

Firstly, let us recall that a Poisson point process (PPP) on a bounded domain $S \subset \mathbb{R}^d$ with non-negative intensity function $\lambda$ is a locally finite random collection of points in $S$ such that the numbers of points occurring in disjoint parts $B_i$ of $S$ are independent, and each follows a Poisson distribution with mean $\int_{B_i} \lambda(s)\,ds$.

The likelihood of a PPP is given by:

$$L(\lambda \mid s_1, \ldots, s_n) = \exp\left(-\int_S \lambda(s)\,ds\right) \prod_{i=1}^{n} \lambda(s_i). \tag{1}$$
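To make Equation (1) concrete, here is a minimal sketch of how the PPP log-likelihood can be evaluated on a one-dimensional domain; the quadrature scheme, the sinusoidal test intensity, and all parameter values are illustrative assumptions, not part of the paper.

```python
import numpy as np

def ppp_log_likelihood(intensity, events, a, b, n_quad=10_000):
    """Log of Equation (1): log L = -int_S lambda(s) ds + sum_i log lambda(s_i).
    The integral over S = [a, b] is approximated on a dense quadrature grid."""
    grid = np.linspace(a, b, n_quad)
    integral = np.mean(intensity(grid)) * (b - a)  # approximates int_S lambda(s) ds
    return -integral + np.sum(np.log(intensity(events)))

# Illustrative intensity on [0, 24] (an assumed choice for demonstration only).
lam = lambda s: 50.0 * (1.1 + np.sin(2.0 * np.pi * s / 24.0))
events = np.sort(np.random.default_rng(0).uniform(0.0, 24.0, size=200))
print(ppp_log_likelihood(lam, events, 0.0, 24.0))
```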
3.2. Tractability discussion

The approach adopted thus far in the literature to make non-parametric Bayesian inference on point processes using Gaussian processes (GPs) (Rasmussen & Williams, 2006) consists of putting a functional prior on the intensity function in the form of a positive function of a GP: $\lambda(s) = f(g(s))$, where $g$ is drawn from a GP and $f$ is a positive function. Examples of such $f$ include the exponential function and a scaled sigmoid function (Adams et al., 2009; Rao & Teh, 2011). This approach can be seen as a Cox process where the stochastic intensity follows the same dynamics as the functional prior. When the Gaussian process used has almost surely continuous paths, the random vector

$$\left(\lambda(s_1), \ldots, \lambda(s_n), \int_S \lambda(s)\,ds\right) \tag{2}$$

provably admits a probability density function (pdf). Moreover, we note that any piece of information not contained in the implied pdf over the vector in Equation (2) will be lost from training, as the likelihood only depends on those variables. Hence, given a functional prior postulated on the intensity function, the only necessary piece of information for a full Bayesian treatment is the implied joint pdf over the vector in Equation (2).

For many useful transformations $f$ and covariance structures for the GP, the aforementioned implied pdf might not be available analytically. We note, however, that there is no need to put a functional prior on the intensity function. In fact, for every finite-dimensional prior over the vector in Equation (2), there exists a Cox process with an a.s. $C^1$ intensity process that coincides with the postulated prior (see appendix for the proof).

This approach is similar to that of Kottas (2006). The author regarded $I = \int_S \lambda(s)\,ds$ as a random variable and noted that $p(s) = \lambda(s) / \int_S \lambda(s)\,ds$ can be regarded as a pdf whose support is the domain $S$. He then made inference on $(I, p(s_1), \ldots, p(s_n))$, postulating as prior that $I$ and $(p(s_1), \ldots, p(s_n))$ are independent, that $I$ has a Jeffreys prior, and that $(s_1, \ldots, s_n)$ are i.i.d. draws from a Dirichlet process mixture of Betas with pdf $p$.

The model we present in the following section puts an appropriate finite-dimensional prior on $(\lambda(s_1), \ldots, \lambda(s_n), \lambda(s'_1), \ldots, \lambda(s'_k), \int_S \lambda(s)\,ds)$ for some inducing points $s'_j$, rather than putting a functional prior on the intensity function directly.
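As an illustration of the functional priors discussed above, the following sketch draws one realisation of $\lambda(s) = f(g(s))$ on a grid, with $g$ a zero-mean GP and $f$ a scaled sigmoid as in the sigmoid-Gaussian Cox process of Adams et al. (2009); the kernel, length-scale, and intensity ceiling are assumed values chosen purely for demonstration.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=2.0, variance=1.0):
    """Squared-exponential covariance k(x, x') = v * exp(-(x - x')^2 / (2 l^2))."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_sigmoid_gp_intensity(grid, lam_max=100.0, jitter=1e-8, seed=0):
    """One prior draw of lambda(s) = lam_max * sigmoid(g(s)), g ~ GP(0, k),
    evaluated on a finite grid via a Cholesky factor of the Gram matrix."""
    K = rbf_kernel(grid, grid) + jitter * np.eye(len(grid))
    g = np.linalg.cholesky(K) @ np.random.default_rng(seed).standard_normal(len(grid))
    return lam_max / (1.0 + np.exp(-g))

grid = np.linspace(0.0, 24.0, 200)
lam_draw = sample_sigmoid_gp_intensity(grid)  # one sample path of the intensity prior
```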
3.3. Our model

3.3.1. Intuition

The intuition behind our model is that the data are not a 'natural grid' at which to infer the value of the intensity function. For instance, if the data consist of 200,000 points on the interval $[0, 24]$, as in one of our experiments, it might not be necessary to infer the value of a function at 200,000 points to characterise it on $[0, 24]$. Instead, we find a small set of inducing points $D' = \{s'_1, \ldots, s'_k\}$, $k \ll n$, on our domain, through which we will define the prior over the vector in Equation (2) augmented with $\lambda(s'_1), \ldots, \lambda(s'_k)$. The set of inducing points will be chosen so that knowing $\lambda(s'_1), \ldots, \lambda(s'_k)$ would result in knowing the values of the intensity function elsewhere on the domain, in particular $\lambda(s_1), \ldots, \lambda(s_n)$, with 'arbitrary certainty'. We will then analytically integrate out the dependency on $\lambda(s_1), \ldots, \lambda(s_n)$ from the posterior, thereby reducing the complexity from cubic to linear in the number of data points without 'loss of information', and reformulating our problem as that of making exact Bayesian inference on the value of the intensity function at the inducing points. We will then describe how to obtain the predictive mean and variance of the intensity function elsewhere on the domain.

3.3.2. Model specification

Let us denote by $\lambda^*$ a positive stochastic process on $S$ such that $\log \lambda^*$ is a stationary Gaussian process with covariance kernel $\gamma^* : (s_1, s_2) \to \gamma^*(s_1, s_2)$ and constant mean $m^*$. Let us further denote by $\hat{\lambda}$ a positive stochastic process on $S$ such that $\log \hat{\lambda}$ is a conditional Gaussian process coinciding with $\log \lambda^*$ at the $k$ inducing points $s'_1, \ldots, s'_k$.
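The paper's specification of $\hat{\lambda}$ continues beyond this excerpt. As a rough, generic illustration of the conditioning step it describes, the sketch below computes the mean and variance of a GP (here standing in for the log-intensity) at test locations given its values at $k$ inducing points, using the standard Gaussian conditional formulas; the kernel, constant mean, and inducing values are all assumed for the example, not taken from the paper.

```python
import numpy as np

def rbf(x1, x2, lengthscale=3.0):
    """Squared-exponential kernel on 1-D inputs (an assumed stand-in for gamma*)."""
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / lengthscale) ** 2)

def conditional_gp(s_test, s_ind, f_ind, kernel, mean=0.0, jitter=1e-8):
    """Mean and variance of a GP at s_test given its values f_ind at the k
    inducing points s_ind. Only the k x k Gram matrix is factorised, so the
    cost is O(n k^2) for n test points -- the scaling quoted in the abstract."""
    K_kk = kernel(s_ind, s_ind) + jitter * np.eye(len(s_ind))
    K_nk = kernel(s_test, s_ind)
    L = np.linalg.cholesky(K_kk)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f_ind - mean))
    cond_mean = mean + K_nk @ alpha
    V = np.linalg.solve(L, K_nk.T)  # L^{-1} K_kn
    cond_var = np.diag(kernel(s_test, s_test)) - np.sum(V ** 2, axis=0)
    return cond_mean, cond_var

s_ind = np.linspace(0.0, 24.0, 8)  # k = 8 inducing points (assumed)
log_lam_ind = np.log(50.0) + 0.3 * np.random.default_rng(0).standard_normal(8)
s_test = np.linspace(0.0, 24.0, 200)
mu, var = conditional_gp(s_test, s_ind, log_lam_ind, rbf, mean=np.log(50.0))
```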