
Gaussian Process Priors with ARMA Noise Models

R. Murray-Smith and A. Girard
Department of Computing Science, University of Glasgow, Glasgow G12 8QQ
{rod, agathe}@dcs.gla.ac.uk

Abstract

We extend the standard covariance function used in the Gaussian Process prior nonparametric modelling approach to include correlated (ARMA) noise models. The improvement in performance is illustrated on some simulation examples of data generated by nonlinear static functions corrupted with additive ARMA noise.

1 Gaussian Process priors

In recent years many flexible parametric and semi-parametric approaches to empirical identification of nonlinear systems have been used. In this paper we use nonparametric models which retain the available data and perform inference conditional on the current state and local data (called 'smoothing' in some frameworks). This direct use of the data has potential advantages in many control contexts: the uncertainty of model predictions can be made dependent on local data density, and the model complexity is automatically related to the amount of available data (more complex models need more evidence to make them likely).

The nonparametric model used in this paper is a Gaussian Process prior, as developed by O'Hagan [1] and reviewed in [2, 3]. An application to modelling a system within a control context is described in [4], and further developments relating to their use in gain scheduling are described in [5]. Most previously published work has focused on regression tasks with independent, identically distributed noise characteristics. Input-dependent noise is described in [6], but we are not aware of previous work with coloured noise covariance functions in Gaussian Process priors.

This paper shows how knowledge about the correlation structure of additive unmeasured noise or disturbances can be incorporated into the model.¹ This improves the performance of the model in finding optimal parameters for describing the deterministic aspects of the system, and can be used to make online prediction more accurate. We expect this will make the use of Gaussian Process priors more attractive in control and signal processing contexts.

¹ See a standard text such as [7] for a discussion of disturbance models in the linear system identification context.

2 Modelling with GPs

We assume that we are modelling an unknown nonlinear system $f(x)$, with known inputs $x$, using observed outputs $y$. These have been corrupted by an additive discrete-time noise process $e(t)$. Here we assume that $f$ and $e$ are independent. Let the set of observed data or targets $\{(x_i, y_i)\}_{i=1}^{N}$ be such that

$$ y_i = f(x_i) + e_i, \qquad i = 1, \ldots, N. \tag{1} $$

2.1 The Gaussian Process prior approach

A prior is placed directly on the space of functions for modelling the above system. We assume that the values of the function $f(x)$ at the inputs $x_1, \ldots, x_N$, i.e. the outputs $y_1, \ldots, y_N$, constitute a set of random variables which have a joint $N$-dimensional multivariate Normal distribution.² The Gaussian Process is then fully specified by its mean and its covariance function $C(x_i, x_j)$. We write

$$ \mathbf{y} \sim \mathcal{N}(\mathbf{0}, K), \tag{2} $$

where $K$ is the $N \times N$ covariance matrix whose entries are given by $K_{ij} = C(x_i, x_j)$. We now have a prior distribution for the target values which is a multivariate Normal:

$$ p(\mathbf{y}) = \frac{1}{(2\pi)^{N/2}\, |K|^{1/2}} \exp\left( -\tfrac{1}{2}\, \mathbf{y}^{\top} K^{-1} \mathbf{y} \right). \tag{3} $$

² In what follows, we assume a zero mean process.
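As a brief illustration of the prior in equation (3), the following Python sketch draws sample functions from a zero-mean Gaussian Process. The covariance function and hyperparameter values used here (a squared-exponential form, anticipating Section 3.1) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def model_covariance(X, v0=1.0, w=1.0, v1=0.0):
    """Illustrative smooth covariance function (cf. equation (10) in Section 3.1)."""
    d = X[:, None] - X[None, :]
    return v0 * np.exp(-0.5 * w * d ** 2) + v1

# Sample functions from the zero-mean prior of equation (3): y ~ N(0, K).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 100)
K = model_covariance(X) + 1e-8 * np.eye(len(X))   # small jitter for conditioning
samples = rng.multivariate_normal(np.zeros(len(X)), K, size=3)
```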
Useful notations

What follows is more straightforward if we partition the full set of points into a training part and a test part. Let $\mathbf{y}$, $\mathbf{e}$ and $X$ be the sets of training outputs, disturbances and inputs respectively; $y_*$, $e_*$ and $x_*$ are the corresponding terms for the test data. In this paper we will consider single point prediction, and use the notation $y_{N+1}$ to indicate that we are predicting the $(N+1)$th point, given the $N$ training data and the new input $x_{N+1}$. We then have

$$ \begin{bmatrix} \mathbf{y} \\ y_{N+1} \end{bmatrix} \sim \mathcal{N}(\mathbf{0}, K_{N+1}), \qquad \text{with covariance matrix partitioned into} \qquad K_{N+1} = \begin{bmatrix} K_N & \mathbf{k} \\ \mathbf{k}^{\top} & \kappa \end{bmatrix}, \tag{4} $$

where
- $K_N$ is the $N \times N$ matrix of covariances between the training data, with entries $(K_N)_{ij} = C(x_i, x_j)$,
- $\mathbf{k}$ is the vector ($N \times 1$) of covariances between the test and the training targets, $k_i = C(x_i, x_{N+1})$, and
- $\kappa = C(x_{N+1}, x_{N+1})$ is the variance of the test point.

We can view the joint probability as the combination of a marginal multinormal distribution and a conditional multinormal distribution. The marginal term gives us the likelihood of the training data,

$$ p(\mathbf{y}) = \frac{1}{(2\pi)^{N/2}\, |K_N|^{1/2}} \exp\left( -\tfrac{1}{2}\, \mathbf{y}^{\top} K_N^{-1} \mathbf{y} \right), \tag{5} $$

while the conditional term gives us the output posterior density at the test point, conditional on the training data,

$$ p(y_{N+1} \mid \mathbf{y}, X, x_{N+1}) = \frac{1}{\sqrt{2\pi\, \sigma_{N+1}^2}} \exp\left( -\frac{(y_{N+1} - \mu_{N+1})^2}{2\, \sigma_{N+1}^2} \right), \tag{6} $$

where

$$ \mu_{N+1} = \mathbf{k}^{\top} K_N^{-1} \mathbf{y}, \tag{7} $$

$$ \sigma_{N+1}^2 = \kappa - \mathbf{k}^{\top} K_N^{-1} \mathbf{k}. \tag{8} $$

We use $\mu_{N+1}$ as a mean estimate for the test point, with a variance of $\sigma_{N+1}^2$. Note that the inversion of an $N \times N$ matrix is computationally nontrivial for large $N$,³ so the GP approach is currently suited to small and medium-sized data sets.

³ See [8] for discussion of ways to speed up the inversion.

3 The covariance function

The covariance function has a central role in the GP modelling framework, expressing the expected covariance between two outputs $y_i$ and $y_j$. The covariance function is constrained to lead to a positive definite covariance matrix for any inputs. We view the total covariance as being composed of a covariance function due to the underlying system model $f(x)$ and one due to the noise process $e$:

$$ C(x_i, x_j) = C_f(x_i, x_j) + C_n(e_i, e_j). \tag{9} $$

3.1 The 'model' covariance function $C_f$

The covariance function associated with the 'model' of the system, $C_f$, is a function of the inputs $x$ only. We choose this covariance to be such that inputs 'close' together will have outputs that are highly correlated and thus are likely to be quite similar. It corresponds to a prior giving higher probability to functions $f(x)$ which are smoothly varying in $x$. A commonly-used covariance function which assumes smoothness is

$$ C_f(x_i, x_j) = v_0 \exp\left( -\frac{1}{2} \sum_{d=1}^{D} w_d \left( x_i^{(d)} - x_j^{(d)} \right)^2 \right) + v_1, \tag{10} $$

in which
- $v_0$ controls the vertical scale of variation of a typical function,
- $v_1$ allows the function to be offset away from zero, and
- the $w_d$ allow a different distance measure for each input dimension $d = 1, \ldots, D$.

In practice, a small 'jitter' is added to $C_f$ for numerical reasons (this adds a small diagonal term to $K$, improving the condition of the matrix). The hyperparameters $(v_0, v_1, w_1, \ldots, w_D)$ can be provided as prior knowledge, or identified from training data.

3.2 The 'noise' covariance function $C_n$

3.2.1 Uncorrelated white noise

The simplest choice of noise model is to assume Gaussian white noise, $e(t) \sim \mathcal{N}(0, \sigma^2)$. In that case, the covariance function is simply

$$ C_n(e_i, e_j) = \sigma^2 \delta_{ij}, \tag{11} $$

so the noise covariance matrix when predicting a single future point from $N$ training data is $\sigma^2 I_{N+1}$, with $I_{N+1}$ the identity matrix.
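Before turning to correlated noise, the following sketch ties the pieces above together: it combines the model covariance of equation (10) with the white-noise term of equation (11) to form the total covariance of equation (9), and computes the predictive mean and variance of equations (7) and (8) for a single test point. The hyperparameter values and the toy data are illustrative assumptions only.

```python
import numpy as np

def C_f(xi, xj, v0=1.0, v1=0.1, w=1.0):
    """'Model' covariance of equation (10) for scalar inputs (illustrative hyperparameters)."""
    return v0 * np.exp(-0.5 * w * (xi - xj) ** 2) + v1

def gp_predict(X, y, x_new, sigma2=0.01, jitter=1e-8):
    """Predictive mean (eq. 7) and variance (eq. 8) under white noise (eq. 11)."""
    N = len(X)
    K_N = np.array([[C_f(X[i], X[j]) for j in range(N)] for i in range(N)])
    K_N += (sigma2 + jitter) * np.eye(N)       # total covariance, eq. (9), with C_n = sigma^2 * I
    k = np.array([C_f(x, x_new) for x in X])   # covariances between test and training points
    kappa = C_f(x_new, x_new) + sigma2         # variance of the test point
    mean = k @ np.linalg.solve(K_N, y)         # eq. (7)
    var = kappa - k @ np.linalg.solve(K_N, k)  # eq. (8)
    return mean, var

# Toy example: noisy samples of a smooth function
X = np.linspace(0.0, 5.0, 20)
y = np.sin(X) + 0.1 * np.random.randn(len(X))
print(gp_predict(X, y, x_new=2.5))
```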
3.2.2 Parametric correlated-noise models

We now assume that correlations exist between the disturbances on individual outputs, i.e. that $e(t)$ is a coloured noise/disturbance process: $e_i$, the disturbance associated with the $i$-th point, occurs at time $t_i$ and depends on previous values of $e$. We consider two dynamic linear models, the autoregressive (AR) and moving average (MA) models, separately, and then their combination into an ARMA disturbance model. For such models, the partitioned components of the noise covariance matrix are

$$ \begin{aligned} (K_N^n)_{ij} &= E\!\left[ e(t_i)\, e(t_i - \tau) \right] && \text{for } i, j \le N, \ \tau = t_i - t_j, \\ (\mathbf{k}^n)_i &= E\!\left[ e(t_{N+1})\, e(t_{N+1} - \tau) \right] && \text{for } i \le N, \ \tau = t_{N+1} - t_i, \\ \kappa^n &= E\!\left[ e(t_{N+1})\, e(t_{N+1}) \right], \end{aligned} \tag{12} $$

where $\tau$ is the lag. Example plots of such noise covariance matrices are shown in Figure 1.

AR noise model

The autoregressive model of order $p$, AR($p$), can be written

$$ e(t) = w(t) - a_1 e(t-1) - \cdots - a_p e(t-p), \tag{13} $$

where $w$ is a Gaussian white noise with variance $\sigma_w^2$. In transfer operator notation, where $q^{-1}$ is the delay operator,

$$ e(t) = \frac{1}{A(q)}\, w(t), \tag{14} $$

where $A(q) = 1 + a_1 q^{-1} + \cdots + a_p q^{-p}$. The covariance function is

$$ R_e(\tau) = \begin{cases} -a_1 R_e(\tau-1) - a_2 R_e(\tau-2) - \cdots - a_p R_e(\tau-p) & \text{for } \tau > 0, \\ -a_1 R_e(1) - a_2 R_e(2) - \cdots - a_p R_e(p) + \sigma_w^2 & \text{for } \tau = 0. \end{cases} \tag{15} $$

The first $p$ elements of the covariance function are estimated by solving the Yule-Walker equations; the rest follow by applying the AR recursion iteratively.

If identifying the parameters of an AR model of order $p$, there are then $p + 1$ parameters to estimate. Note that for the process to be stationary, the roots of $A(q)$ must lie outside the unit circle, so the optimization process must be constrained.

MA noise model

The MA($q$) model

$$ e(t) = w(t) + c_1 w(t-1) + \cdots + c_q w(t-q) = C(q)\, w(t) \tag{16} $$

has the following covariance function (with the convention $c_0 = 1$):

$$ R_e(\tau) = \begin{cases} \sigma_w^2 \left( 1 + c_1^2 + \cdots + c_q^2 \right) & \text{for } \tau = 0, \\ \sigma_w^2 \left( c_{|\tau|} + c_1 c_{|\tau|+1} + \cdots + c_{q-|\tau|} c_q \right) & \text{for } 0 < |\tau| \le q, \\ 0 & \text{for } |\tau| > q. \end{cases} \tag{17} $$

In that case, we see the covariance is zero beyond the order of the model (this leads to a band-diagonal noise covariance matrix).

ARMA noise model

The ARMA noise model is a combination of the above:

$$ e(t) = w(t) + c_1 w(t-1) + \cdots + c_q w(t-q) - a_1 e(t-1) - \cdots - a_p e(t-p) = \frac{C(q)}{A(q)}\, w(t). \tag{18} $$

3.3 Training: learning the hyperparameters

We have a parametric form for the covariance functions, depending on a set of hyperparameters $\theta$. If we take a maximum a posteriori approach, then for a new modelling task we will need to learn these hyperparameters from the training data. This is done by maximising the likelihood of the training data (equation (5)).
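As a sketch of how the correlated-noise covariance terms above can be computed in practice, the code below solves the Yule-Walker system of equation (15) for the autocovariance of an AR(p) disturbance, extends it by the AR recursion, and assembles the partitioned noise covariance of equation (12) for equally spaced sampling times. The AR coefficients and noise variance shown are illustrative values, not taken from the paper's experiments.

```python
import numpy as np

def ar_autocovariance(a, sigma2_w, max_lag):
    """Autocovariance R_e(0..max_lag) of the AR(p) process
       e(t) + a[0]*e(t-1) + ... + a[p-1]*e(t-p) = w(t),  Var(w) = sigma2_w.
    Lags 0..p come from the Yule-Walker equations (eq. 15); larger lags from the AR recursion."""
    p = len(a)
    coeffs = np.concatenate(([1.0], a))            # a_0 = 1
    # Linear system in the unknowns R(0), ..., R(p), using R(-k) = R(k)
    M = np.zeros((p + 1, p + 1))
    for k in range(p + 1):
        for j in range(p + 1):
            M[k, abs(k - j)] += coeffs[j]
    b = np.zeros(p + 1)
    b[0] = sigma2_w
    R = np.zeros(max_lag + 1)
    R[:p + 1] = np.linalg.solve(M, b)
    for tau in range(p + 1, max_lag + 1):          # R(tau) = -sum_j a_j R(tau - j)
        R[tau] = -sum(a[j - 1] * R[tau - j] for j in range(1, p + 1))
    return R

# Noise covariance blocks of equation (12) for N training points plus one test point,
# assuming unit-spaced sampling times (illustrative stationary AR(2) coefficients).
N = 50
R = ar_autocovariance(a=[-1.3, 0.6], sigma2_w=0.25, max_lag=N)
lags = np.abs(np.arange(N + 1)[:, None] - np.arange(N + 1)[None, :])
K_noise = R[lags]                                  # (N+1) x (N+1) Toeplitz matrix
K_N_noise, k_noise, kappa_noise = K_noise[:N, :N], K_noise[:N, N], K_noise[N, N]
```

These blocks would then be added to the corresponding model covariance terms of equation (4), following equation (9), before computing the predictive equations (7) and (8) or evaluating the training likelihood of equation (5) when learning the hyperparameters.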