
Generalised Wishart Processes

Andrew Gordon Wilson∗        Zoubin Ghahramani
Department of Engineering    Department of Engineering
University of Cambridge, UK  University of Cambridge, UK

∗http://mlg.eng.cam.ac.uk/andrew

Abstract

We introduce a new stochastic process called the generalised Wishart process (GWP). It is a collection of positive semi-definite random matrices indexed by any arbitrary input variable. We use this process as a prior over dynamic (e.g. time varying) covariance matrices Σ(t). The GWP captures a diverse class of covariance dynamics, naturally handles missing data, scales nicely with dimension, has easily interpretable parameters, and can use input variables that include covariates other than time. We describe how to construct the GWP, introduce general procedures for inference and prediction, and show that it outperforms its main competitor, multivariate GARCH, even on financial data that especially suits GARCH.

1 INTRODUCTION

Modelling the dependencies between random variables is fundamental in machine learning and statistics. Covariance matrices provide the simplest measure of dependency, and therefore much attention has been placed on modelling covariance matrices. However, the often implausible assumption of constant variances and covariances can have a significant impact on statistical inferences.

In this paper, we are concerned with modelling the dynamic covariance matrix Σ(t) = cov[y|t] (multivariate volatility), for high dimensional vector valued observations y(t). These models are especially important in econometrics. Brownlees et al. (2009) remark that "The price of essentially every derivative security is affected by swings in volatility." Indeed, Robert Engle and Clive Granger won the 2003 Nobel prize in economics "for methods of analysing economic time series with time-varying volatility". The returns on major equity indices and currency exchanges are thought to have a time changing variance and zero mean, and GARCH (Bollerslev, 1986), a generalisation of Engle's ARCH (Engle, 1982), is arguably unsurpassed at predicting the volatilities of returns on these equity indices and currency exchanges (Poon and Granger, 2005; Hansen and Lunde, 2005; Brownlees et al., 2009). Multivariate volatility models can be used to understand the dynamic correlations (or co-movement) between equity indices, and can make better univariate predictions than univariate models. A good estimate of the covariance matrix Σ(t) is also necessary for portfolio management. An optimal portfolio allocation w∗ is said to maximise the Sharpe ratio (Sharpe, 1966):

  Portfolio return / Portfolio risk = w⊤r(t) / √(w⊤Σ(t)w),   (1)

where r(t) are expected returns for each asset and Σ(t) is the predicted covariance matrix for these returns. One may also wish to maximise the portfolio return w⊤r(t) for a fixed level of risk: √(w⊤Σ(t)w) = λ. Multivariate volatility models are also used to understand contagion: the transmission of a financial shock from one entity to another (Bae et al., 2003). And generally – in econometrics, machine learning, climate science, or otherwise – it is useful to know input dependent uncertainty, and the dynamic correlations between multiple entities.

Despite their importance, existing multivariate volatility models suffer from tractability issues and a lack of generality. For example, multivariate GARCH (MGARCH) has a number of free parameters that scales with dimension to the fourth power, and interpretation and estimation of these parameters is difficult to impossible (Silvennoinen and Teräsvirta, 2009; Gouriéroux, 1997), given the constraint that Σ(t) must be positive definite at all points in time. Thus MGARCH, and alternative multivariate stochastic volatility models, are generally limited to studying processes with fewer than 5 components (Gouriéroux et al., 2009). Recent efforts have led to simpler but less general models, which make assumptions such as constant correlations (Bollerslev, 1990).

We hope to unite machine learning and econometrics in an effort to solve these problems. We introduce a stochastic process, the generalised Wishart process (GWP), which we use as a prior over covariance matrices Σ(t) at all times t. We call it the generalised Wishart process, since it is a generalisation of the first Wishart process defined by Bru (1991).¹ To great acclaim, Bru's Wishart process has recently been used (Gouriéroux et al., 2009) in multivariate stochastic volatility models (Philipov and Glickman, 2006; Harvey et al., 1994). This prior work is limited for several reasons: 1) it cannot scale to greater than 5 × 5 covariance matrices, 2) it assumes the input variable is a scalar, 3) it is restricted to using an Ornstein-Uhlenbeck (Brownian motion) covariance structure (which means Σ(t + a) and Σ(t − a) are independent given Σ(t), and complex interdependencies cannot be captured), 4) it is autoregressive, and 5) there are no general learning and inference procedures. The generalised Wishart process (GWP) addresses all of these issues. Specifically, in our GWP formulation,

• Estimation of Σ(t) is tractable in at least 200 dimensions, even without a factor representation.

• The input variable can come from any arbitrary index set X, just as easily as it can represent time. This allows one to condition on covariates like interest rates.

• One can easily handle missing data.

• One can easily specify a vast range of covariance structures (periodic, smooth, Ornstein-Uhlenbeck, . . . ).

• We develop Bayesian inference procedures to make predictions, and to learn distributions over any relevant parameters. Aspects of the covariance structure are learned from data, rather than being a fixed property of the model.

Overall, the GWP is versatile and simple. It does not require any free parameters, and any optional parameters are easy to interpret. For this reason, it also scales well with dimension. Yet, the GWP provides an especially general description of multivariate volatility – more so than the most general MGARCH specifications.

In the next section, we review Gaussian processes (GPs), which are used to construct the GWP we use in this paper. In the following sections we then review the Wishart distribution, present a GWP construction (which we use as a prior over Σ(t) for all t), introduce procedures to sample from the posterior over Σ(t), review the main competitor, MGARCH, and present experiments comparing the GWP to MGARCH on simulated and financial data. These experiments include a 5 dimensional data set, based on returns for NASDAQ, FTSE, NIKKEI, TSE, and the Dow Jones Composite, and a set of returns for 3 foreign currency exchanges. We also have a 200 dimensional experiment to show how the GWP can be used to study high dimensional problems.

Also, although it is not the focus of this paper, we show in the inference section how the GWP can be used as part of a new GP based regression model that accounts for changing correlations. In other words, it can be used to predict the mean µ(t) together with the covariance matrix Σ(t) of a multivariate process. Alternative GP based multivariate regression models for µ(t), which account for fixed correlations, were recently introduced by Bonilla et al. (2008), Teh et al. (2005), and Boyle and Frean (2004). We develop this extension, and many others, in a forthcoming paper.

¹Our model is also related to Gelfand et al. (2004)'s coregionalisation model, which we discuss in section 5.

2 GAUSSIAN PROCESSES

We briefly review Gaussian processes, since the generalised Wishart process is constructed from GPs. For more detail, see Rasmussen and Williams (2006).

A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution. Using a Gaussian process, we can define a distribution over functions u(x):

  u(x) ∼ GP(m(x), k(x, x′)),   (2)

where x is an arbitrary (potentially vector valued) input variable, and the mean m(x) and kernel function k(x, x′) are respectively defined as

  m(x) = E[u(x)],   (3)
  k(x, x′) = cov(u(x), u(x′)).   (4)

This means that any collection of function values has a joint Gaussian distribution:

  (u(x1), u(x2), . . . , u(xN))⊤ ∼ N(µ, K),   (5)

where the N × N covariance matrix K has entries Kij = k(xi, xj), and the mean µ has entries µi = m(xi). The properties of these functions (smoothness, periodicity, etc.) are determined by the kernel function. The squared exponential kernel is popular:

  k(x, x′) = exp(−0.5 ||x − x′||²/l²).   (6)

Functions drawn from a Gaussian process with this kernel function are smooth, and can display long range trends. The length-scale hyperparameter l is easy to interpret: it determines how much the function values u(x) and u(x + a) depend on one another, for some constant a.

Autoregressive processes such as

  u(t + 1) = u(t) + ε(t),   (7)
  ε(t) ∼ N(0, 1),   (8)

are widely used in time series modelling and are a particularly simple special case of Gaussian processes.

3 WISHART DISTRIBUTION

… by replacing these Gaussian distributions with Gaussian processes, we define a process with Wishart marginals – an example of a generalised Wishart process. It is a collection of positive semi-definite random matrices indexed by any arbitrary (potentially high dimensional) variable x. For clarity, we assume that time is the input variable, even though it takes no more effort to use a vector-valued variable x from any arbitrary set. Everything still applies if we replace t with x. In an upcoming journal submission (Wilson and Ghahramani, 2011b), we introduce several new constructions, some of which do not have Wishart marginals.

Suppose we have νD independent Gaussian process functions, u_id(t) ∼ GP(0, k), where i = 1, . . . , ν and d = 1, . . . , D.
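The excerpt breaks off as the construction begins, but the pieces reviewed so far are enough to sketch it numerically: draw νD independent GP functions u_id(t) ∼ GP(0, k) with the squared exponential kernel of equation (6), and at each time form the sum of ν outer products of the D-dimensional vectors u_i(t) = (u_i1(t), . . . , u_iD(t))⊤, yielding a positive semi-definite matrix for every t. The following Python/NumPy sketch assumes the simplest such construction (identity scale matrix); the function and variable names are ours, not the paper's:

```python
import numpy as np

def se_kernel(t, l=1.0):
    """Squared exponential kernel matrix, as in equation (6)."""
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * d**2 / l**2)

def sample_gwp_prior(t, D=3, nu=5, l=1.0, seed=0):
    """Draw one sample path of a generalised-Wishart-style prior.

    Sketch only: we draw nu*D independent GP functions u_id(t) ~ GP(0, k)
    and form Sigma(t_n) = sum_i u_i(t_n) u_i(t_n)^T, the simplest
    Wishart-style construction (scale matrix = identity).
    """
    rng = np.random.default_rng(seed)
    N = len(t)
    K = se_kernel(t, l) + 1e-8 * np.eye(N)  # jitter for a stable Cholesky
    L = np.linalg.cholesky(K)
    # u[i, d, :] is the GP function u_id evaluated on the grid t
    u = np.einsum('nm,idm->idn', L, rng.standard_normal((nu, D, N)))
    # Sigma[n] = sum_i u_i(t_n) u_i(t_n)^T, a D x D PSD matrix per time point
    Sigma = np.einsum('idn,ien->nde', u, u)
    return Sigma

t = np.linspace(0, 10, 50)
Sigma = sample_gwp_prior(t, D=3, nu=5)
print(Sigma.shape)                                    # (50, 3, 3)
print(np.linalg.eigvalsh(Sigma[0]).min() >= -1e-9)    # True: PSD by construction
```

Here ν plays the role of the Wishart degrees of freedom, and the temporal dynamics of Σ(t) are inherited entirely from the kernel: swapping the squared exponential for a periodic or Ornstein-Uhlenbeck kernel changes how the covariance matrices evolve, which is the flexibility the paper emphasises.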