
J. R. Statist. Soc. B (2002) 64, Part 4, pp. 583-639

Bayesian measures of model complexity and fit

David J. Spiegelhalter, Medical Research Council Biostatistics Unit, Cambridge, UK
Nicola G. Best, Imperial College School of Medicine, London, UK
Bradley P. Carlin, University of Minnesota, Minneapolis, USA
and Angelika van der Linde, University of Bremen, Germany

[Read before The Royal Statistical Society at a meeting organized by the Research Section on Wednesday, March 13th, 2002, Professor D. Firth in the Chair]

Address for correspondence: David J. Spiegelhalter, Medical Research Council Biostatistics Unit, Institute of Public Health, Robinson Way, Cambridge, CB2 2SR, UK. E-mail: [email protected]

Summary. We consider the problem of comparing complex hierarchical models in which the number of parameters is not clearly defined. Using an information theoretic argument we derive a measure p_D for the effective number of parameters in a model as the difference between the posterior mean of the deviance and the deviance at the posterior means of the parameters of interest. In general p_D approximately corresponds to the trace of the product of Fisher's information and the posterior covariance, which in normal models is the trace of the 'hat' matrix projecting observations onto fitted values. Its properties in exponential families are explored. The posterior mean deviance is suggested as a Bayesian measure of fit or adequacy, and the contributions of individual observations to the fit and complexity can give rise to a diagnostic plot of deviance residuals against leverages. Adding p_D to the posterior mean deviance gives a deviance information criterion for comparing models, which is related to other information criteria and has an approximate decision theoretic justification. The procedure is illustrated in some examples, and comparisons are drawn with alternative Bayesian and classical proposals. Throughout it is emphasized that the quantities required are trivial to compute in a Markov chain Monte Carlo analysis.

Keywords: Bayesian model comparison; Decision theory; Deviance information criterion; Effective number of parameters; Hierarchical models; Information theory; Leverage; Markov chain Monte Carlo methods; Model dimension

1. Introduction

The development of Markov chain Monte Carlo (MCMC) methods has made it possible to fit increasingly large classes of models with the aim of exploring real world complexities of data (Gilks et al., 1996). This ability naturally leads us to wish to compare alternative model formulations with the aim of identifying a class of succinct models which appear to describe the information in the data adequately: for example, we might ask whether we need to incorporate a random effect to allow for overdispersion, what distributional forms to assume for responses and random effects, and so on.

Within the classical modelling framework, model comparison generally takes place by defining a measure of fit, typically a deviance statistic, and complexity, the number of free parameters in the model. Since increasing complexity is accompanied by a better fit, models are compared by trading off these two quantities and, following early work of Akaike (1973), proposals are often formally based on minimizing a measure of expected loss on a future replicate data set: see, for example, Efron (1986), Ripley (1996) and Burnham and Anderson (1998).
A model comparison using the Bayesian information criterion also requires the specification of the number of parameters in each model (Kass and Raftery, 1995), but in complex hierarchical models parameters may outnumber observations and these methods clearly cannot be directly applied (Gelfand and Dey, 1994). The most ambitious attempts to tackle this problem appear in the smoothing and neural network literature (Wahba, 1990; Moody, 1992; MacKay, 1995; Ripley, 1996). This paper suggests Bayesian measures of complexity and fit that can be combined to compare models of arbitrary structure.

In the next section we use an information theoretic argument to motivate a complexity measure p_D for the effective number of parameters in a model, as the difference between the posterior mean of the deviance and the deviance at the posterior estimates of the parameters of interest. This quantity can be trivially obtained from an MCMC analysis, and algebraic forms and approximations are unnecessary for its use. We nevertheless investigate some of its formal properties in the following three sections: Section 3 shows that p_D is approximately the trace of the product of Fisher's information and the posterior covariance matrix, whereas in Section 4 we show that for normal models p_D corresponds to the trace of the 'hat' matrix projecting observations onto fitted values, and we illustrate its form for various hierarchical models. Its properties in exponential families are explored in Section 5.

The posterior mean deviance D̄ can be taken as a Bayesian measure of fit or 'adequacy', and Section 6 shows how in exponential family models an observation's contributions to D̄ and p_D can be used as residual and leverage diagnostics respectively. In Section 7 we tentatively suggest that the adequacy D̄ and the complexity p_D may be added to form a deviance information criterion DIC which may be used for comparing models. We describe how this parallels the development of non-Bayesian information criteria and provide a somewhat heuristic decision theoretic justification. In Section 8 we illustrate the use of this technique on some reasonably complex examples. Finally, Section 9 draws some conclusions concerning these proposed techniques.

2. The complexity of a Bayesian model

2.1. 'Focused' full probability models

Parametric statistical modelling of data y involves the specification of a probability model p(y | θ), θ ∈ Θ. For a Bayesian 'full' probability model, we also specify a prior distribution p(θ), which may give rise to a marginal distribution

p(y) = ∫ p(y | θ) p(θ) dθ.    (1)

Particular choices of p(y | θ) and p(θ) will be termed a model 'focused' on Θ. Note that we might further parameterize our prior with unknown 'hyperparameters' ψ to create a hierarchical model, so that the full probability model factorizes as

p(y, θ, ψ) = p(y | θ) p(θ | ψ) p(ψ).

Then, depending on the parameters in focus, the model may comprise the likelihood p(y | θ) and prior p(θ | ψ), or the likelihood p(y | ψ) = ∫ p(y | θ) p(θ | ψ) dθ and prior p(ψ). Both these models lead to the same marginal distribution (1) but can be considered as having different numbers of parameters. A consequence is that in hierarchical modelling we cannot uniquely define a 'likelihood' or 'model complexity' without specifying the level of the hierarchy that is the focus of the modelling exercise (Gelfand and Trevisani, 2002). In fact, by focusing our models on a particular set of parameters Θ, we essentially reduce all models to non-hierarchical structures.
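That both foci lead to the same marginal distribution (1) can be checked numerically. The following sketch is not part of the paper: the normal hierarchical model, the parameter values and the variable names are illustrative assumptions. It simulates data once through the random effects and once from the integrated-out form, and confirms that the two routes give the same distribution for y; the unbalanced one-way ANOVA that follows is the paper's own example of the same structure.

# A small numerical check, not from the paper, that both foci of a normal
# hierarchical model share the same marginal distribution for the data.
import numpy as np

rng = np.random.default_rng(1)
psi, tau, sigma = 0.0, 1.0, 0.5   # overall mean, between-group sd, within-group sd
N = 200_000                       # Monte Carlo sample size

# Focus on theta: y | theta ~ N(theta, sigma^2) with theta | psi ~ N(psi, tau^2)
theta = rng.normal(psi, tau, size=N)
y_hierarchical = rng.normal(theta, sigma)

# Focus on psi: theta integrated out, so y | psi ~ N(psi, sigma^2 + tau^2)
y_marginal = rng.normal(psi, np.sqrt(sigma**2 + tau**2), size=N)

# Both samples should have mean close to psi and variance close to sigma^2 + tau^2
print(y_hierarchical.mean(), y_hierarchical.var())
print(y_marginal.mean(), y_marginal.var())

The normal case allows the random effects to be integrated out in closed form; the point itself, that the two foci share the marginal distribution (1) while supporting different notions of model complexity, holds more generally.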
For example, consider an unbalanced random-effects one-way analysis of variance (ANOVA)

y_ij | θ_i ~ N(θ_i, σ²),   θ_i | ψ ~ N(ψ, τ²),   i = 1, ..., p, j = 1, ..., n_i,

focused on the group means θ = (θ_1, ..., θ_p). This model could also be focused on the overall mean ψ to give

y_ij | ψ ~ N(ψ, σ² + τ²),

in which case it could reasonably be considered as having a different complexity.

It is natural to wish to measure the complexity of a focused model, both in its own right, say to assess the degrees of freedom of estimators, and as a contribution to model choice: for example, criteria such as BIC (Schwarz, 1978), AIC (Akaike, 1973), TIC (Takeuchi, 1976) and NIC (Murata et al., 1994) all trade off model fit against a measure of the effective number of parameters in the model. However, the foregoing discussion suggests that such measures of complexity may not be unique and will depend on the number of parameters in focus. Furthermore, the inclusion of a prior distribution induces a dependence between parameters that is likely to reduce the effective dimensionality, although the degree of reduction may depend on the data that are available. Heuristically, complexity reflects the 'difficulty in estimation', and hence it seems reasonable that a measure of complexity may depend on both the prior information concerning the parameters in focus and the specific data that are observed.

2.2. Is there a true model?

We follow Box (1976) in believing that 'all models are wrong, but some are useful'. However, it can be useful to posit a 'true' distribution p_t(Y) of unobserved future data Y since, for any focused model, this defines a 'pseudotrue' parameter value θ^t (Sawa, 1978) which specifies a likelihood p(Y | θ^t) that minimizes the Kullback-Leibler distance E_Y[log{p_t(Y) / p(Y | θ^t)}] from p_t(Y). Having observed data y, under reasonably broad conditions (Berk, 1966; Bunke and Milhaud, 1998) p(θ | y) converges to θ^t as information on the components of θ increases. Thus Bayesian analysis implicitly relies on p(Y | θ^t) being a reasonable approximation to p_t(Y), and we shall indicate where we make use of this 'good model' assumption.

2.3. True and estimated residual information

The residual information in data y conditional on θ may be defined (up to a multiplicative constant) as −2 log{p(y | θ)} (Kullback and Leibler, 1951; Burnham and Anderson, 1998) and can be interpreted as a measure of 'surprise' (Good, 1956), logarithmic penalty (Bernardo, 1979) or uncertainty. Suppose that we have an estimator θ̃(y) of the pseudotrue parameter θ^t. Then the excess of the true over the estimated residual information will be denoted

d_Θ{y, θ^t, θ̃(y)} = −2 log{p(y | θ^t)} + 2 log[p{y | θ̃(y)}].

This can be thought of as the reduction in surprise or uncertainty due to estimation, or alternatively the degree of 'overfitting' due to θ̃(y) adapting to the data y. We now argue that d_Θ may form the basis for both classical and Bayesian measures of model dimensionality, with each approach differing in how it deals with the unknown true parameters in d_Θ.
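Although the Bayesian treatment of d_Θ is developed in the sections that follow, the quantities named in the Summary (the posterior mean deviance D̄, the deviance at the posterior mean, their difference p_D and the sum DIC = D̄ + p_D) are simple averages over posterior draws. The sketch below is not part of the paper: it uses a toy normal model with known variance, so that exact posterior draws stand in for MCMC output, and it drops constants in the deviance that do not involve the parameter. All names and numerical values are assumptions made for illustration.

# A minimal sketch, not from the paper, of computing D-bar, D(theta-bar), pD and
# DIC from posterior draws, using a toy normal model with known variance.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ N(mu, sigma^2) with sigma known
sigma = 2.0
y = rng.normal(loc=1.5, scale=sigma, size=20)
n = y.size

# Conjugate prior mu ~ N(mu0, tau0^2)
mu0, tau0 = 0.0, 3.0

# Exact posterior mu | y ~ N(post_mean, post_var); draws stand in for MCMC output
post_prec = n / sigma**2 + 1 / tau0**2
post_var = 1 / post_prec
post_mean = post_var * (y.sum() / sigma**2 + mu0 / tau0**2)
mu_draws = rng.normal(post_mean, np.sqrt(post_var), size=10_000)

def deviance(mu):
    # D(mu) = -2 log p(y | mu), dropping terms that do not involve mu
    return np.sum((y - mu) ** 2) / sigma**2

D_draws = np.array([deviance(m) for m in mu_draws])
D_bar = D_draws.mean()               # posterior mean deviance
D_hat = deviance(mu_draws.mean())    # deviance at the posterior mean
p_D = D_bar - D_hat                  # effective number of parameters
DIC = D_bar + p_D                    # equivalently D_hat + 2 * p_D

print(f"pD  = {p_D:.3f}")
print(f"DIC = {DIC:.3f}")

In this toy example p_D comes out a little below 1, the nominal parameter count, because the prior carries some information about the mean; this is the reduction in effective dimensionality discussed in Section 2.1.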