<<


Statistical methods

Many statistical methods are used in statistical analyses. A very brief list of four of the more popular methods is:

· General linear model: A widely used model on which various statistical methods are based (e.g. t-test, ANOVA, ANCOVA, MANOVA). Usable for assessing the effect of several predictors on one or more continuous dependent variables.
· Generalized linear model: An extension of the general linear model for discrete dependent variables.
· Structural equation modelling: Usable for assessing latent structures from measured manifest variables.
· Item response theory: Models for (mostly) assessing one latent variable from several binary measured variables (e.g. an exam).

General linear model

The general linear model (GLM) is a statistical linear model. It may be written as[1]

$\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{U},$

where Y is a matrix with series of multivariate measurements, X is a matrix that might be a design matrix, B is a matrix containing parameters that are usually to be estimated and U is a matrix containing errors or noise. The errors are usually assumed to follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about Y and U.

The general linear model incorporates a number of different statistical models: ANOVA, ANCOVA, MANOVA, MANCOVA, ordinary linear regression, the t-test and the F-test. The general linear model is a generalization of the multiple linear regression model to the case of more than one dependent variable. If Y, B, and U were column vectors, the matrix equation above would represent multiple linear regression.

Hypothesis tests with the general linear model can be made in two ways: multivariate or as several independent univariate tests. In multivariate tests the columns of Y are tested together, whereas in univariate tests the columns of Y are tested independently, i.e., as multiple univariate tests with the same design matrix.
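As a concrete illustration of the relation between the two views, the following sketch (Python with NumPy; all sizes and variable names are invented for the example, not taken from the text) fits Y = XB + U by least squares and checks that fitting each column of Y separately with the same design matrix gives the same coefficients.

```python
# A minimal NumPy sketch of the general linear model Y = XB + U.
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 100, 3, 4              # observations, predictors, dependent variables (assumed sizes)
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])   # design matrix with intercept
B_true = rng.normal(size=(p, m)) # one coefficient column per dependent variable
U = rng.normal(scale=0.5, size=(n, m))                           # error matrix
Y = X @ B_true + U

# Multivariate least-squares fit: one solve returns all m coefficient columns at once.
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Several independent univariate tests": fitting each column of Y separately
# with the same design matrix yields exactly the same coefficient estimates.
B_cols = np.column_stack([np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(m)])
assert np.allclose(B_hat, B_cols)
```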



Multiple Linear Regression

Multiple linear regression is a generalization of linear regression obtained by considering more than one independent variable, and a special case of the general linear model formed by restricting the number of dependent variables to one. The basic model for multiple linear regression is

$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i .$

In the formula above we consider n observations of one dependent variable and p independent variables. Thus, $Y_i$ is the i-th observation of the dependent variable, $X_{ij}$ is the i-th observation of the j-th independent variable, j = 1, 2, ..., p. The values $\beta_j$ represent parameters to be estimated, and $\varepsilon_i$ is the i-th independent, identically distributed normal error.
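A minimal numerical sketch of this model, using made-up data and the usual normal-equation estimator (the data and parameter values are assumptions of the example):

```python
# Multiple linear regression: Y_i = beta_0 + beta_1 X_i1 + ... + beta_p X_ip + eps_i.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept plus p predictors
beta_true = np.array([1.0, 2.0, -0.5])                       # assumed true coefficients
y = X @ beta_true + rng.normal(scale=0.3, size=n)            # i.i.d. normal errors

# Normal-equation estimate (X'X)^{-1} X'y, the textbook least-squares solution.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # close to beta_true for well-behaved data
```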

Applications

An application of the general linear model appears in the analysis of multiple brain scans in scientific experiments, where Y contains data from brain scanners, and X contains experimental design variables and confounds. It is usually tested in a univariate way (usually referred to as mass-univariate in this setting) and is often referred to as statistical parametric mapping.[2]
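The following hedged sketch illustrates what a mass-univariate fit looks like in practice: one design matrix, one least-squares fit per column of Y, and a per-column t-statistic for an assumed contrast. The shapes, the contrast vector and the variable names are invented for illustration and are not prescribed by any particular statistical parametric mapping software.

```python
# Mass-univariate analysis sketch: the same design matrix X is fit independently
# to every column of Y (e.g. one column per voxel). All sizes are invented.
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_voxels = 120, 5000
X = np.column_stack([np.ones(n_scans), rng.normal(size=(n_scans, 2))])  # regressors + confound
Y = rng.normal(size=(n_scans, n_voxels))                                 # brain-scanner data (toy)

B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)   # one univariate fit per voxel, done in one call
residuals = Y - X @ B_hat
dof = n_scans - X.shape[1]
sigma2 = (residuals ** 2).sum(axis=0) / dof     # per-voxel error variance

c = np.array([0.0, 1.0, 0.0])                   # contrast vector (assumed)
var_c = c @ np.linalg.inv(X.T @ X) @ c
t_map = (B_hat.T @ c) / np.sqrt(sigma2 * var_c) # t-statistic for the contrast at every voxel
```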

Bayesian multivariate linear regression

Consider a regression problem where the dependent variable to be predicted is not a single real-valued scalar but an m-length vector of correlated real numbers. As in the standard regression setup, there are n observations, where each observation i consists of k−1 explanatory variables, grouped into a vector $\mathbf{x}_i$ of length k (where a dummy variable with a value of 1 has been added to allow for an intercept coefficient). This can be viewed as a set of m related regression problems for each observation i:

$y_{i,1} = \mathbf{x}_i^{\mathsf{T}}\boldsymbol\beta_1 + \epsilon_{i,1}$
$\quad\vdots$
$y_{i,m} = \mathbf{x}_i^{\mathsf{T}}\boldsymbol\beta_m + \epsilon_{i,m},$

where the set of errors $\{\epsilon_{i,1},\ldots,\epsilon_{i,m}\}$ are all correlated. Equivalently, it can be viewed as a single regression problem where the outcome is a row vector $\mathbf{y}_i^{\mathsf{T}}$ and the regression coefficient vectors are stacked next to each other, as follows:

$\mathbf{y}_i^{\mathsf{T}} = \mathbf{x}_i^{\mathsf{T}}\mathbf{B} + \boldsymbol\epsilon_i^{\mathsf{T}} .$


The coefficient matrix B is a $k \times m$ matrix where the coefficient vectors for each regression problem are stacked horizontally:

$\mathbf{B} = [\,\boldsymbol\beta_1 \ \cdots \ \boldsymbol\beta_m\,] .$

The noise vector $\boldsymbol\epsilon_i$ for each observation i is jointly normal, so that the outcomes for a given observation are correlated:

$\boldsymbol\epsilon_i \sim N(\mathbf{0}, \boldsymbol\Sigma_\epsilon) .$

We can write the entire regression problem in matrix form as

$\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{E},$

where Y and E are $n \times m$ matrices. The design matrix X is an $n \times k$ matrix with the observations stacked vertically, as in the standard linear regression setup:

$\mathbf{X} = \begin{bmatrix} \mathbf{x}_1^{\mathsf{T}} \\ \vdots \\ \mathbf{x}_n^{\mathsf{T}} \end{bmatrix} .$

The classical, frequentist linear least-squares solution is to simply estimate the matrix of regression coefficients using the Moore–Penrose pseudoinverse:

$\hat{\mathbf{B}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{Y} .$
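A short sketch of this classical estimate (dimensions and data are invented), confirming that the pseudoinverse solution coincides with the normal-equation solution when X has full column rank:

```python
# Classical estimate B_hat = (X'X)^{-1} X'Y via the Moore-Penrose pseudoinverse.
import numpy as np

rng = np.random.default_rng(3)
n, k, m = 50, 4, 3
X = rng.normal(size=(n, k))
B = rng.normal(size=(k, m))
E = rng.multivariate_normal(np.zeros(m), 0.1 * np.eye(m), size=n)  # correlated row errors
Y = X @ B + E

B_hat = np.linalg.pinv(X) @ Y   # pseudoinverse solution
# Same as the normal equations when X'X is invertible.
assert np.allclose(B_hat, np.linalg.solve(X.T @ X, X.T @ Y))
```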

To obtain the Bayesian solution, we need to specify the conditional likelihood and then find the appropriate conjugate prior. As with the univariate case of linear Bayesian regression, we will find that we can specify a natural conditional conjugate prior (which is scale dependent).

Let us write our conditional likelihood as

$\rho(\mathbf{E}\mid\boldsymbol\Sigma_\epsilon) \propto |\boldsymbol\Sigma_\epsilon|^{-n/2} \exp\!\left(-\tfrac{1}{2}\operatorname{tr}\!\left(\mathbf{E}^{\mathsf{T}}\mathbf{E}\,\boldsymbol\Sigma_\epsilon^{-1}\right)\right).$

Writing the error E in terms of Y, X, and B yields

$\rho(\mathbf{Y}\mid\mathbf{X},\mathbf{B},\boldsymbol\Sigma_\epsilon) \propto |\boldsymbol\Sigma_\epsilon|^{-n/2} \exp\!\left(-\tfrac{1}{2}\operatorname{tr}\!\left((\mathbf{Y}-\mathbf{X}\mathbf{B})^{\mathsf{T}}(\mathbf{Y}-\mathbf{X}\mathbf{B})\,\boldsymbol\Sigma_\epsilon^{-1}\right)\right).$

We seek a natural conjugate prior: a joint density $\rho(\mathbf{B},\boldsymbol\Sigma_\epsilon)$ which is of the same functional form as the likelihood. Since the likelihood is quadratic in B, we re-write the likelihood so it is normal in $(\mathbf{B}-\hat{\mathbf{B}})$, the deviation from the classical sample estimate.


Using the same technique as with Bayesian linear regression, we decompose the exponential term using a matrix-form of the sum-of-squares technique. Here, however, we will also need to use the Matrix Differential Calculus (Kronecker product and vectorization transformations).

First, let us apply the sum-of-squares decomposition to obtain a new expression for the likelihood:

$\rho(\mathbf{Y}\mid\mathbf{X},\mathbf{B},\boldsymbol\Sigma_\epsilon) \propto |\boldsymbol\Sigma_\epsilon|^{-(n-k)/2} \exp\!\left(-\tfrac{1}{2}\operatorname{tr}\!\left(\mathbf{S}^{\mathsf{T}}\mathbf{S}\,\boldsymbol\Sigma_\epsilon^{-1}\right)\right) |\boldsymbol\Sigma_\epsilon|^{-k/2} \exp\!\left(-\tfrac{1}{2}\operatorname{tr}\!\left((\mathbf{B}-\hat{\mathbf{B}})^{\mathsf{T}}\mathbf{X}^{\mathsf{T}}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_\epsilon^{-1}\right)\right),$

where $\mathbf{S} = \mathbf{Y} - \mathbf{X}\hat{\mathbf{B}}$.

We would like to develop a conditional form for the priors:

$\rho(\mathbf{B},\boldsymbol\Sigma_\epsilon) = \rho(\boldsymbol\Sigma_\epsilon)\,\rho(\mathbf{B}\mid\boldsymbol\Sigma_\epsilon),$

where $\rho(\boldsymbol\Sigma_\epsilon)$ is an inverse-Wishart distribution and $\rho(\mathbf{B}\mid\boldsymbol\Sigma_\epsilon)$ is some form of normal distribution in the matrix B. This is accomplished using the vectorization transformation, which converts the likelihood from a function of the matrices $\mathbf{B}, \hat{\mathbf{B}}$ to a function of the vectors $\operatorname{vec}(\mathbf{B}), \operatorname{vec}(\hat{\mathbf{B}})$.

Write

$\operatorname{tr}\!\left((\mathbf{B}-\hat{\mathbf{B}})^{\mathsf{T}}\mathbf{X}^{\mathsf{T}}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_\epsilon^{-1}\right) = \operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}})^{\mathsf{T}}\operatorname{vec}\!\left(\mathbf{X}^{\mathsf{T}}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_\epsilon^{-1}\right).$

Let

$\operatorname{vec}\!\left(\mathbf{X}^{\mathsf{T}}\mathbf{X}(\mathbf{B}-\hat{\mathbf{B}})\,\boldsymbol\Sigma_\epsilon^{-1}\right) = \left(\boldsymbol\Sigma_\epsilon^{-1} \otimes \mathbf{X}^{\mathsf{T}}\mathbf{X}\right)\operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}}),$

where $\mathbf{A} \otimes \mathbf{B}$ denotes the Kronecker product of matrices A and B, a generalization of the outer product which multiplies an $m \times n$ matrix by a $p \times q$ matrix to generate an $mp \times nq$ matrix, consisting of every combination of products of elements from the two matrices.

Then

$\operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}})^{\mathsf{T}}\left(\boldsymbol\Sigma_\epsilon^{-1} \otimes \mathbf{X}^{\mathsf{T}}\mathbf{X}\right)\operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}}),$

which will lead to a likelihood which is normal in $\operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}})$.
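The vectorization step can be checked numerically. In the sketch below, random matrices stand in for X'X, (B − B̂) and Σε; the assertion verifies the identity tr(M'AMS⁻¹) = vec(M)'(S⁻¹ ⊗ A)vec(M) that makes the likelihood normal in vec(B − B̂). All names and sizes are invented for the check.

```python
# Numerical check of the trace/vec/Kronecker identity used above.
import numpy as np

rng = np.random.default_rng(4)
k, m = 4, 3
A = rng.normal(size=(k, k)); A = A @ A.T                    # stand-in for X'X (symmetric p.s.d.)
M = rng.normal(size=(k, m))                                  # stand-in for B - B_hat
S = rng.normal(size=(m, m)); S = S @ S.T + m * np.eye(m)     # stand-in for Sigma_eps (p.d.)

lhs = np.trace(M.T @ A @ M @ np.linalg.inv(S))
vecM = M.reshape(-1, order="F")                              # column-stacking vectorization
rhs = vecM @ np.kron(np.linalg.inv(S), A) @ vecM
assert np.isclose(lhs, rhs)
```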

With the likelihood in a more tractable form, we can now find a natural (conditional) conjugate prior.




Notes

1. ^ K. V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. Academic Press. ISBN 0-12-471252-5.
2. ^ K.J. Friston, A.P. Holmes, K.J. Worsley, J.-B. Poline, C.D. Frith and R.S.J. Frackowiak (1995). "Statistical Parametric Maps in functional imaging: A general linear approach". Human Brain Mapping 2: 189–210. doi:10.1002/hbm.460020402.


Bayesian linear regression

In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the distributions of the model's parameters.

Model setup

Consider a standard linear regression problem, in which for $i = 1, \ldots, n$ we specify the conditional distribution of $y_i$ given a $k \times 1$ predictor vector $\mathbf{x}_i$:

$y_i = \mathbf{x}_i^{\mathsf{T}}\boldsymbol\beta + \epsilon_i,$

where $\boldsymbol\beta$ is a $k \times 1$ vector, and the $\epsilon_i$ are independent and identically distributed normal random variables:

$\epsilon_i \sim N(0, \sigma^2).$

This corresponds to the following likelihood function:

$\rho(\mathbf{y}\mid\mathbf{X},\boldsymbol\beta,\sigma^2) \propto (\sigma^2)^{-n/2} \exp\!\left(-\frac{1}{2\sigma^2}(\mathbf{y}-\mathbf{X}\boldsymbol\beta)^{\mathsf{T}}(\mathbf{y}-\mathbf{X}\boldsymbol\beta)\right).$

The ordinary least squares solution is to estimate the coefficient vector using the Moore–Penrose pseudoinverse:

$\hat{\boldsymbol\beta} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y},$


where X is the $n \times k$ design matrix, each row of which is a predictor vector $\mathbf{x}_i^{\mathsf{T}}$, and $\mathbf{y}$ is the column n-vector $[y_1 \ \cdots \ y_n]^{\mathsf{T}}$.

This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about $\boldsymbol\beta$. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters $\boldsymbol\beta$ and $\sigma$. The prior can take different functional forms depending on the domain and the information that is available a priori.

With conjugate priors

Conjugate prior distribution

For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically.

A prior is conjugate to this likelihood function if it has the same functional form with respect to $\boldsymbol\beta$ and $\sigma$. Since the log-likelihood is quadratic in $\boldsymbol\beta$, the log-likelihood is re-written such that the likelihood becomes normal in $(\boldsymbol\beta - \hat{\boldsymbol\beta})$. Write

$(\mathbf{y}-\mathbf{X}\boldsymbol\beta)^{\mathsf{T}}(\mathbf{y}-\mathbf{X}\boldsymbol\beta) = (\mathbf{y}-\mathbf{X}\hat{\boldsymbol\beta})^{\mathsf{T}}(\mathbf{y}-\mathbf{X}\hat{\boldsymbol\beta}) + (\boldsymbol\beta-\hat{\boldsymbol\beta})^{\mathsf{T}}\mathbf{X}^{\mathsf{T}}\mathbf{X}(\boldsymbol\beta-\hat{\boldsymbol\beta}).$

The likelihood is now re-written as

$\rho(\mathbf{y}\mid\mathbf{X},\boldsymbol\beta,\sigma^2) \propto (\sigma^2)^{-v/2} \exp\!\left(-\frac{v s^2}{2\sigma^2}\right)(\sigma^2)^{-(n-v)/2} \exp\!\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta-\hat{\boldsymbol\beta})^{\mathsf{T}}\mathbf{X}^{\mathsf{T}}\mathbf{X}(\boldsymbol\beta-\hat{\boldsymbol\beta})\right),$

where

$v s^2 = (\mathbf{y}-\mathbf{X}\hat{\boldsymbol\beta})^{\mathsf{T}}(\mathbf{y}-\mathbf{X}\hat{\boldsymbol\beta}) \quad\text{and}\quad v = n - k,$

where $k$ is the number of regression coefficients.

This suggests a form for the prior:

$\rho(\sigma^2, \boldsymbol\beta) = \rho(\sigma^2)\,\rho(\boldsymbol\beta\mid\sigma^2),$

where $\rho(\sigma^2)$ is an inverse-gamma distribution

$\rho(\sigma^2) \propto (\sigma^2)^{-(v_0/2+1)} \exp\!\left(-\frac{v_0 s_0^2}{2\sigma^2}\right)$


with $v_0$ and $s_0^2$ as the prior values of $v$ and $s^2$, respectively, and $\rho(\boldsymbol\beta\mid\sigma^2)$ is a normal distribution

$\rho(\boldsymbol\beta\mid\sigma^2) \propto (\sigma^2)^{-k/2} \exp\!\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta-\boldsymbol\mu_0)^{\mathsf{T}}\boldsymbol\Lambda_0(\boldsymbol\beta-\boldsymbol\mu_0)\right),$

with $\boldsymbol\mu_0$ and $\sigma^2\boldsymbol\Lambda_0^{-1}$ as the prior mean and covariance matrix of $\boldsymbol\beta$, respectively.

Posterior distribution

With the prior now specified, the posterior distribution can be expressed as

$\rho(\boldsymbol\beta,\sigma^2\mid\mathbf{y},\mathbf{X}) \propto \rho(\mathbf{y}\mid\mathbf{X},\boldsymbol\beta,\sigma^2)\,\rho(\boldsymbol\beta\mid\sigma^2)\,\rho(\sigma^2).$

With some re-arrangement, the posterior can be re-written so that the posterior mean $\boldsymbol\mu_n$ of the parameter vector $\boldsymbol\beta$ can be expressed in terms of the least squares estimator $\hat{\boldsymbol\beta}$ and the prior mean $\boldsymbol\mu_0$, with the strength of the prior indicated by the prior precision matrix $\boldsymbol\Lambda_0$:

$\boldsymbol\mu_n = (\mathbf{X}^{\mathsf{T}}\mathbf{X} + \boldsymbol\Lambda_0)^{-1}\left(\boldsymbol\Lambda_0\boldsymbol\mu_0 + \mathbf{X}^{\mathsf{T}}\mathbf{X}\hat{\boldsymbol\beta}\right).$

To justify that $\boldsymbol\mu_n$ is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a quadratic form in $\boldsymbol\beta - \boldsymbol\mu_n$.[1]

Now the posterior can be expressed as a normal distribution times an inverse-gamma distribution:

$\rho(\boldsymbol\beta,\sigma^2\mid\mathbf{y},\mathbf{X}) \propto (\sigma^2)^{-k/2} \exp\!\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta-\boldsymbol\mu_n)^{\mathsf{T}}\boldsymbol\Lambda_n(\boldsymbol\beta-\boldsymbol\mu_n)\right)(\sigma^2)^{-(a_n+1)} \exp\!\left(-\frac{b_n}{\sigma^2}\right).$

Therefore the posterior distribution can be parametrized as follows: the conditional posterior of $\boldsymbol\beta$ is $N(\boldsymbol\mu_n, \sigma^2\boldsymbol\Lambda_n^{-1})$ and the marginal posterior of $\sigma^2$ is $\text{Inv-Gamma}(a_n, b_n)$, with

$\boldsymbol\Lambda_n = \mathbf{X}^{\mathsf{T}}\mathbf{X} + \boldsymbol\Lambda_0, \qquad \boldsymbol\mu_n = \boldsymbol\Lambda_n^{-1}\left(\boldsymbol\Lambda_0\boldsymbol\mu_0 + \mathbf{X}^{\mathsf{T}}\mathbf{X}\hat{\boldsymbol\beta}\right),$
$a_n = a_0 + \tfrac{n}{2}, \qquad b_n = b_0 + \tfrac{1}{2}\left(\mathbf{y}^{\mathsf{T}}\mathbf{y} + \boldsymbol\mu_0^{\mathsf{T}}\boldsymbol\Lambda_0\boldsymbol\mu_0 - \boldsymbol\mu_n^{\mathsf{T}}\boldsymbol\Lambda_n\boldsymbol\mu_n\right),$

where $a_0 = \tfrac{v_0}{2}$ and $b_0 = \tfrac{1}{2}v_0 s_0^2$ correspond to the prior hyperparameters above.

7

Generated by Foxit PDF Creator © Foxit Software http://www.foxitsoftware.com For evaluation only.

This can be interpreted as Bayesian learning where the parameters are updated according to the following equations.
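The update equations referred to here are sketched below under the normal-inverse-gamma parametrization used above (β | σ² ~ N(μ₀, σ²Λ₀⁻¹), σ² ~ Inv-Gamma(a₀, b₀)); the function name and argument conventions are assumptions of this example.

```python
# Hedged sketch of the conjugate update ("Bayesian learning") for linear regression.
import numpy as np

def posterior_update(X, y, mu0, Lambda0, a0, b0):
    """Return the posterior parameters (mu_n, Lambda_n, a_n, b_n) for one batch of data."""
    n, k = X.shape
    XtX = X.T @ X
    beta_hat = np.linalg.solve(XtX, X.T @ y)        # least-squares estimate
    Lambda_n = XtX + Lambda0                        # posterior precision (up to sigma^2)
    mu_n = np.linalg.solve(Lambda_n, Lambda0 @ mu0 + XtX @ beta_hat)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)
    return mu_n, Lambda_n, a_n, b_n
```

Because the posterior has the same functional form as the prior, the output of one update can serve as the prior for the next batch of observations, which is the "Bayesian learning" interpretation mentioned above.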

In this article is called .

Model evidence

The model evidence $p(\mathbf{y}\mid m)$ is the probability of the data given the model $m$. It is also known as the marginal likelihood, and as the prior predictive density. Here, the model is defined by the likelihood function $p(\mathbf{y}\mid\mathbf{X},\boldsymbol\beta,\sigma^2, m)$ and the prior distribution on the parameters, i.e. $p(\boldsymbol\beta,\sigma^2\mid m)$. The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayesian model comparison. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating over all possible values of $\boldsymbol\beta$ and $\sigma^2$.

This integral can be computed analytically and the solution is given in the following equation[2]:

$p(\mathbf{y}\mid m) = \frac{1}{(2\pi)^{n/2}} \sqrt{\frac{\det\boldsymbol\Lambda_0}{\det\boldsymbol\Lambda_n}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)} .$

Here $\Gamma$ denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of $\boldsymbol\beta$ and $\sigma^2$:

$p(\mathbf{y}\mid m) = \frac{p(\boldsymbol\beta,\sigma^2\mid m)\,p(\mathbf{y}\mid\mathbf{X},\boldsymbol\beta,\sigma^2, m)}{p(\boldsymbol\beta,\sigma^2\mid\mathbf{y},\mathbf{X}, m)} .$

Note that this equation is nothing but a re-arrangement of Bayes' theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
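This rearrangement can be checked numerically. The sketch below evaluates log p(y | m) as log prior + log likelihood − log posterior at an arbitrary (β, σ²), under the same assumed normal-inverse-gamma parametrization as the earlier sketch; all names are illustrative.

```python
# "Evaluate Bayes' theorem at one point" trick for the marginal likelihood:
#   log p(y) = log p(beta, s2) + log p(y | beta, s2) - log p(beta, s2 | y)
# for any beta and s2, because the left-hand side does not depend on them.
import numpy as np
from scipy import stats

def log_evidence(X, y, mu0, Lambda0, a0, b0, beta, s2):
    n, k = X.shape
    # Conjugate posterior parameters (same updates as in the earlier sketch).
    Lambda_n = X.T @ X + Lambda0
    mu_n = np.linalg.solve(Lambda_n, Lambda0 @ mu0 + X.T @ y)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)

    def log_nig(beta, s2, mu, Lam, a, b):
        # Normal-inverse-gamma log density: beta | s2 ~ N(mu, s2*inv(Lam)), s2 ~ Inv-Gamma(a, b).
        return (stats.invgamma.logpdf(s2, a, scale=b)
                + stats.multivariate_normal.logpdf(beta, mu, s2 * np.linalg.inv(Lam)))

    log_lik = stats.multivariate_normal.logpdf(y, X @ beta, s2 * np.eye(n))
    return log_nig(beta, s2, mu0, Lambda0, a0, b0) + log_lik - log_nig(beta, s2, mu_n, Lambda_n, a_n, b_n)
```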

8

Generated by Foxit PDF Creator © Foxit Software http://www.foxitsoftware.com For evaluation only.

Other cases

In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling[3] or variational Bayes.

The special case $\boldsymbol\mu_0 = 0$, $\boldsymbol\Lambda_0 = c\mathbf{I}$ is called ridge regression.
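A small sketch of this special case (the regularization value, written lam for c, is invented): a zero prior mean and Λ₀ = cI make the posterior mean coincide with the familiar ridge estimate.

```python
# Ridge regression as the posterior mean with mu0 = 0 and Lambda0 = lam * I.
import numpy as np

rng = np.random.default_rng(5)
n, k, lam = 80, 5, 2.0
X = rng.normal(size=(n, k))
y = X @ rng.normal(size=k) + rng.normal(scale=0.5, size=n)

ridge = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)   # ridge estimate
mu_n = np.linalg.solve(X.T @ X + lam * np.eye(k),             # posterior mean with mu0 = 0
                       lam * np.eye(k) @ np.zeros(k) + X.T @ y)
assert np.allclose(ridge, mu_n)
```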

A similar analysis can be performed for the general case of multivariate regression, and part of this provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.

Notes

1. ^ The intermediate steps are in Fahrmeir et al. (2009) on page 188.
2. ^ The intermediate steps of this computation can be found in O'Hagan (1994) on page 257.
3. ^ Carlin and Louis (2008) and Gelman et al. (2003) explain how to use sampling methods for Bayesian linear regression.

References

· Box, G.E.P. and Tiao, G.C. (1973). Bayesian Inference in Statistical Analysis. Wiley. ISBN 0-471-57428-7.
· Carlin, Bradley P. and Louis, Thomas A. (2008). Bayesian Methods for Data Analysis, Third Edition. Boca Raton, FL: Chapman and Hall/CRC. ISBN 1-58488-697-8.
· O'Hagan, Anthony (1994). Bayesian Inference. Kendall's Advanced Theory of Statistics. 2B (First ed.). Halsted. ISBN 0-340-52922-9.
· Gelman, Andrew, Carlin, John B., Stern, Hal S. and Rubin, Donald B. (2003). Bayesian Data Analysis, Second Edition. Boca Raton, FL: Chapman and Hall/CRC. ISBN 1-58488-388-X.
· Gero Walter and Thomas Augustin (2009). Bayesian Linear Regression—Different Conjugate Models and Their (In)Sensitivity to Prior-Data Conflict. Technical Report Number 069, Department of Statistics, University of Munich.
· Michael Goldstein and David Wooff (2007). Bayes Linear Statistics, Theory & Methods. Wiley. ISBN 978-0-470-01562-9.
· Fahrmeir, L., Kneib, T., and Lang, S. (2009). Regression. Modelle, Methoden und Anwendungen, Second Edition. Springer, Heidelberg. doi:10.1007/978-3-642-01837-4. ISBN 978-3-642-01836-7.
· Peter E. Rossi, Greg M. Allenby, and Robert McCulloch (2006). Bayesian Statistics and Marketing. John Wiley & Sons, Ltd.
· Thomas P. Minka (2001). Bayesian Linear Regression. Microsoft Research web page.
