General Linear Model: A Widely Used Model on Which Various Statistical Methods Are Based (e.g. t test, ANOVA, ANCOVA, MANOVA)

Statistical methods

Many statistical methods are used for statistical analyses. A very brief list of four of the more popular ones is:

· General linear model: A widely used model on which various statistical methods are based (e.g. t test, ANOVA, ANCOVA, MANOVA). Usable for assessing the effect of several predictors on one or more continuous dependent variables.
· Generalized linear model: An extension of the general linear model for discrete dependent variables.
· Structural equation modelling: Usable for assessing latent structures from measured manifest variables.
· Item response theory: Models for (mostly) assessing one latent variable from several binary measured variables (e.g. an exam).

General linear model

The general linear model (GLM) is a statistical linear model. It may be written as[1]

    Y = XB + U,

where Y is a matrix with series of multivariate measurements, X is a matrix that might be a design matrix, B is a matrix containing parameters that are usually to be estimated, and U is a matrix containing errors or noise. The errors are usually assumed to follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax the assumptions about Y and U.

The general linear model incorporates a number of different statistical models: ANOVA, ANCOVA, MANOVA, MANCOVA, ordinary linear regression, the t-test and the F-test. The general linear model is a generalization of the multiple linear regression model to the case of more than one dependent variable. If Y, B, and U were column vectors, the matrix equation above would represent multiple linear regression.

Hypothesis tests with the general linear model can be made in two ways: multivariate, or as several independent univariate tests. In multivariate tests the columns of Y are tested together, whereas in univariate tests the columns of Y are tested independently, i.e., as multiple univariate tests with the same design matrix.

Multiple Linear Regression

Multiple linear regression is a generalization of linear regression obtained by considering more than one independent variable, and a special case of the general linear model obtained by restricting the number of dependent variables to one. The basic model for linear regression is

    Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \dots + \beta_p X_{ip} + \epsilon_i.

In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Y_i is the i-th observation of the dependent variable, X_{ij} is the i-th observation of the j-th independent variable, j = 1, 2, ..., p. The values \beta_j represent parameters to be estimated, and \epsilon_i is the i-th independent, identically distributed normal error.

Applications

An application of the general linear model appears in the analysis of multiple brain scans in scientific experiments, where Y contains data from brain scanners and X contains experimental design variables and confounds. It is usually tested in a univariate way (usually referred to as mass-univariate in this setting) and is often referred to as statistical parametric mapping.[2]
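To make the matrix form Y = XB + U concrete, here is a minimal numerical sketch, assuming Python with NumPy; the data are simulated purely for illustration and the variable names are hypothetical. It fits the coefficient matrix B by least squares and checks the remark above that handling the columns of Y univariately, with the same design matrix, gives the same estimates as the joint multivariate fit.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated example (hypothetical sizes): n observations, p predictors, m dependent variables.
    n, p, m = 100, 3, 4
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # design matrix with intercept column
    B_true = rng.normal(size=(p + 1, m))                        # true coefficient matrix
    Y = X @ B_true + 0.5 * rng.normal(size=(n, m))              # Y = XB + U with normal noise

    # Joint least-squares fit of the whole coefficient matrix B.
    B_hat = np.linalg.lstsq(X, Y, rcond=None)[0]

    # Column-by-column (univariate) fits with the same design matrix.
    B_cols = np.column_stack(
        [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(m)]
    )

    print(np.allclose(B_hat, B_cols))  # True: the multivariate fit equals m univariate fits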
Bayesian multivariate linear regression

Consider a regression problem where the dependent variable to be predicted is not a single real-valued scalar but an m-length vector of correlated real numbers. As in the standard regression setup, there are n observations, where each observation i consists of k − 1 explanatory variables, grouped into a vector x_i of length k (where a dummy variable with a value of 1 has been added to allow for an intercept coefficient). This can be viewed as a set of m related regression problems for each observation i:

    y_{i,1} = x_i^T \beta_1 + \epsilon_{i,1}
    \vdots
    y_{i,m} = x_i^T \beta_m + \epsilon_{i,m},

where the set of errors \{\epsilon_{i,1}, \dots, \epsilon_{i,m}\} are all correlated. Equivalently, it can be viewed as a single regression problem where the outcome is a row vector y_i^T and the regression coefficient vectors are stacked next to each other, as follows:

    y_i^T = x_i^T B + \epsilon_i^T.

The coefficient matrix B is a k \times m matrix where the coefficient vectors \beta_1, \dots, \beta_m for each regression problem are stacked horizontally:

    B = [\beta_1 \; \cdots \; \beta_m].

The noise vector \epsilon_i for each observation i is jointly normal, so that the outcomes for a given observation are correlated:

    \epsilon_i \sim N(0, \Sigma_\epsilon).

We can write the entire regression problem in matrix form as

    Y = XB + E,

where Y and E are n \times m matrices. The design matrix X is an n \times k matrix with the observations stacked vertically, as in the standard linear regression setup:

    X = \begin{pmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_n^T \end{pmatrix}.

The classical, frequentist linear least squares solution is to simply estimate the matrix of regression coefficients using the Moore–Penrose pseudoinverse:

    \hat{B} = (X^T X)^{-1} X^T Y.

To obtain the Bayesian solution, we need to specify the conditional likelihood and then find the appropriate conjugate prior. As with the univariate case of linear Bayesian regression, we will find that we can specify a natural conditional conjugate prior (which is scale dependent).

Let us write our conditional likelihood as

    \rho(E \mid \Sigma_\epsilon) \propto |\Sigma_\epsilon|^{-n/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}(E^T E \, \Sigma_\epsilon^{-1})\right);

writing the error E in terms of Y, X, and B yields

    \rho(Y \mid X, B, \Sigma_\epsilon) \propto |\Sigma_\epsilon|^{-n/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}\big((Y - XB)^T (Y - XB) \, \Sigma_\epsilon^{-1}\big)\right).

We seek a natural conjugate prior, that is, a joint density \rho(B, \Sigma_\epsilon) which is of the same functional form as the likelihood. Since the likelihood is quadratic in B, we re-write the likelihood so it is normal in (B − \hat{B}), the deviation from the classical sample estimate.

Using the same technique as with Bayesian linear regression, we decompose the exponential term using a matrix form of the sum-of-squares technique. Here, however, we will also need to use matrix differential calculus (the Kronecker product and vectorization transformations).

First, let us apply the sum-of-squares technique to obtain a new expression for the likelihood:

    \rho(Y \mid X, B, \Sigma_\epsilon) \propto |\Sigma_\epsilon|^{-n/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}\big((Y - X\hat{B})^T (Y - X\hat{B}) \, \Sigma_\epsilon^{-1}\big)\right) \exp\!\left(-\tfrac{1}{2} \operatorname{tr}\big((B - \hat{B})^T X^T X (B - \hat{B}) \, \Sigma_\epsilon^{-1}\big)\right).

We would like to develop a conditional form for the priors:

    \rho(B, \Sigma_\epsilon) = \rho(\Sigma_\epsilon) \, \rho(B \mid \Sigma_\epsilon),

where \rho(\Sigma_\epsilon) is an inverse-Wishart distribution and \rho(B \mid \Sigma_\epsilon) is some form of normal distribution in the matrix B. This is accomplished using the vectorization transformation, which converts the likelihood from a function of the matrices B, \hat{B} to a function of the vectors \beta = \operatorname{vec}(B), \hat{\beta} = \operatorname{vec}(\hat{B}). Write

    \operatorname{tr}\big((B - \hat{B})^T X^T X (B - \hat{B}) \, \Sigma_\epsilon^{-1}\big) = \operatorname{vec}(B - \hat{B})^T \operatorname{vec}\big(X^T X (B - \hat{B}) \, \Sigma_\epsilon^{-1}\big).

Let

    \operatorname{vec}\big(X^T X (B - \hat{B}) \, \Sigma_\epsilon^{-1}\big) = (\Sigma_\epsilon^{-1} \otimes X^T X) \operatorname{vec}(B - \hat{B}),

where A \otimes B denotes the Kronecker product of matrices A and B, a generalization of the outer product which multiplies a p \times q matrix by an r \times s matrix to generate a pr \times qs matrix, consisting of every combination of products of elements from the two matrices. Then

    \operatorname{tr}\big((B - \hat{B})^T X^T X (B - \hat{B}) \, \Sigma_\epsilon^{-1}\big) = (\beta - \hat{\beta})^T (\Sigma_\epsilon^{-1} \otimes X^T X) (\beta - \hat{\beta}),

which will lead to a likelihood which is normal in (\beta − \hat{\beta}).

With the likelihood in a more tractable form, we can now find a natural (conditional) conjugate prior.
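As a quick sanity check of two steps above, the following sketch (again assuming NumPy; the matrices are simulated, and Sigma_inv is an arbitrary positive-definite stand-in for \Sigma_\epsilon^{-1}, not an estimate) computes the classical estimate \hat{B} = (X^T X)^{-1} X^T Y and verifies numerically that the trace form of the quadratic term equals the vectorized Kronecker form.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated example (hypothetical sizes): n observations, k predictors, m outcomes.
    n, k, m = 50, 3, 2
    X = rng.normal(size=(n, k))
    Y = X @ rng.normal(size=(k, m)) + 0.3 * rng.normal(size=(n, m))

    # Classical estimate B_hat = (X'X)^{-1} X'Y.
    B_hat = np.linalg.solve(X.T @ X, X.T @ Y)

    # An arbitrary symmetric positive-definite matrix standing in for Sigma_eps^{-1}.
    A = rng.normal(size=(m, m))
    Sigma_inv = np.linalg.inv(A @ A.T + m * np.eye(m))

    B = rng.normal(size=(k, m))        # any candidate coefficient matrix
    D = B - B_hat                      # deviation from the sample estimate
    vec_D = D.flatten(order="F")       # column-stacking vec(.) operator

    lhs = np.trace(D.T @ X.T @ X @ D @ Sigma_inv)
    rhs = vec_D @ np.kron(Sigma_inv, X.T @ X) @ vec_D

    print(np.allclose(lhs, rhs))       # True: trace form equals vectorized Kronecker form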
Notes

1. ^ K. V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. Academic Press. ISBN 0-12-471252-5.
2. ^ K. J. Friston, A. P. Holmes, K. J. Worsley, J.-B. Poline, C. D. Frith and R. S. J. Frackowiak (1995). "Statistical Parametric Maps in functional imaging: A general linear approach". Human Brain Mapping 2: 189–210. doi:10.1002/hbm.460020402.

Bayesian linear regression

In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters.

Model setup

Consider a standard linear regression problem, in which for i = 1, \dots, n we specify the conditional distribution of y_i given a k \times 1 predictor vector x_i:

    y_i = x_i^T \beta + \epsilon_i,

where \beta is a k \times 1 vector, and the \epsilon_i are independent and identically distributed normal random variables:

    \epsilon_i \sim N(0, \sigma^2).

This corresponds to the following likelihood function:

    \rho(y \mid X, \beta, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\!\left(-\tfrac{1}{2\sigma^2} (y - X\beta)^T (y - X\beta)\right).

The ordinary least squares solution is to estimate the coefficient vector using the Moore–Penrose pseudoinverse:

    \hat{\beta} = (X^T X)^{-1} X^T y,

where X is the n \times k design matrix, each row of which is a predictor vector x_i^T, and y is the column n-vector (y_1, \dots, y_n)^T.

This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about \beta. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters \beta and \sigma^2. The prior can take different functional forms depending on the domain and the information that is available a priori.

With conjugate priors

Conjugate prior distribution

For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically.

A prior \rho(\beta, \sigma^2) is conjugate to this likelihood function if it has the same functional form with respect to \beta and \sigma^2. Since the log-likelihood is quadratic in \beta, it is re-written such that the likelihood becomes normal in (\beta − \hat{\beta}).
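The rewriting described in the last sentence rests on a sum-of-squares decomposition in which the cross term vanishes at the least-squares estimate: (y − X\beta)^T (y − X\beta) = (y − X\hat{\beta})^T (y − X\hat{\beta}) + (\beta − \hat{\beta})^T X^T X (\beta − \hat{\beta}). A minimal sketch (NumPy assumed; simulated data, hypothetical sizes) checks this numerically.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated example (hypothetical sizes): n observations, k predictors.
    n, k = 40, 3
    X = rng.normal(size=(n, k))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.4 * rng.normal(size=n)

    # Ordinary least squares via the Moore-Penrose pseudoinverse.
    beta_hat = np.linalg.pinv(X) @ y

    # Any other candidate coefficient vector.
    beta = rng.normal(size=k)

    # The cross term vanishes because X'(y - X beta_hat) = 0, so the exponent of the
    # likelihood is quadratic in (beta - beta_hat), i.e. normal around beta_hat.
    lhs = (y - X @ beta) @ (y - X @ beta)
    rhs = (y - X @ beta_hat) @ (y - X @ beta_hat) + (beta - beta_hat) @ X.T @ X @ (beta - beta_hat)

    print(np.allclose(lhs, rhs))  # True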
