Inferences About Multivariate Means

Nathaniel E. Helwig
Assistant Professor of Psychology and Statistics
University of Minnesota (Twin Cities)
Updated 16-Jan-2017

Copyright © 2017 by Nathaniel E. Helwig

Outline of Notes

1) Single Mean Vector
   - Introduction
   - Hotelling's T^2
   - Likelihood Ratio Tests
   - Confidence Regions
   - Simultaneous T^2 CIs
   - Bonferroni's Method
   - Large Sample Inference
   - Prediction Regions

2) Multiple Mean Vectors
   - Introduction
   - Paired Comparisons
   - Repeated Measures
   - Two Populations
   - One-Way MANOVA
   - Simultaneous CIs
   - Equal Cov Matrix Test
   - Two-Way MANOVA

Inferences about a Single Mean Vector

Introduction

Univariate Reminder: Student's One-Sample t Test

Let $x_1, \ldots, x_n$ denote a sample of iid observations from a normal distribution with mean $\mu$ and variance $\sigma^2$, i.e., $x_i \overset{\text{iid}}{\sim} N(\mu, \sigma^2)$.

Suppose $\sigma^2$ is unknown, and we want to test the hypotheses
\[ H_0: \mu = \mu_0 \quad \text{versus} \quad H_1: \mu \neq \mu_0 \]
where $\mu_0$ is some known value specified by the null hypothesis.

We use Student's t test, where the t test statistic is given by
\[ t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}} \]
where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $s^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})^2$.

Univariate Reminder: Student's t Test (continued)

Under $H_0$, the t test statistic follows a Student's t distribution with degrees of freedom $\nu = n - 1$.

We reject $H_0$ if $|t|$ is large relative to what we would expect, which is the same as rejecting $H_0$ if $t^2$ is larger than we expect.

We can rewrite the (squared) t test statistic as
\[ t^2 = n(\bar{x} - \mu_0)(s^2)^{-1}(\bar{x} - \mu_0) \]
which emphasizes the quadratic form of the t test statistic.
- $t^2$ gets larger as $(\bar{x} - \mu_0)^2$ gets larger (for fixed $s^2$ and $n$)
- $t^2$ gets larger as $s^2$ gets smaller (for fixed $\bar{x} - \mu_0$ and $n$)
- $t^2$ gets larger as $n$ gets larger (for fixed $\bar{x} - \mu_0$ and $s^2$)

Multivariate Extensions of Student's t Test

Now suppose that $x_i \overset{\text{iid}}{\sim} N(\mu, \Sigma)$ where
- $x_i = (x_{i1}, \ldots, x_{ip})'$ is the i-th observation's $p \times 1$ vector
- $\mu = (\mu_1, \ldots, \mu_p)'$ is the $p \times 1$ mean vector
- $\Sigma = \{\sigma_{jk}\}$ is the $p \times p$ covariance matrix

Suppose $\Sigma$ is unknown, and we want to test the hypotheses
\[ H_0: \mu = \mu_0 \quad \text{versus} \quad H_1: \mu \neq \mu_0 \]
where $\mu_0$ is some known vector specified by the null hypothesis.
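Before defining the multivariate statistic, here is a minimal R sketch (added for illustration, not part of the original notes) confirming the quadratic-form version of the univariate $t^2$ statistic against R's built-in t.test; the simulated data and the choice $\mu_0 = 4.5$ are assumptions made only for this example.

    # Illustrative check: the squared t statistic equals the quadratic form
    # n * (xbar - mu0) * (s^2)^{-1} * (xbar - mu0)
    set.seed(1)
    x <- rnorm(25, mean = 5, sd = 2)   # simulated sample (assumed example data)
    mu0 <- 4.5
    n <- length(x)
    xbar <- mean(x)
    s2 <- var(x)
    t2.quad <- n * (xbar - mu0) * (1 / s2) * (xbar - mu0)
    t2.classic <- as.numeric(t.test(x, mu = mu0)$statistic)^2
    all.equal(t2.quad, t2.classic)   # TRUE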
Hotelling's T^2

Hotelling's T^2 Test Statistic

Hotelling's $T^2$ is the multivariate extension of the (squared) t test statistic:
\[ T^2 = n(\bar{x} - \mu_0)' S^{-1} (\bar{x} - \mu_0) \]
where
- $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the sample mean vector
- $S = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})'$ is the sample covariance matrix
- $\frac{1}{n} S$ is the sample covariance matrix of $\bar{x}$

Letting $X = \{x_{ij}\}$ denote the $n \times p$ data matrix, we could write
- $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = n^{-1} X' 1_n$
- $S = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})' = \frac{1}{n-1} X_c' X_c$
- $X_c = C X$, with $C = I_n - \frac{1}{n} 1_n 1_n'$ denoting a centering matrix

Inferences using Hotelling's T^2

Under $H_0$, Hotelling's $T^2$ follows a scaled F distribution:
\[ T^2 \sim \frac{(n-1)p}{(n-p)} F_{p,\,n-p} \]
where $F_{p,n-p}$ denotes an F distribution with $p$ numerator degrees of freedom and $n-p$ denominator degrees of freedom.
- This implies that $\alpha = P\left(T^2 > [p(n-1)/(n-p)]\, F_{p,n-p}(\alpha)\right)$
- $F_{p,n-p}(\alpha)$ denotes the upper $(100\alpha)$th percentile of the $F_{p,n-p}$ distribution

We reject the null hypothesis if $T^2$ is too large, i.e., if
\[ T^2 > \frac{(n-1)p}{(n-p)} F_{p,\,n-p}(\alpha) \]
where $\alpha$ is the significance level of the test.

Comparing Student's t^2 and Hotelling's T^2

Student's $t^2$ and Hotelling's $T^2$ have a similar form:
\[ T^2_{p,\,n-1} = \sqrt{n}(\bar{x} - \mu_0)' \, [S]^{-1} \, \sqrt{n}(\bar{x} - \mu_0) = (\text{MVN vector})' \left(\frac{\text{Wishart matrix}}{df}\right)^{-1} (\text{MVN vector}) \]
\[ t^2_{n-1} = \sqrt{n}(\bar{x} - \mu_0) \, [s^2]^{-1} \, \sqrt{n}(\bar{x} - \mu_0) = (\text{UVN variable}) \left(\frac{\text{scaled } \chi^2 \text{ variable}}{df}\right)^{-1} (\text{UVN variable}) \]
where MVN (UVN) = multivariate (univariate) normal and $df = n - 1$.

Define Hotelling's T^2 Test Function

    T.test <- function(X, mu = 0){
      X <- as.matrix(X)
      n <- nrow(X)
      p <- ncol(X)
      df2 <- n - p
      if(df2 < 1L) stop("Need nrow(X) > ncol(X).")
      if(length(mu) != p) mu <- rep(mu[1], p)
      xbar <- colMeans(X)
      S <- cov(X)
      T2 <- n * t(xbar - mu) %*% solve(S) %*% (xbar - mu)
      Fstat <- T2 / (p * (n - 1) / df2)
      pval <- 1 - pf(Fstat, df1 = p, df2 = df2)
      data.frame(T2 = as.numeric(T2), Fstat = as.numeric(Fstat),
                 df1 = p, df2 = df2, p.value = as.numeric(pval),
                 row.names = "")
    }

Hotelling's T^2 Test Example in R

    # get data matrix
    > data(mtcars)
    > X <- mtcars[, c("mpg", "disp", "hp", "wt")]
    > xbar <- colMeans(X)
    > xbar
          mpg      disp        hp        wt
     20.09062 230.72188 146.68750   3.21725

    # try Hotelling T^2 function
    > T.test(X)
          T2    Fstat df1 df2 p.value
     7608.14 1717.967   4  28       0
    > T.test(X, mu = c(20, 200, 150, 3))
           T2    Fstat df1 df2    p.value
     10.78587 2.435519   4  28 0.07058328
    > T.test(X, mu = xbar)
     T2 Fstat df1 df2 p.value
      0     0   4  28       1
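As an added illustration (not part of the original notes), the rejection rule above can be checked directly for the second T.test call, i.e., for $H_0: \mu = (20, 200, 150, 3)'$; the significance level $\alpha = 0.05$ is an assumption made only for this example.

    # Illustrative check of the rejection rule T^2 > [(n-1)p/(n-p)] F_{p,n-p}(alpha)
    # using the mtcars example above, with alpha = 0.05 chosen for illustration
    X <- mtcars[, c("mpg", "disp", "hp", "wt")]
    n <- nrow(X)
    p <- ncol(X)
    mu0 <- c(20, 200, 150, 3)
    xbar <- colMeans(X)
    S <- cov(X)
    T2 <- as.numeric(n * t(xbar - mu0) %*% solve(S) %*% (xbar - mu0))
    crit <- (n - 1) * p / (n - p) * qf(0.95, df1 = p, df2 = n - p)
    c(T2 = T2, critical.value = crit)
    # T2 (about 10.8) falls below the critical value, so H0 is not rejected at
    # the 5% level, consistent with the p-value of 0.07 reported above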
Hotelling's T^2 Using lm Function in R

    > y <- as.matrix(X)
    > anova(lm(y ~ 1))
    Analysis of Variance Table

                Df  Pillai approx F num Df den Df    Pr(>F)
    (Intercept)  1 0.99594     1718      4     28 < 2.2e-16 ***
    Residuals   31
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    > y <- as.matrix(X) - matrix(c(20, 200, 150, 3), nrow(X), ncol(X), byrow=TRUE)
    > anova(lm(y ~ 1))
    Analysis of Variance Table

                Df  Pillai approx F num Df den Df  Pr(>F)
    (Intercept)  1 0.25812   2.4355      4     28 0.07058 .
    Residuals   31
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    > y <- as.matrix(X) - matrix(xbar, nrow(X), ncol(X), byrow=TRUE)
    > anova(lm(y ~ 1))
    Analysis of Variance Table

                Df     Pillai   approx F num Df den Df Pr(>F)
    (Intercept)  1 1.8645e-31 1.3052e-30      4     28      1
    Residuals   31

Likelihood Ratio Tests

Multivariate Normal MLE Reminder

Reminder: the log-likelihood function for $n$ independent samples from a $p$-variate normal distribution has the form
\[ LL(\mu, \Sigma \mid X) = -\frac{np}{2}\log(2\pi) - \frac{n}{2}\log(|\Sigma|) - \frac{1}{2}\sum_{i=1}^{n}(x_i - \mu)'\Sigma^{-1}(x_i - \mu) \]
and the MLEs of the mean vector $\mu$ and covariance matrix $\Sigma$ are
\[ \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x} \qquad \text{and} \qquad \hat{\Sigma} = \frac{1}{n}\sum_{i=1}^{n}(x_i - \hat{\mu})(x_i - \hat{\mu})' \]

Multivariate Normal Maximized Likelihood Function

Plugging the MLEs of $\mu$ and $\Sigma$ into the likelihood function gives
\[ \max_{\mu,\Sigma} L(\mu, \Sigma \mid X) = \frac{\exp(-np/2)}{(2\pi)^{np/2}\,|\hat{\Sigma}|^{n/2}} \]
If we assume that $\mu = \mu_0$ under $H_0$, then we have that
\[ \max_{\Sigma} L(\Sigma \mid \mu_0, X) = \frac{\exp(-np/2)}{(2\pi)^{np/2}\,|\hat{\Sigma}_0|^{n/2}} \]
where $\hat{\Sigma}_0 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \mu_0)(x_i - \mu_0)'$ is the MLE of $\Sigma$ under $H_0$.

Likelihood Ratio Test Statistic (and Wilks' Lambda)

We want to test $H_0: \mu = \mu_0$ versus $H_1: \mu \neq \mu_0$. The likelihood ratio test statistic is
\[ \Lambda = \frac{\max_{\Sigma} L(\Sigma \mid \mu_0, X)}{\max_{\mu,\Sigma} L(\mu, \Sigma \mid X)} = \left(\frac{|\hat{\Sigma}|}{|\hat{\Sigma}_0|}\right)^{n/2} \]
and we reject $H_0$ if the observed value of $\Lambda$ is too small.
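As an added sketch (not part of the original notes), $\Lambda$ can be computed directly for the mtcars example with $\mu_0 = (20, 200, 150, 3)'$; the final line checks the standard identity $\Lambda^{2/n} = \left(1 + T^2/(n-1)\right)^{-1}$ linking the likelihood ratio statistic to Hotelling's $T^2$.

    # Illustrative computation of the likelihood ratio statistic (not from the notes)
    X <- as.matrix(mtcars[, c("mpg", "disp", "hp", "wt")])
    n <- nrow(X)
    mu0 <- c(20, 200, 150, 3)
    xbar <- colMeans(X)
    Sigma.hat  <- crossprod(sweep(X, 2, xbar)) / n   # MLE of Sigma (unrestricted)
    Sigma.hat0 <- crossprod(sweep(X, 2, mu0)) / n    # MLE of Sigma under H0: mu = mu0
    Lambda <- (det(Sigma.hat) / det(Sigma.hat0))^(n / 2)
    # Check the standard identity Lambda^(2/n) = 1 / (1 + T^2/(n-1))
    T2 <- as.numeric(n * t(xbar - mu0) %*% solve(cov(X)) %*% (xbar - mu0))
    all.equal(Lambda^(2 / n), 1 / (1 + T2 / (n - 1)))   # TRUE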
