8 The Likelihood Ratio Test

8.1 The likelihood ratio

We often want to test in situations where the adopted probability model involves several unknown parameters. Thus we may denote an element of the parameter space by $\theta = (\theta_1, \theta_2, \ldots, \theta_k)$. Some of these parameters may be nuisance parameters (e.g. testing hypotheses on the unknown mean of a normal distribution with unknown variance, where the variance is regarded as a nuisance parameter).

We use the likelihood ratio, $\lambda(x)$, defined as

$$\lambda(x) = \frac{\sup\{L(\theta; x) : \theta \in \Theta_0\}}{\sup\{L(\theta; x) : \theta \in \Theta\}}, \qquad x \in R_X^n.$$

The informal argument for this is as follows. For a realisation $x$, determine its best chance of occurrence under $H_0$ and also its best chance overall. The ratio of these two chances can never exceed unity but, if small, would constitute evidence for rejection of the null hypothesis.

A likelihood ratio test for testing $H_0 : \theta \in \Theta_0$ against $H_1 : \theta \in \Theta_1$ is a test with critical region of the form

$$C_1 = \{x : \lambda(x) \le k\},$$

where $k$ is a real number between 0 and 1. Clearly the test will be at significance level $\alpha$ if $k$ can be chosen to satisfy

$$\sup_{\theta \in \Theta_0} P(\lambda(X) \le k; \theta) = \alpha.$$

If $H_0$ is a simple hypothesis with $\Theta_0 = \{\theta_0\}$, we have the simpler form

$$P(\lambda(X) \le k; \theta_0) = \alpha.$$

To determine $k$, we must look at the c.d.f. of the random variable $\lambda(X)$, where the random sample $X$ has joint p.d.f. $f_X(x; \theta_0)$.

Example: Exponential distribution

Test $H_0 : \theta = \theta_0$ against $H_1 : \theta > \theta_0$. Here $\Theta_0 = \{\theta_0\}$, $\Theta_1 = (\theta_0, \infty)$, so that $\Theta = [\theta_0, \infty)$. The likelihood function is

$$L(\theta; x) = \prod_{i=1}^n f(x_i; \theta) = \theta^n e^{-\theta\sum_i x_i} = \theta^n e^{-n\theta\bar{x}}.$$

The numerator of the likelihood ratio is

$$L(\theta_0; x) = \theta_0^n e^{-n\theta_0\bar{x}}.$$

We need to find the supremum as $\theta$ ranges over the interval $[\theta_0, \infty)$. Now

$$l(\theta; x) = n\log\theta - n\theta\bar{x},$$

so that

$$\frac{\partial l(\theta; x)}{\partial\theta} = \frac{n}{\theta} - n\bar{x},$$

which is zero only when $\theta = 1/\bar{x}$. Since $L(\theta; x)$ is an increasing function for $\theta < 1/\bar{x}$ and decreasing for $\theta > 1/\bar{x}$,

$$\sup\{L(\theta; x) : \theta \in \Theta\} =
\begin{cases}
\bar{x}^{-n} e^{-n} & \text{if } 1/\bar{x} \ge \theta_0, \\
\theta_0^n e^{-n\theta_0\bar{x}} & \text{if } 1/\bar{x} < \theta_0.
\end{cases}$$

[Two figures in the original sketch $L(\theta; x)$ against $\theta$: when $1/\bar{x} \ge \theta_0$ the supremum over $\Theta$ is attained at $\theta = 1/\bar{x}$, and when $1/\bar{x} < \theta_0$ it is attained at the endpoint $\theta_0$.]

Hence

$$\lambda(x) =
\begin{cases}
\dfrac{\theta_0^n e^{-n\theta_0\bar{x}}}{\bar{x}^{-n} e^{-n}} = \theta_0^n \bar{x}^n e^{-n\theta_0\bar{x}} e^n & \text{if } 1/\bar{x} \ge \theta_0, \\
1 & \text{if } 1/\bar{x} < \theta_0.
\end{cases}$$

Since

$$\frac{d}{d\bar{x}}\left(\bar{x}^n e^{-n\theta_0\bar{x}}\right) = n\bar{x}^{n-1} e^{-n\theta_0\bar{x}}(1 - \theta_0\bar{x})$$

is positive for values of $\bar{x}$ between 0 and $1/\theta_0$ (where $\theta_0 > 0$), it follows that $\lambda(x)$ is a non-decreasing function of $\bar{x}$. Therefore the critical region of the likelihood ratio test is of the form

$$C_1 = \left\{x : \sum_{i=1}^n x_i \le c\right\}.$$

Example: The one-sample t-test

The null hypothesis is $H_0 : \theta = \theta_0$ for the mean of a normal distribution with unknown variance $\sigma^2$. We have

$$\Theta = \{(\theta, \sigma^2) : \theta \in R,\ \sigma^2 \in R^+\}, \qquad \Theta_0 = \{(\theta, \sigma^2) : \theta = \theta_0,\ \sigma^2 \in R^+\}$$

and

$$f(x; \theta, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{1}{2\sigma^2}(x - \theta)^2\right), \qquad x \in R.$$

The likelihood function is

$$L(\theta, \sigma^2; x) = (2\pi\sigma^2)^{-n/2}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \theta)^2\right).$$

Since

$$l(\theta_0, \sigma^2; x) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \theta_0)^2$$

and

$$\frac{\partial l}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (x_i - \theta_0)^2,$$

which is zero when

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \theta_0)^2,$$

we conclude that

$$\sup L(\theta_0, \sigma^2; x) = \left(\frac{2\pi}{n}\sum_{i=1}^n (x_i - \theta_0)^2\right)^{-n/2} e^{-n/2}.$$

For the denominator, we already know from previous examples that the m.l.e. of $\theta$ is $\bar{x}$, so

$$\sup L(\theta, \sigma^2; x) = \left(\frac{2\pi}{n}\sum_{i=1}^n (x_i - \bar{x})^2\right)^{-n/2} e^{-n/2}$$

and

$$\lambda(x) = \left(\frac{\sum_{i=1}^n (x_i - \theta_0)^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right)^{-n/2}.$$

This may be written in a more convenient form. Note that

$$\sum_{i=1}^n (x_i - \theta_0)^2 = \sum_{i=1}^n \left((x_i - \bar{x}) + (\bar{x} - \theta_0)\right)^2 = \sum_{i=1}^n (x_i - \bar{x})^2 + n(\bar{x} - \theta_0)^2,$$

so that

$$\lambda(x) = \left(1 + \frac{n(\bar{x} - \theta_0)^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right)^{-n/2}.$$

The critical region is $C_1 = \{x : \lambda(x) \le k\}$, so it follows that $H_0$ is to be rejected when the value of

$$\frac{|\bar{x} - \theta_0|}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2}}$$

exceeds some constant. Now we have already seen that

$$\frac{\bar{X} - \theta}{S/\sqrt{n}} \sim t(n-1), \qquad \text{where } S^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2.$$

Therefore it makes sense to write the critical region in the form

$$C_1 = \left\{x : \frac{|\bar{x} - \theta_0|}{s/\sqrt{n}} \ge c\right\},$$

which is the standard form of the two-sided t-test for a single sample.
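
To see these formulas in action, here is a minimal numerical sketch in Python (assuming numpy and scipy are available; the sample and the choice $\theta_0 = 0$ are invented for illustration). It evaluates $\lambda(x)$ directly and then checks the identity $\lambda = \left(1 + t^2/(n-1)\right)^{-n/2}$, which shows that rejecting for small $\lambda$ is exactly rejecting for large $|t|$:

```python
import numpy as np
from scipy import stats

# Hypothetical data: n = 20 observations, testing H0: theta = 0
rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=2.0, size=20)
theta0 = 0.0

n = len(x)
xbar = x.mean()
ss = np.sum((x - xbar) ** 2)  # sum of squares about the sample mean

# Likelihood ratio: lambda(x) = (1 + n*(xbar - theta0)^2 / ss)^(-n/2)
lam = (1.0 + n * (xbar - theta0) ** 2 / ss) ** (-n / 2)

# The usual one-sample t statistic: (xbar - theta0) / (s / sqrt(n))
s = np.sqrt(ss / (n - 1))
t_stat = (xbar - theta0) / (s / np.sqrt(n))

# lambda is a decreasing function of |t|: lambda = (1 + t^2/(n-1))^(-n/2)
assert np.isclose(lam, (1.0 + t_stat**2 / (n - 1)) ** (-n / 2))

# Two-sided p-value from the exact t(n-1) distribution
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)
print(f"lambda = {lam:.4f}, t = {t_stat:.3f}, p = {p_value:.4f}")
```

Because $\lambda$ and $|t|$ are in one-to-one correspondence, the critical value $c$ can be taken from the exact $t(n-1)$ distribution; no asymptotic approximation is needed here, in contrast to the $\chi^2$ result developed next.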

8.2 The likelihood ratio statistic

Since $-2\log\lambda$ is a decreasing function of $\lambda$, the critical region of the likelihood ratio test can also be expressed in the form

$$C_1 = \{x : -2\log\lambda(x) \ge c\}.$$

Writing

$$\Lambda(x) = -2\log\lambda(x) = 2\left(l(\hat{\theta}; x) - l(\theta_0; x)\right),$$

the critical region may be written as $C_1 = \{x : \Lambda(x) \ge c\}$, and $\Lambda(X)$ is called the likelihood ratio statistic.

We have been using the idea that values of $\theta$ close to $\hat{\theta}$ are well supported by the data so, if $\theta_0$ is a possible value of $\theta$, then it turns out that, for large samples,

$$\Lambda(X) \xrightarrow{D} \chi^2_p, \qquad \text{where } p = \dim(\theta).$$

Let us see why.

8.2.1 The asymptotic distribution of the likelihood ratio statistic

Write

$$l(\theta_0) = l(\hat{\theta}) + (\theta_0 - \hat{\theta})\,l'(\hat{\theta}) + \frac{1}{2}(\theta_0 - \hat{\theta})^2\,l''(\hat{\theta}) + \cdots$$

and, remembering that $l'(\hat{\theta}) = 0$, we have

$$\Lambda = 2\left(l(\hat{\theta}) - l(\theta_0)\right) \approx (\hat{\theta} - \theta_0)^2\left(-l''(\hat{\theta})\right) = (\hat{\theta} - \theta_0)^2 J(\hat{\theta}) = (\hat{\theta} - \theta_0)^2 I(\theta_0)\,\frac{J(\hat{\theta})}{I(\theta_0)},$$

where $J(\hat{\theta}) = -l''(\hat{\theta})$ is the observed information and $I(\theta_0)$ is the Fisher information. But

$$(\hat{\theta} - \theta_0)\,I(\theta_0)^{1/2} \xrightarrow{D} N(0, 1) \qquad \text{and} \qquad \frac{J(\hat{\theta})}{I(\theta_0)} \xrightarrow{P} 1,$$

so

$$(\hat{\theta} - \theta_0)^2\,I(\theta_0) \xrightarrow{D} \chi^2_1$$

and Slutsky's theorem gives

$$\Lambda \xrightarrow{D} \chi^2_1,$$

provided $\theta_0$ is the true value of $\theta$.

Example: Poisson distribution

Let $X = (X_1, \ldots, X_n)$ be a random sample from a Poisson distribution with parameter $\theta$, and test $H_0 : \theta = \theta_0$ against $H_1 : \theta \ne \theta_0$ at significance level 0.05. The p.m.f. is

$$p(x; \theta) = \frac{e^{-\theta}\theta^x}{x!}, \qquad x = 0, 1, \ldots,$$

so that

$$l(\theta; x) = -n\theta + \sum_{i=1}^n x_i \log\theta - \sum_{i=1}^n \log x_i!$$

and

$$\frac{\partial l(\theta; x)}{\partial\theta} = -n + \frac{1}{\theta}\sum_{i=1}^n x_i,$$

giving $\hat{\theta} = \bar{x}$. Therefore

$$\Lambda = 2n\left(\theta_0 - \bar{x} + \bar{x}\log\frac{\bar{x}}{\theta_0}\right).$$

The distribution of $\Lambda$ under $H_0$ is approximately $\chi^2_1$, and $\chi^2_1(0.95) = 3.84$, so the critical region of the test is

$$C_1 = \left\{x : 2n\left(\theta_0 - \bar{x} + \bar{x}\log\frac{\bar{x}}{\theta_0}\right) \ge 3.84\right\}.$$

8.3 Testing goodness-of-fit for discrete distributions

The data below were collected by the ecologist E. C. Pielou, who was interested in the pattern of healthy and diseased trees. The subject of her research was Armillaria root rot in a plantation of Douglas firs. She recorded the lengths of 109 runs of diseased trees and these are given below.

Run length:      1   2   3   4   5   6
Number of runs: 71  28   5   2   2   1

On biological grounds, Pielou proposed a geometric distribution as a probability model. Is this plausible? Let's try to answer this by first looking at the general case.

Suppose we have $k$ groups with $n_i$ in the $i$th group. Thus

Group:   1    2    3    4   ...   k
Number: n_1  n_2  n_3  n_4  ...  n_k

where $\sum_i n_i = n$. Suppose further that we have a probability model such that $\pi_i(\theta)$, $i = 1, 2, \ldots, k$, is the probability of being in the $i$th group. Clearly $\sum_i \pi_i(\theta) = 1$. The likelihood is

$$L(\theta) = n!\prod_{i=1}^k \frac{\pi_i(\theta)^{n_i}}{n_i!}$$

and the log-likelihood is

$$l(\theta) = \sum_{i=1}^k n_i \log\pi_i(\theta) + \log n! - \sum_{i=1}^k \log n_i!$$

Suppose $\hat{\theta}$ maximises $l(\theta)$, being the solution of $l'(\theta) = 0$.

The general alternative is to take the $\pi_i$ as unrestricted by the model and subject only to $\sum_i \pi_i = 1$. Thus we maximise

$$l(\pi) = \sum_{i=1}^k n_i \log\pi_i + \log n! - \sum_{i=1}^k \log n_i! \qquad \text{with } g(\pi) = \sum_i \pi_i = 1.$$

Using a Lagrange multiplier $\gamma$ we obtain the set of $k$ equations

$$\frac{\partial l}{\partial\pi_i} - \gamma\frac{\partial g}{\partial\pi_i} = 0, \qquad 1 \le i \le k,$$

or

$$\frac{n_i}{\pi_i} - \gamma = 0, \qquad 1 \le i \le k.$$

Writing this as $n_i - \gamma\pi_i = 0$, $1 \le i \le k$, and summing over $i$, we find $\gamma = n$ and

$$\hat{\pi}_i = \frac{n_i}{n}.$$

The likelihood ratio statistic is

$$\Lambda = 2\left(\sum_{i=1}^k n_i \log\frac{n_i}{n} - \sum_{i=1}^k n_i \log\pi_i(\hat{\theta})\right) = 2\sum_{i=1}^k n_i \log\frac{n_i}{n\pi_i(\hat{\theta})}.$$

General statement of the asymptotic result for the likelihood ratio statistic

Testing $H_0 : \theta \in \Theta_0 \subset \Theta$ against $H_1 : \theta \in \Theta$, the likelihood ratio statistic

$$\Lambda = 2\left(\sup_{\theta\in\Theta} l(\theta) - \sup_{\theta\in\Theta_0} l(\theta)\right) \xrightarrow{D} \chi^2_p, \qquad \text{where } p = \dim\Theta - \dim\Theta_0.$$

In the general case above, where

$$\Lambda = 2\sum_{i=1}^k n_i \log\frac{n_i}{n\pi_i(\hat{\theta})},$$

the restriction $\sum_{i=1}^k \pi_i = 1$ means that $\dim\Theta = k - 1$; so for a model with a single parameter $\theta$ we have $\dim\Theta_0 = 1$, and $\Lambda$ is compared with the $\chi^2_{k-2}$ distribution.
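
Returning to Pielou's trees, the statistic $\Lambda = 2\sum_i n_i \log\left(n_i/(n\pi_i(\hat{\theta}))\right)$ can be computed directly. The sketch below (Python with numpy/scipy) fits the geometric model $\pi_i(\theta) = \theta(1-\theta)^{i-1}$ by numerical maximum likelihood; treating the last cell as "run length 6 or more", with tail probability $(1-\theta)^5$, is an assumption made for this sketch, since the notes leave the cell structure unstated:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

# Pielou's data: number of runs of diseased trees of each length
lengths = np.arange(1, 7)
counts = np.array([71, 28, 5, 2, 2, 1])
n = counts.sum()  # 109 runs in total
k = len(counts)   # k = 6 cells

def cell_probs(theta):
    # Geometric model: pi_i(theta) = theta * (1 - theta)^(i-1), i = 1, 2, ...
    p = theta * (1.0 - theta) ** (lengths - 1)
    # Assumption: the last cell collects all runs of length >= 6 (tail prob.)
    p[-1] = (1.0 - theta) ** (k - 1)
    return p

def neg_loglik(theta):
    # Multinomial log-likelihood, dropping the theta-free factorial terms
    return -np.sum(counts * np.log(cell_probs(theta)))

# MLE of theta under the geometric model, found numerically
theta_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6),
                            method="bounded").x

# Lambda = 2 * sum n_i log(n_i / (n * pi_i(theta_hat)))
pi_hat = cell_probs(theta_hat)
Lam = 2.0 * np.sum(counts * np.log(counts / (n * pi_hat)))

# dim Theta = k - 1 (free multinomial), dim Theta_0 = 1, so df = k - 2
df = k - 2
p_value = stats.chi2.sf(Lam, df)
print(f"theta_hat = {theta_hat:.3f}, Lambda = {Lam:.2f}, df = {df}, "
      f"p = {p_value:.3f}")
```

A large p-value would support Pielou's geometric model. Note, though, that several cells have very small expected counts, so the $\chi^2_4$ approximation is rough; in practice sparse cells are often pooled before making the comparison.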