Lecture 1 Least Squares

What is Econometrics?

• Ragnar Frisch, Econometrica Vol. 1, No. 1 (1933), revisited:
"Experience has shown that each of these three view-points, that of statistics, economic theory, and mathematics, is a necessary, but not by itself a sufficient, condition for a real understanding of the quantitative relations in modern economic life. It is the unification of all three aspects that is powerful. And it is this unification that constitutes econometrics."

[Diagram: Econometrics at the intersection of Economic Theory, Mathematical Statistics, and Data.]

• Economic Theory:
- The CAPM: Ri - Rf = βi (RMP - Rf)
• Mathematical Statistics:
- A method to estimate the CAPM. For example, linear regression: Ri - Rf = αi + βi (RMP - Rf) + εi
- Properties of bi (the LS estimator of βi).
- Properties of different tests of the CAPM. For example, a t-test for H0: αi = 0.
• Data: Ri, Rf, and RMP
- Typical problems: missing data, measurement errors, survivorship bias, auto- and cross-correlated returns, time-varying moments.

Estimation

• Two philosophies regarding models (assumptions) in statistics:
(1) Parametric statistics. It assumes the data come from a known type of probability distribution and makes inferences about the parameters of that distribution. Models are parameterized before collecting the data. Example: maximum likelihood estimation.
(2) Non-parametric statistics. It assumes no probability distribution –i.e., it is "distribution free." Models are not imposed a priori, but determined by the data. Examples: histograms, kernel density estimation.
• In general, in parametric statistics we make more assumptions.

Least Squares Estimation

• Old method: Gauss (1795, 1801) used it in astronomy. [Portrait: Carl F. Gauss (1777 – 1855, Germany).]
• Idea: There is a functional form relating a dependent variable Y and k variables X. This function depends on unknown parameters, θ. The relation between Y and X is not exact. There is an error, ε. We have T observations of Y and X.
• We will assume that the functional form is known:
  yi = f(xi, θ) + εi,    i = 1, 2, ...., T.
• We will estimate the parameters θ by minimizing a sum of squared errors:
  minθ {S(xi, θ) = Σi εi²}

Example: We want to study the effect of a CEO's network (x) on the CEO's compensation (y). We build a compensation model including the CEO's network (x) and other "control variables" (W: education, gender, experience, etc.), controlling for other features that make one CEO's compensation different from another. That is,
  yi = f(xi, Wi, θ) + εi,    i = 1, 2, ...., T.
The term εi represents the effects of individual variation that have not been controlled for with W or x, and θ is a vector of parameters. Usually, f(x, θ) is linear. Then, the compensation model becomes:
  yi = β xi + γ Wi + εi
In this model, we are interested in estimating β, our parameter of interest.
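To make the setup concrete, here is a minimal numerical sketch in Python/NumPy (not part of the lecture; the variable names, sample size, and "true" parameter values are hypothetical). It simulates compensation data from the linear model above and recovers β and γ by minimizing the sum of squared errors:

    # Hypothetical illustration of y_i = beta*x_i + gamma*W_i + eps_i
    import numpy as np

    rng = np.random.default_rng(0)
    T = 500                                  # number of CEO observations (assumed)
    network = rng.normal(size=T)             # x: CEO's network measure
    experience = rng.normal(size=T)          # W: a single control variable
    eps = rng.normal(scale=0.5, size=T)      # unobserved individual variation

    beta_true, gamma_true = 0.8, 0.3         # assumed "true" parameters
    y = beta_true * network + gamma_true * experience + eps

    # Least squares: pick the parameters that minimize the sum of squared errors.
    X = np.column_stack([network, experience])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    print("estimated beta, gamma:", coef)    # should be close to (0.8, 0.3)

With a moderate error variance the estimates land close to the assumed values, which is all the example is meant to show.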
Least Squares Estimation

• We will use linear algebra notation. That is,
  y = f(X, θ) + ε
Vectors will be column vectors: y, xk, and ε are T×1 vectors:
  y  ⇒ y′ = [y1 y2 .... yT]
  xk ⇒ xk′ = [xk1 xk2 .... xkT]
  ε  ⇒ ε′ = [ε1 ε2 .... εT]
X is a T×k matrix. Its columns are the k T×1 vectors xk. It is common to treat x1 as a vector of ones:
  x1 ⇒ x1′ = [1 1 .... 1] = ί′
Note: Pre-multiplying the (1×T) vector xk′ by ί (that is, forming xk′ ί or, equivalently, ί′ xk) produces a scalar:
  xk′ ί = ί′ xk = xk1 + xk2 + .... + xkT = ∑j xkj

Least Squares Estimation - Assumptions

• Typical assumptions:
(A1) DGP: y = f(X, θ) + ε is correctly specified. For example, f(X, θ) = Xβ.
(A2) E[ε|X] = 0
(A3) Var[ε|X] = σ² IT
(A4) X has full column rank –rank(X) = k–, where T ≥ k.
• Assumption (A1) is called correct specification. We know how the data are generated. We call y = f(X, θ) + ε the Data Generating Process (DGP).
Note: The errors, ε, are called disturbances. They are not something we add to f(X, θ) because we don't know f(X, θ) precisely. No. The errors are part of the DGP.
• Assumption (A2) is called regression. From Assumption (A2) we get:
(i) E[ε|X] = 0 ⇒ E[y|X] = f(X, θ) + E[ε|X] = f(X, θ)
That is, the observed y will equal E[y|X] + random variation.
(ii) Using the Law of Iterated Expectations (LIE):
  E[ε] = EX[E[ε|X]] = EX[0] = 0
(iii) There is no information about ε in X ⇒ Cov(ε, X) = 0.
  Cov(ε, X) = E[(ε - 0)(X - μX)] = E[εX]
  E[εX] = EX[E[εX|X]] = EX[X E[ε|X]] = 0    (using LIE)
That is, E[εX] = 0 ⇒ ε ⊥ X.
• From Assumption (A3) we get:
  Var[ε|X] = σ² IT ⇒ Var[ε] = σ² IT
  Proof: Var[ε] = EX[Var[ε|X]] + VarX[E[ε|X]] = σ² IT + 0 = σ² IT. ▪
This assumption implies:
(i) homoscedasticity: E[εi²|X] = σ² for all i.
(ii) no serial/cross correlation: E[εi εj|X] = 0 for i ≠ j.
• From Assumption (A4), the k independent variables in X are linearly independent. Then, the k×k matrix X′X will also have full rank –i.e., rank(X′X) = k. To get asymptotic results we will need more assumptions about X.

Least Squares Estimation – f.o.c.

• Objective function: S(xi, θ) = Σi εi²
• We want to minimize it w.r.t. θ. That is,
  minθ {S(xi, θ) = Σi εi² = Σi [yi – f(xi, θ)]²}
  d S(xi, θ)/d θ = -2 Σi [yi – f(xi, θ)] f′(xi, θ)
  f.o.c.: -2 Σi [yi – f(xi, θLS)] f′(xi, θLS) = 0
Note: The f.o.c. deliver the normal equations. The solution to the normal equations, θLS, is the LS estimator. The estimator θLS is a function of the data (yi, xi).

CLM - OLS

• Suppose we assume a linear functional form for f(X, θ):
(A1′) DGP: y = f(X, θ) + ε = Xβ + ε
Now, we have all the assumptions behind the classical linear regression model (CLM):
(A1) DGP: y = Xβ + ε is correctly specified.
(A2) E[ε|X] = 0
(A3) Var[ε|X] = σ² IT
(A4) X has full column rank –rank(X) = k–, where T ≥ k.
• Objective function: S(xi, θ) = Σi εi² = ε′ε = (y – Xβ)′(y – Xβ)
  Normal equations: -2 Σi [yi – f(xi, θLS)] f′(xi, θLS) = -2 (y – Xb)′X = 0
  Solving for b ⇒ b = (X′X)⁻¹ X′y

• Example: Model with one explanatory variable.
(A1′) DGP: y = β1 + β2 x + ε
Objective function: S(xi, θ) = Σi εi² = Σi (yi – β1 – β2 xi)²
F.o.c. (2 equations, 2 unknowns):
(1): -2 Σi (yi – b1 – b2 xi)(-1) = 0 ⇒ Σi (yi – b1 – b2 xi) = 0      (1)
(2): -2 Σi (yi – b1 – b2 xi)(-xi) = 0 ⇒ Σi (yi xi – b1 xi – b2 xi²) = 0  (2)
From (1): Σi yi – Σi b1 – b2 Σi xi = 0 ⇒ b1 = ȳ – b2 x̄
From (2): Σi yi xi – (ȳ – b2 x̄) Σi xi – b2 Σi xi² = 0
  ⇒ b2 = (Σi yi xi – T x̄ ȳ) / (Σi xi² – T x̄²), or, more elegantly,
  b2 = Σi (xi – x̄)(yi – ȳ) / Σi (xi – x̄)²
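As a quick check of the algebra, the following sketch (again Python/NumPy with simulated data; the numbers are illustrative assumptions) solves the normal equations b = (X′X)⁻¹ X′y and confirms that the result matches the closed-form b1 and b2 derived above for the one-regressor case:

    # Normal equations vs. the closed-form slope/intercept (hypothetical data)
    import numpy as np

    rng = np.random.default_rng(1)
    T = 200
    x = rng.normal(size=T)
    y = 1.0 + 2.0 * x + rng.normal(size=T)     # DGP: y = beta1 + beta2*x + eps

    # Matrix form: the first column of X is a column of ones (the constant).
    X = np.column_stack([np.ones(T), x])
    b = np.linalg.solve(X.T @ X, X.T @ y)      # solves (X'X) b = X'y, i.e. b = (X'X)^(-1) X'y

    # Closed-form solution from the two first-order conditions.
    b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b1 = y.mean() - b2 * x.mean()

    print(b, (b1, b2))                         # the two answers coincide up to rounding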
OLS Estimation: Second Order Condition

• OLS estimator: b = (X′X)⁻¹ X′y
Note:
(i) b = OLS. (Ordinary LS. Ordinary = linear.)
(ii) b is a (linear) function of the data (yi, xi).
(iii) X′(y – Xb) = X′y – X′X(X′X)⁻¹X′y = X′e = 0 ⇒ e ⊥ X.
• Q: Is b a minimum? We need to check the s.o.c.:
  ∂(y – Xb)′(y – Xb)/∂b = -2X′(y – Xb)        (a column vector)
  ∂²(y – Xb)′(y – Xb)/∂b∂b′ = 2X′X
• Written out, the second derivative is the k×k matrix of sums of squares and cross products:
  ∂²e′e/∂b∂b′ = 2X′X = 2 [ Σi xi1²      Σi xi1 xi2   ...   Σi xi1 xik
                           Σi xi2 xi1   Σi xi2²      ...   Σi xi2 xik
                           ...          ...          ...   ...
                           Σi xik xi1   Σi xik xi2   ...   Σi xik²   ]
• If there were a single b, we would require this second derivative to be positive, which it would be: 2 x′x = 2 Σi xi² > 0. The matrix counterpart of a positive number is a positive definite matrix.
• Note that X′X can also be written as a sum of outer products:
  X′X = Σi xi xi′,    where xi = (xi1, xi2, ..., xik)′ is the k×1 vector of regressors for observation i.
• Definition: A matrix A is positive definite (pd) if z′A z > 0 for any z ≠ 0. In general, we need eigenvalues to check this. For some matrices it is easy to check. Let A = X′X and v = X z. Then,
  z′A z = z′X′X z = v′v > 0  ⇒ X′X is pd ⇒ b is a min!

OLS Estimation - Properties

• The LS estimator of β, when f(X, θ) = Xβ is linear, is
  b = (X′X)⁻¹ X′y  ⇒ b is a (linear) function of the data (yi, xi).
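The second-order condition and the orthogonality result X′e = 0 above are easy to verify numerically. The sketch below (simulated data, hypothetical parameter values) checks that X′X has strictly positive eigenvalues, i.e., that it is positive definite so b is indeed a minimum, and that the OLS residuals are orthogonal to every column of X:

    # Numerical check of the s.o.c. and of X'e = 0 (illustrative data only)
    import numpy as np

    rng = np.random.default_rng(2)
    T, k = 100, 3
    X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])   # full column rank
    y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=T)

    b = np.linalg.solve(X.T @ X, X.T @ y)     # b = (X'X)^(-1) X'y
    e = y - X @ b                             # residuals

    # s.o.c.: X'X is positive definite <=> all its eigenvalues are strictly positive.
    print(np.linalg.eigvalsh(X.T @ X) > 0)    # [ True  True  True]
    # Normal equations imply X'e = 0 (residuals orthogonal to the regressors).
    print(np.round(X.T @ e, 10))              # essentially a vector of zeros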
