ECON 4160, Autumn Term 2017. Lecture 2

MLE and multivariate regression. Matrix notation. Inference. Non-linear regression. GLS and SUR (for reference).
Ragnar Nymoen, University of Oslo, 31 August 2017

References to Lecture 2
- HN, Ch. 6, 7 and 8: matrix algebra and the multivariate regression model.
- HN, Ch. 9: mis-specification tests for regression models for cross-section data. See also the note about standard tests of mis-specification posted under Computer Class material on the semester page.
- Lecture note 2 (posted on the semester page) is referred to in several places below.

ML estimation of the multivariate model I
- Ch. 7 in HN.
- The assumptions of the statistical model are identical to Ch. 5 in HN.
- But the model has two or more regressors (plus a constant term).
- Let k denote the number of regressors: k = 2 in Ch. 5 of HN and k = 1 in Ch. 3 of HN (3011), because there k includes the constant.
- The likelihood function is constructed from the assumed normality of the conditional pdf: f(Y | X_1, ..., X_k).

ML estimation of the multivariate model II
- Assume identical distributions of n independent sets of variables:

    f(Y_1, ..., Y_n | X_{11}, ..., X_{kn}) = \prod_{i=1}^{n} f(Y_i | X_{1i}, ..., X_{ki})

- Assume that each of the n conditional pdfs is normal.
- By direct analogy to Ch. 5, the log-likelihood function is

    l_{Y_1,...,Y_n | X} = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\Big(Y_i - \sum_{j=1}^{k}\beta_j X_{ji}\Big)^2    (1)

- MLEs for \beta_1, ..., \beta_k are found by applying OLS to the second part of the expression. Therefore the OLS estimators of the \beta's are MLEs, and vice versa.

ML estimation of the multivariate model III
- Next, from the concentrated log-likelihood function for \sigma^2, we find the MLE of the scale parameter as

    \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{\varepsilon}_i^2

  where the residuals are of course from the k-variable model.

Marginal effects and location shifts I
- In multivariate regression, \beta_2, \beta_3, ..., \beta_k are partial effects or marginal effects when the X_j-variables (j = 2, 3, ..., k) are continuous.
- The interpretation of estimated regression coefficients as partial effects is also clear from the famous Frisch-Waugh theorem; see for example Lecture Note 2.
- The economic interpretation depends on how the raw variables have been transformed prior to the specification of the regression model (also known as the choice of functional form).
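Before the exercises, a small numerical check of the point made in the ML-estimation slides above: the OLS coefficients maximize the log-likelihood (1), and the MLE of \sigma^2 is the residual sum of squares divided by n. The sketch below is not part of the original slides; the simulated data, sample size and coefficient values are illustrative assumptions only.

```python
# Minimal sketch (illustrative): OLS coefficients maximize the normal
# log-likelihood (1), and the MLE of sigma^2 divides the SSR by n.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3                      # n observations, k regressors incl. the constant
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 0.5, -2.0])
y = X @ beta_true + rng.normal(scale=1.5, size=n)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates
resid = y - X @ beta_ols
sigma2_hat = resid @ resid / n                     # MLE of sigma^2 (divide by n, not n-k)

def loglik(beta, sigma2):
    """Log-likelihood (1) evaluated at given coefficients and scale parameter."""
    e = y - X @ beta
    return -0.5 * n * np.log(2 * np.pi * sigma2) - (e @ e) / (2 * sigma2)

# Perturbing the OLS coefficients never raises the log-likelihood:
for _ in range(5):
    beta_alt = beta_ols + rng.normal(scale=0.1, size=k)
    assert loglik(beta_ols, sigma2_hat) >= loglik(beta_alt, sigma2_hat)
print(beta_ols, sigma2_hat)
```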
Marginal effects and location shifts II
DIY Exercise 2.1: For each case below, calculate the partial derivative, the partial elasticity and the semi-elasticity of Y with respect to a small change in X_j.

    Regressand   Regressor
    a) Y         X_j
    b) ln(Y)     ln(X_j)
    c) Y         ln(X_j)
    d) Y         X_j^{-1}

- If one or more of the X's are dummies (for example 0, 0, 1, -1, 1, 0, 0, ...), also called indicator variables, the interpretation of \beta_j is different: a shift in the intercept, a location shift.

Marginal effects and location shifts III
DIY Exercise 2.2: Assume that we have n_1 mutually independent vectors (Y_i, Z_i) where both variables are continuous, and that we sample (independently) another n_2 mutually independent vectors (Y_i, Z_i). The two conditional expectations are

    E(Y_i | Z_i) = \gamma_1 + \gamma_2 Z_i,    i = 1, 2, ..., n_1
    E(Y_i | Z_i) = \lambda_1 + \lambda_2 Z_i,    i = 1, 2, ..., n_2

where \gamma_1, \gamma_2, \lambda_1, \lambda_2 are parameters.
a) How can you write this as a regression equation with one intercept and three regressors (so k = 4 in HN's counting) with the aid of an indicator variable?
b) Which of the parameters in your model equation are derivative coefficients, and which are location parameters?

Partial correlation coefficients I
- In the case of k = 3, we can calculate the multiple correlation coefficient R^2,

    R^2 = \sum_{i=1}^{n}\hat{Y}_i^2 \Big/ \sum_{i=1}^{n} Y_i^2    ("ESS"/"TSS")

  and three correlation coefficients: r_{Y,X_2}, r_{Y,X_3} and r_{X_2,X_3}.
- In addition, we can calculate two partial correlation coefficients, between Y and X_2 and between Y and X_3: r_{Y,X_2|X_3} and r_{Y,X_3|X_2}.

Partial correlation coefficients II
- r_{Y,X_2|X_3} is constructed from the residuals of two simple regressions, of Y on X_3 and of X_2 on X_3, \hat{\varepsilon}_{iY|X_3} and \hat{\varepsilon}_{iX_2|X_3}:

    r_{Y,X_2|X_3} = \frac{\frac{1}{n}\sum_{i=1}^{n}\hat{\varepsilon}_{iY|X_3}\hat{\varepsilon}_{iX_2|X_3}}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\hat{\varepsilon}_{iY|X_3}^2}\,\sqrt{\frac{1}{n}\sum_{i=1}^{n}\hat{\varepsilon}_{iX_2|X_3}^2}}

DIY Exercise 2.3: Show that r_{Y,X_2|X_3} can be expressed as

    r_{Y,X_2|X_3} = \hat{\beta}_2\,\frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\hat{\varepsilon}_{iX_2|X_3}^2}}{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\hat{\varepsilon}_{iY|X_3}^2}}

where \hat{\beta}_2 is the OLS estimator of \beta_2 in the k = 3 model equation Y_i = \beta_1 + \beta_2 X_{2i} + \beta_3 X_{3i} + \varepsilon_i.

Heuristically,

    R^2 = "all squared simple and partial correlation coefficients",

and this is supported by the relationship in the next exercise.

DIY Exercise 2.4: Try to show that, in the case of k = 3,

    (1 - R^2) = (1 - r_{Y,X_2}^2)(1 - r_{Y,X_3|X_2}^2)

by following the hints in Exercise 7.8 in HN. A slightly different argument will be given in the answers note. (A small numerical sketch of Exercises 2.3 and 2.4 is given after the matrix-notation slide below.)

The regression model in matrix notation I
Ch. 6 and 8 in HN. Let X denote an n \times k matrix with the regressors of the model

    y = X\beta + \varepsilon    (2)

where y is n \times 1, \varepsilon is the n \times 1 vector of disturbances, and the parameter vector \beta is k \times 1.
Notation convention: use lowercase bold for data vectors and uppercase bold for data matrices.

    \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix}
    =
    \begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1k} \\ X_{21} & X_{22} & \cdots & X_{2k} \\ \vdots & & & \vdots \\ X_{n1} & X_{n2} & \cdots & X_{nk} \end{bmatrix}
    \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_k \end{bmatrix}
    +
    \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{bmatrix}
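As announced above, here is a numerical sketch of DIY Exercises 2.3 and 2.4. It is not from the slides: the simulated data-generating process, sample size and variable names are illustrative assumptions. It computes r_{Y,X_2|X_3} from the two residual regressions, checks the \hat{\beta}_2-scaling expression of Exercise 2.3, and checks the (1 - R^2) factorization of Exercise 2.4 (with R^2 computed in the usual centered form).

```python
# Sketch of Exercises 2.3 and 2.4 (illustrative, simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 500
x3 = rng.normal(size=n)
x2 = 0.6 * x3 + rng.normal(size=n)               # X2 correlated with X3
y = 1.0 + 0.8 * x2 - 0.5 * x3 + rng.normal(size=n)

def resid(dep, regs):
    """Residuals from an OLS regression of dep on a constant and regs."""
    Z = np.column_stack([np.ones(len(dep))] + regs)
    b, *_ = np.linalg.lstsq(Z, dep, rcond=None)
    return dep - Z @ b

e_y3, e_23 = resid(y, [x3]), resid(x2, [x3])      # residuals from Y|X3 and X2|X3
r_partial = np.mean(e_y3 * e_23) / np.sqrt(np.mean(e_y3**2) * np.mean(e_23**2))

# Exercise 2.3: the same number via the multiple-regression slope beta2_hat
X = np.column_stack([np.ones(n), x2, x3])
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
r_via_beta2 = b_hat[1] * np.sqrt(np.mean(e_23**2) / np.mean(e_y3**2))
assert np.isclose(r_partial, r_via_beta2)

# Exercise 2.4: (1 - R^2) = (1 - r_{Y,X2}^2)(1 - r_{Y,X3|X2}^2), centered R^2
e_full = y - X @ b_hat
R2 = 1 - np.sum(e_full**2) / np.sum((y - y.mean())**2)
r_y2 = np.corrcoef(y, x2)[0, 1]
e_y2, e_32 = resid(y, [x2]), resid(x3, [x2])
r_y3_2 = np.mean(e_y2 * e_32) / np.sqrt(np.mean(e_y2**2) * np.mean(e_32**2))
assert np.isclose(1 - R2, (1 - r_y2**2) * (1 - r_y3_2**2))
```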
The regression model in matrix notation II
If we let X_i denote the i-th row of X (a 1 \times k matrix), a typical row of equation (2) is

    Y_i = X_i\beta + \varepsilon_i = \sum_{j=1}^{k} X_{ij}\beta_j + \varepsilon_i,    i = 1, 2, ..., n    (3)

Unless both the regressand and all the regressors are measured as deviations from their means, there is an intercept in the model. When we need to make this explicit, we can rewrite X as the partitioned matrix

    X = (\mathbf{i} \;\; X_2)

The regression model in matrix notation III
where

    \mathbf{i} = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}_{n \times 1},
    \qquad
    X_2 = \begin{bmatrix} X_{12} & \cdots & X_{1k} \\ X_{22} & \cdots & X_{2k} \\ \vdots & & \vdots \\ X_{n2} & \cdots & X_{nk} \end{bmatrix}_{n \times (k-1)}

ML estimator I
- By solving Exercise C to the first seminar, you will show that OLS gives the estimator

    \hat{\beta} = (X'X)^{-1}X'y    (4)

  for \beta.
- X' is the transpose of X, and (X'X)^{-1} is the inverse of X'X, the matrix with (uncentered) moments between the regressors.
- For the inverse to exist, rank(X'X) = k (full rank). This is the generalization of the "absence of perfect multicollinearity" condition.

ML estimator II
- Under the assumptions of the statistical regression model (IID, normal conditional pdf and exogenous regressors), the log-likelihood is

    l_{Y_1,...,Y_n | X} = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}(y - X\beta)'(y - X\beta)

- \hat{\beta} in (4) minimizes (y - X\beta)'(y - X\beta), meaning that it is also the MLE of \beta.
- By solving Exercise C to the first seminar, you can show that \hat{\beta}' = (\hat{\beta}_1, \hat{\beta}_2') and

    \hat{\beta}_2 = \left[(X_2 - \bar{X}_2)'(X_2 - \bar{X}_2)\right]^{-1}(X_2 - \bar{X}_2)'y    (5)

  In X_2 - \bar{X}_2, the typical column is X_i - \mathbf{i}\bar{X}_i, i = 2, ..., k, i.e. each regressor measured as deviations from its mean.

ML estimator III
- (5) is the generalization of the "x-deviation from mean" form of the slope coefficient in simple regression.
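A small sketch of equations (4) and (5): the matrix OLS formula, the full-rank condition, and the equivalence of \hat{\beta}_2 with the deviation-from-means form. It is not part of the slides; the simulated data and the parameter values are illustrative assumptions.

```python
# Sketch of (4) and (5) (illustrative, simulated data).
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 3
X2 = rng.normal(size=(n, k - 1))                  # regressors other than the constant
X = np.column_stack([np.ones(n), X2])             # X = (i  X2)
y = X @ np.array([2.0, 1.0, -0.5]) + rng.normal(size=n)

assert np.linalg.matrix_rank(X.T @ X) == k        # full rank: no perfect multicollinearity

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # (4): beta_hat = (X'X)^{-1} X'y

# (5): the slope block from the regressors in deviation-from-means form
X2d = X2 - X2.mean(axis=0)                        # each column as deviations from its mean
beta2_hat = np.linalg.solve(X2d.T @ X2d, X2d.T @ y)
assert np.allclose(beta_hat[1:], beta2_hat)

# The intercept then follows from the sample means:
beta1_hat = y.mean() - X2.mean(axis=0) @ beta2_hat
assert np.isclose(beta_hat[0], beta1_hat)
```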