Lecture Notes 7
Recommended publications
Logistic Regression, Part I: Problems with the Linear Probability Model
Logistic Regression, Part I: Problems with the Linear Probability Model (LPM). Richard Williams, University of Notre Dame, https://www3.nd.edu/~rwilliam/. Last revised February 22, 2015. This handout steals heavily from Linear Probability, Logit, and Probit Models, by John Aldrich and Forrest Nelson, paper #45 in the Sage series on Quantitative Applications in the Social Sciences.

INTRODUCTION. We are often interested in qualitative dependent variables:
• Voting (does or does not vote)
• Marital status (married or not)
• Fertility (have children or not)
• Immigration attitudes (opposes immigration or supports it)
In the next few handouts, we will examine different techniques for analyzing qualitative dependent variables, in particular dichotomous dependent variables. We will first examine the problems with using OLS, and then present logistic regression as a more desirable alternative.

OLS AND DICHOTOMOUS DEPENDENT VARIABLES. While estimates derived from regression analysis may be robust against violations of some assumptions, other assumptions are crucial, and violations of them can lead to unreasonable estimates. Such is often the case when the dependent variable is a qualitative measure rather than a continuous, interval-level measure. If OLS regression is done with a qualitative dependent variable:
• it may seriously misestimate the magnitude of the effects of the IVs;
• all of the standard statistical inferences (e.g. hypothesis tests, construction of confidence intervals) are unjustified;
• regression estimates will be highly sensitive to the range of particular values observed (thus making extrapolations or forecasts beyond the range of the data especially unjustified).

OLS REGRESSION AND THE LINEAR PROBABILITY MODEL (LPM). The regression model places no restrictions on the values that the independent variables take on.
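The handout's warning about extrapolation beyond the range of the data can be sketched numerically; a minimal simple-OLS fit of a linear probability model on made-up data (the variable values below are hypothetical, not Williams's):

```python
# Hypothetical data: x is an interval-level IV, y is a dummy (e.g. votes or not).
x = [0, 1, 2, 3, 8, 9, 10, 11]
y = [0, 0, 0, 1, 1, 1, 1, 1]

# Closed-form simple OLS: the linear probability model y = b0 + b1*x + u.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
b0 = my - b1 * mx

# Predicted "probabilities" are just points on the fitted line, so
# forecasting outside the observed x range yields impossible values.
p_low = b0 + b1 * (-2)   # below the observed range: negative "probability"
p_high = b0 + b1 * 20    # above it: "probability" greater than 1
print(p_low < 0, p_high > 1)  # True True
```

The same straight line also drives the within-sample predictions; the logit model discussed next avoids the boundary problem by construction.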
Introduction to Generalized Linear Models for Dichotomous Response Variables Edps/Psych/Soc 589
Introduction to Generalized Linear Models for Dichotomous Response Variables. Edps/Psych/Soc 589. Carolyn J. Anderson, Department of Educational Psychology. © Board of Trustees, University of Illinois.

Outline: introduction to GLMs for binary data; primary example: High School & Beyond; the problem; linear model for π; modeling the relationship between π(x) and x; logistic regression; probit models; SAS & R; trivia; graphing (jitter and lowess).

The Problem. Many variables have only 2 possible outcomes. Recall Bernoulli random variables: Y = 0, 1, where π is the probability of Y = 1, E(Y) = µ = π, and Var(Y) = µ(1 − µ) = π(1 − π). When we have n independent trials and take the sum of the Y's, we have a Binomial distribution with mean nπ and variance nπ(1 − π). We are typically interested in π. We will consider models for π, which can vary according to the values of explanatory variables (i.e., x1, ..., xk). To emphasize that π changes with the x's, we write π(x).

Example: High School & Beyond. Data from seniors (N = 600). Consider whether students attend an academic high school program type or a non-academic program type (Y).
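The Bernoulli and Binomial moments quoted in these notes can be checked numerically; a minimal sketch with arbitrary illustration values of π and n (not the High School & Beyond data):

```python
from math import comb

# Arbitrary illustration values, not from the course notes.
pi, n = 0.3, 50

# Binomial(n, pi) pmf over all possible counts k = 0..n.
pmf = [comb(n, k) * pi**k * (1 - pi) ** (n - k) for k in range(n + 1)]

# Mean and variance computed directly from the pmf...
mean = sum(k * p for k, p in enumerate(pmf))
var = sum((k - mean) ** 2 * p for k, p in enumerate(pmf))

# ...match the closed forms n*pi and n*pi*(1 - pi).
print(mean, n * pi)            # both 15.0 (up to floating point)
print(var, n * pi * (1 - pi))  # both 10.5 (up to floating point)
```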
1. Linear Probability Model vs. Logit (or Probit)
EEP/IAS 118. Andrew Dustan. Section Handout 13. 1. Linear Probability Model vs. Logit (or Probit). We have often used binary ("dummy") variables as explanatory variables in regressions. What about when we want to use a binary variable as the dependent variable? It's possible to use OLS:

y = β0 + β1x1 + ⋯ + βkxk + u

where y is the dummy variable. This is called the linear probability model. Estimating the equation:

P̂(y = 1|x) = ŷ = β̂0 + β̂1x1 + ⋯ + β̂kxk

ŷ is the predicted probability of having y = 1 for the given values of x1, ..., xk. Problems with the linear probability model (LPM):
1. Heteroskedasticity: can be fixed by using the "robust" option in Stata. Not a big deal.
2. Possible to get ŷ < 0 or ŷ > 1. This makes no sense: you can't have a probability below 0 or above 1. This is a fundamental problem with the LPM that we can't patch up.
Solution: use the logit or probit model. These models are specifically made for binary dependent variables and always result in 0 < ŷ < 1. Leaving the technicalities aside, the handout graphs a case where the LPM goes wrong and the logit works: [figure: two panels plotting P̂(y = 1|x) against the linear index β̂0 + β̂1x1 + ⋯ + β̂kxk, titled "Linear Probability Model" (a straight line that escapes the [0, 1] range) and "Logit (probit looks similar)" (an S-shaped curve bounded by 0 and 1)]. This is the main feature of a logit/probit that distinguishes it from the LPM: the predicted probability of y = 1 is never below 0 or above 1, and the shape is always the S-curve rather than a straight line.
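The contrast in the handout's figure can be reproduced with two toy prediction functions; the coefficients below are arbitrary placeholders, not estimates:

```python
import math

# Arbitrary placeholder coefficients for a one-regressor model.
b0, b1 = -0.3, 0.1

def lpm(x):
    # Linear probability model: the fitted straight line itself.
    return b0 + b1 * x

def logit_prob(x):
    # Logit: the same linear index pushed through the sigmoid.
    z = b0 + b1 * x
    return 1 / (1 + math.exp(-z))

for x in (-50, 0, 50):
    print(x, round(lpm(x), 3), round(logit_prob(x), 3))
# The LPM line escapes [0, 1] at extreme x; the logit prediction never does.
```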
Analysis of Binary Dependent Variables Using Linear Probability Model and Logistic Regression: A Replication Study
ANALYSIS OF BINARY DEPENDENT VARIABLES USING LINEAR PROBABILITY MODEL AND LOGISTIC REGRESSION: A REPLICATION STUDY. Submitted by Lutendo Vele. A thesis submitted to the Department of Statistics in partial fulfilment of the requirements for the Master degree in Statistics in the Faculty of Social Sciences. Supervisor: Harry J. Khamis. Spring, 2019.

ABSTRACT. The Linear Probability Model (LPM) is commonly used because it is easier to compute and interpret than logits and probits, even though the estimated probabilities may fall outside the [0, 1] interval and the linearity concept does not make much sense when dealing with probabilities. This paper extends the results of Luca, Owens, and Sharma (2015), reviewing the use of the LPM to examine whether alcohol prohibition reduces domestic violence. The regular LPM resulted in inconclusive estimates, since prohibition was omitted due to collinearity as controls were added. Luca et al. (2015) nevertheless obtained results, and further inspection of their regression commands showed that they ran a linear regression, then a post-estimation on the residuals, and then used the residuals as a dependent variable; hence their results differed from the regular LPM. Their method still produced unbounded predicted probabilities and heteroscedastic residuals, showing that OLS was inefficient and that a non-linear binary choice model such as logistic regression would be a better option. Logistic regression predicts the probability of an outcome that can take only two values and was therefore used in this paper. Unlike the LPM, logistic regression uses a non-linear function yielding a sigmoid, which bounds the predicted outcome between 0 and 1. Logistic regression presented no such complications; thus logistic regression (or another non-linear model for dichotomous dependent variables) should be used in the final analysis, while the LPM may be used at a preliminary stage to get quick results.
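A bare-bones version of the thesis's preferred model, fitting a one-predictor logistic regression by gradient ascent on the log-likelihood (toy data, not the Luca et al. replication data):

```python
import math

# Toy data with overlapping classes (not the domestic-violence data).
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [0, 0, 1, 0, 1, 1, 0, 1]

# Gradient ascent on the Bernoulli log-likelihood of the logit model.
b0, b1 = 0.0, 0.0
lr = 0.01
for _ in range(20000):
    g0 = g1 = 0.0
    for xi, yi in zip(x, y):
        p = 1 / (1 + math.exp(-(b0 + b1 * xi)))  # sigmoid of the linear index
        g0 += yi - p          # score contribution for the intercept
        g1 += (yi - p) * xi   # score contribution for the slope
    b0 += lr * g0
    b1 += lr * g1

# Unlike the LPM, every fitted probability is strictly inside (0, 1).
preds = [1 / (1 + math.exp(-(b0 + b1 * xi))) for xi in x]
print(min(preds) > 0, max(preds) < 1)  # True True
```

In practice one would use a packaged estimator (e.g., maximum likelihood via iteratively reweighted least squares, as in R's glm with a binomial family); the point here is only that the sigmoid bounds the predictions by construction.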
Logistic or Linear? Estimating Causal Effects of Experimental Treatments on Binary Outcomes Using Regression Analysis
Running head: LINEAR REGRESSION FOR BINARY OUTCOMES. Logistic or Linear? Estimating Causal Effects of Experimental Treatments on Binary Outcomes Using Regression Analysis. Robin Gomila, Princeton University. Currently in production: Journal of Experimental Psychology: General. © 2020, American Psychological Association. This paper is not the copy of record and may not exactly replicate the final, authoritative version of the article. Please do not copy or cite without the author's permission. The final article will be available, upon publication, via its DOI: 10.1037/xge0000920.

Corresponding Author: Correspondence concerning this article should be addressed to Robin Gomila, Department of Psychology, Princeton University. E-mail: [email protected]

Materials and Code: Simulations and analyses reported in this paper were computed in R. The R code can be found on the Open Science Framework (OSF): https://osf.io/ugsnm/

Author Contributions: Robin Gomila generated the idea for the paper. He wrote the article, the simulation code, and the analysis code.

Conflict of Interest: The author declares that there were no conflicts of interest with respect to the authorship or the publication of this article.

Abstract. When the outcome is binary, psychologists often use nonlinear modeling strategies such as logit or probit. These strategies are often neither optimal nor justified when the objective is to estimate causal effects of experimental treatments. Researchers need to take extra steps to convert logit and probit coefficients into interpretable quantities, and when they do, these quantities often remain difficult to understand. Odds ratios, for instance, are described as obscure in many textbooks (e.g., Gelman & Hill, 2006, p.
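The conversion step the abstract refers to can be illustrated with a single hypothetical logit coefficient (the numbers are invented for illustration, not taken from Gomila's simulations):

```python
import math

beta = 0.8   # hypothetical logit coefficient on a treatment dummy

# Odds ratio: the multiplicative change in the odds of y = 1 under treatment.
odds_ratio = math.exp(beta)

# A more interpretable translation: shift an assumed baseline probability
# through the model and report the difference on the probability scale.
p0 = 0.40                                    # assumed control-group probability
logit_p0 = math.log(p0 / (1 - p0))           # baseline on the log-odds scale
p1 = 1 / (1 + math.exp(-(logit_p0 + beta)))  # implied treated-group probability
effect_in_pp = p1 - p0                       # effect as a probability difference

print(round(odds_ratio, 2), round(effect_in_pp, 2))  # 2.23 0.2
```

Note that the probability-scale effect depends on the assumed baseline p0, which is one reason logit coefficients resist a single plain-language reading.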