Binary Response and Logistic Regression Analysis
Recommended publications
Generalized Linear Models (GLMs)
San José State University, Math 261A: Regression Theory & Methods
Generalized Linear Models (GLMs)
Dr. Guangliang Chen

This lecture is based on the following textbook sections:
• Chapter 13: 13.1 – 13.3

Outline of this presentation:
• What is a GLM?
• Logistic regression
• Poisson regression

What is a GLM? In ordinary linear regression, we assume that the response is a linear function of the regressors plus Gaussian noise:

y = β₀ + β₁x₁ + ⋯ + βₖxₖ + ε,

where the linear form is x′β and the noise is ε ∼ N(0, σ²), so that y ∼ N(x′β, σ²). The model can be reformulated in terms of
• the distribution of the response: y | x ∼ N(µ, σ²), and
• the dependence of the mean on the predictors: µ = E(y | x) = x′β.

[Figure: simulated data (x, y) with the fitted line β₀ + β₁x, for β = (1, 2).]

Generalized linear models (GLMs) extend linear regression by allowing the response variable to have
• a general distribution (with mean µ = E(y | x)), and
• a mean that depends on the predictors through a link function g: that is, g(µ) = x′β, or equivalently, µ = g⁻¹(x′β).

In a GLM, the response is typically assumed to have a distribution in the exponential family, a large class of probability distributions with pdfs of the form

f(x | θ) = a(x) b(θ) exp(c(θ) · T(x)),

including
• Normal - ordinary linear regression
• Bernoulli - logistic regression, modeling binary data
• Binomial - multinomial logistic regression, modeling general categorical data
• Poisson - Poisson regression, modeling count data
• Exponential, Gamma - survival analysis
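As a concrete companion to the three regressions named above, here is a minimal Python sketch (using statsmodels on simulated data; the variable names, coefficients, and data are illustrative assumptions, not part of the lecture) fitting a Gaussian, a Bernoulli/logit, and a Poisson GLM:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = sm.add_constant(rng.uniform(0, 1, size=(n, 2)))  # intercept + two regressors
beta = np.array([0.5, 1.0, -2.0])                    # invented true coefficients
eta = X @ beta                                       # linear predictor x'beta

# Normal response, identity link: ordinary linear regression
y_norm = eta + rng.normal(0, 1, n)
print(sm.GLM(y_norm, X, family=sm.families.Gaussian()).fit().params)

# Bernoulli response, logit link: logistic regression
y_bin = rng.binomial(1, 1 / (1 + np.exp(-eta)))
print(sm.GLM(y_bin, X, family=sm.families.Binomial()).fit().params)

# Poisson response, log link: Poisson regression
y_cnt = rng.poisson(np.exp(eta))
print(sm.GLM(y_cnt, X, family=sm.families.Poisson()).fit().params)
```

Each fit should recover coefficients close to the invented beta, illustrating that the same linear predictor serves all three models once the link and distribution are chosen.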
The Hexadecimal Number System and Memory Addressing
APPENDIX C

Understanding the number system and the coding system that computers use to store data and communicate with each other is fundamental to understanding how computers work. Early attempts to invent an electronic computing device met with disappointing results as long as inventors tried to use the decimal number system, with the digits 0–9. Then John Atanasoff proposed using a coding system that expressed everything in terms of different sequences of only two numerals: one represented by the presence of a charge and one represented by the absence of a charge. The numbering system that can be supported by the expression of only two numerals is called base 2, or binary; it was invented by Ada Lovelace many years before, using the numerals 0 and 1. Under Atanasoff's design, all numbers and other characters would be converted to this binary number system, and all storage, comparisons, and arithmetic would be done using it. Even today, this is one of the basic principles of computers. Every character or number entered into a computer is first converted into a series of 0s and 1s. Many coding schemes and techniques have been invented to manipulate these 0s and 1s, called bits for binary digits. The most widespread binary coding scheme for microcomputers, which is recognized as the microcomputer standard, is called ASCII (American Standard Code for Information Interchange). (Appendix B lists the binary code for the basic 127-character set.) In ASCII, each character is assigned an 8-bit code called a byte.
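A short Python sketch of the conversion described above: each character typed into the computer becomes an ASCII code stored as an 8-bit byte (the sample string is arbitrary):

```python
# Each character maps to an ASCII code point, stored as one 8-bit byte.
for ch in "Hi!":
    code = ord(ch)                       # ASCII code, e.g. 'H' -> 72
    print(ch, code, format(code, '08b'), hex(code))

# A whole string is just the concatenation of those bytes.
print("Hi!".encode("ascii"))             # b'Hi!' -> bytes 72, 105, 33
```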
Understanding Linear and Logistic Regression Analyses
Andrew Worster, MD, MSc;*† Jerome Fan, MD;* Afisi Ismaila, MSc†

Regression analysis, also termed regression modeling, is an increasingly common statistical method used to describe and quantify the relation between a clinical outcome of interest and one or more other variables. In this issue of CJEM, Cummings and Mayes used linear and logistic regression to determine whether the type of trauma team leader (TTL) impacts emergency department (ED) length-of-stay or survival.¹ The purpose of this educational primer is to provide an easily understood overview of these methods of statistical analysis. We hope that this primer will not only help readers interpret the Cummings and Mayes study, but also other research that uses similar methodology. For example, a researcher could evaluate the potential for injury severity score (ISS) to predict ED length-of-stay by first producing a scatter plot of ISS graphed against ED length-of-stay to determine whether an apparent linear relation exists, and then by deriving the best-fit straight line for the data set using linear regression carried out by statistical software. The mathematical formula for this relation would be: ED length-of-stay = k(ISS) + c. In this equation, k (the slope of the line) indicates the factor by which length-of-stay changes as ISS changes, and c (the "constant") is the value of length-of-stay when ISS equals zero, where the line crosses the vertical axis.²
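To make the hypothetical ISS example concrete, here is a minimal sketch deriving k and c by least squares; the numbers are invented purely for illustration:

```python
import numpy as np

iss = np.array([9, 14, 17, 22, 25, 29, 34])           # injury severity scores (invented)
los = np.array([3.1, 4.0, 4.8, 6.2, 6.9, 8.1, 9.4])   # ED length-of-stay, hours (invented)

k, c = np.polyfit(iss, los, deg=1)   # best-fit line: los = k*iss + c
print(f"length-of-stay = {k:.3f} * ISS + {c:.3f}")
```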
Application of General Linear Models (GLM) to Assess Nodule Abundance Based on a Photographic Survey (Case Study from IOM Area, Pacific Ocean)
Monika Wasilewska-Błaszczyk * and Jacek Mucha

Department of Geology of Mineral Deposits and Mining Geology, Faculty of Geology, Geophysics and Environmental Protection, AGH University of Science and Technology, 30-059 Cracow, Poland; [email protected]
* Correspondence: [email protected]

Abstract: The success of the future exploitation of the Pacific polymetallic nodule deposits depends on an accurate estimation of their resources, especially in small batches scheduled for extraction in the short term. Estimation based only on the results of direct seafloor sampling using box corers is burdened with a large error due to the long sampling interval and high variability of the nodule abundance. Therefore, estimations should take into account the results of bottom photograph analyses performed systematically and in large numbers along the course of a research vessel. For photographs taken at the direct sampling sites, the relationship linking the nodule abundance with the independent variables (the percentage of seafloor nodule coverage, the genetic types of nodules in the context of their fraction distribution, and the degree of sediment coverage of nodules) was determined using the general linear model (GLM). Compared to the estimates obtained with a simple linear model linking this parameter only with the seafloor nodule coverage, a significant decrease in the standard prediction error, from 4.2 to 2.5 kg/m², was found. The use of the GLM for the assessment of nodule abundance in individual sites covered by bottom photographs, outside of direct sampling sites, should contribute to a significant increase in the accuracy of the estimation of nodule resources.
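A sketch of the kind of multi-predictor model the abstract describes, fit by ordinary least squares in Python; the predictor names, units, and data are invented assumptions, not the authors' dataset:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
coverage = rng.uniform(10, 70, n)   # % seafloor nodule coverage (invented)
sediment = rng.uniform(0, 1, n)     # degree of sediment cover, 0-1 (invented)
abundance = 0.3 * coverage - 4.0 * sediment + rng.normal(0, 2.5, n)  # kg/m^2

X = sm.add_constant(np.column_stack([coverage, sediment]))
fit = sm.OLS(abundance, X).fit()
print(fit.params)                  # intercept and coefficients
print(fit.mse_resid ** 0.5)        # residual standard error (kg/m^2)

# Predict abundance at a new photographed site outside the sampling grid
x_new = np.array([[1.0, 45.0, 0.2]])  # constant, coverage %, sediment cover
print(fit.predict(x_new))
```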
The Simple Linear Regression Model
Suppose we have a data set consisting of n bivariate observations {(x₁, y₁), ..., (xₙ, yₙ)}. Response variable y and predictor variable x satisfy the simple linear model if they obey the model

yᵢ = β₀ + β₁xᵢ + εᵢ,  i = 1, ..., n,  (1)

where the intercept and slope coefficients β₀ and β₁ are unknown constants and the random errors {εᵢ} satisfy the following conditions:
1. The errors ε₁, ..., εₙ all have mean 0, i.e., μ_εᵢ = 0 for all i.
2. The errors ε₁, ..., εₙ all have the same variance σ², i.e., σ²_εᵢ = σ² for all i.
3. The errors ε₁, ..., εₙ are independent random variables.
4. The errors ε₁, ..., εₙ are normally distributed.

Note that the author provides these assumptions on page 564 BUT ORDERS THEM DIFFERENTLY.

Fitting the Simple Linear Model: Estimating β₀ and β₁

Suppose we believe our data obey the simple linear model. The next step is to fit the model by estimating the unknown intercept and slope coefficients β₀ and β₁. There are various ways of estimating these from the data, but we will use the least squares criterion invented by Gauss. The least squares estimates of β₀ and β₁, which we will denote by β̂₀ and β̂₁ respectively, are the values of β₀ and β₁ which minimize the sum of squared errors S(β₀, β₁):

S(β₀, β₁) = Σᵢ₌₁ⁿ eᵢ² = Σᵢ₌₁ⁿ [yᵢ − ŷᵢ]² = Σᵢ₌₁ⁿ [yᵢ − (β₀ + β₁xᵢ)]²

where the i-th modeling error eᵢ is simply the difference between the i-th value of the response variable yᵢ and the fitted/predicted value ŷᵢ.
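Setting ∂S/∂β₀ = ∂S/∂β₁ = 0 yields the familiar closed-form least squares estimates; a minimal sketch computing them directly on invented data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # invented data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Closed-form least squares estimates from minimizing S(b0, b1)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

residuals = y - (b0 + b1 * x)              # the modeling errors e_i
print(b0, b1, np.sum(residuals ** 2))      # S evaluated at the minimizer
```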
Generalized Linear Models
CHAPTER 6: Generalized linear models

6.1 Introduction

Generalized linear modeling is a framework for statistical analysis that includes linear and logistic regression as special cases. Linear regression directly predicts continuous data y from a linear predictor Xβ = β₀ + X₁β₁ + ⋯ + Xₖβₖ. Logistic regression predicts Pr(y = 1) for binary data from a linear predictor with an inverse-logit transformation. A generalized linear model involves:
1. A data vector y = (y₁, ..., yₙ)
2. Predictors X and coefficients β, forming a linear predictor Xβ
3. A link function g, yielding a vector of transformed data ŷ = g⁻¹(Xβ) that are used to model the data
4. A data distribution, p(y | ŷ)
5. Possibly other parameters, such as variances, overdispersions, and cutpoints, involved in the predictors, link function, and data distribution.

The options in a generalized linear model are the transformation g and the data distribution p.
• In linear regression, the transformation is the identity (that is, g(u) ≡ u) and the data distribution is normal, with standard deviation σ estimated from data.
• In logistic regression, the transformation is the inverse-logit, g⁻¹(u) = logit⁻¹(u) (see Figure 5.2a on page 80), and the data distribution is defined by the probability for binary data: Pr(y = 1) = ŷ.

This chapter discusses several other classes of generalized linear model, which we list here for convenience:
• The Poisson model (Section 6.2) is used for count data; that is, where each data point yᵢ can equal 0, 1, 2, .... The usual transformation used here is the logarithmic, so that its inverse g⁻¹(u) = exp(u) transforms a continuous linear predictor Xᵢβ to a positive ŷᵢ. The data distribution is Poisson. It is usually a good idea to add a parameter to this model to capture overdispersion, that is, variation in the data beyond what would be predicted from the Poisson distribution alone.
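A small Python sketch of the three transformation choices listed above; the function and variable names are mine, not the book's:

```python
import numpy as np

def inverse_logit(u):
    """Inverse-logit: maps a linear predictor to Pr(y = 1) in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-u))

xb = np.array([-2.0, 0.0, 1.5])      # example values of the linear predictor Xb

yhat_linear   = xb                   # identity link: ordinary linear regression
yhat_logistic = inverse_logit(xb)    # inverse-logit: probabilities for binary y
yhat_poisson  = np.exp(xb)           # exp (inverse of log link): positive expected counts

print(yhat_linear, yhat_logistic, yhat_poisson)
```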
Modelling Binary Outcomes
01/12/2020

Contents
1 Modelling Binary Outcomes
 1.1 Cross-tabulation
  1.1.1 Measures of Effect
  1.1.2 Limitations of Tabulation
 1.2 Linear Regression and dichotomous outcomes
  1.2.1 Probabilities and Odds
 1.3 The Binomial Distribution
 1.4 The Logistic Regression Model
  1.4.1 Parameter Interpretation
 1.5 Logistic Regression in Stata
  1.5.1 Using predict after logistic
 1.6 Other Possible Models for Proportions
  1.6.1 Log-binomial
  1.6.2 Other Link Functions
2 Logistic Regression Diagnostics
 2.1 Goodness of Fit
  2.1.1 R²
  2.1.2 Hosmer-Lemeshow test
  2.1.3 ROC Curves
 2.2 Assessing Fit of Individual Points
 2.3 Problems of separation
3 Logistic Regression Practical
 3.1 Datasets
 3.2 Cross-tabulation and Logistic Regression
 3.3 Introducing Continuous Variables
 3.4 Goodness of Fit
 3.5 Diagnostics
 3.6 The CHD Data

1 Modelling Binary Outcomes

1.1 Cross-tabulation

If we are interested in the association between two binary variables (for example, the presence or absence of a given disease and the presence or absence of a given exposure), then we can simply count the number of subjects with the exposure and the disease; those with the exposure but not the disease; those without the exposure who have the disease; and those without the exposure who do not have the disease.
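The counts just described form a 2×2 table; a minimal Python sketch (invented counts) of the risk and odds comparisons developed later in the notes:

```python
# 2x2 table of invented counts:
#                disease   no disease
# exposed           30         70
# unexposed         15        135
a, b, c, d = 30, 70, 15, 135

risk_exposed   = a / (a + b)
risk_unexposed = c / (c + d)
odds_exposed   = a / b
odds_unexposed = c / d

print("risk ratio:", risk_exposed / risk_unexposed)
print("odds ratio:", odds_exposed / odds_unexposed)   # equals (a*d)/(b*c)
```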
Chapter 2 Simple Linear Regression Analysis
The simple linear regression model

We consider the modelling between the dependent and one independent variable. When there is only one independent variable in the linear regression model, the model is generally termed a simple linear regression model. When there is more than one independent variable in the model, the linear model is termed a multiple linear regression model.

The linear model

Consider a simple linear regression model

y = β₀ + β₁X + ε

where y is termed the dependent or study variable and X is termed the independent or explanatory variable. The terms β₀ and β₁ are the parameters of the model. The parameter β₀ is termed the intercept term, and the parameter β₁ is termed the slope parameter. These parameters are usually called regression coefficients. The unobservable error component ε accounts for the failure of the data to lie on the straight line and represents the difference between the true and observed realizations of y. There can be several reasons for such a difference, e.g., the effect of all deleted variables in the model, variables that may be qualitative, inherent randomness in the observations, etc. We assume that ε is observed as an independent and identically distributed random variable with mean zero and constant variance σ². Later, we will additionally assume that ε is normally distributed. The independent variables are viewed as controlled by the experimenter, so X is considered non-stochastic, whereas y is viewed as a random variable with E(y) = β₀ + β₁X and Var(y) = σ². Sometimes X can also be a random variable.
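A short simulation of this model under the stated assumptions (ε i.i.d. with mean zero and constant variance σ², X fixed by the experimenter); the parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
beta0, beta1, sigma = 2.0, 0.8, 1.5        # invented true parameters

x = np.linspace(0, 10, 50)                 # X controlled, hence non-stochastic
eps = rng.normal(0.0, sigma, size=x.size)  # i.i.d. errors, mean 0, variance sigma^2
y = beta0 + beta1 * x + eps                # random response

# y scatters around its mean E(y) = beta0 + beta1*X with variance sigma^2
print(y.mean(), (beta0 + beta1 * x).mean())
print(y.var(ddof=1))
```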
Section II Descriptive Statistics for Continuous & Binary Data (Including Survival Data)
II - Checklist of summary statistics

Do you know what all of the following are and what they are for?

One variable - continuous data (variables like age, weight, serum levels, IQ, days to relapse):
- Mean (Ȳ)
- Median = 50th percentile
- Mode = most frequently occurring value
- Quartiles: Q1 = 25th percentile, Q2 = 50th percentile, Q3 = 75th percentile
- Percentile
- Range (max - min)
- IQR - interquartile range = Q3 - Q1
- SD - standard deviation (most useful when data are Gaussian); for a rough approximation, SD ≈ 0.75 IQR, IQR ≈ 1.33 SD
- Survival curve = life table
- CDF = cumulative distribution function = 1 - survival curve
- Hazard rate (death rate)

One variable - discrete data (diseased yes or no, gender, diagnosis category, alive/dead):
- Risk = proportion = P = Odds/(1 + Odds)
- Odds = P/(1 - P)

Relation between two (or more) continuous variables (Y vs X):
- Correlation coefficient (r)
- Intercept = b0 = value of Y when X is zero
- Slope = regression coefficient = b, in units of Y/X
- Multiple regression coefficient (bi) from a regression equation: Y = b0 + b1X1 + b2X2 + … + bkXk + error

Relation between two (or more) discrete variables:
- Risk ratio = relative risk = RR, and log risk ratio
- Odds ratio (OR), and log odds ratio
- Logistic regression coefficient (= log odds ratio) from a logistic regression equation: ln(P/(1 - P)) = b0 + b1X1 + b2X2 + … + bkXk

Relation between continuous outcome and discrete predictor:
- Analysis of variance = comparing means

Evaluating medical tests, where the test is positive or negative for a disease:
- In those with disease: Sensitivity = 1 - false negative proportion
- In those without disease: Specificity = 1 - false positive proportion
- ROC curve: plot of sensitivity versus false positive proportion (1 - specificity), used to find the best cutoff value of a continuously valued measurement

DESCRIPTIVE STATISTICS

A few definitions: Nominal data come as unordered categories, such as gender and color.
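A short Python sketch computing several items from the continuous-data checklist on a small invented sample, plus the risk/odds conversion:

```python
import numpy as np

age = np.array([34, 41, 45, 52, 58, 63, 70, 71, 76, 84])  # invented ages

mean   = age.mean()
median = np.median(age)                  # 50th percentile
q1, q3 = np.percentile(age, [25, 75])
iqr    = q3 - q1
sd     = age.std(ddof=1)                 # sample standard deviation
print(mean, median, q1, q3, iqr, sd)
print(0.75 * iqr, sd)                    # rough check: SD ~ 0.75 * IQR if Gaussian-like

p = 0.2                                  # a risk (proportion)
odds = p / (1 - p)
print(odds, odds / (1 + odds))           # odds, and back to risk
```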
The Conspiracy of Random Predictors and Model Violations Against Classical Inference in Regression
A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, M. Traskin, K. Zhang, L. Zhao
March 10, 2014
Dedicated to Halbert White (†2012)

Abstract

We review the early insights of Halbert White, who over thirty years ago inaugurated a form of statistical inference for regression models that is asymptotically correct even under "model misspecification." This form of inference, which is pervasive in econometrics, relies on the "sandwich estimator" of standard error. Whereas the classical theory of linear models in statistics assumes models to be correct and predictors to be fixed, White permits models to be "misspecified" and predictors to be random. Careful reading of his theory shows that it is a synergistic effect, a "conspiracy," of nonlinearity and randomness of the predictors that has the deepest consequences for statistical inference. It will be seen that the synonym "heteroskedasticity-consistent estimator" for the sandwich estimator is misleading, because nonlinearity is a more consequential form of model deviation than heteroskedasticity, and both forms are handled asymptotically correctly by the sandwich estimator. The same analysis shows that a valid alternative to the sandwich estimator is given by the "pairs bootstrap," for which we establish a direct connection to the sandwich estimator. We continue with an asymptotic comparison of the sandwich estimator and the standard error estimator from classical linear models theory. The comparison shows that when standard errors from linear models theory deviate from their sandwich analogs, they are usually too liberal, but occasionally they can be too conservative as well.
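A brief sketch of the comparison the abstract describes, using statsmodels; the data-generating process (with a deliberately omitted nonlinearity) is an invented illustration, not the paper's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)                       # random predictor
y = x + 0.5 * x**2 + rng.normal(size=n)      # the fit below omits the x^2 term

X = sm.add_constant(x)
classical = sm.OLS(y, X).fit()               # classical linear-model SEs
robust    = sm.OLS(y, X).fit(cov_type="HC0") # White's sandwich SEs

print(classical.bse)   # standard errors assuming the model is correct
print(robust.bse)      # asymptotically valid under misspecification
```

With both nonlinearity and a random predictor present, the two sets of standard errors can differ noticeably, which is the "conspiracy" the paper analyzes.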
Yes, No, Maybe So: Tips and Tricks for Using 0/1 Binary Variables Laurie Hamilton, Healthcare Management Solutions LLC, Columbia MD
NESUG 2012, Coders' Corner

ABSTRACT
Many SAS® programmers are familiar with the use of 0/1 binary variables in various statistical procedures. But 0/1 variables are also useful in basic database construction, data profiling, and QC techniques. By definition, a binary variable is a flavor of categorical variable: an outcome or response measure with only two possible values. In this paper, we will use a sample dataset composed of 0/1 numeric binary variables to demonstrate some tricks and tips for quick data profiling and quality assurance using basic SAS functions and the MEANS and FREQ procedures.

INTRODUCTION
Binary variables are a type of categorical variable, specifically those variables which can have only a Yes or No value. We see these types of variables often in questionnaire-type data, which is the example we will use in this paper. We will begin with a brief discussion of the options for constructing 0/1 variables, including selection of data type and the implications for the coding of missing values. We will then look at some tips and tricks for using the SAS® functions SUM, NMISS and CAT to create summary variables from multiple binary variables across individual observations, and also demonstrate one method for constructing a pattern variable which combines all the information in multiple binary variables into one character string. The SAS® procedures PROC MEANS and PROC FREQ are ideally suited to quickly profiling data composed of 0/1 numeric binary variables, and we will explore some applications of those procedures using our sample data.
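The same row-wise techniques have close analogs outside SAS; a hedged pandas sketch (invented questionnaire data) mirroring SUM, NMISS, CAT, and PROC MEANS-style profiling; this is an analogy, not the paper's SAS code:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({            # invented 0/1 questionnaire answers, NaN = missing
    "q1": [1, 0, 1, np.nan],
    "q2": [0, 0, 1, 1],
    "q3": [1, 1, np.nan, 0],
})
items = ["q1", "q2", "q3"]

df["n_yes"]  = df[items].sum(axis=1)             # cf. SUM(of q1-q3)
df["n_miss"] = df[items].isna().sum(axis=1)      # cf. NMISS(of q1-q3)
df["pattern"] = df[items].apply(                 # cf. CAT: one string per observation
    lambda r: "".join("." if pd.isna(v) else str(int(v)) for v in r), axis=1)

print(df)
print(df[items].mean())   # cf. PROC MEANS: proportion of 1s per item (missing skipped)
```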
Data Representation
Chapter Three: Data Representation

A major stumbling block many beginners encounter when attempting to learn assembly language is the common use of the binary and hexadecimal numbering systems. Many programmers think that hexadecimal (or "hex") numbers represent absolute proof that God never intended anyone to work in assembly language. While it is true that hexadecimal numbers are a little different from what you may be used to, their advantages outweigh their disadvantages by a large margin. Nevertheless, understanding these numbering systems is important because their use simplifies other complex topics, including boolean algebra and logic design, signed numeric representation, character codes, and packed data.

3.1 Chapter Overview

This chapter discusses several important concepts including the binary and hexadecimal numbering systems; binary data organization (bits, nibbles, bytes, words, and double words); signed and unsigned numbering systems; arithmetic, logical, shift, and rotate operations on binary values; bit fields; and packed data. This is basic material, and the remainder of this text depends upon your understanding of these concepts. If you are already familiar with these terms from other courses or study, you should at least skim this material before proceeding to the next chapter. If you are unfamiliar with this material, or only vaguely familiar with it, you should study it carefully before proceeding. All of the material in this chapter is important! Do not skip over any material. In addition to the basic material, this chapter also introduces some new HLA statements and HLA Standard Library routines.

3.2 Numbering Systems

Most modern computer systems do not represent numeric values using the decimal system.
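A hedged Python sketch (the chapter itself uses HLA) of several operations listed in the overview: hexadecimal notation, masking, shifting, and packed bit fields; the date field layout is an invented example:

```python
value = 0xB32F                      # hexadecimal literal (45871 decimal)

low_nibble = value & 0x0F           # mask off the low 4 bits
high_byte  = (value >> 8) & 0xFF    # shift right, then mask one byte
print(bin(value), hex(low_nibble), hex(high_byte))

# Packed data: suppose bits 0-4 hold a day, 5-8 a month, 9-15 a year
# offset (an invented layout, purely for illustration).
date  = (25 & 0x1F) | ((12 & 0x0F) << 5) | ((99 & 0x7F) << 9)
day   = date & 0x1F
month = (date >> 5) & 0x0F
year  = (date >> 9) & 0x7F
print(day, month, year)             # unpacks back to 25, 12, 99
```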