Bayesian Linear and Logistic Regression


1  Bayesian Inference for Linear and Logistic Regression Parameters

Bayesian inference for simple linear and logistic regression parameters follows the usual pattern for all Bayesian analyses:

1. Form a prior distribution over all unknown parameters.

2. Write down the likelihood function of the data.

3. Use Bayes theorem to find the posterior distribution of all parameters.

• We have applied this generic formulation so far to problems with binomial distributions, normal means (with variance known), multinomial parameters, and the Poisson mean parameter. All of these problems involved only one parameter at a time (strictly speaking, more than one in the multinomial case, but only a single Dirichlet distribution was used, so all parameters were treated together, as if there were just one parameter).

• What makes regression different is that we have three unknown parameters: the intercept and slope of the line, α and β, are unknown, and the residual standard deviation (for linear regression), σ, is also unknown.

• Hence our Bayesian problem becomes slightly more complicated, since we are in a multi-parameter situation. We will therefore use the Gibbs sampler, as automatically implemented in the WinBUGS software.

Brief Sketch of Bayesian linear regression

Recall the three steps: prior → likelihood → posterior.

1. We need a joint prior distribution over α, β, and σ. We will specify these as three independent priors (which, when multiplied together, produce a joint prior). One choice is:

   • α ∼ uniform(−∞, +∞)
   • β ∼ uniform(−∞, +∞)
   • log(σ) ∼ uniform(−∞, +∞)

   With this choice, we can approximate the results from a frequentist regression analysis. Another possible choice (more practical for WinBUGS) is:

   • α ∼ Normal(0, 0.0001)
   • β ∼ Normal(0, 0.0001)
   • σ ∼ uniform(0, 100)

   where the normal priors are written WinBUGS-style, with a mean and a precision, so both have very large variances. These priors are very diffuse (how diffuse they are in practice depends on the scale of the data), so they approximate the first set of priors. Of course, if real prior information is available, it can be incorporated.

   Notes:

   • The need for the log in the first set of priors comes from the fact that the variance must be positive. The flat prior on log(σ) is equivalent to a prior density on the variance σ² that is proportional to 1/σ² (a short derivation follows these notes).

   • These are non-informative prior distributions for the three parameters. Of course, we can also include prior information when it is available, but that is beyond the scope of this course.

   • The first set of priors is in fact "improper", because the densities do not integrate to one: the area under these curves is infinite. In general this is to be avoided, since it can sometimes cause problems with posterior distributions. This is not one of those problem cases, however, and it is sometimes convenient to use a "flat" prior everywhere, so it is mentioned here (even though we use proper priors for the rest of this lecture).
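The equivalence claimed in the first note follows from a one-line change of variables (a sketch, using the standard transformation-of-densities rule):

\[
p(\log\sigma) \propto 1
\;\Longrightarrow\;
p(\sigma) = p(\log\sigma)\left|\frac{d\log\sigma}{d\sigma}\right| \propto \frac{1}{\sigma}
\;\Longrightarrow\;
p(\sigma^{2}) = p(\sigma)\left|\frac{d\sigma}{d(\sigma^{2})}\right| \propto \frac{1}{\sigma}\cdot\frac{1}{2\sigma} \propto \frac{1}{\sigma^{2}}.
\]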
2. Likelihood function in regression:

   • As is often the case, the likelihood function used in a Bayesian analysis is the same as the one used for the frequentist analysis.

   • Recall that we have normally distributed residuals, ε ∼ N(0, σ²).

   • Recall that the mean of the regression line, given that we know α and β, is y = α + β × x.

   • Putting this together, we have y ∼ N(α + β × x, σ²).

   • So for a single patient with observed value xᵢ, we have yᵢ ∼ N(α + β × xᵢ, σ²).

   • So for a single patient, the likelihood function is

\[
f(y_i) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{\bigl(y_i - (\alpha + \beta x_i)\bigr)^{2}}{2\sigma^{2}}\right\}.
\]

   • So for a group of n patients, each contributing data (xᵢ, yᵢ), the likelihood function is given by the product

\[
\prod_{i=1}^{n} f(y_i) = f(y_1)\times f(y_2)\times f(y_3)\times\cdots\times f(y_n).
\]

   • So the likelihood function is simply a bunch of normal densities multiplied together, that is, a multivariate normal likelihood of dimension n.

3. Posterior densities in regression

   • Bayes theorem now says to multiply the likelihood function (multivariate normal) by the prior. For the first prior set, the joint prior is simply 1 × 1 × 1/σ². For the second, it is two normal densities multiplied by a uniform; since the uniform density is constant, this reduces to a product of two normal densities.

   • For the first prior set, the posterior distribution is simply

\[
\prod_{i=1}^{n} f(y_i) \times \frac{1}{\sigma^{2}}.
\]

   • This is a three-dimensional posterior involving α, β, and σ².

   • By integrating this posterior density, we can obtain the marginal densities for each of α, β, and σ².

   • After integration (tedious details omitted):
     – α ∼ t_{n−2}
     – β ∼ t_{n−2}
     – σ² ∼ Inverse Chi-Square (so 1/σ² ∼ Chi-Square)

   • Note the similar results given by the Bayesian and frequentist approaches for α and β: in fact, the means and variances are the same as well, assuming the "infinite" (improper flat) priors listed above are used.

   • When the second prior set is used, the resulting formulae are more complex, and WinBUGS will be used. However, as both sets of priors are very "diffuse" or "non-informative", the numerical results will be similar to each other, and both will be similar to the inferences given by the frequentist approach (although the interpretations are very different).

   • Although the t distributions listed above can be used, Bayesian computations are usually carried out via computer programs. We will use WinBUGS to compute Bayesian posterior distributions for regression and logistic problems (a small sketch of the underlying Gibbs sampler follows this list).

   • The Bayes approach also suggests different ways to assess goodness of fit and to carry out model selection (we will see this later).
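To make the Gibbs sampler mentioned above concrete, here is a minimal sketch in Python (using numpy) for the first, flat prior set. It is only an illustration of what WinBUGS automates, not the WinBUGS implementation itself; the function name gibbs_lm, the default iteration counts, and the starting value are our own choices.

import numpy as np

def gibbs_lm(x, y, n_keep=10000, n_burn=2000, seed=1):
    # Gibbs sampler for y_i ~ N(alpha + beta*x_i, sigma^2) under the improper
    # prior p(alpha, beta, sigma^2) proportional to 1/sigma^2.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n), x])     # design matrix: intercept and slope
    XtX_inv = np.linalg.inv(X.T @ X)
    coef_ls = XtX_inv @ X.T @ y              # least-squares estimates of (alpha, beta)
    sigma2 = float(np.var(y))                # crude starting value for sigma^2
    draws = np.empty((n_keep, 3))            # columns: alpha, beta, sigma
    for t in range(n_burn + n_keep):
        # (alpha, beta) | sigma^2, y  ~  Normal(coef_ls, sigma^2 * (X'X)^-1)
        coef = rng.multivariate_normal(coef_ls, sigma2 * XtX_inv)
        # sigma^2 | alpha, beta, y  ~  Inverse-Gamma(n/2, SSR/2)
        ssr = float(np.sum((y - X @ coef) ** 2))
        sigma2 = 1.0 / rng.gamma(shape=n / 2.0, scale=2.0 / ssr)
        if t >= n_burn:
            draws[t - n_burn] = [coef[0], coef[1], np.sqrt(sigma2)]
    return draws                             # posterior draws of alpha, beta, sigma

Run on the fluoride data introduced below, the means and quantiles of these draws should be close to the WinBUGS output reported later, because both prior sets are very diffuse.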
Example

Consider the following data:

Community number   DMF per 100 children   Fluoride concentration (ppm)
 1                  236                   1.9
 2                  246                   2.6
 3                  252                   1.8
 4                  258                   1.2
 5                  281                   1.2
 6                  303                   1.2
 7                  323                   1.3
 8                  343                   0.9
 9                  412                   0.6
10                  444                   0.5
11                  556                   0.4
12                  652                   0.3
13                  673                   0.0
14                  703                   0.2
15                  706                   0.1
16                  722                   0.0
17                  733                   0.2
18                  772                   0.1
19                  810                   0.0
20                  823                   0.1
21                 1027                   0.1

We will now analyze these data from a Bayesian viewpoint, using WinBUGS. The program we need is:

model
{
   for (i in 1:21)                            # loop over cities
   {
      mu.dmf[i] <- alpha + beta*fl[i]         # regression equation
      dmf[i] ~ dnorm(mu.dmf[i],tau)           # distribution of individual values
   }
   alpha ~ dnorm(0.0,0.000001)                # prior for intercept
   beta ~ dnorm(0.0,0.000001)                 # prior for slope
   sigma ~ dunif(0,400)                       # prior for residual SD
   tau <- 1/(sigma*sigma)                     # precision required by WinBUGS
   for (i in 1:21)
   {                                          # calculate residuals
      residual[i] <- dmf[i] - mu.dmf[i]
   }
   pred.mean.1.7 <- alpha + beta*1.7          # mean prediction for fl=1.7
   pred.ind.1.7 ~ dnorm(pred.mean.1.7, tau)   # individual prediction for fl=1.7
}

Data are entered in one of the following two formats (both are equivalent; note that a blank line is required after the END statement):

dmf[] fl[]
236 1.9
246 2.6
252 1.8
258 1.2
281 1.2
303 1.2
323 1.3
343 0.9
412 0.6
444 0.5
556 0.4
652 0.3
673 0.0
703 0.2
706 0.1
722 0.0
733 0.2
772 0.1
810 0.0
823 0.1
1027 0.1
END

or

list(dmf=c(236, 246, 252, 258, 281, 303, 323, 343, 412, 444, 556,
           652, 673, 703, 706, 722, 733, 772, 810, 823, 1027),
     fl=c(1.9, 2.6, 1.8, 1.2, 1.2, 1.2, 1.3, 0.9, 0.6, 0.5, 0.4,
          0.3, 0.0, 0.2, 0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 0.1))

Let's look at a graph of these data. (The scatterplot of DMF index against fluoride concentration is not reproduced here; it shows DMF counts falling steeply as the fluoride concentration rises.)

We will use these initial values (rather arbitrary, just chosen to get into the right general area to start):

list(alpha=100, beta=0, sigma=100)

Running 2,000 burn-in iterations and then 10,000 further iterations produces the following results:

node             mean      sd       2.5%      median    97.5%
alpha            730.1     41.45    646.8     730.6     809.7
beta            -277.4     40.86   -358.8    -277.4    -196.8
mu.dmf[1]        203.0     57.16     89.49    203.0     314.1
mu.dmf[2]          8.869   82.93   -152.8       8.885   171.1
mu.dmf[3]        230.8     53.72    124.3     230.6     335.6
mu.dmf[4]        397.2     35.98    325.8     397.5     467.1
mu.dmf[5]        397.2     35.98    325.8     397.5     467.1
mu.dmf[6]        397.2     35.98    325.8     397.5     467.1
mu.dmf[7]        369.5     38.42    293.3     369.6     444.2
mu.dmf[8]        480.4     30.81    418.9     480.4     540.8
mu.dmf[9]        563.7     30.09    503.0     563.4     621.8
mu.dmf[10]       591.4     30.94    529.2     591.4     651.0
mu.dmf[11]       619.1     32.29    554.6     619.3     680.9
mu.dmf[12]       646.9     34.08    578.9     647.1     712.3
mu.dmf[13]       730.1     41.45    646.8     730.6     809.7
mu.dmf[14]       674.6     36.24    602.0     675.1     744.8
mu.dmf[15]       702.4     38.72    624.6     702.7     777.0
mu.dmf[16]       730.1     41.45    646.8     730.6     809.7
mu.dmf[17]       674.6     36.24    602.0     675.1     744.8
mu.dmf[18]       702.4     38.72    624.6     702.7     777.0
mu.dmf[19]       730.1     41.45    646.8     730.6     809.7
mu.dmf[20]       702.4     38.72    624.6     702.7     777.0
mu.dmf[21]       702.4     38.72    624.6     702.7     777.0
pred.ind.1.7     257.3    145.8     -33.89    255.3     546.8
pred.mean.1.7    258.5     50.37    158.7     258.5     356.8
residual[1]       32.96    57.16    -78.05     32.97    146.5
residual[2]      237.1     82.93     75.25    237.1     398.8
residual[3]       21.22    53.72    -83.55     21.39    127.9
residual[4]     -139.2     35.98   -209.0    -139.5     -67.76
residual[5]     -116.2     35.98   -186.0    -116.5     -44.76
residual[6]      -94.22    35.98   -164.0     -94.46    -22.76
residual[7]      -46.48    38.42   -121.0     -46.57     29.72
residual[8]     -137.4     30.81   -197.8    -137.4     -75.79
residual[9]     -151.7     30.09   -209.8    -151.4     -90.83
residual[10]    -147.4     30.94   -206.9    -147.4     -85.25
residual[11]     -63.13    32.29   -124.9     -63.27      1.432
residual[12]       5.126   34.08    -60.23      4.903    73.22
residual[13]     -57.09    41.45   -136.7     -57.55     26.23
residual[14]      28.39    36.24    -41.83     27.93    101.0
residual[15]       3.647   38.72    -71.02      3.295    81.4
residual[16]      -8.092   41.45    -87.68     -8.55     75.23
residual[17]      58.39    36.24    -11.83     57.93    131.0
residual[18]      69.65    38.72     -5.02     69.3     147.4
residual[19]      79.91    41.45      0.324    79.45    163.2
residual[20]     120.6     38.72     45.98    120.3     198.4
residual[21]     324.6     38.72    250.0     324.3     402.4
sigma            135.0     21.83     98.52    132.5     183.2

Note the pattern in the residuals: they are large and positive at both ends of the fluoride range (for example, residual[2] and residual[21]) and mostly negative in the middle, so a quadratic model may fit better.
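As a quick, non-Bayesian cross-check of these results and of the quadratic suggestion, the sketch below fits the straight line and a quadratic curve by ordinary least squares with numpy. This is our own illustration, not part of the original analysis; because the priors above are so diffuse, the least-squares intercept and slope should land close to the posterior means of alpha and beta reported above.

import numpy as np

# DMF data from the example above (21 communities).
fl = np.array([1.9, 2.6, 1.8, 1.2, 1.2, 1.2, 1.3, 0.9, 0.6, 0.5, 0.4,
               0.3, 0.0, 0.2, 0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 0.1])
dmf = np.array([236, 246, 252, 258, 281, 303, 323, 343, 412, 444, 556,
                652, 673, 703, 706, 722, 733, 772, 810, 823, 1027], dtype=float)

# Frequentist straight-line fit: compare with the posterior means/medians
# of alpha and beta in the WinBUGS output above.
b1, b0 = np.polyfit(fl, dmf, 1)            # slope, intercept
print("least-squares intercept:", b0, "slope:", b1)

# Quadratic fit, to see whether allowing curvature reduces the residual variation.
c2, c1, c0 = np.polyfit(fl, dmf, 2)        # coefficients, highest degree first
rss_lin = np.sum((dmf - (b0 + b1 * fl)) ** 2)
rss_quad = np.sum((dmf - (c0 + c1 * fl + c2 * fl ** 2)) ** 2)
print("residual SS, linear:", rss_lin, "quadratic:", rss_quad)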