Power Analysis for the Wald, LR, Score, and Gradient Tests in a Marginal Maximum Likelihood Framework: Applications in IRT

Felix Zimmer1, Clemens Draxler2, and Rudolf Debelak1

1University of Zurich
2The Health and Life Sciences University

March 9, 2021

Abstract

The Wald, likelihood ratio, score, and the recently proposed gradient statistics can be used to assess a broad range of hypotheses in item response theory models, for instance, to check the overall model fit or to detect differential item functioning. We introduce new methods for power analysis and sample size planning that can be applied when marginal maximum likelihood estimation is used. This opens up the application to a variety of IRT models, which are increasingly used in practice, e.g., in large-scale educational assessments. An analytical method utilizes the asymptotic distributions of the statistics under alternative hypotheses. For larger numbers of items, we also provide a sampling-based method, which is necessary due to the exponentially increasing computational load of the analytical approach. We performed extensive simulation studies in two practically relevant settings, i.e., testing a Rasch model against a 2PL model and testing for differential item functioning. The observed distributions of the test statistics and the power of the tests agreed well with the predictions of the proposed methods. We provide an openly accessible R package that implements the methods for user-supplied hypotheses.

Keywords: marginal maximum likelihood, item response theory, power analysis, Wald test, score test, likelihood ratio, gradient test

We wish to thank Carolin Strobl for helpful comments in the course of this research. This work was supported by an SNF grant (188929) to Rudolf Debelak.
Correspondence concerning this manuscript should be addressed to Felix Zimmer, Department of Psychology, University of Zurich, Binzmuehlestrasse 14, 8050 Zurich, Switzerland. Email: [email protected]

1 Introduction

When applying item response theory (IRT), it is often necessary to check whether a model fits the data or whether the parameters differ between groups of participants. To test such research questions, the Wald, likelihood ratio (LR), and score statistics are established tools (for an introduction, see, e.g., Glas & Verhelst, 1995). The determination of their statistical power serves critical purposes during several phases of a research project (e.g., Cohen, 1992; Faul et al., 2009). Before data collection, it can be determined how many participants are required to achieve a certain level of power. For model fitting purposes, this answers the question of how many participants are needed to reject a wrongly assumed model with a predetermined desired probability. This is a reasonable approach for empirically substantiating interpretations based on the model, e.g., concerning the participants' abilities. Furthermore, since the three statistics are only asymptotically equivalent (Silvey, 1959; Wald, 1943), one may infer which of the tests has the highest power in finite samples, in particular, at practically relevant sample sizes. After data collection, a power analysis provides an additional perspective on the observed effect and the size of the sample. Assuming that the null hypothesis is indeed false, one can infer the probability of rejecting it for any sample size. This can be applied when planning the sample size for an adequately powered replication study. Because of its critical roles during the research process, power analysis has been a major discussion point in the course of the recent replication crisis (e.g.,
Cumming, 2014; Dwyer et al., 2018): estimating and reporting the statistical power has been firmly established as a research standard (National Academies of Sciences, Engineering, and Medicine, 2019). However, in IRT, methods of power analysis are rarely presented. As one exception, Maydeu-Olivares and Montaño (2013) described a method of power analysis for some tests of contingency tables, such as the M2 test. Draxler (2010) and Draxler and Alexandrowicz (2015) provided formulas to calculate the power or sample size of the Wald, LR, and score tests in the context of exponential family models and conditional maximum likelihood (CML) estimation. Two examples of exponential family models in IRT are the Rasch (Rasch, 1960) and the partial credit models (Masters, 1982), in which the parameters represent the item difficulties. Several estimation methods are available for this family of models, in particular, CML and marginal maximum likelihood (MML) estimation (for an overview, see Baker & Kim, 2004). The advantage of the CML approach lies in its reliance on the participants' overall test score as a sufficient statistic for their underlying ability. However, models that do not belong to an exponential family cannot be estimated by CML because they do not offer a sufficient statistic. This paper treats a general class of IRT models in which each item is described by one or more types of parameters. Examples are models that extend the Rasch model by a discrimination parameter (two-parameter logistic model, 2PL) or a guessing parameter (three-parameter logistic model, 3PL; Birnbaum, 1968). These more complex IRT models are becoming increasingly important. In the Trends in International Mathematics and Science Study (TIMSS, Martin et al., 2020), for example, the scaling model was changed from a Rasch to a 3PL model in 1999.
More recently, the methodology in the Program for International Student Assessment (PISA, OECD, 2017) was changed from a Rasch to a 2PL model (see also Robitzsch et al., 2020). Yet, a power analysis for the Wald, LR, and score tests in IRT models is currently limited by the requirement that both the null and alternative hypotheses belong to an exponential family model in which each item is described by only one parameter type. An approach applicable to MML would allow either hypothesis to refer to more complex models and enable, e.g., a power analysis for a test of a Rasch model against a 2PL model. Recently, the gradient statistic was introduced as a complement to the Wald, LR, and score statistics (Lemonte, 2016; Terrell, 2002). It is asymptotically equivalent to the other three statistics, without uniform superiority of any one of them (Lemonte & Ferrari, 2012). It is easier to compute in many instances because it does not require the estimation of an information matrix. Concerning analytical power analysis in IRT, Lemonte and Ferrari (2012) provided a comparison of the power for exponential family models. Draxler et al. (2020) showcased an application in IRT where the gradient test provided comparatively higher power. To our knowledge, the gradient statistic and its analytical power have not yet been formulated or evaluated for the linear hypotheses that form the research context of this paper. Furthermore, the gradient test has not been discussed in the MML framework in psychometrics. To address these gaps in the present literature, we propose the gradient statistic for arbitrary linear hypotheses and introduce an analytical power analysis for a general class of IRT models under MML estimation for all four mentioned statistics. The main obstacle here is the strongly increased computational load of the analytical approach when larger numbers of items are considered.
For this scenario, we present a sampling-based method that builds on and complements the analytical method. We subsequently evaluate the procedures in a test of a Rasch against a 2PL model and a test for differential item functioning (DIF) in extensive simulation studies. We contrast the power of the computationally simpler gradient test with that of the other, more established tests. The implications of misspecification of the person parameter distribution are also investigated briefly. Furthermore, the application to real data is showcased in the context of an assessment of academic skills. We provide an R package that implements the power analysis for user-defined parameter sets and hypotheses at https://github.com/flxzimmer/pwrml. Finally, we discuss some limitations and give an outlook on possible extensions.

2 Power Analysis

We will first define a general IRT model for which we assume local independence of items and unidimensional person parameters that are independent and identically distributed. The probability distribution of the item responses is expressed as f_{β,θ_v}(x), where β ∈ R^l represents the item parameters and θ_v denotes the unidimensional person parameter for person v = 1, …, n. Furthermore, let X be a discrete random variable with realizations x ∈ {1, …, K}^I. Here, K is the number of different response categories and I is the number of items. Each possible value x is therefore a vector of length I that represents one specific pattern of answers across all items. The vector β depends on the specific model, but shall generally have length l. For the Rasch model, for example, there is only the difficulty parameter, so l is equal to the number of items I. We consider the test of a linear hypothesis using the example of testing a Rasch model against a 2PL model. The use of such linear hypotheses provides a flexible framework for power analyses.
The null hypothesis is expressed as

    T(β) = c, or equivalently, Aβ = c,    (1)

where T is a linear transformation of the item parameters, T: R^l → R^m with m ≤ l. Let c ∈ R^m be a vector of constants and A ∈ M_{m,l}(R) the unique matrix that represents T. In the following, we will denote a set of item parameters that follow the null hypothesis by β_0 ∈ B_0 = {β | Aβ = c}. Similarly, we will refer to parameters that follow the alternative as β_a ∈ B_a = {β | Aβ ≠ c}. To describe the 2PL model for the test of a Rasch against a 2PL model, let the probability of a positive answer to item i = 1, …, I be given by

    P_{β,θ}(x_i = 1) = 1 / (1 + exp(−(a_i θ + d_i)))    (2)

with discrimination parameter a_i and difficulty parameter d_i.
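As a concrete illustration of (1) and (2), the following Python sketch encodes the 2PL response function and the Rasch null hypothesis a_1 = … = a_I as a linear constraint Aβ = c. All parameter values and names here are hypothetical choices for illustration, not taken from the paper or its R package:

```python
import math

def p_2pl(theta, a, d):
    """2PL probability of a positive answer to one item, eq. (2)."""
    return 1.0 / (1.0 + math.exp(-(a * theta + d)))

# Hypothetical parameters for I = 3 items.
a = [1.0, 1.0, 1.0]    # discriminations a_i; all equal -> Rasch special case
d = [-0.5, 0.0, 0.5]   # intercepts d_i

I = len(a)
beta = a + d           # beta = (a_1..a_I, d_1..d_I), so l = 2I

# Null hypothesis "the Rasch model holds": a_1 = a_2 = ... = a_I,
# written as A beta = c with m = I - 1 contrast rows (a_i - a_{i+1} = 0).
A = [[0.0] * (2 * I) for _ in range(I - 1)]
for i in range(I - 1):
    A[i][i] = 1.0
    A[i][i + 1] = -1.0
c = [0.0] * (I - 1)

Abeta = [sum(A[r][j] * beta[j] for j in range(2 * I)) for r in range(I - 1)]
print(Abeta == c)            # True: this beta lies in B0 = {beta | A beta = c}
print(p_2pl(0.0, 1.0, 0.0))  # 0.5: person at theta = 0, item with d = 0

# The analytical power method works with all K**I response patterns, which
# grows exponentially in I -- the motivation for the sampling-based method
# at larger item numbers.
K = 2
print(K ** I)  # 8 patterns for I = 3; full enumeration is infeasible for large I
```

For a dichotomous test (K = 2), the pattern space doubles with every added item, which is why the analytical approach is paired with a sampling-based alternative.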