Chapter 2. Fixed Effects Models

© 2003 by Edward W. Frees. All rights reserved

Abstract. This chapter introduces the analysis of longitudinal and panel data using the general linear model framework. Here, longitudinal data modeling is cast as a regression problem by using fixed parameters to represent the heterogeneity; nonrandom quantities that account for the heterogeneity are known as fixed effects. In this way, ideas of model representation and data exploration are introduced using regression analysis, a toolkit that is widely known. Analysis of covariance, from the general linear model, easily handles the many parameters that are needed to represent the heterogeneity. Although longitudinal and panel data can be analyzed using regression techniques, it is also important to emphasize the special features of these data. Specifically, the chapter emphasizes the wide cross-section and the short time-series of many longitudinal and panel data sets, as well as the special model specification and diagnostic tools needed to handle these features.

2.1 Basic fixed effects model

Data

Suppose that we are interested in explaining hospital costs for each state in terms of measures of utilization, such as the number of discharged patients and the average hospital stay per discharge. Here, we consider the state to be the unit of observation, or subject. We differentiate among states with the index i, where i may range from 1 to n, and n is the number of subjects. Each state is observed Ti times and we use the index t to differentiate the observation times. With these indices, let yit denote the response of the ith subject at the tth time point.

Associated with each response yit is a set of explanatory variables, or covariates. For example, for state hospital costs, these explanatory variables include the number of discharged patients and the average hospital stay per discharge. In general, we assume there are K explanatory variables xit,1, xit,2, …, xit,K that may vary by subject i and time t. We achieve a more compact notational form by expressing the K explanatory variables as a K × 1 column vector

$$
\mathbf{x}_{it} = \begin{pmatrix} x_{it,1} \\ x_{it,2} \\ \vdots \\ x_{it,K} \end{pmatrix}.
$$

To save space, it is customary to use the alternate expression xit = (xit,1, xit,2, …, xit,K)′, where the prime “ ′ ” means transpose. (You will find that some sources prefer to use a superscript “T” for transpose. Here, T will refer to the number of time replications.) Thus, the data for the ith subject consists of

$$
\begin{array}{c}
\{x_{i1,1}, \ldots, x_{i1,K}, y_{i1}\} \\
\vdots \\
\{x_{iT_i,1}, \ldots, x_{iT_i,K}, y_{iT_i}\}
\end{array}
$$

that can be expressed more compactly as

$$
\begin{array}{c}
\{\mathbf{x}_{i1}', y_{i1}\} \\
\vdots \\
\{\mathbf{x}_{iT_i}', y_{iT_i}\}.
\end{array}
$$

Unless specified otherwise, we allow the number of responses to vary by subject, indicated with the notation Ti. This is known as the unbalanced case. We use the notation T = max{T1, T2, …, Tn} to be the maximal number of responses for a subject. Recall from Section 1.1 that the case Ti = T for each i is called the balanced case.

Basic models

To analyze relationships among variables, the relationships between the response and the explanatory variables are summarized through the regression function

E yit = α + β1 xit,1 + β2 xit,2 + ... + βK xit,K,        (2.1)

that is linear in the parameters α, β1, …, βK. For applications where the explanatory variables are nonrandom, the only restriction of equation (2.1) is that we believe that the variables enter linearly.
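To make the data layout and the homogeneous regression (2.1) concrete, here is a minimal sketch in Python. It builds a small unbalanced panel in “long” format (one row per subject-time pair) and fits equation (2.1) by ordinary least squares. All names, sample sizes, and coefficient values are illustrative assumptions, not quantities from the text.

```python
import numpy as np

# Hypothetical unbalanced panel in "long" format: one row per (subject i, time t).
# The two covariates loosely echo the hospital-cost example (discharges, average
# stay); all values and coefficients below are made up for illustration.
rng = np.random.default_rng(0)
n, T_max, K = 5, 4, 2
rows = []
for i in range(n):
    T_i = rng.integers(2, T_max + 1)       # unbalanced: T_i varies by subject
    for t in range(T_i):
        x_it = rng.normal(size=K)          # x_it = (x_it,1, ..., x_it,K)'
        y_it = 1.0 + x_it @ np.array([0.5, -0.3]) + rng.normal(scale=0.1)
        rows.append((i, t, *x_it, y_it))
data = np.array(rows)                      # columns: i, t, x_it,1, x_it,2, y_it

# Pooled ("homogeneous") regression, equation (2.1): E y_it = alpha + x_it' beta.
X = np.column_stack([np.ones(len(data)), data[:, 2:2 + K]])
y = data[:, -1]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated (alpha, beta_1, beta_2):", coef)
```

The long format, with the subject and time indices carried alongside each response, is the layout assumed throughout the remainder of the sketches in this chapter.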
As we will see in Chapter 6, for applications where the explanatory variables are random, we may interpret the expectation in equation (2.1) as conditional on the observed explanatory variables. We focus attention on assumptions that concern the observable variables, {xit,1, ..., xit,K, yit}.

Assumptions of the Observables Representation of the Linear Regression Model
F1. E yit = α + β1 xit,1 + β2 xit,2 + ... + βK xit,K.
F2. {xit,1, ..., xit,K} are nonstochastic variables.
F3. Var yit = σ².
F4. {yit} are independent random variables.

The “observables representation” is based on the idea of conditional linear expectations (see Goldberger, 1991, for additional background). One can motivate assumption F1 by thinking of (xit,1, ..., xit,K, yit) as a draw from a population, where the mean of the conditional distribution of yit given {xit,1, ..., xit,K} is linear in the explanatory variables. Inference about the distribution of y is conditional on the observed explanatory variables, so that we may treat {xit,1, ..., xit,K} as nonstochastic variables. When considering types of sampling mechanisms for thinking of (xit,1, ..., xit,K, yit) as a draw from a population, it is convenient to think of a stratified random sampling scheme, where values of {xit,1, ..., xit,K} are treated as the strata. That is, for each value of {xit,1, ..., xit,K}, we draw a random sample of responses from a population. This sampling scheme also provides motivation for assumption F4, the independence among responses. To illustrate, when drawing from a database of firms to understand stock return performance (y), one can choose large firms, measured by asset size, focus on an industry, measured by standard industrial classification, and so forth. You may not select firms with the largest stock return performance because this is stratifying based on the response, not the explanatory variables.

A fifth assumption that is often implicitly required in the linear regression model is:
F5. {yit} is normally distributed.
This assumption is not required for all statistical inference procedures because central limit theorems provide approximate normality for many statistics of interest. However, formal justification for some, such as t-statistics, does require this additional assumption.

In contrast to the observables representation, the classical formulation of the linear regression model focuses attention on the “errors” in the regression, defined as εit = yit − (α + β1 xit,1 + β2 xit,2 + ... + βK xit,K).

Assumptions of the Error Representation of the Linear Regression Model
E1. yit = α + β1 xit,1 + β2 xit,2 + ... + βK xit,K + εit, where E εit = 0.
E2. {xit,1, ..., xit,K} are nonstochastic variables.
E3. Var εit = σ².
E4. {εit} are independent random variables.

The “error representation” is based on the Gaussian theory of errors (see Stigler, 1986, for a historical background). As described above, the linear regression function incorporates the additional knowledge from independent variables through the relation E yit = α + β1 xit,1 + β2 xit,2 + ... + βK xit,K. Other unobserved variables that influence the measurement of y are encapsulated in the “error” term εit, which is also known as the “disturbance” term. The independence of errors, E4, can be motivated by assuming that {εit} are realized through a simple random sample from an unknown population of errors. Assumptions E1–E4 are equivalent to assumptions F1–F4.
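The link between the two representations can be illustrated numerically. The following sketch, under assumed parameter values, simulates responses according to the error representation E1–E4 at one fixed covariate vector and checks that the observables behave as F1 and F3 describe (mean linear in the covariates, variance σ²). Every numeric value here is an illustrative assumption.

```python
import numpy as np

# Simulate under the error representation E1-E4 at one fixed covariate vector and
# check that the observables behave as F1 and F3 describe.  The parameter values
# (alpha, beta, sigma) and the covariate vector are illustrative assumptions only.
rng = np.random.default_rng(1)
alpha, beta, sigma = 2.0, np.array([0.5, -0.3]), 0.4
x_it = np.array([1.2, -0.7])                    # nonstochastic covariates (E2/F2)

eps = rng.normal(0.0, sigma, size=200_000)      # iid errors with mean 0 (E1, E3, E4)
y = alpha + x_it @ beta + eps                   # error representation E1

print("sample mean of y:", y.mean(), " vs alpha + x'beta:", alpha + x_it @ beta)  # F1
print("sample var of y :", y.var(), " vs sigma^2:", sigma ** 2)                   # F3
```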
The error representation provides a useful springboard for motivating goodness of fit measures. However, a drawback of the error representation is that it draws the attention from the observable quantities (xit,1, ..., xit,K, yit) to an unobservable quantity, {εit}. To illustrate, the sampling basis, viewing {εit} as a simple random sample, is not directly verifiable because one cannot directly observe the sample {εit}. Moreover, the assumption of additive errors in E1 will be troublesome when we consider nonlinear regression models in Part II. Our treatment focuses on the observables representation in Assumptions F1–F4.

In assumption F1, the slope parameters β1, β2, …, βK are associated with the K explanatory variables. For a more compact expression, we summarize the parameters as a column vector of dimension K × 1, denoted by

$$
\boldsymbol{\beta} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_K \end{pmatrix}.
$$

With this notation, we may re-write assumption F1 as

E yit = α + xit′ β,        (2.2)

because of the relation xit′ β = β1 xit,1 + β2 xit,2 + ... + βK xit,K.

We call the representation in equation (2.2) cross-sectional because, although it relates the explanatory variables to the response, it does not use the information in the repeated measurements on a subject. Because it also does not include (subject-specific) heterogeneous terms, we also refer to the equation (2.2) representation as part of a homogeneous model.

Our first representation that uses the information in the repeated measurements on a subject is

E yit = αi + xit′ β.        (2.3)

Equation (2.3) and assumptions F2–F4 comprise the basic fixed effects model. Unlike equation (2.2), in equation (2.3) the intercept terms, αi, are allowed to vary by subject.

Parameters of interest

The parameters {βj} are common to each subject and are called global, or population, parameters. The parameters {αi} vary by subject and are known as individual, or subject-specific, parameters. In many applications, we will see that population parameters capture broad relationships of interest and hence are the parameters of interest. The subject-specific parameters account for the different features of subjects, not broad population patterns. Hence, they are often of secondary interest and are called nuisance parameters.

As we saw in Section 1.3, the subject-specific parameters represent our first device that helps control for the heterogeneity among subjects. We will see that estimators of these parameters use information in the repeated measurements on a subject. Conversely, the parameters {αi} are non-estimable in cross-sectional regression models without repeated observations. That is, with Ti = 1, the model

yi1 = αi + β1 xi1,1 + β2 xi1,2 + ... + βK xi1,K + εi1

has more parameters (n + K) than observations (n) and thus we cannot identify all the parameters. Typically, the disturbance term εit includes the information in αi in cross-sectional regression models.
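As an illustration of how repeated measurements make the subject-specific intercepts estimable, the sketch below fits the basic fixed effects model (2.3) by least squares with one indicator column per subject, in the spirit of the analysis of covariance mentioned in the abstract. The panel dimensions and parameter values are hypothetical.

```python
import numpy as np

# Basic fixed effects model (2.3), E y_it = alpha_i + x_it' beta, fitted by least
# squares with one indicator (dummy) column per subject.  All parameter values and
# the panel dimensions are hypothetical.
rng = np.random.default_rng(2)
n, T, K = 4, 6, 2                                   # balanced panel for simplicity
beta_true = np.array([0.5, -0.3])
alpha_true = np.array([1.0, 2.0, -0.5, 0.8])        # subject-specific intercepts

subj = np.repeat(np.arange(n), T)                   # subject index for each row
x = rng.normal(size=(n * T, K))
y = alpha_true[subj] + x @ beta_true + rng.normal(scale=0.2, size=n * T)

D = (subj[:, None] == np.arange(n)).astype(float)   # n indicator columns
X = np.hstack([D, x])                               # n + K columns, n*T rows
est, *_ = np.linalg.lstsq(X, y, rcond=None)
print("alpha_i estimates:", est[:n])                # recover alpha_true approximately
print("beta estimates   :", est[n:])                # recover beta_true approximately

# With T_i = 1 the same design would have n + K columns but only n rows, so the
# least squares problem is rank deficient and the alpha_i cannot be identified.
```

With many subjects it is usually more economical to sweep out the subject-specific means rather than carry n indicator columns; the resulting estimate of β is the same either way, a point taken up when least squares estimation of the fixed effects model is developed later in the chapter.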