Fundamentals of Hierarchical Linear and Multilevel Modeling


G. David Garson

INTRODUCTION

Hierarchical linear models and multilevel models are variant terms for what are broadly called linear mixed models (LMM). These models handle data where observations are not independent, correctly modeling correlated error. Uncorrelated error is an important but often violated assumption of statistical procedures in the general linear model family, which includes analysis of variance, correlation, regression, and factor analysis. Violations occur when error terms are not independent but instead cluster by one or more grouping variables. For instance, predicted student test scores and errors in predicting them may cluster by classroom, school, and municipality. When clustering occurs due to a grouping factor (this is the rule, not the exception), the standard errors computed for prediction parameters will be wrong (ex., wrong b coefficients in regression). Moreover, as is shown in the application in Chapter 6 in this volume, effects of predictor variables may be misinterpreted, not only in magnitude but even in direction.

Linear mixed modeling, including hierarchical linear modeling, can lead to substantially different conclusions compared to conventional regression analysis. Raudenbush and Bryk (2002), citing their 1988 research on the increase over time of math scores among students in Grades 1 through 3, wrote that with hierarchical linear modeling,

The results were startling—83% of the variance in growth rates was between schools. In contrast, only about 14% of the variance in initial status was between schools, which is consistent with results typically encountered in cross-sectional studies of school effects.
This analysis identified substantial differences among schools that conventional models would not have detected because such analyses do not allow for the partitioning of learning-rate variance into within- and between-school components. (pp. 9–10)

Linear mixed models are a generalization of general linear models to better support analysis of a continuous dependent variable for the following:

1. Random effects: For when the set of values of a categorical predictor variable are seen not as the complete set but rather as a random sample of all values (ex., when the variable “product” has values representing only 30 of a possible 142 brands). Random effects modeling allows the researcher to make inferences over a wider population than is possible with regression or other general linear model (GLM) methods.

2. Hierarchical effects: For when predictor variables are measured at more than one level (ex., reading achievement scores at the student level and teacher–student ratios at the school level; or sentencing lengths at the offender level, gender of judges at the court level, and budgets of judicial districts at the district level). The researcher can assess the effects of higher levels on the intercepts and coefficients at the lowest level (ex., assess judge-level effects on predictions of sentencing length at the offender level).

3. Repeated measures: For when observations are correlated rather than independent (ex., before–after studies, time series data, matched-pairs designs). In repeated measures, the lowest level is the observation level (ex., student test scores on multiple occasions), grouped by observation unit (ex., students) such that each unit (student) has multiple data rows, one for each observation occasion.

The versatility of linear mixed modeling has led to a variety of terms for the models it makes possible.
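The between- versus within-school variance partitioning that Raudenbush and Bryk describe can be sketched with a small simulation. This is an illustrative sketch only, not from the chapter: the school and student counts and the variance components are invented, and the decomposition is the naive one (variance of group means versus pooled within-group variance) rather than a fitted mixed model.

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: 50 "schools" of 20 "students" each, with a
# true between-school variance of 4.0 (sd 2.0) and a within-school
# variance of 1.0, so the intraclass correlation should be near
# 4 / (4 + 1) = 0.8.  All numbers are invented for this sketch.
schools = []
for _ in range(50):
    school_effect = random.gauss(0, 2.0)
    schools.append([school_effect + random.gauss(0, 1.0) for _ in range(20)])

# Naive decomposition: variance of the school means (between) versus the
# average variance of scores around their own school mean (within).
means = [statistics.mean(s) for s in schools]
between = statistics.pvariance(means)
within = statistics.mean(statistics.pvariance(s) for s in schools)
icc = between / (between + within)
print(round(between, 2), round(within, 2), round(icc, 2))
```

The intraclass correlation computed this way is the share of total variance attributable to the grouping factor; when it is non-negligible, the independence assumption of OLS is violated.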
Different disciplines favor one or another label, and different research targets influence the selection of terminology as well. These terms, many of which are discussed later in this chapter, include random intercept modeling, random coefficients modeling, random coefficients regression, random effects modeling, hierarchical linear modeling, multilevel modeling, linear mixed modeling, growth modeling, and longitudinal modeling. Linear mixed models in some disciplines are called “random effects” or “mixed effects” models. In economics, the term “random coefficient regression models” is used. In sociology, “multilevel modeling” is common, alluding to the fact that regression intercepts and slopes at the individual level may be treated as random effects of a higher (ex., organizational) level. And in statistics, the term “covariance components models” is often used, alluding to the fact that in linear mixed models one may decompose the covariance into components attributable to within-groups versus between-groups effects. In spite of many different labels, the commonality is that all adjust observation-level predictions based on the clustering of measures at some higher level or by some grouping variable.

The “linear” in linear mixed modeling has a meaning similar to that in regression: There is an assumption that the predictor terms on the right-hand side of the estimation equation are linearly related to the target term on the left-hand side. Of course, nonlinear terms such as power or log functions may be added to the predictor side (ex., time and time-squared in longitudinal studies). Also, the target variable may be transformed in a nonlinear way (ex., logit link functions). Linear mixed model (LMM) procedures that do the latter are “generalized” linear mixed models. Just as regression and GLM procedures can be extended to “generalized general linear models” (GZLM), multilevel and other LMM procedures can be extended to “generalized linear mixed models” (GLMM), discussed further below.

Linear mixed models for multilevel analysis address hierarchical data, such as when employee data are at level 1, agency data are at level 2, and department data are at level 3. Hierarchical data usually call for LMM implementation. While most multilevel modeling is univariate (one dependent variable), multivariate multilevel modeling for two or more dependent variables is available also. Likewise, models for cross-classified data exist for data that are not strictly hierarchical (ex., as when schools are a lower level and neighborhoods are a higher level, but schools may serve more than one neighborhood).

The researcher undertaking causal modeling using linear mixed modeling should be guided by multilevel theory. That is, hierarchical linear modeling postulates that there are cross-level causal effects. Just as regression models postulate direct effects of independent variables at level 1 on the dependent variable at level 1, so too, multilevel models specify cross-level interaction effects between variables located at different levels. In doing multilevel modeling, the researcher postulates the existence of mediating mechanisms that cause variables at one level to influence variables at another level (ex., school-level funding may positively affect individual-level student performance by way of recruiting superior teachers, made possible by superior financial incentives).
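Statistically, such cross-level effects enter the model as interaction terms between a level-1 predictor and a level-2 predictor. The following data-generating sketch shows the layout; the variable names (“ses”, “funding”) and every coefficient are invented for illustration and are not from the chapter.

```python
import random

random.seed(3)

# Hypothetical sketch of a cross-level interaction: the level-1 slope of
# student SES on achievement depends on a level-2 variable ("funding").
# All names and coefficients here are invented for illustration.
rows = []
for school in range(20):
    funding = random.uniform(0.0, 1.0)                 # level-2 predictor w_j
    # The level-1 slope is itself an outcome at level 2:
    #   b1_j = gamma10 + gamma11 * w_j + u1_j
    slope = 1.0 + 2.0 * funding + random.gauss(0, 0.1)
    intercept = 50.0 + random.gauss(0, 2.0)            # random intercept u0_j
    for _ in range(30):
        ses = random.gauss(0, 1)
        score = intercept + slope * ses + random.gauss(0, 1)
        # The composite single-equation form contains the product term:
        #   score = g00 + g10*ses + g01*funding + g11*(ses*funding)
        #           + u0_j + u1_j*ses + e_ij
        rows.append((score, ses, funding, ses * funding))

print(len(rows))  # one row per student observation
```

The product column ses * funding is what carries the cross-level interaction when the composite model is estimated.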
Multilevel modeling tests multilevel theories statistically, simultaneously modeling variables at different levels without necessary recourse to aggregation or disaggregation.1 Aggregation and disaggregation as used in regression models run the risk of ecological fallacy: What is true at one level need not be true at another level. For instance, aggregated state-level data on race and literacy greatly overestimate the correlation of African American ethnicity with illiteracy because states with many African Americans tend to have higher illiteracy for all races. Individual-level data show a low correlation of race and illiteracy.

WHY USE LINEAR MIXED/HIERARCHICAL LINEAR/MULTILEVEL MODELING?

Why not just stick with tried-and-true regression models for analysis of data? The central reason, noted above, is that linear mixed models handle random effects, including the effects of grouping of observations under higher entities (ex., grouping of employees by agency, students by school, etc.). Clustering of observations within groups leads to correlated error terms, biased estimates of parameter (ex., regression coefficient) standard errors, and possible substantive mistakes when interpreting the importance of one or another predictor variable. Whenever data are sampled, the sampling unit as a grouping variable may well be a random effect. In a study of the federal bureaucracy, for instance, “agency” might be the sampling unit and error terms may cluster by agency, violating ordinary least squares (OLS) assumptions. Unlike OLS regression, linear mixed models take into account the fact that over many samples, different b coefficients for effects may be computed, one for each group. Conceptually, mixed models treat b coefficients as random effects drawn from a normal distribution of possible b’s, whereas OLS regression treats the b parameters as if they were fixed constants (albeit within a confidence interval).
Treating “agency” as a random rather than fixed factor will alter and make more accurate the ensuing parameter estimates. Put another way, the misestimation of standard errors in OLS regression inflates Type I error (thinking there is a relationship when there is not: false positives), whereas mixed models handle this potential problem. In addition, LMM can handle a random sampling variable like “agencies,” even when there are too many agencies to make into dummy variables in OLS regression and still expect reliable coefficients.
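The standard-error inflation can be seen in even the simplest case, the mean of a clustered outcome. In this hedged sketch (agency counts and variance components are invented), a shared agency-level shock makes observations within an agency correlated; the naive OLS-style calculation treats all 750 observations as independent, while the cluster-aware calculation recognizes only 30 effectively independent agency means.

```python
import random
import statistics

random.seed(7)

# Hypothetical sketch: 30 "agencies" of 25 "employees" each.  Outcomes
# share an agency-level shock, so observations within an agency are
# correlated.  All numbers are invented for illustration.
clusters = []
for _ in range(30):
    shock = random.gauss(0, 1.0)
    clusters.append([shock + random.gauss(0, 1.0) for _ in range(25)])

scores = [y for c in clusters for y in c]
n = len(scores)

# Naive OLS-style standard error of the overall mean: treats all
# observations as independent.
naive_se = statistics.stdev(scores) / n ** 0.5

# Cluster-aware standard error: only the 30 agency means are effectively
# independent carriers of the between-agency variation.
agency_means = [statistics.mean(c) for c in clusters]
cluster_se = statistics.stdev(agency_means) / len(agency_means) ** 0.5

print(round(naive_se, 3), round(cluster_se, 3))
```

The naive standard error comes out several times too small here, which is exactly the mechanism by which ignoring clustering inflates Type I error.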
Recommended publications
  • Linear Mixed-Effects Modeling in SPSS: an Introduction to the MIXED Procedure
    Technical report. The linear mixed-effects models (MIXED) procedure in SPSS enables you to fit linear mixed-effects models to data sampled from normal distributions. Recent texts, such as those by McCulloch and Searle (2000) and Verbeke and Molenberghs (2000), comprehensively review mixed-effects models. The MIXED procedure fits models more general than those of the general linear model (GLM) procedure and it encompasses all models in the variance components (VARCOMP) procedure. This report illustrates the types of models that MIXED handles. We begin with an explanation of simple models that can be fitted using GLM and VARCOMP, to show how they are translated into MIXED. We then proceed to fit models that are unique to MIXED. The major capabilities that differentiate MIXED from GLM are that MIXED handles correlated data and unequal variances. Correlated data are very common in such situations as repeated measurements of survey respondents or experimental subjects. MIXED extends repeated measures models in GLM to allow an unequal number of repetitions. Contents: introduction; data preparation for MIXED; fitting fixed-effects models; fitting simple mixed-effects models; fitting mixed-effects models; multilevel analysis; custom hypothesis tests; covariance structure selection; random coefficient models; estimated marginal means; references. (SPSS Inc., 2005, LMEMWP-0305)
  • Multilevel and Mixed-Effects Modeling
    From Statistics with Stata: Mixed-effects modeling is basically regression analysis allowing two kinds of effects: fixed effects, meaning intercepts and slopes meant to describe the population as a whole, just as in ordinary regression; and also random effects, meaning intercepts and slopes that can vary across subgroups of the sample. All of the regression-type methods shown so far in this book involve fixed effects only. Mixed-effects modeling opens a new range of possibilities for multilevel models, growth curve analysis, and panel data or cross-sectional time series. Albright and Marinova (2010) provide a practical comparison of mixed-modeling procedures found in Stata, SAS, SPSS and R with the hierarchical linear modeling (HLM) software developed by Raudenbush and Bryk (2002; also Raudenbush et al. 2005). More detailed explanation of mixed modeling and its correspondences with HLM can be found in Rabe-Hesketh and Skrondal (2012). Briefly, HLM approaches multilevel modeling in several steps, specifying separate equations (for example) for level 1 and level 2 effects.
  • Mixed Model Methodology, Part I: Linear Mixed Models
    Technical Report, January 2015. DOI: 10.13140/2.1.3072.0320. Jean-Louis Foulley, Institut de Mathématiques et de Modélisation de Montpellier (I3M), Université de Montpellier 2 Sciences et Techniques, Place Eugène Bataillon, 34095 Montpellier Cedex 05. From the opening chapter on basics of linear models: Linear models form one of the most widely used tools of statistics, from both theoretical and practical points of view. They encompass regression and analysis of variance and covariance, and presenting them within a unique theoretical framework helps to clarify the basic concepts and assumptions underlying all these techniques and the ones that will be used later for mixed models. There is a large amount of highly documented literature, especially textbooks, in this area, from both theoretical (Searle, 1971, 1987; Rao, 1973; Rao et al., 2007) and practical (Littell et al., 2002) points of view.
  • A Primer for Analyzing Nested Data: Multilevel Modeling in SPSS Using an Example From a REL Study
    REL 2015–046, December 2014. The National Center for Education Evaluation and Regional Assistance (NCEE) conducts unbiased large-scale evaluations of education programs and practices supported by federal funds; provides research-based technical assistance to educators and policymakers; and supports the synthesis and the widespread dissemination of the results of research and evaluation throughout the United States. This report was prepared for the Institute of Education Sciences (IES) under contract ED-IES-12-C-0009 by Regional Educational Laboratory Northeast & Islands, administered by Education Development Center, Inc. (EDC). The content of the publication does not necessarily reflect the views or policies of IES or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. This REL report is in the public domain. While permission to reprint this publication is not necessary, it should be cited as: O’Dwyer, L. M., and Parker, C. E. (2014). A primer for analyzing nested data: multilevel modeling in SPSS using an example from a REL study (REL 2015–046). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast & Islands. Retrieved from http://ies.ed.gov/ncee/edlabs. Summary: Researchers often study how students’ academic outcomes are associated with the characteristics of their classrooms, schools, and districts. They also study subgroups of students, such as English language learner students and students in special education.
  • R2 Statistics for Mixed Models
    Kansas State University Libraries, New Prairie Press, Conference on Applied Statistics in Agriculture, 2005 (17th Annual Conference Proceedings). Matthew Kramer, Biometrical Consulting Service, ARS (Beltsville, MD), USDA. This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License. Recommended citation: Kramer, Matthew (2005). "R2 statistics for mixed models," Conference on Applied Statistics in Agriculture. https://doi.org/10.4148/2475-7772.1142. Abstract: The R2 statistic, when used in a regression or ANOVA context, is appealing because it summarizes how well the model explains the data in an easy-to-understand way. R2 statistics are also useful to gauge the effect of changing a model. Generalizing R2 to mixed models is not obvious when there are correlated errors, as might occur if data are georeferenced or result from a designed experiment with blocking. Such an R2 statistic might refer only to the explanation associated with the independent variables, or might capture the explanatory power of the whole model. In the latter case, one might develop an R2 statistic from Wald or likelihood ratio statistics, but these can yield different numeric results.
  • A Bayesian Multilevel Model for Time Series Applied to Learning in Experimental Auctions
    Linköping University, Department of Computer and Information Science. Bachelor's thesis in Statistics, 15 credits, spring term 2016. LIU-IDA/STAT-G--16/003—SE. Torrin Danner; supervisor: Bertil Wegmann; examiner: Linda Wänström. Abstract: Establishing what variables affect learning rates in experimental auctions can be valuable in determining how competitive bidders in auctions learn. This study aims to be a foray into this field. The differences, both absolute and actual, between participant bids and optimal bids are evaluated in terms of the effects from a variety of variables such as age, sex, etc. An optimal bid in the context of an auction is the best bid a participant can place to win the auction without paying more than the value of the item, thus maximizing their revenues. This study focuses on how two opponent types, humans and computers, affect the rate at which participants learn to optimize their winnings. A Bayesian multilevel model for time series is used to model the learning rate of actual bids from participants in an experimental auction study. The variables examined at the first level were auction type, signal, round, interaction effects between auction type and signal, and interaction effects between auction type and round. At a 90% credibility interval, the true value of the mean for the intercept and all slopes falls within an interval that also includes 0. Therefore, none of the variables are deemed likely to influence the model. The variables on the second level were age, IQ, sex, and answers from a short quiz about how participants felt when they won or lost auctions.
  • Linear Mixed Models and Penalized Least Squares
    Linear mixed models and penalized least squares Douglas M. Bates Department of Statistics, University of Wisconsin–Madison Saikat DebRoy Department of Biostatistics, Harvard School of Public Health Abstract Linear mixed-effects models are an important class of statistical models that are not only used directly in many fields of applications but also used as iterative steps in fitting other types of mixed-effects models, such as generalized linear mixed models. The parameters in these models are typically estimated by maximum likelihood (ML) or restricted maximum likelihood (REML). In general there is no closed form solution for these estimates and they must be determined by iterative algorithms such as EM iterations or general nonlinear optimization. Many of the intermediate calculations for such iterations have been expressed as generalized least squares problems. We show that an alternative representation as a penalized least squares problem has many advantageous computational properties including the ability to evaluate explicitly a profiled log-likelihood or log-restricted likelihood, the gradient and Hessian of this profiled objective, and an ECME update to refine this objective. Key words: REML, gradient, Hessian, EM algorithm, ECME algorithm, maximum likelihood, profile likelihood, multilevel models 1 Introduction We will establish some results for the penalized least squares representation of a general form of a linear mixed-effects model, then show how these results specialize for particular models. 
    The general form we consider is y = Xβ + Zb + ε, with ε ∼ N(0, σ²I), b ∼ N(0, σ²Ω⁻¹), and ε ⊥ b (1), where y is the n-dimensional response vector, X is an n × p model matrix for the p-dimensional fixed-effects vector β, Z is the n × q model matrix for the q-dimensional random-effects vector b that has a Gaussian distribution with mean 0 and relative precision matrix Ω (i.e., Ω is the precision of b relative to the precision of ε), and ε is the random noise assumed to have a spherical Gaussian distribution. (Preprint submitted to Journal of Multivariate Analysis, 25 September 2003.)
  • Introduction to Multilevel Modeling
    Overview: Multilevel modeling goes back over half a century to when social scientists became attuned to the fact that what was true at the group level was not necessarily true at the individual level. The classic example was percentage literate and percentage African American. Using data aggregated to the state level for the 50 American states in the 1930s as units of observation, Robinson (1950) found that there was a strong correlation of race with illiteracy. When using individual-level data, however, the correlation was observed to be far lower. This difference arose from the fact that the Southern states, which had many African Americans, also had many illiterates of all races. Robinson described potential misinterpretations based on aggregated data, such as state-level data, as a type of “ecological fallacy” (see also Clancy, Berger, & Magliozzi, 2003). In the classic article by Davis, Spaeth, and Huson (1961), the same problem was put in terms of within-group versus between-group effects, corresponding to individual-level and group-level effects. A central function of multilevel modeling is to separate within-group individual effects from between-group aggregate effects. Multilevel modeling (MLM) is appropriate whenever there is clustering of the outcome variable by a categorical variable such that error is no longer independent as required by ordinary least squares (OLS) regression but rather error is correlated. Clustering of level 1 outcome scores within levels formed by a level 2 clustering variable (e.g., employee ratings clustering by work unit) means that OLS estimates will be too high for some units and too low for others.
  • Lecture 2: Linear and Mixed Models
    Bruce Walsh lecture notes, Introduction to Mixed Models, SISG (Module 12), Seattle, 17–19 July 2019. Quick review of the major points: The general linear model can be written as y = Xb + e, where y is the vector of observed dependent values, X is the design matrix (observations of the variables in the assumed linear model), b is the vector of unknown parameters to estimate, and e is the vector of residuals (deviations from the model fit), e = y − Xb. The solution for b depends on the covariance structure (covariance matrix) of the vector e of residuals. Ordinary least squares (OLS): e ~ MVN(0, σ²I); residuals are homoscedastic and uncorrelated, so the covariance matrix of e can be written as Cov(e) = σ²I, and the OLS estimate is OLS(b) = (X'X)⁻¹X'y. Generalized least squares (GLS): e ~ MVN(0, V); residuals are heteroscedastic and/or dependent, and GLS(b) = (X'V⁻¹X)⁻¹X'V⁻¹y. Both the OLS and GLS solutions are also called the best linear unbiased estimator (BLUE for short); whether the OLS or GLS form is used depends on the assumed covariance structure for the residuals: the special case Var(e) = σₑ²I gives OLS, and all others, i.e., Var(e) = R, give GLS. Linear models: one tries to explain a dependent variable y as a linear function of a number of independent (or predictor) variables; a multiple regression is a typical linear model. Here e is the residual, or deviation between the true value observed and the value predicted by the linear model.
  • An Introduction to Bayesian Multilevel (Hierarchical) Modelling Using MLwiN
    By William Browne and Harvey Goldstein, Centre for Multilevel Modelling, Institute of Education, London. Summary: what is multilevel modelling; variance components and random slopes regression models; comparisons between frequentist and Bayesian approaches; model comparison in multilevel modelling; binary responses (multilevel logistic regression models); comparisons between various MCMC methods; areas of current research. Junior School Project (JSP) dataset: the JSP dataset (Mortimore et al.) was a longitudinal study of primary school children from schools in the Inner London Education Authority. The main outcome is a mathematics score (MATH) at intake; the main predictor is a score on an earlier mathematics test; other predictors are child gender and father's occupation status (manual/non-manual). Why multilevel modelling? One could fit a simple linear regression of the two MATH scores on the full dataset, or fit separate regressions for each school independently. The first approach ignores school effects (differences), whilst the second ignores similarities between the schools. Solution: a multilevel (or hierarchical, or random effects) model treats both the students and schools as random variables and can be seen as a compromise between the two single-level approaches. The predicted regression lines for each school produced by a multilevel model are a weighted average of the individual school lines and the full-dataset regression line. A model with both random school intercepts and slopes is called a random slopes regression (RSR) model. MLwiN (Rasbash et al.)
  • Diagnostic Checks for Multilevel Models
    Tom A. B. Snijders (University of Oxford; University of Groningen) and Johannes Berkhof (VU University Medical Center, Amsterdam). Specification of the two-level model: This chapter focuses on diagnostics for the two-level hierarchical linear model (HLM). This model, as defined in chapter 1, is given by y_j = X_j β + Z_j δ_j + ε_j, j = 1, …, m (3.1a), with (ε_j, δ_j) jointly normal with mean zero and block-diagonal covariance matrix diag(Σ_j(θ), Ω(ξ)) (3.1b), and (ε_j, δ_j) ⊥ (ε_ℓ, δ_ℓ) for all j ≠ ℓ (3.1c). The lengths of the vectors y_j, β, and δ_j, respectively, are n_j, r, and s. As in all regression-type models, the explanatory variables X and Z are regarded as fixed variables, which can also be expressed by saying that the distributions of the random variables ε and δ are conditional on X and Z. The random variables ε and δ are also called the vectors of residuals at levels 1 and 2, respectively. The variables δ are also called random slopes. Level-two units are also called clusters. The standard and most frequently used specification of the covariance matrices is that level-one residuals are i.i.d., i.e., Σ_j(θ) = σ² I_{n_j} (3.1d), where I_{n_j} is the n_j-dimensional identity matrix; and that either all elements of the level-two covariance matrix Ω are free parameters (so one could identify Ω with ξ), or some of them are constrained to 0 and the others are free parameters.
  • A Simple, Linear, Mixed-Effects Model
    In this book we describe the theory behind a type of statistical model called mixed-effects models and the practice of fitting and analyzing such models using the lme4 package for R. These models are used in many different disciplines. Because the descriptions of the models can vary markedly between disciplines, we begin by describing what mixed-effects models are and by exploring a very simple example of one type of mixed model, the linear mixed model. This simple example allows us to illustrate the use of the lmer function in the lme4 package for fitting such models and for analyzing the fitted model. We also describe methods of assessing the precision of the parameter estimates and of visualizing the conditional distribution of the random effects, given the observed data. Mixed-effects models, like many other types of statistical models, describe a relationship between a response variable and some of the covariates that have been measured or observed along with the response. In mixed-effects models at least one of the covariates is a categorical covariate representing experimental or observational “units” in the data set. In the example from the chemical industry that is given in this chapter, the observational unit is the batch of an intermediate product used in production of a dye. In medical and social sciences the observational units are often the human or animal subjects in the study. In agriculture the experimental units may be the plots of land or the specific plants being studied.
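Several of the works above (e.g., the Bates and DebRoy paper and the Snijders and Berkhof chapter) work with the model y = Xβ + Zb + ε. As a small numerical sketch of the penalized least squares idea, the following solves the stacked mixed-model normal equations for a random-intercept model with three groups. The data, the penalty λ, and the tiny pure-Python solver are all invented for illustration; this is not code from any of the publications listed.

```python
# Minimal sketch of the penalized least squares view of a mixed model:
# stack the normal equations
#   [X'X   X'Z ] [beta]   [X'y]
#   [Z'X  Z'Z+L] [ b  ] = [Z'y]
# where L = lam * I penalizes the random intercepts (lam is the assumed
# ratio of error variance to random-intercept variance).

def solve(A, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Two observations per group; one fixed effect (the grand intercept).
y = [4.0, 5.0, 6.0, 7.0, 9.0, 8.0]
groups = [0, 0, 1, 1, 2, 2]
lam = 2.0

n, q = len(y), 3
Z = [[1.0 if groups[i] == j else 0.0 for j in range(q)] for i in range(n)]

# Build the (1+q) x (1+q) coefficient matrix and right-hand side.
C = [[1.0 * n] + [sum(Z[i][j] for i in range(n)) for j in range(q)]]
for j in range(q):
    row = [sum(Z[i][j] for i in range(n))]
    row += [sum(Z[i][j] * Z[i][k] for i in range(n)) + (lam if j == k else 0.0)
            for k in range(q)]
    C.append(row)
rhs = [sum(y)] + [sum(Z[i][j] * y[i] for i in range(n)) for j in range(q)]

beta_b = solve(C, rhs)
print("beta:", beta_b[0], "b:", beta_b[1:])
```

With these numbers the fixed intercept is the grand mean 6.5 and the predicted group effects are (-1, 0, 1): exactly half the raw group-mean deviations (-2, 0, 2), illustrating the shrinkage toward zero that the penalty (equivalently, the random-effects distribution) induces.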