An Introduction to Mixed Models for Experimental Psychology

Henrik Singmann (University of Zurich) and David Kellen (Syracuse University)

This is a preprint of: Singmann, H., & Kellen, D. (2019). An Introduction to Mixed Models for Experimental Psychology. In D. H. Spieler & E. Schumacher (Eds.), New Methods in Cognitive Psychology (pp. 4–31). Psychology Press.

Author note: We thank Ben Bolker, Jake Westfall, and Rene Schlegelmilch for very helpful comments on a previous version of this chapter. Henrik Singmann and David Kellen received support from Swiss National Science Foundation Grant 100014_165591.

In order to increase statistical power and precision, many psychological experiments collect more than one data point from each participant, often across different experimental conditions. Such repeated measures pose a problem to most standard statistical procedures such as ordinary least-squares regression or (between-subjects) ANOVA, as those procedures assume that the data points are independent and identically distributed (henceforth iid). The iid assumption is comprised of two parts: The assumption of identical distribution simply means that all observations are samples from the same underlying distribution. The independence assumption states that the probability of a data point taking on a specific value is independent of the values taken by all other data points.[1] In this chapter we are mainly concerned with the latter assumption.

It is easy to see that in the case of repeated measures the independence assumption is expected to be violated. Observations coming from the same participant are usually correlated; e.g., they are more likely to be similar to each other than two observations coming from two different participants. For example, when measuring response latencies, a participant who is generally slower than her/his peers will respond comparatively slower across conditions, thus making the data points from this participant correlated and non-independent (i.e., a participant's rank in one condition is predictive of their rank in other conditions). More generally, one can expect violations of the iid assumption if data are collected from units of observation that are clustered in groups. Other examples of this are data from experiments collected in group settings, students within classrooms, or patients within hospitals. In such situations one would expect that observations within each cluster (i.e., a specific group, classroom, or hospital) are more similar to each other than observations across clusters. Unfortunately, compared to violations of other assumptions, such as the normality assumption or the assumption of variance homogeneity in ANOVA, standard statistical procedures are usually not robust to violations of the independence assumption (Judd et al., 2012; Kenny & Judd, 1986). In a frequentist statistical framework such violations often lead to considerably increased Type I errors (i.e., false positives). More generally, such violations can produce overconfident results (e.g., too narrow standard errors).

In this chapter we will describe a class of statistical model that is able to account for most of the cases of non-independence that are typically encountered in psychological experiments: linear mixed effects models (LMM; e.g., Baayen et al., 2008), or mixed models for short. Mixed models are a generalization of ordinary regression that explicitly capture the dependency among data points via random-effects parameters. Compared to traditional analyses that ignore these dependencies, mixed models provide more accurate (and generalizable) estimates of the effects, improved statistical power, and non-inflated Type I errors. The reason for the recent popularity of mixed models boils down to the computational resources required to implement them: In the absence of such resources, data-analytic methods had to rely on simpler models that ignored the dependencies in the data and relied on closed-form estimates and asymptotic results. Fortunately, today we can easily implement most linear mixed models using any recent computer with sufficient RAM.

The remainder of this chapter is structured as follows: First, we introduce the concepts underlying mixed models and how they allow one to account for different types of non-independence that can occur in psychological data. Next, we discuss how to set up a mixed model and how to perform statistical inference with a mixed model. Then, we discuss how to estimate a mixed model using the lme4 (Bates, Mächler, et al., 2015) and afex (Singmann et al., 2017) packages for the statistical programming language R (R Core Team, 2016). Finally, we provide an outlook on how to extend mixed models to handle non-normal data (e.g., categorical responses).

Fixed Effects, Random Effects, and Non-Independence

The most important concept for understanding how to estimate and how to interpret mixed models is the distinction between fixed and random effects.[2] In experimental settings, fixed effects are often of primary interest to the researcher and represent the overall or population-level average effect of a specific model term (i.e., main effect or interaction) or parameter on the dependent variable, irrespective of the random or stochastic variability that is present in the data. A statistically significant fixed effect should be interpreted in essentially the same way as a statistically significant test result for any given term in a standard ANOVA or regression model. Furthermore, for fixed effects one can easily test specific hypotheses among the factor levels (e.g., planned contrasts).

In contrast, random effects capture random or stochastic variability in the data that comes from different sources, such as participants or items. These sources of stochastic variability are the grouping variables or grouping factors for the random effects and always concern categorical variables (i.e., nominal variables such as condition, participant, item); continuous variables cannot serve as grouping factors for random effects. In experimental settings, it is often useful to think about the random-effects grouping factors as the part of the design a researcher wants to generalize over. For example, one is usually not interested in knowing whether or not two factor levels differ for a specific sample of participants (after all, this could be done simply by looking at the obtained means in a descriptive manner), but whether the data provide evidence that a difference holds in the population of participants the sample is drawn from. By specifying random effects in our model, we are able to factor out the idiosyncrasies of our sample and obtain a more general estimate of the fixed effects of interest.[3]

The independence assumption of standard statistical models implies that one can only generalize across exactly one source of stochastic variability: the population from which [...] explicit specification of the random-effects structure embedded in the experimental design. As described above, the benefit of this extra step is that one can adequately capture a variety of dependencies that standard models cannot.

In order to make the distinction and the role of random effects in mixed models clearer, let us consider a simple example (constructed after Baayen et al., 2008, and Barr et al., 2013). Assume you have obtained response latency data from I participants in K = 2 difficulty conditions: an easy condition that leads to fast responses and a hard condition that produces slow responses. For example, in both conditions participants have to make binary judgments on the same group of words: In the easy condition they have to make animacy judgments (whether the word refers to a living thing). In the hard condition participants have to (a) judge whether the object the word refers to is larger than a soccer ball and (b) judge whether it appears in the northern hemisphere; participants should only press a specific key if both judgments are positive. Moreover, assume that each participant provides responses to the same J words in each difficulty condition. Thus, difficulty is a repeated-measures factor: more specifically, a within-subjects factor with J replicates for each participant in each cell of the design, but also a within-words factor with I replicates for each word in each cell of the design. Note that the cells of a design are given by the combination of all (fixed) factor levels. In the present example there are two cells, corresponding to the easy condition and the difficult condition, but in a 2 × 2 design we would have four cells instead.[4]

Figure 1 illustrates the response latency data from two participants (S1 and S2) across the easy and hard conditions, for two words (I1 and I2). The different panels show the observed data together with the predictions from a specific model. Going from top to bottom, the complexity of these models increases. The features of each of these models will become clear throughout the remainder of this chapter. But at this point a brief description of the data is in order: First, [...]

Footnotes:

[1] Technically, the independence assumption does not pertain to the actual data points (or marginal distribution), but to the residuals (or conditional distribution) once the statistical model (i.e., fixed effects, random effects, etc.) has been taken into account. With this, we can define independence formally via conditional probabilities: The probability that any observation i takes on a specific value x_i is the same irrespective of the values taken on by all the other observations j ≠ i, given a statistical model with parameter vector θ: P(x_i | θ) = P(x_i | θ; x_j, ∀ j ≠ i).

[2] Note that there are different possibilities of how to define fixed and random effects, ways that are not necessarily compatible with each other (Bolker, 2015; Gelman, 2005). The definition employed here is the one most useful for understanding how to specify and estimate frequentist mixed models as implemented in lme4 (Bates, Mächler, et al., 2015).

[3] It should be noted that this distinction, fixed effects as variables of interest versus random effects as nuisance variables one wants to generalize over, is a simplification.
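The crossed repeated-measures structure of this example, with difficulty varying within both participants and words, can be expressed directly in lme4, the R package discussed later in the chapter. The following sketch uses simulated data; all variable names (rt, difficulty, subject, word) and effect sizes are made up for illustration, and for simplicity only random intercepts, not random difficulty slopes, are included:

```r
library(lme4)

# Toy version of the example design: I subjects x J words x 2 difficulty
# conditions, fully crossed (hypothetical names and effect sizes).
set.seed(1)
I <- 20; J <- 10
dat <- expand.grid(subject    = factor(1:I),
                   word       = factor(1:J),
                   difficulty = factor(c("easy", "hard")))
dat$rt <- 500 +
  ifelse(dat$difficulty == "hard", 100, 0) +  # fixed effect: hard is slower
  rnorm(I, sd = 50)[dat$subject] +            # by-subject random intercepts
  rnorm(J, sd = 30)[dat$word] +               # by-word random intercepts
  rnorm(nrow(dat), sd = 20)                   # residual (iid) noise

# Crossed random effects for subjects and words; the fixed effect of
# difficulty is estimated while both dependency sources are accounted for.
fit <- lmer(rt ~ difficulty + (1 | subject) + (1 | word), data = dat)
fixef(fit)["difficultyhard"]  # estimate of the difficulty effect
```

Because difficulty varies within subjects and within words, a fuller specification could additionally include random difficulty slopes, i.e., (difficulty | subject) + (difficulty | word).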
