Journal of Memory and Language 96 (2017) 78–92

A Bayesian approach to the mixed-effects analysis of accuracy data in repeated-measures designs

Yin Song (a), Farouk S. Nathoo (a,*), Michael E.J. Masson (b,*)

(a) Department of Mathematics and Statistics, University of Victoria, Canada
(b) Department of Psychology, University of Victoria, Canada
(*) Corresponding authors.

Article history: Received 6 April 2016; revision received 3 May 2017.

Keywords: Accuracy studies; Bayesian analysis; Behavioral data; Model selection; Repeated-measures

Abstract

Many investigations of human language, memory, and other cognitive processes use response accuracy as the primary dependent measure. We propose a Bayesian approach for the mixed-effects analysis of accuracy studies using mixed binomial regression models. We present logistic and probit mixed models that allow for random subject and item effects, as well as interactions between experimental conditions and both items and subjects, in either one- or two-factor repeated-measures designs. The effect of experimental conditions on accuracy is assessed through Bayesian model selection, and we consider two such approaches to model selection: (a) the Bayes factor via the Bayesian Information Criterion approximation and (b) the Watanabe-Akaike Information Criterion. Simulation studies are used to assess the methodology and to demonstrate its advantages over the more standard approach, which consists of aggregating the accuracy data across trials within each condition, and over the contemporary use of logistic and probit mixed models with model selection based on the Akaike Information Criterion. Software and examples in R and JAGS for implementing the analysis are available at https://v2south.github.io/BinBayes/.

doi:10.1016/j.jml.2017.05.002
Crown Copyright © 2017 Published by Elsevier Inc. All rights reserved.
Introduction

Many types of behavioral data generated by experimental investigations of human language, memory, and other cognitive processes entail the measurement of response accuracy. For example, in studies of word identification, error rates in word-naming or lexical-decision tasks are analyzed to determine whether manipulated variables or item characteristics influence response accuracy (e.g., Chateau & Jared, 2003; Yap, Balota, Tse, & Besner, 2008). Similarly, in experiments on memory topics such as false memory and the avoidance of retroactive and proactive interference on recall, response errors or the probability of accurate responding are the critical measures of performance (e.g., Arndt & Reder, 2003; Jacoby, Wahlheim, & Kelley, 2015).

The common treatment of accuracy or error-rate data has consisted, and to a large extent continues to consist, of aggregating data across trials within each condition for each subject to generate the equivalent of a proportion correct or incorrect score, ranging from 0 to 1. These scores are then analyzed using repeated-measures analysis of variance (ANOVA) or, in the simplest cases, a t test. Although this standard approach, hereafter termed the 'standard aggregating approach', has serious problems that have repeatedly been pointed out to researchers, it continues to be used. Here we illustrate a solution to these problems offered by Bayesian data analysis. We first (re)summarize the problems of the standard aggregating approach. Then we summarize one approach to this problem that has gained traction over the last decade (non-Bayesian generalized linear mixed models), followed by a brief review of some of the general pros and cons of Bayesian approaches. The rest of the paper then presents a Bayesian statistical modeling framework for repeated-measures accuracy data, simulation studies evaluating the proposed methodology, and an application to actual data arising from a single-factor repeated-measures design.

The standard aggregating approach

To assess the validity of our characterization of how researchers typically analyze accuracy, error, or other classification data, we examined articles published in recent issues of four of the leading journals in the field of cognitive psychology: the Journal of Memory and Language (JML), the Journal of Experimental Psychology: Learning, Memory, and Cognition (JEP), Cognition, and Cognitive Psychology. All articles appearing in issues with a publication date of January to August 2016 (up to the October 2016 issue for JML, because later issues of that journal were available at the time the survey was conducted) were considered. Articles in which accuracy was analyzed using a transformed measure such as d′, receiver operating characteristic curves, or parameters of computational models based on simulation of accuracy data were not included. A total of 180 articles across the four journals reported data expressed as proportions or the equivalent (e.g., accuracy, error, classification responses). Among these articles, 69 were on a topic related to language processing and the remaining 111 addressed other issues in memory and cognition.

For each article, we determined whether the authors used standard methods of analyzing data that included aggregating performance across items or across subjects, or whether generalized linear mixed models were used in which individual trials were the units of analysis. We included in the standard-analysis category any standard univariate method of analysis, such as analysis of variance, t tests, correlation, and regression, in which data were aggregated over items or over subjects. The application of analysis of variance using subjects and items as random effects in separate analyses and reporting F1 and F2 was also classified as using a method of aggregation. This approach, used widely since Clark's (1973) seminal paper on item variability, relies on an analysis that aggregates across defined subsets of trials (items for F1 and subjects for F2), rather than analyzing data at the level of individual trials.

Our assessment indicated that for articles on language-related topics, 37 (54%) applied some form of the standard aggregating approach (of these, 15 used methods that reported effects aggregated over subjects and effects aggregated over items; i.e., F1 and F2). For articles on other topics of memory and cognition, 99 (89%) relied on the standard aggregating approach (two of these reported F1 and F2 analyses). Overall, then, 76% of recently published articles in these four leading cognitive psychology journals analyzed accuracy or other binomial data in the historically standard way, which involves aggregating performance across items for at least a subset of the analyses. The remaining articles used generalized linear mixed models to analyze the data, an approach that does not aggregate across items and that we discuss in detail below.

The shortcomings of what continues to be a widely applied method of analyzing accuracy data, and binomial data in general (i.e., aggregating across items), have been known for some time (Cochran, 1940) and have been reiterated in recent accounts of alternative approaches (e.g., Dixon, 2008; Jaeger, 2008; Quené & Van den Bergh, 2008). For instance, the proportions generated from binary observations (correct versus incorrect) need not be normally distributed, which violates one of the fundamental assumptions of ANOVA and t tests. Transformations of the aggregated proportions have been proposed as a remedy, but the transformed values no longer directly represent the actual accuracy data. Jaeger (2008) also shows that these transformations do not fix the problem when the mean proportions are close to 0 or 1. Furthermore, transforming the data after aggregating across items precludes the investigation of item effects.

Generalized linear mixed models

A viable solution to these difficulties with the standard aggregating approach to analyzing accuracy data involves using generalized linear mixed models of logistic regression (Dixon, 2008; Jaeger, 2008; Quené & Van den Bergh, 2008). In this setting a hierarchical model based on two levels is specified for the data, where at the first level the response variables are assumed to be generated from a Bernoulli distribution. At the second level of the model the accuracy or error rates are converted to a logit scale (the logarithm of the odds of success or failure), logit(p) = ln(p / (1 − p)), and the variability in the log-odds across subjects, items, and conditions is modeled with mixed effects. We emphasize here that p is not computed from the data and does not correspond to the proportion of accurate responses aggregated over items for a given condition and subject; rather, p is an unknown parameter representing the probability of an accurate response for a given subject, item, and experimental condition. Rather than aggregating data over trials to obtain a single estimate of the proportion correct in a given condition for each subject, the individual binary accuracy trial scores are the unit of measurement. This level of granularity allows the assessment of possible random effects for both subjects and items. That is, effects of a manipulation may not be consistent from subject to subject or item to item, and a mixed-effects analysis can characterize the extent of these differences. Variance in effects across items can thus be assessed, which addresses the concern raised by Clark (1973) about the "language-as-a-fixed-effect fallacy" (Jaeger, 2008; Quené & Van den Bergh, 2008).

The proposed use of mixed-effects logistic regression for the analysis of accuracy data can be implemented either with or without significance tests. In the latter case, information criteria such as the Akaike Information Criterion (AIC) can be used for model selection. In the former case, these analyses continue to rely on the basic principles of null-hypothesis significance testing (NHST) for making decisions about whether independent variables are producing effects on performance. A number of recent reports in the psychological literature have highlighted potential deficiencies
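The two-level structure described above — Bernoulli trials at the first level, with condition, subject, and item effects combined on the log-odds scale at the second — can be illustrated with a short simulation. The paper's accompanying software is in R and JAGS (https://v2south.github.io/BinBayes/); the Python sketch below is only an illustration of the data-generating model, with made-up sample sizes and effect values.

```python
import numpy as np

rng = np.random.default_rng(1)

n_subjects, n_items = 30, 40
baseline = 1.0          # hypothetical intercept on the log-odds scale
condition_effect = 0.8  # hypothetical fixed effect of condition (log-odds)

# Second level: random intercepts for subjects and items
subj_re = rng.normal(0.0, 0.5, n_subjects)
item_re = rng.normal(0.0, 0.3, n_items)

def logistic(x):
    """Inverse of the logit link: p = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# First level: one Bernoulli trial per subject-item-condition cell.
# p is a parameter of the generative model, not an observed proportion.
trials = []
for s in range(n_subjects):
    for i in range(n_items):
        for cond in (0, 1):
            eta = baseline + condition_effect * cond + subj_re[s] + item_re[i]
            p = logistic(eta)
            trials.append((s, i, cond, rng.binomial(1, p)))

trials = np.array(trials)
acc0 = trials[trials[:, 2] == 0, 3].mean()
acc1 = trials[trials[:, 2] == 1, 3].mean()
print(f"mean accuracy, condition 0: {acc0:.3f}")
print(f"mean accuracy, condition 1: {acc1:.3f}")
```

Fitting a mixed logistic regression to such trial-level data would recover estimates of the intercept, the condition effect, and the subject and item random-effect variances, without ever aggregating the binary responses into proportions.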

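The model-selection quantities mentioned above and in the abstract — AIC, and the Bayes factor via the BIC approximation — reduce to simple arithmetic once the maximized log-likelihoods of the competing models are in hand. The sketch below uses made-up log-likelihood values for a hypothetical null model (no condition effect) and effect model; it is not output from any real fit.

```python
import math

def aic(loglik, k):
    """Akaike Information Criterion: -2 logL + 2k (smaller is better)."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian Information Criterion: -2 logL + k ln(n) (smaller is better)."""
    return -2.0 * loglik + k * math.log(n)

def bf01_from_bic(bic_null, bic_alt):
    """BIC approximation to the Bayes factor in favor of the null:
    BF01 ~= exp((BIC_alt - BIC_null) / 2)."""
    return math.exp((bic_alt - bic_null) / 2.0)

n = 2400                       # number of binary trials (hypothetical)
ll_null, k_null = -1210.0, 3   # made-up maximized log-likelihood, parameter count
ll_alt, k_alt = -1195.0, 4

bic_null = bic(ll_null, k_null, n)
bic_alt = bic(ll_alt, k_alt, n)
print(f"AIC: null {aic(ll_null, k_null):.1f} vs. effect {aic(ll_alt, k_alt):.1f}")
print(f"BIC: null {bic_null:.1f} vs. effect {bic_alt:.1f}")
print(f"BF01 ~= {bf01_from_bic(bic_null, bic_alt):.2e}")
```

A caveat: counting the effective number of parameters k is itself nontrivial for mixed models with random effects, which is one motivation for the WAIC alternative the paper considers.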