Assessing Model-Based Inferences in Decision Making with Single-Trial Response Time Decomposition
Gabriel Weindel1,2, Royce Anders1,3, F.-Xavier Alario1,4, Boris Burle2

1Laboratoire de Psychologie Cognitive, Aix Marseille Univ., CNRS, LPC, Marseille, France
2Laboratoire de Neurosciences Cognitives, Aix Marseille Univ., CNRS, LNC, Marseille, France
3Laboratoire d'Étude des Mécanismes Cognitifs, Univ. Lumière Lyon 2, EMC, Lyon, France
4Department of Neurological Surgery, University of Pittsburgh, PA 15213, USA

Decision-making models based on evidence accumulation processes (the most prolific one being the drift-diffusion model – DDM) are widely used to draw inferences about latent psychological processes from chronometric data. While the observed goodness of fit in a wide range of tasks supports the model's validity, the derived interpretations have yet to be sufficiently cross-validated with other measures that also reflect cognitive processing. To do so, we recorded electromyographic (EMG) activity along with response times (RT), and used it to decompose every RT into two components: a pre-motor (PMT) and motor time (MT). These measures were mapped to the DDM's parameters, thus allowing a test, beyond quality of fit, of the validity of the model's assumptions and their usual interpretation. In two perceptual decision tasks, performed within a canonical task setting, we manipulated stimulus contrast, speed-accuracy trade-off, and response force, and assessed their effects on PMT, MT, and RT. Contrary to common assumptions, these three factors consistently affected MT. DDM parameter estimates of non-decision processes are thought to include motor execution processes, and they were globally linked to the recorded response execution MT. However, when the assumption of independence between decision and non-decision processes was not met, in the fastest trials, the link was weaker. Overall, the results show a fair concordance between model-based and EMG-based decompositions of RTs, but also establish some limits on the interpretability of decision model parameters linked to response execution.

We thank Thibault Gajdos for an early review of the manuscript, and Thierry Hasbroucq, Mathieu Servant and Michael Nunez for valuable comments. We thank the four reviewers for their thorough and helpful comments and suggestions on earlier versions of the manuscript. This work, carried out within the Labex BLRI (ANR-11-LABX-0036) and the Institut Convergence ILCB (ANR-16-CONV-0002), has benefited from support from the French government, managed by the French National Agency for Research (ANR) and the Excellence Initiative of Aix-Marseille University (A*MIDEX). All data and code used in this study are available at https://osf.io/frhj9/. The manuscript is currently (as of the 1st July 2020) under revision at Journal of Experimental Psychology: General.

Introduction

Understanding how decisions are made is an important endeavor at the crossroads of many research programs in cognitive psychology. According to one prevalent theoretical framework, decisions are driven by a cognitive mechanism that sequentially samples goal-relevant information from the environment. Contextual evidence is accumulated in favor of the different alternatives until one of them reaches a threshold that triggers the execution of the corresponding response (Ratcliff, Smith, Brown, & McKoon, 2016; Stone, 1960). Typically, this theoretical framework is implemented quantitatively as a continuous diffusion process. These models are applied empirically to fit response accuracy and latency measures along different alternative-choice tasks and experimental manipulations. Such quantitative modeling procedures yield data-derived process parameters that are used to infer the cognitive dynamics underlying decision-making, which are otherwise not directly observable.

There are multiple variants of these quantitative models, which specify the dynamics of evidence accumulation differently (e.g., Anders, Alario, & van Maanen, 2016; Brown & Heathcote, 2008; Heathcote & Love, 2012; Palmer, Huk, & Shadlen, 2005; Ratcliff, 1978; Ratcliff & McKoon, 2008; Tillman, 2017; Usher & McClelland, 2001, among many others). All of these variants share the hypothesis that each response time (RT) is the sum of three major terms (Equation 1; see Luce, 1986): the time needed for stimulus encoding (Te), the time needed for the accumulated evidence to reach a threshold once accumulation has started ("decision time": DT), and the time needed for response execution (Tr). If the decision-relevant information encoding reaches the same level of completeness on each trial, Te does not affect the decision-making process of evidence accumulation (DT). It is also usually assumed that the way the response is executed does not depend on how the decision was reached. In other words, Tr is assumed to be independent from DT. For these reasons, presumably, the terms Te and Tr are most often pooled into a single parameter Ter (Equation 2).

RT = Te + DT + Tr    (1)
Ter = Te + Tr    (2)

In the minimal form of the model, the decision time DT is derived from two parameters: the accumulation rate (i.e., amount of evidence accumulated per time unit) and the decision threshold (i.e., the amount of accumulated evidence needed for a response to be triggered; Figure 1). The best fitting parameters are directly estimated from the measured RT and the corresponding response accuracy.

Figure 1. Processing account of a single decision in a diffusion model implementing evidence accumulation. The stochastic path represents the stimulus evidence being accumulated over time through a noisy channel, modeled as a diffusion process with a drift (i.e., rate of accumulation). Accumulation stops once a threshold boundary is reached, and the corresponding alternative is chosen.
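To make these mechanics concrete, a minimal sketch of such a single-trial diffusion process is given below (a simple Euler discretization, with assumed names v for the drift rate, a for the decision threshold, and ter for the pooled non-decision time, and arbitrary default values). It only illustrates Equations 1-2 and Figure 1; it is not the fitting procedure used in the analyses reported here (code available at https://osf.io/frhj9/).

import numpy as np

def simulate_ddm_trial(v=1.0, a=1.0, ter=0.3, dt=0.001, noise_sd=1.0, rng=None):
    # Illustrative single-trial simulation of a minimal drift-diffusion model.
    # v: drift rate (evidence per second); a: decision threshold (boundary separation);
    # ter: non-decision time in seconds (Te + Tr pooled, as in Equation 2).
    # Parameter names and default values are assumptions made for this sketch.
    rng = np.random.default_rng() if rng is None else rng
    x = a / 2.0            # unbiased starting point, halfway between the two boundaries
    t = 0.0
    while 0.0 < x < a:     # accumulate noisy evidence until a boundary is crossed
        x += v * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x >= a else 0   # 1 = upper boundary reached, 0 = lower boundary reached
    return choice, ter + t        # RT = Ter + DT (Equation 1 with Te and Tr pooled)

# Example: simulate 1000 trials for one arbitrary parameter set
rng = np.random.default_rng(1)
trials = [simulate_ddm_trial(v=1.5, a=1.2, ter=0.3, rng=rng) for _ in range(1000)]
rts = np.array([rt for _, rt in trials])
acc = np.mean([c for c, _ in trials])
print(f"mean RT = {rts.mean():.3f} s, proportion upper-boundary responses = {acc:.2f}")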
These models provide a framework to determine the cognitive locus of specific experimental observations (e.g., associative vs. categorical priming: Voss, Rothermund, Gast, & Wentura, 2013; masked vs. unmasked priming: Gomez, Perea, & Ratcliff, 2013; among many others). They have also been used to assess population differences. For example, general slowing associated with aging has been linked to a higher evidence threshold and a longer Ter, rather than a lower evidence accumulation rate (Ratcliff, Thapar, & McKoon, 2001). The same rationale for group analyses is increasingly used in psychopathology research to characterize information processing alterations in a variety of pathologies such as anxiety (linked […]), depression (Lawlor et al., 2019; Pe, Vandekerckhove, & Kuppens, 2013), schizophrenia (Moustafa et al., 2015), Parkinson's disease (Herz, Bogacz, & Brown, 2016), language impairments (Anders, Riès, van Maanen, & Alario, 2017), and so on (see Ratcliff et al., 2016, for a review).

This brief overview demonstrates the instrumental power of evidence accumulation as a cognitive process model of many diverse tasks, and its endorsement by the scientific community to characterize the latent processing dynamics of decision-making. While cognitive dynamics (in decision-making, or elsewhere) are not directly observable, the inferences made from these process models are expected to be much more powerful than those derived from more generic statistical models (e.g., regression) intended to describe the data at various depths of analysis (Anders, Oravecz, & Alario, 2018; Anders, Van Maanen, & Alario, 2019). This is because cognitive dynamics are directly modeled from the observed data. It is important to note, however, that this linking advantage may only hold if the modeling assumptions can be shown to be cognitively appropriate. We will now discuss these assumptions more extensively.

First, in using this framework, one has to assume that the postulated model indeed reflects the process generating the behavior. This assumption is often considered to be supported by the quality of the data fit diagnostics (e.g., by RT quantile residuals; Ratcliff & McKoon, 2008), which is generally very satisfactory in most published applications. However, it has often been reported that a given behavioral data-set can be fitted equally well by different models that differ substantially in their architectures (e.g., Donkin, Brown, Heathcote, & Wagenmakers, 2011; Servant, White, Montagnini, & Burle, 2016). Therefore, although a good fit is a necessary criterion, it might not be sufficient to single out a particular model (Roberts & Pashler, 2000). In addition, while generally robust, model fitting procedures are complex operations that may suffer from problems such as parameter trade-offs, sensitivity to trials generated from another generative model (e.g., guessed responses), or other biases that can impact the estimated parameters, and hence notably affect the validity of the inferences made upon them (Ratcliff & Childers, 2015).
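As an illustration of such a quantile-based diagnostic, the sketch below compares observed and model-predicted RT quantiles (the 10th to 90th percentiles commonly used in quantile-based fitting), reusing the single-trial simulator sketched above with arbitrary parameter values; it is a toy check rather than the model evaluation procedure applied in this study.

import numpy as np

def rt_quantile_table(observed_rts, predicted_rts, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # Compare observed and model-predicted RT quantiles, the summary statistics
    # underlying common quantile-based fit diagnostics for diffusion models.
    obs_q = np.quantile(observed_rts, probs)
    pred_q = np.quantile(predicted_rts, probs)
    for p, oq, pq in zip(probs, obs_q, pred_q):
        print(f"{int(p * 100):>2d}%  observed = {oq:.3f} s   model = {pq:.3f} s   diff = {oq - pq:+.3f} s")

# Toy example reusing simulate_ddm_trial() from the previous sketch:
# "observed" data from one parameter set, "predicted" data from a slightly different one.
rng = np.random.default_rng(2)
observed = np.array([simulate_ddm_trial(v=1.5, a=1.2, ter=0.30, rng=rng)[1] for _ in range(500)])
predicted = np.array([simulate_ddm_trial(v=1.4, a=1.2, ter=0.32, rng=rng)[1] for _ in range(5000)])
rt_quantile_table(observed, predicted)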
Secondly, one has to assume that the manipulations that modulate the estimated parameters can indeed be attributed to modulations of the presumed cognitive process. The major diagnostic to ascertain this mapping is known as the "Selective Influence Test" (Heathcote, Brown, & Wagenmakers, 2015). This approach probes whether a given experimental manipulation, presumed to selectively affect a given psychological process, only affects the corresponding parameter of the model. For example, manipulations of stimulus information quality are expected to selectively affect the accumulation