Bridging Cognitive and Neuropsychological Measures of Attention
Treviño et al. Cogn. Research (2021) 6:51 https://doi.org/10.1186/s41235-021-00313-1 Cognitive Research: Principles and Implications
ORIGINAL ARTICLE, Open Access

How do we measure attention? Using factor analysis to establish construct validity of neuropsychological tests

Melissa Treviño1*, Xiaoshu Zhu2, Yi Yi Lu3,4, Luke S. Scheuer3,4, Eliza Passell3,4, Grace C. Huang2, Laura T. Germine3,4 and Todd S. Horowitz1

Abstract
We investigated whether standardized neuropsychological tests and experimental cognitive paradigms measure the same cognitive faculties. Specifically, do neuropsychological tests commonly used to assess attention measure the same construct as attention paradigms used in cognitive psychology and neuroscience? We built on the "general attention factor", comprising several widely used experimental paradigms (Huang et al., 2012). Participants (n = 636) completed an on-line battery (TestMyBrain.org) of six experimental tests [Multiple Object Tracking, Flanker Interference, Visual Working Memory, Approximate Number Sense, Spatial Configuration Visual Search, and Gradual Onset Continuous Performance Task (GradCPT)] and eight neuropsychological tests [Trail Making Test versions A & B (TMT-A, TMT-B), Digit Symbol Coding, Forward and Backward Digit Span, Letter Cancellation, Spatial Span, and Arithmetic]. Exploratory factor analysis in a subset of 357 participants identified a five-factor structure: (1) attentional capacity (Multiple Object Tracking, Visual Working Memory, Digit Symbol Coding, Spatial Span); (2) search (Visual Search, TMT-A, TMT-B, Letter Cancellation); (3) Digit Span; (4) Arithmetic; and (5) Sustained Attention (GradCPT). Confirmatory analysis in 279 held-out participants showed that this model fit better than competing models. A hierarchical model where a general cognitive factor was imposed above the five specific factors fit as well as the model without the general factor.
We conclude that Digit Span and Arithmetic tests should not be classified as attention tests. Digit Symbol Coding and Spatial Span tap attentional capacity, while TMT-A, TMT-B, and Letter Cancellation tap search (or attention-shifting) ability. These five tests can be classified as attention tests.

Significance statement
Assessment of cognitive function in clinical populations, for both clinical and research purposes, is primarily based on standardized neuropsychological testing. However, this approach is limited as a clinical research tool due to two major issues: sensitivity and construct validity. Deriving new measures based on contemporary work in cognitive psychology and cognitive neuroscience could help to solve these problems. However, we do not understand the relationship between existing neuropsychological tests and widely used cognitive paradigms. The goal of this paper is to begin to address this problem, using factor analysis tools to map the relationships, specifically in the domain of attention. Our results should provide guidance for which neuropsychological tests should be classified as attention tests, and hopefully provide inspiration for the development of new clinical assessments based on experimental attention paradigms. Furthermore, we hope we have provided a template for other researchers to explore the connections between cognitive paradigms and neuropsychological tests in domains beyond attention. By bringing these fields closer together, we can improve our scientific understanding of cognition, and ultimately improve the welfare of people who suffer from cognitive disorders and deficits.

*Correspondence: [email protected]
1 Basic Biobehavioral and Psychological Sciences Branch, National Cancer Institute, 9609 Medical Center Drive, Rockville, MD 20850, USA
Full list of author information is available at the end of the article
© The Author(s) 2021.

Introduction
Assessing cognitive functioning across the gamut of health and mental health conditions has traditionally relied on standardized neuropsychological test batteries (Foti et al., 2017; Helmstaedter et al., 2003; Meade et al., 2018; Vives et al., 2015). However, this approach may be reaching its limits as a clinical research tool in many fields, due to two major issues: sensitivity and construct validity (Bilder & Reise, 2019; Horowitz et al., 2018; Howieson, 2019; Kessels, 2019; Marcopulos & Łojek, 2019; Parsons & Duffield, 2019). We and others have proposed that deriving new measures based on contemporary work in cognitive psychology and cognitive neuroscience could help to solve these problems (Carter & Barch, 2007; Horowitz et al., 2018). However, we currently do not understand the relationship between existing neuropsychological tests and widely used cognitive paradigms. The goal of this paper is to begin to address this problem, using factor analysis tools to map the relationships. Specifically, we will address the attention domain, which was the most frequently assessed cognitive domain in our survey of cancer-related cognitive impairment studies (Horowitz et al., 2019).

Many neuropsychological tests were originally designed to diagnose severe cognitive difficulties (e.g., resulting from stroke). As a result, they tend to lack sensitivity to the less severe, and often more diffuse cognitive difficulties encountered by many clinical populations (Nelson & Suls, 2013). This insensitivity may contribute to the widely observed lack of correlation between objective neuropsychological tests and patients' subjective reports of their own cognitive problems (Jenkins et al., 2006; Srisurapanont et al., 2017).

Neuropsychological tests tend to be developed from a practical rather than a theoretical standpoint, and often tap multiple cognitive abilities in a single test (Sohlberg & Mateer, 1989). This means that it is often difficult to know exactly what cognitive faculties are being measured by a given test (Kessels, 2019; Schmidt et al., 1994). The Digit Symbol Coding test, for example, is a widely used neuropsychological test that is variously held to measure attention, psychomotor speed, working memory, processing speed, and executive function (Horowitz et al., 2019). This construct validity problem is a major limitation (McFall, 2005; McFall & Townsend, 1998), and poses a general challenge to integrating neuropsychological research with cognitive neuroscience (Horowitz et al., 2019).

In contrast to the neuropsychological tradition, experimental paradigms ("paradigms" rather than "tests", because there is no standard version; Kessels, 2019) in basic cognitive psychology and cognitive neuroscience are explicitly created to test theoretical models of specific cognitive functions and operations. Experimental paradigms often have internal manipulations that allow for separation of subcomponent processes. Consider the visual search paradigm, in which observers search through N items to find a target (e.g., search for the T among Ls). Instead of looking at the overall response time, the experimenter computes the slope of the regression line for response time as a function of N to yield a pure search rate, independent of perceptual, response, and motor stages (Sternberg, 1966). Similarly, in the Eriksen flanker paradigm (Eriksen & Eriksen, 1974), participants see three letters and are asked to give one response if the central letter belongs to one category (e.g., X or T) and another response if it belongs to another (e.g., O or C). If the two flanking letters come from the same category as the target (e.g., X T X, a compatible trial), responses are typically faster than when they come from different categories (e.g., X C X, incompatible). The primary dependent measure is again not overall response time, but the incompatible-compatible difference score, which provides a measure of the strength of interference from incompatible responses.

This sort of logic is rare in neuropsychological tests, perhaps in part because they have been largely administered, even now, as paper-and-pencil tests. Consider the Trail Making Test (Partington & Leiter, 1949), in which participants are asked to connect a sequence of digits in order (1, 2, 3, etc., version A) or alternate between digits and letters (1, A, 2, B, etc., version B). The score for each version is overall completion time. This score conflates search ability, motor function, and executive function into a single score. As with the Digit Symbol Coding test, this makes the Trail Making Test sensitive to a wide range of deficits, which contributes to its popularity (along with the fact that it is not proprietary), while making the results difficult to interpret (Kessels, 2019), though several groups have attempted to decompose Trail Making
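The two derived measures described above, the search-slope and the flanker difference score, can be sketched in a few lines of code. This is a minimal illustration, not material from the paper: the function names and the toy reaction times are ours, invented for the example.

```python
# Sketch of the two derived attention measures discussed in the text.
# Assumes trial-level reaction times (RTs) in milliseconds.
from statistics import mean

def search_slope(set_sizes, response_times):
    """Least-squares slope of RT against set size N: search rate in ms/item."""
    mx, my = mean(set_sizes), mean(response_times)
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, response_times))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

def flanker_effect(compatible_rts, incompatible_rts):
    """Incompatible minus compatible mean RT: interference strength in ms."""
    return mean(incompatible_rts) - mean(compatible_rts)

# Toy data: mean RT per display size, and per-trial flanker RTs.
sizes = [4, 8, 16, 32]
rts = [580, 660, 820, 1140]
print(search_slope(sizes, rts))                           # -> 20.0 ms/item
print(flanker_effect([450, 470, 460], [520, 540, 530]))   # -> 70 ms
```

The point of both computations is the same: a difference or slope isolates one processing stage, where a raw completion time (as in the Trail Making Test) conflates several.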