THE CLINICAL NEUROPSYCHOLOGIST https://doi.org/10.1080/13854046.2018.1523465

SPECIAL ISSUE ARTICLE

National Institutes of Health initiatives for advancing scientific developments in clinical neuropsychology

Thomas D. Parsons (a,b,c) and Tyler Duffield (d)

(a) NetDragon Digital Research Centre, Denton, Texas; (b) Computational Neuropsychology and Simulation (CNS) Laboratory, Denton, Texas; (c) College of Information, Denton, Texas; (d) Department of Family/Sports Medicine, Oregon Health and Science University, Portland, Oregon, USA

ABSTRACT
Objective: The current review briefly addresses the history of neuropsychology as a context for discussion of developmental milestones that have advanced the profession, as well as areas where the progression has lagged. More recently, in the digital/information age, utilization and incorporation of emerging technologies has been minimal, which has stagnated ongoing evolution of the practice of neuropsychology despite technology changing many aspects of daily living. These authors advocate for embracing National Institutes of Health (NIH) initiatives, interchangeably referred to as transformative opportunities, for the behavioral and social sciences. These initiatives address the need for neuropsychologists to transition from fragmented and data-poor approaches to integrated and data-rich scientific approaches that ultimately improve translational applications. Specific to neuropsychology is the need for the adoption of novel means of brain–behavior characterizations.
Method: Narrative review.
Conclusions: Clinical neuropsychology has reached a developmental plateau where it is ready to embrace the measurement science and technological advances which have been readily adopted by the human neurosciences. While there are ways in which neuropsychology is making inroads into these areas, a great deal of growth is needed to maintain relevance as a scientific discipline (see Figures 1, 2, and 3) consistent with NIH initiatives to advance scientific developments. Moreover, implications of such progress require discussion and modification of training, ethical, and legal mandates of the practice of neuropsychology.

ARTICLE HISTORY Received 14 March 2018; Accepted 8 September 2018

KEYWORDS Neuropsychology 3.0; technologies; National Institutes of Health; Office of Behavioral and Social Sciences Research; neuroinformatics

Clinical neuropsychology has a rich tradition of developing validated assessment tools using basic low dimensional technologies (i.e. pencil-and-paper protocols that are not ecologically valid simulations of everyday activities). These tools have undergone a number of apparent advances in quantitative methodologies (e.g. expanded normative standards) throughout the profession's history (e.g. Casaletto & Heaton, 2017). However, neuropsychologists have been slow to embrace emerging technological advances in the digital/information age (Rabin et al., 2014; Rabin, Paolillo, & Barr, 2016). As a result, current neuropsychological assessment procedures represent a technology that has barely changed since the first intelligence scales were developed in the early 1900s (Miller & Barr, 2017; Parsons, 2016). A result of this stagnation would be neuropsychology falling behind current National Institutes of Health (NIH) initiatives to advance scientific developments, including: (1) Integrating neuroscience into behavioral and social sciences; (2) Transformative advances in measurement science; (3) Digital intervention platforms; and (4) Large scale population cohorts and data integration (Collins & Riley, 2016). Relatedly, the NIH Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative endeavors to uncover the mysteries of brain disorders (e.g. Alzheimer's, Parkinson's, depression, and traumatic brain injury) and accelerate the development of new technologies for producing dynamic brain imaging and modeling (Insel, Landis, & Collins, 2013). The NIH has developed a 12-year research strategy to achieve the goals of the initiative.

For neuropsychologists interested in keeping pace with NIH initiatives (e.g. the NIH BRAIN Initiative), there is a unique opportunity to take part in the development of technologies for exploration into the ways in which the brain records, processes, uses, stores, and retrieves vast quantities of information. Furthermore, neuropsychologists can help shed light on neuroscience findings through clinical expertise to aid our understanding of the complex links between brain function and behavior.

After a brief discussion of the historical progression of neuropsychological assessment technologies, there will be a discussion of current NIH initiatives for the behavioral and social sciences. This will include evaluations of current neuropsychological assessment technologies and approaches for maintaining relevance as a specialty.

CONTACT Thomas D. Parsons [email protected] NetDragon Digital Research Centre; Director, Computational Neuropsychology and Simulation (CNS) Laboratory; Professor of Psychology and of Learning Technologies, University of North Texas; Fellow, National Academy of Neuropsychology. 1155 Union Circle #311280, Denton, TX 76203, USA
© 2018 Informa UK Limited, trading as Taylor & Francis Group

1. Brief historical overview of neuropsychology and technology

Numerous advances from qualitative to more quantitative methods have occurred since neuropsychology's inception, such as expanded normative standards (Casaletto & Heaton, 2017), evidence-based indicators of neuropsychological change (e.g. Duff, 2012), performance validity testing (e.g. Greher & Wodushek, 2017), and cross-cultural considerations regarding limits of normative comparisons, daily life relevance of test content, familiarity with the testing process, etc. (e.g. Olson & Jacobson, 2015). While each of these advances has been meaningful for the investigation of cognitive functions, the importance of modernizing current approaches to neuropsychological assessment in light of technological advances has received increasing interest of late (e.g. Beaumont, 2008; Benton & Sivan, 2007; Bilder, 2011; Casaletto & Heaton, 2017; Jagaroo, 2009; Jagaroo & Santangelo, 2017; Miller & Barr, 2017; Parsons, 2016; Ponsford, 2017). Miller and Barr (2017) provide a useful brief history of neuropsychology's limited adoption of technology and the maintenance of low dimensional paper-and-pencil measures due to the profession's relationship with test publishing companies.

Bilder (2011) argues that three periods in time represent clinical neuropsychology's scientific development. From 1950–1979, Neuropsychology 1.0 was a period in which lesion localization and interpretation were emphasized without thorough normative data. From 1980 to the present, one finds Neuropsychology 2.0, wherein technological advances in neuroimaging shifted the focus from lesion localization to characterization of cognitive strengths and weaknesses using normative batteries. Bilder sees Neuropsychology 3.0 as a future period of neuropsychological advancement that will incorporate findings from neuroinformatics and information technologies.

In a similar fashion, Parsons (2016) appraised the technological and theoretical development of neuropsychological assessment in terms of three waves of technological adoption. Parsons argues that Neuropsychology 1.0 is best understood as a period in which neuropsychological assessments emphasized the development of low dimensional and construct-driven (i.e. simple presentations of stimuli to test abstract concepts like working memory) paper-and-pencil measures. Neuropsychology 2.0 represents a technological move to automate the administration, scoring, and in some instances the interpretation of low dimensional stimulus presentations using computerized approaches (NIH Toolbox; Gershon et al., 2010, 2013; Weintraub et al., 2013), as well as teleneuropsychology (e.g. video teleconferencing; Cullum, Hynan, Grosch, Parikh, & Weiner, 2014). The NIH Toolbox is an example of a computerized assessment battery initiated by the NIH Blueprint for Neuroscience Research that is developing a normative database (Gershon et al., 2010, 2013; Weintraub et al., 2013). The goal was to develop a set of computerized neuropsychological measures to enhance collection of data in large cohort studies and to advance biomedical research (Gershon et al., 2010, 2013; Weintraub et al., 2013). With synchronous developments in neuroimaging during this era, neuropsychologists were increasingly called upon to make predictions about the patient's ability to perform activities of daily living. As such, Neuropsychological Assessment 3.0 reflects the contemporary development of high dimensional (ecologically valid simulations of everyday activities) assessment and rehabilitation technologies (e.g. computational modeling and simulation; virtual reality; deep learning; neuroinformatics).

These historical formulations are not without controversy. One area of historical concern has been that computerized assessments may result in increased error (e.g. a program shutting down mid-way through a subtest; e.g. Cernich, Brennan, Barker, & Bleiberg, 2007) and/or decreased integrity of the neuropsychological evaluation process by means of automation. While many of these concerns have been addressed with advances in computational power and security, there are a number of steps that need to be taken on the part of developers and users of computerized assessments to ensure proper implementation (Parsons, McMahan, & Kane, 2018). Moreover, greater normative efforts are needed for validating advanced platforms and novel data analytic approaches.

It is important to note that this is not an attack on clinical neuropsychology. Instead, it is an attempt to answer the question 'Are modern neuropsychological assessment methods really "modern"?' This is not a new question. Thirty years ago, Paul Meehl (1987) began calling for clinical psychologists to embrace increasingly prevalent technological advances.
During the same decade (the 1980s), neuropsychologists also began discussing the possibilities of computer-based assessments (Adams, Kvale, & Keegan, 1984). Twenty years ago, Sternberg (1997) noted that psychological assessments had fallen short of meeting Meehl's challenge, barely progressing throughout the twentieth century (e.g. Wechsler scales) compared to progress in our everyday technologies. Sternberg compared the lack of technological advances in intelligence testing to now obsolete black and white televisions, vinyl records, rotary-dial telephones, and the first commercial computer made in the United States (i.e. UNIVAC I). Sternberg also pointed out that cognitive testing needed to progress in ideas, not just new measures.

Contemporaneously with Sternberg, Dodrill (1997) contended that neuropsychology had made much less progress than would be expected in absolute terms and in comparison with the progress made in other clinical neurosciences. Dodrill pointed out that clinical neuropsychologists were using many of the same tests that they had used for decades, and that new tests were not conceptually or substantively better than the old ones (e.g. Wechsler scales). While progress differences between neurology and neuropsychology prior to the appearance of computerized tomographic scanning (1970s) may have been negligible, following this advancement neuropsychologists stopped being asked to identify focal brain lesions. The continued neuroimaging advances since then have allowed clinical neurologists increasing ability to understand and treat neurologic pathophysiology (e.g. Cendes, Theodore, Brinkmann, Sulc, & Cascino, 2016). If neurology currently utilized technological developments commensurate with those of neuropsychology, then neurologists would be limited to pneumo-encephalograms and radioisotope scans, procedures that are considered primeval by current neuroradiological standards.

To emphasize this point, simplistic searches were performed on 27 February 2018 to assess the number of publications per discipline regarding utilization of technological advancements. The first search was a PubMed search with the search terms 'computer' AND ('neuropsychology' OR 'neurology' OR 'neuroscience') and identified 30,013 publications (neuropsychology total = 1077; neurology total = 11,810; neuroscience total = 17,126) from 1985 to 2017 (see Figure 1). In a second and third search, the term 'computer' was replaced by 'technology' and 'neuroimaging', with similar proportional findings (see Figures 2 and 3, respectively). Figures 1, 2, and 3 show the number of publications by year that resulted from each of these three broad literature searches.
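For readers who wish to reproduce or extend such counts, the query logic is straightforward to script. Below is a minimal sketch using Biopython's Entrez wrapper for the NCBI E-utilities; the search terms come from the text above, while the placeholder email and single-year query are our own illustrative assumptions (counts will also drift from those reported as PubMed indexing changes).

```python
# Minimal sketch of the PubMed counting described above (not the
# authors' original procedure). Requires Biopython and network access.
from Bio import Entrez

Entrez.email = "[email protected]"  # NCBI requires a contact address; placeholder

def yearly_count(discipline: str, term: str, year: int) -> int:
    """Count PubMed records matching `term` AND `discipline` in one publication year."""
    query = f'("{term}") AND ("{discipline}")'
    handle = Entrez.esearch(
        db="pubmed", term=query, retmax=0,
        mindate=str(year), maxdate=str(year), datetype="pdat",
    )
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for discipline in ("neuropsychology", "neurology", "neuroscience"):
    print(discipline, yearly_count(discipline, "computer", 2017))
```

Looping the call over 1985–2017 and over the three substitute terms ('computer', 'technology', 'neuroimaging') regenerates the per-year series plotted in Figures 1–3.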

[Figure 1. Publications per year (1985–2017) identified in the PubMed database containing the term 'computer' relative to each discipline search term ('neuropsychology'; 'neurology'; 'neuroscience'). Series: neuropsychology count, neurology count, neuroscience count; y-axis: number of publications.]

[Figure 2. Publications per year (1985–2017) identified in the PubMed database containing the term 'technology' relative to each discipline search term ('neuropsychology'; 'neurology'; 'neuroscience').]

[Figure 3. Publications per year (1985–2017) identified in the PubMed database containing the term 'neuroimaging' relative to each discipline search term ('neuropsychology'; 'neurology'; 'neuroscience').]
These figures suggest (in this simplistic search) that computational technologies have greater representation in neurology and the neurosciences. While the progressive inclusion of technologies is apparent in neuropsychology, it does not appear to have kept pace with the other neurosciences. These search results are reflective of reported utilization rates of computerized instruments by neuropsychologists. Rabin and colleagues (2014) surveyed 512 doctorate-level psychologists residing in the United States and Canada who were affiliated with the National Academy of Neuropsychology and/or the International Neuropsychological Society. Only six percent (n = 40) of the 693 distinct neuropsychology assessments reported by respondents were computerized. Moreover, the average respondent reported that they rarely used computerized tests. It is important to note that an increased likelihood of computerized assessment usage was apparent for newer neuropsychologists.

While technological progress has been slower for clinical neuropsychology than for fellow disciplines, it is important to note that this does not suggest a comprehensive lack of progress. Some neuropsychologists have adopted new technologies, such as those participating in the Framingham Heart Study. For example, Au, Piers, and Devine (2017) discuss the potential of technologies for increased granularity in assessments of performance that can now be measured through digital capture. This can be described as 'BPA [Boston Process Approach] on steroids' (p. 856). Also, newly implemented digital recordings of verbal responses to neuropsychological tests have allowed automated speech processing to extract new language features (e.g. speaker turn taking, speaking rate, hesitations, pitch, number of words, and vocabulary) that may be used for predicting incident cognitive impairment (Alhanai, Au, & Glass, 2017). Additionally, replacement of a ballpoint pen with a digital pen has allowed software to identify 1000 analyzable features for Clock Drawing Test performance. Moreover, the sensitivity of these digital technologies to latencies in decision-making and graphomotor characteristics may assess underlying cognitive processes at a level of precision that hand-scored BPA efforts cannot. Lastly, new performance feature extractions made possible through novel technologies (i.e. the digital pen) are providing data that question traditional conceptualizations of abstract cognitive constructs. Specifically, automated logging of strokes (motor behavior), drawing versus non-drawing time (non-motor behavior), and transitions from one aspect of the clock to the next (higher-order decision latencies) suggests that information processing speed is not a unitary construct (Piers et al., 2017).

The next section discusses recent initiatives, interchangeably referred to as transformative opportunities, from the NIH for advancing scientific developments.
These include the integration of neuroscience into behavioral and social sciences, transformative advances in measurement science, digital intervention platforms, and large-scale cohort data integration. Of specific emphasis will be the ways in which modern neuropsychological assessment methods are prepared to support these opportunities.

2. NIH's transformative opportunities for the behavioral and social sciences

In a recent article on NIH initiatives for behavioral and social sciences, Collins and Riley (2016) discuss emerging scientific and technological opportunities (e.g. novel sensor tools) for enhanced characterization of neurological, behavioral, and social processes. They even go so far as to suggest that these new technologies may result in a scientific paradigm shift from fragmented and data-poor behavioral science to integrated and data-rich science that allows for heightened translation from bench to bedside. In their discussion of transformative opportunities from the NIH Office of Behavioral and Social Sciences Research (OBSSR), they draw from four key developments that influenced the scientific priorities of the NIH OBSSR strategic plan for fiscal years 2017 through 2021. In the following, we discuss these four key developments in terms of their promise for enriching neuropsychological assessment.

2.1. Integrating neuroscience advances into clinical neuropsychology

Developments in neuroscience approaches and technologies, such as functional neuroimaging, have afforded real-time observations of brain function that are challenging the validity of established neuropsychological models built on traditional paper-and-pencil (low dimensional) technologies (Price, 2018). Price (2018) highlights a lack of 'pure specificity' (i.e. neuropsychological impairments after brain injury are rarely confined to one type of processing) and the lack of a one-to-one relationship between neuropsychological functions and brain structures/systems (i.e. the same symptom can arise from different types of injury and, conversely, the same underlying injury can result in a variety of different symptoms), both of which can hinder interpretation of neuropsychological data. Although integration of imaging and neuropsychological methods has improved our understanding of brain functions, Price (2018) also notes multiple difficulties with making conclusions about neuropsychological functions from neuroimaging data (e.g. specific neuropsychological functions are typically associated with activations in multiple brain regions, known as 'distributed processing'). Price (2018) discusses allowing ongoing advances in methods and technologies to redefine old cognitive functions (e.g. elucidation of multiple types of processing speed) and introduce new cognitive functions. The latter is suggested based upon continued observations that there exist multiple ways that the same stimulus (e.g. a written word) can be converted (i.e. network activation) into the same output (e.g. speaking the written word), and that inter-subject variability exists for completing the same task (i.e. different strategies and thus different network activations) even in normal populations (Price, 2018).

Importantly, addressing challenges to the validity of established neuropsychological models will likely be best understood in terms of patients' reciprocal relations with the environments in which they carry out activities of daily living (Bigler, 2016, 2017; Genon, Reid, Langner, Amunts, & Eickhoff, 2018). Collins and Riley (2016) argue that understanding such complex and dynamic interactions requires that psychologists study the brain's processes in ecologically valid contexts (i.e. environmental and social systems). While clinical neuropsychology is increasingly emphasizing ecological validity (Burgess et al., 2006; Chaytor & Schmitter-Edgecombe, 2003) and the need for common ground with social neuroscience (Beauchamp, 2017), this is difficult to perform with current low dimensional paper-and-pencil assessments (Parsons et al., 2017). Furthermore, there is a need for greater attention to the development of neuropsychological assessments of brain functions that include dynamic presentations of environmentally relevant stimuli (Barkley, 2012; Genon et al., 2018; Zaki & Ochsner, 2009), as opposed to the static/two-dimensional pictures that comprise most traditional neuropsychological test stimuli. New transdisciplinary efforts in the clinical and behavioral neurosciences are merging these areas of research. Neuropsychologists need to find ways to update their technologies to reflect high dimensional assessment approaches (e.g. deep learning; virtual reality) that move beyond low dimensional (traditional paper-and-pencil) neuropsychological assessment data.

2.1.1. Need for high dimensional neuropsychological assessments

One approach to addressing a patient's functional capacities in terms of complex and dynamic interactions can be found in Larrabee's (2008) call for the development of capacity-focused measures (based on factor analytic studies) to offer an evidence-based approach to understanding clinically relevant cognitive domains. A limitation is that responses to the low dimensional tasks found in traditional assessments that use static/two-dimensional stimuli can constrain task performance and neural activity to constructs (e.g. working memory). Low dimensional cognitive assessment tasks bind mean neural population dynamics to a low-dimensional subspace (Gao & Ganguli, 2015). As a result, the neuropsychologist's assessment of the patient's ability to perform everyday activities may be occluded. Pitkow and Angelaki (2017) argue that observed low-dimensional neural dynamics may be an artifact of simple cognitive tasks. Moreover, many standard paper-and-pencil (low dimensional) tasks can be solved via basic responses to static/two-dimensional stimuli. From this they contend that 'if natural tasks could be solved with linear computation, then we wouldn't even need a brain. We could just wire our sensors to our muscles and accomplish the same goal because multiple linear processing steps are equivalent to a single linear processing step' (p. 944). Generalizing from low dimensional paper-and-pencil tasks to a patient's everyday functioning is challenging at best, and uninterpretable at worst. Do low dimensional paper-and-pencil tests tell us about the patient's dynamic brain functioning in the real world, or do they reflect a laboratory artifact?

2.1.2. Function-led assessments using high dimensional virtual environments

Burgess and colleagues (2006) suggest that neuropsychological assessments be developed to represent real-world 'functions' and proffer results that are 'generalizable' for prediction of functional performance across a range of situations. This 'function-led approach' to creating neuropsychological assessments would include neuropsychological models that proceed from directly observable everyday behaviors backward to examine the ways in which a sequence of actions leads to a given behavior in normal functioning, and the ways in which that behavior might become disrupted. For example, a geriatric patient may become confused while navigating a simulated neighborhood in a driving simulator. Computational modeling and automated logging capabilities (see below for further explanation) of the patient's performance and activities in the simulator (e.g. head movements, eye-tracking, response latencies and patterns, etc.) offer neuropsychologists additional information for determining categorical, sequential, and hierarchical performance indicators. This will provide precise and distinct data regarding each patient's depth and breadth of real-world activity performance impairments (and the spatiotemporal interrelationships) compared to other geriatric patients who may be experiencing similar navigational difficulties (see below for discussion of methodologies). This scenario is different from current assessment of driving concerns that uses paper-and-pencil tests for simplistic reference to driving capacities (low dimensional tasks that do not necessarily emulate the environment, dynamics, or demands of driving).

A new generation of neuropsychological tests should be developed that are 'function led' rather than purely 'construct driven.' These neuropsychological assessments should meet the usual standards of reliability, but should also include both sensitivity to brain dysfunction and generalizability to real-world function (see Parsons, 2016; Parsons et al., 2017 for book-length discussion and examples of validated assessments).

While a few function-led tests have been developed that assess cognitive functioning in actual real-world settings, there are several drawbacks to experiments
Since VEs allow for precise presentation and control of dynamic/three- dimensional perceptual stimuli, they can provide ecologically valid assessments that combine the veridical (i.e. performance on a test should predict some aspect of a patient’s day-to-day functioning) control and rigor of laboratory measures with a veri- similitude (i.e. the demands of a test and the testing conditions must resemble demands found in the everyday world of the patient) that reflects real-life situations (Bohil, Alicea, & Biocca, 2011; Franzen & Wilhelm, 1996). Additionally, the enhanced computation power allows for increased accuracy in the recording of neurobehavioral responses in a perceptual environment that systematically presents complex stimuli. Such simulation technology appears to be distinctively suited for the development of ecologically valid environments, in which three-dimensional objects are presented in a consistent and precise manner (Renison, Ponsford, Testa, Richardson, & Brownfield, 2012). VE-based assessments can provide a balance between naturalistic observation (e.g. capturing a variety of behaviors in a virtual classroom during testing) and the investigational need for exacting control over key variables (i.e. systematically manipu- lating the classroom demands). These high dimensional immersive VEs offer enhanced measures because traditional tests are limited by sterile tests and testing environments that fail to replicate the dis- tractions, stressors, and/or demands found in the real world (e.g. static and antiquated stimuli [e.g. abacus on confrontation naming task], secluded office or laboratory set- ting for testing, reference and extrapolation to referral question [e.g. driving ability] rather than live demonstration of referral question concern while measuring constitu- ent properties, etc.). VEs allow for precise control over stimulus parameters and the ability to adjust the potency or frequency of stimuli. With traditional assessments (low dimensional paper-and-pencil and computer automated), the psychologist may not receive a clear picture of the client’s ability to perform everyday activities. While many cognitive tests do give some insight into the client’s everyday performance, they do not provide direct knowledge about shortcomings in the functional capabilities of the client, which limits the accuracy and utility of the psychologist’s recommendations (Chaytor & Schmitter-Edgecombe, 2003; Manchester, Priestley, & Jackson, 2004). VEs offer a platform for shifting from tests that lack criterion validity to direct observation of behavior in real-world scenarios. 10 T. D. PARSONS AND T. DUFFIELD

3. Adoption of advances in measurement science to neuropsychological assessment

Another area of interest found in the NIH Office of Behavioral and Social Sciences Research is the importance of transformative advances in measurement science. Over the past 20 years, there has been a substantial increase in federally funded studies with neuropsychological outcomes. Woodard (2017) discusses this rapid growth and the move from basic data analytic approaches (e.g. paired t-tests or repeated measures analysis of variance) to new analytic approaches for evaluating change over time (e.g. linear mixed effects modeling) and enhanced data imputation approaches. Looking toward the next 25 years, it is important that neuropsychologists look at developments in deep learning and other computational modeling approaches. Advances in data analytics (e.g. linear mixed effects modeling; imputation; deep learning) should be used for developments such as computer adaptive testing (CAT) and computational models derived from the construction of large databases through passive data monitoring over time.
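As a brief illustration of the 'change over time' analysis named above, the following sketch fits a linear mixed effects model with a random intercept per participant using statsmodels. The data frame is simulated and the column names are our own placeholders, not variables from any cited study.

```python
# Linear mixed effects sketch: repeated cognitive scores nested within
# participants, with a fixed-effect slope for time (rate of change).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_visits = 50, 4
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
# Each simulated participant has a random baseline; scores decline by
# 0.5 SD per visit, plus visit-level noise.
score = rng.normal(0, 1, n_subj)[subj] - 0.5 * time + rng.normal(0, 0.5, len(subj))
data = pd.DataFrame({"subject": subj, "time": time, "score": score})

# groups= gives each subject its own random intercept.
fit = smf.mixedlm("score ~ time", data, groups=data["subject"]).fit()
print(fit.summary())  # the fixed-effect `time` coefficient recovers ~ -0.5
```

Unlike a repeated measures ANOVA, this formulation tolerates unbalanced visit schedules and missing sessions, which is one reason such models have displaced the older approaches in longitudinal cohorts.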

3.1. Computer adaptive testing and item-response theory

3.1.1. NIH Toolbox

CAT provides progressively precise assessments that will accelerate clinical neuropsychology's knowledgebases. For example, the NIH Toolbox uses item-response theory (IRT) and computerized adaptive testing (CAT). The IRT approach offers an alternative to classical test theory by moving beyond scores that are relative to group-specific norms (Thomas, 2011). In IRT, the probability of an item response is modeled relative to the respondent's position on the underlying construct of interest. This approach can be useful for providing item-level properties of each NIH Toolbox measure across the full range of each construct. While neuropsychological measures tend to meet the reliability and validity requirements of classical test theory, the equivalence of item properties (e.g. item difficulty and item discriminatory power) is often assumed across items. Consideration of item difficulty tends to be subsumed under independent variable manipulation (e.g. cognitive load) to modify the marginal likelihood of correct responses in item subgroups. A limitation of this approach is that it does not match well with current item-level analyses found in neuroimaging assessments of brain activations following stimulus probes. For neuropsychological assessments to comport well with brain activation probes, item difficulty needs to be considered to avoid ceiling and floor effects in patient performances across clinical cohorts. IRT models offer the neuropsychologist both individual patient parameters and individual item characteristics that can be scaled along a shared latent dimension. Neuropsychological assessments would benefit from greater adoption of developments in IRT that emphasize the accuracy of individual items.

The NIH Toolbox also uses CAT to shorten the length of time needed for an assessment. CAT tests are, on average, half as long as paper-and-pencil measures, with equal or better precision at establishing ability levels (Gibbons et al., 2008; Weiss, 2004) through avoidance of floor or ceiling effects and concise item pools. The CAT approach offers the NIH Toolbox enhanced efficiency, flexibility, and precision. It also provides an opportunity to assess more domains of interest without adversely affecting participant burden. When applied to CAT, IRT principles allow for real-time assessment of item-level performance. For example, a computer adaptive neuropsychological assessment based on IRT principles could begin with an item associated with a patient's average performance and then move forward in item difficulty if the patient is correct. On the other hand, if the patient fails to respond appropriately, a reverse rule can be used, in which the next simpler item is presented. Following this iterative process, computer automated neuropsychological assessments could modify the presented items' difficulty in real time relative to the patient's performance. Thomas et al. (2018) have developed IRT approaches that can be put together as signal detection models. This allows for the connecting of corresponding but discrete methods. The combination of signal detection and IRT models results in an approach that has the measurement accuracy needed for robust modeling of item difficulty and examinee ability from IRT and an interpretive framework covering the experimentally validated cognitive constructs from signal detection theory.
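The core IRT/CAT loop is compact enough to sketch. The code below implements a two-parameter logistic (2PL) item model and an adaptive loop; the item parameters and the simulated examinee are invented for illustration, and where the text describes a simple step-up/step-down rule, this sketch uses the standard maximum-information selection rule instead. Operational systems such as the NIH Toolbox are, of course, far more elaborate.

```python
# Minimal 2PL IRT + CAT sketch (illustrative parameters, not a real item bank).
import math
import random

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta: float, a: float, b: float) -> float:
    """Fisher information of an item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses) -> float:
    """Crude grid-search MAP estimate; the standard-normal prior keeps it finite."""
    def posterior(theta):
        ll = -theta * theta / 2.0
        for (a, b), correct in responses:
            p = p_correct(theta, a, b)
            ll += math.log(p if correct else 1.0 - p)
        return ll
    grid = [g / 10.0 for g in range(-40, 41)]
    return max(grid, key=posterior)

bank = [(1.2, -1.5), (0.9, -0.5), (1.5, 0.0), (1.1, 0.8), (1.3, 1.6)]  # (a, b)
responses, theta = [], 0.0       # start at average ability, as in the text
random.seed(3)
true_theta = 0.8                 # the simulated examinee's actual ability

for _ in range(4):
    used = {item for item, _ in responses}
    # Select the unadministered item most informative at the current estimate.
    a, b = max((i for i in bank if i not in used),
               key=lambda i: information(theta, *i))
    correct = random.random() < p_correct(true_theta, a, b)
    responses.append(((a, b), correct))
    theta = estimate_theta(responses)
    print(f"item(a={a}, b={b}): {'correct' if correct else 'incorrect'}; theta -> {theta:.1f}")
```

Because each administered item is chosen to be maximally informative at the running ability estimate, the test converges on the examinee's level quickly, which is the mechanism behind the halved test lengths cited above.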

3.1.2. Passive data monitoring

In addition to the coupling of IRT to CAT, Collins and Riley (2016) point to passive data monitoring from everyday technologies (e.g. smartphones; the Internet of Things (IoT)) that can be used to collect real-time neuropsychological assessments throughout the course of a day. For example, each patient has a digital footprint that emerges from regular use of everyday technologies. The coupling of these technologies with developments in measurement science can deliver novel methods for capturing cognitions, affects, and behaviors (e.g. Reece & Danforth, 2017). Moreover, the rapid progress in sensor technologies allows for objective and effective measures of behavioral performance, psychophysiology, and environmental contexts (e.g. Wade et al., 2016). The usage of scientifically and technologically enriched measurement protocols offers the neuropsychologist (as well as other behavioral and social sciences researchers) increased precision and granularity for enhanced data analytic science (Riley, 2016).

The development of computerized neuropsychological assessments (including high dimensional VEs with graphical models) that move beyond automations of low dimensional paper-and-pencil tasks allows the neuropsychologist to present the patient with scenarios that require them to choose among multiple subtasks. Neuropsychologists can draw from neuroscience studies using cognitive tasks that require non-human primates to choose and integrate noisy stimuli (i.e. sensory inputs) towards a choice (Mante, Sussillo, Shenoy, & Newsome, 2013; Saez, Rigotti, Ostojic, Fusi, & Salzman, 2015). From these higher dimensional tasks, context-dependent computational models have been developed that include latent context variables that can be disentangled through non-linear computation. Inferring latent context variables (causal attributions for choosing appropriate actions) requires perceptual models. Pitkow and Angelaki (2017) have proposed a framework for understanding probabilistic computations using graphical and statistical models of naturalistic behaviors (see Figure 4). Given that the probability distribution for high dimensional (ecologically valid simulations of everyday activities) tasks is complex, the brain likely simplifies the high dimensional simulation by focusing upon significant interactions.

Figure 4. Probabilistic computations using graphical and statistical models of naturalistic behaviors. (X. Pitkow & D. E. Angelaki, Neuron, 2017. Reprinted by permission of the publisher.)

Variables of interest (as well as their interactions) can be visualized as a sparsely connected graph (Figure 4(A)) and described mathematically as a probabilistic graphical model (Koller & Friedman, 2009). Following these authors' lead, neuropsychologists could develop probabilistic graphical models that proficiently describe complex statistical distributions relating a host of interactions and/or conditional dependencies among neuropsychological variables.
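For readers unfamiliar with the formalism, the defining property of such a model (a textbook statement, not one specific to Pitkow and Angelaki's framework) is that the joint distribution factorizes according to the graph. For a directed graph over variables $x_1, \dots, x_n$:

$$ p(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} p\!\left(x_i \mid \mathrm{pa}(x_i)\right), $$

where $\mathrm{pa}(x_i)$ denotes the parents of $x_i$ in the graph; for undirected graphs the analogous factorization is over clique potentials, $p(x) = Z^{-1} \prod_{c} \psi_c(x_c)$ (Koller & Friedman, 2009). The sparser the graph, the fewer interactions must be estimated, which is precisely why a sparsely connected graph over neuropsychological variables would remain tractable.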

3.2. Deep learning for higher dimensional algorithms

An area that neuropsychologists could employ to enhance modeling is the use of deep learning algorithms that simulate the hierarchical structure of both healthy and damaged human brains. Deep learning processes data from lower levels to increasingly higher levels. It is utilized increasingly in the development of novel technology for big data and artificial intelligence (Najafabadi et al., 2015). Neuropsychologists could use it for analyzing studies that use both traditional (low dimensional paper-and-pencil) and high dimensional virtual reality-based neuropsychological assessments. Deep learning could be used to first process the lower dimensional data from traditional neuropsychological assessments and then move to increasingly higher dimensional data from VE-based tasks to develop progressively more abstract semantic concepts (see Figure 4). These data-driven semantic concepts are likely more representative of brain functioning than historical, theoretically based cognitive constructs (e.g. working memory). Indeed, a recent meta-analysis from Weissberger et al. (2017) found relatively high sensitivity and specificity for current neuropsychological assessments of immediate memory (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%). Data from deep learning algorithms may allow for the creation of new neuropsychological assessments with even greater diagnostic power.

As an illustration, Testolin and Zorzi (2016) used probabilistic models and generative neural networks to propose a unified framework for modeling neuropsychological functioning in both healthy and clinical persons. These connectionist models can be understood as part of the more general framework of probabilistic graphical models. Their approach utilizes computational models of neuropsychological performance in terms of Bayesian computations (computational methods rooted in Bayesian statistics) that offer posterior probabilities. These Bayesian approaches to brain function express perceptions and actions as inferential processes. Neuropsychological deficits may be understood as false inferences that arise due to aberrant prior beliefs. The Bayesian approaches afford a process for computational phenotyping. Their use of graphical models can be implemented as a stochastic process involving a randomly determined sequence of observations (each of which is considered as a sample of one element from a probability distribution) via generative neural networks (see Figure 4).

Moreover, deep learning models have a structured architecture that permits enhanced simulation of neuropsychological deficits following damage to the brain. Testolin and Zorzi (2016) use the example of visual object recognition (e.g. facial processing). A neuropsychologist could apply selective lesions to computational models of visual object recognition to assess the impact of damage to a range of specific cortical regions (e.g. early visual processing; extrastriate areas; anterior associative areas). Using this approach, neuropsychologists could develop new assessments of visual agnosia and examine the appearance of category-specific deficits.

A further use of deep learning architectures is the simulation of selective impairment to particular connection pathways.
Cappelletti, Didino, Stoianov, and Zorzi (2014) applied stochastic decay (stochastic reduction of weight values that decreases responsivity to afferent signals) to synaptic strengths to investigate decline in numerosity comparison in an elderly cohort. They looked at both global degradation of all network synapses and local degradation of inhibitory synapses from a particular processing layer. They found that while older participants accurately performed arithmetical tasks, they had impaired numerosity discrimination on trials requiring the inhibition of incongruent information. Moreover, they found that these results were related to poor inhibitory processes measured by standard neuropsychological assessments. Using computational modeling and simulation, they found that the specific degradation of inhibitory processes resulted in a pattern that closely resembled older participants' performance. This combination of neuropsychological assessment and computational modeling could represent a new standard for clinical neuropsychological studies.
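The mechanics of this kind of simulated lesioning are simple to demonstrate. The sketch below trains a tiny network on a toy two-alternative discrimination (a stand-in for numerosity comparison), then applies stochastic decay to a random subset of its weights and measures the resulting drop in accuracy. The task, architecture, and decay parameters are our own illustrative simplifications, not the published model.

```python
# Toy 'stochastic decay' lesion experiment in the spirit of
# Cappelletti et al. (2014), using only numpy.
import numpy as np

rng = np.random.default_rng(0)

# Toy discrimination: decide which of two noisy magnitudes is larger.
X = rng.normal(size=(2000, 2))
y = (X[:, 0] > X[:, 1]).astype(float)

# One hidden layer, trained by full-batch gradient descent (lr = 1.0)
# on the logistic loss.
W1, b1 = rng.normal(scale=0.5, size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)

def forward(W1, b1, W2, b2):
    H = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2).ravel()))
    return H, p

for _ in range(500):
    H, p = forward(W1, b1, W2, b2)
    err = (p - y)[:, None]                 # dLoss/dlogit for logistic loss
    dW2 = (H.T @ err) / len(X)
    dH = (err @ W2.T) * (1.0 - H ** 2)     # backprop through tanh
    W2 -= dW2
    b2 -= err.mean(0)
    W1 -= (X.T @ dH) / len(X)
    b1 -= dH.mean(0)

def accuracy(W1, b1, W2, b2):
    return ((forward(W1, b1, W2, b2)[1] > 0.5) == (y > 0.5)).mean()

print("intact accuracy:  ", accuracy(W1, b1, W2, b2))

# Stochastic decay: shrink a random half of the input-to-hidden weights,
# reducing their responsivity to afferent signals.
mask = rng.random(W1.shape) < 0.5
W1_lesioned = np.where(mask, 0.3 * W1, W1)
print("lesioned accuracy:", accuracy(W1_lesioned, b1, W2, b2))
```

Varying which weights are decayed (all synapses vs. a particular layer) and by how much is what lets modelers ask which specific degradation best reproduces an observed behavioral profile.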

4. Digital intervention platforms

Another area that Collins and Riley (2016) emphasize in their discussion of transformative opportunities from the NIH Office of Behavioral and Social Sciences Research is the potential application of technological advances to interventions. In the past quarter century, progress in neurocognitive rehabilitation has been aided by widely available imaging tools for measuring brain plasticity. Likewise, there has been a notable increase in the use of non-invasive brain stimulation approaches that leverage neural plasticity for rehabilitation (see Crosson et al., 2017 for review). From an intervention perspective, neuropsychologists focus on promoting brain plasticity by increasing a patient's capacity for performing everyday activities. Collins and Riley (2016) point to the resource- and labor-intensiveness of these interventions and the resulting limitations in terms of reach, scalability, and duration, consistent with the use of real-world assessment environments discussed previously. Collins and Riley (2016) also point to the need for improving the precision of these interventions by personalizing approaches at the start, adapting the interventions over the course of treatment, and operationalizing them into coded databases to ensure treatment fidelity. These suggested intervention improvements would naturally follow adoption of the aforementioned assessment advances.

4.1. Smart environment technologies for unobtrusive monitoring and interventions

Technological advances in smart environment technologies provide a wonderful exemplar of the assessment and intervention enhancements possible through integration and incorporation of several of the aforementioned capabilities (e.g. function-led evaluation, passive data monitoring, deep learning, etc.). These smart environments allow for unobtrusive monitoring of trends in performance of everyday activities that may indicate changes in clinical status (e.g. mobility patterns that can predict neurocognitive status) and provide automatic interventions within real-world settings (Alberdi, Aztiria, & Basarab, 2016; Cook & Schmitter-Edgecombe, 2009; Dawadi, Cook, & Schmitter-Edgecombe, 2016; Galambos, Skubic, Wang, & Rantz, 2013; Hayes et al., 2008; Schmitter-Edgecombe, Cook, Weakley, & Dawadi, 2017). This is accomplished through machine learning algorithms (e.g. naïve Bayes, hidden Markov models, conditional random fields, and dynamic Bayes networks) that are trained on large amounts of labeled data to model, recognize, and monitor activities (Aramendi et al., 2018; Singla, Cook, & Schmitter-Edgecombe, 2010). Building on the monitoring and predictive capabilities of smart environments, activity aware prompting can be used to aid in the promotion of independent living. Studies using these prompting technologies have reported increases in independent engagement in activities by patients with neurocognitive impairment (Boger & Mihailidis, 2011; Boll, Heuten, Meyer, & Meis, 2010).
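A hedged sketch of the activity-recognition step follows: classifying which daily activity is underway from simple sensor features, using one of the model families named above (naïve Bayes). The features and labels are synthetic placeholders, not data from any cited smart-home project.

```python
# Toy smart-home activity recognition with Gaussian naive Bayes.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)

# Features per time window: [kitchen motion events, bathroom motion
# events, hour of day]. Each call simulates labeled training windows.
def simulate(label, kitchen, bathroom, hour, n=100):
    X = np.column_stack([
        rng.poisson(kitchen, n),
        rng.poisson(bathroom, n),
        rng.normal(hour, 1.0, n),
    ])
    return X, [label] * n

Xa, ya = simulate("cooking", kitchen=8, bathroom=0, hour=18)
Xb, yb = simulate("grooming", kitchen=0, bathroom=6, hour=7)
X, y = np.vstack([Xa, Xb]), ya + yb

model = GaussianNB().fit(X, y)
print(model.predict([[7, 1, 19.0]]))  # -> likely 'cooking'
```

In a deployed system, a sustained drift in such classifications (e.g. cooking windows disappearing) is the kind of trend that could flag a change in clinical status and trigger an activity-aware prompt.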

4.2. Virtual environment technologies

For some interventions it may be preferable to develop smart virtual reality environments that simulate real-world scenarios and offer refined stimulus delivery for interventions (Bohil et al., 2011). The use of VEs allows the neuropsychologist to present and control stimuli across various sensory modalities (e.g. visual, auditory, olfactory, haptic, and kinesthetic). Moreover, there are an increasing number of validated VEs that can be used for assessment and intervention: virtual apartments (Henry, Joyal, & Nolin, 2012), grocery stores (Parsons & Barnett, 2017), libraries (Renison et al., 2012), classrooms (Iriarte et al., 2016; Parsons, Bowerly, Buckwalter, & Rizzo, 2007; Parsons & Carlew, 2016; Rizzo et al., 2006), driving (Schultheis, Rebimbas, Mourant, & Millis, 2007), cities (Plancher, Barra, Orriols, & Piolino, 2013; Plancher, Gyselinck, & Piolino, 2018), and even military environments (Armstrong et al., 2013; Parsons & Rizzo, 2008; Parsons & Courtney, 2014). In addition to the use of novel measurement science for more efficient assessments using behavioral performances, real-time psychophysiological data (e.g. eye-gaze) can also be used to adapt the assessment/intervention environments for a more individualized approach using factors such as emotional reactivity and ongoing skill development (e.g. Wade et al., 2016). The data logging, additional assessment/intervention variables, and computational modeling capabilities of these VEs may help clinicians better understand and treat patient problems in their daily and complex routines (for a review of validated VEs, see Parsons, 2016). While many VE-based neuropsychological assessments and interventions need enhanced norms, some normative studies have been conducted (e.g. virtual reality classroom environments; Iriarte et al., 2016; Parsons et al., 2007; Parsons & Carlew, 2016; Parsons & Rizzo, in press; Rizzo et al., 2006) and several other normative efforts are underway.

4.3. Smartphones and other digital technologies

As Collins and Riley suggest, there is a need for intervention technologies that move beyond the historically limited duration of traditional rehabilitation interventions, which negatively impacts the maintenance of behavioral change. They point to the promise of smartphones and other digital technologies for extending treatment duration for improved behavioral maintenance. These mobile technologies allow clinicians to extend interventions into patients' everyday activities by prompting behaviors and skill building between treatment sessions. This often involves ecological momentary interventions, typified by the provision of interventions to patients as they perform activities of daily living (Heron & Smyth, 2010). Ecological momentary intervention studies have found promising results using mobile electronic devices promoting self-awareness (Runyan et al., 2013), emotional regulation (Bylsma, Taylor-Clift, & Rottenberg, 2011), and behavioral prevention (Cook, McElwain, & Bradley-Springer, 2010; MacDonell, Naar-King, Murphy, Parsons, & Huszti, 2011).

A further advance found in ecological momentary intervention studies using digital devices is apparent in the abundant streams of continuous data (Berke, Choudhury, Ali, & Rabbi, 2011; Spruijt-Metz et al., 2015). Similar to the aforementioned technology examples, advances in computational modeling proffer unique opportunities for real-time behavioral interventions in ecological contexts using mobile devices (Nilsen & Pavel, 2013; Rivera & Jimison, 2013; Saranummi et al., 2013; Spring, Gotsis, Paiva, & Spruijt-Metz, 2013). The ultimate challenge for neuropsychologists across technologies is the development and validation of neuropsychological measures and interventions that include ontologies for neurocognitive and behavioral constructs, which will allow for the development of collaborative knowledgebases to bring together data from various disciplines.

5. Large-scale population cohorts, data integration, and cognitive ontologies

Another area of interest for the NIH OBSSR strategic plan is big data, as well as analytics and data integration techniques that can be used to develop collaborative knowledgebases. Collins and Riley (2016) call for open data-sharing approaches in the behavioral and social sciences. As noted earlier, large-scale longitudinal neuropsychological studies were rather less apparent 25 years ago, but there has been a substantial increase in federally funded studies with neuropsychological outcomes (Woodard, 2017). Predicated on the previously noted novel measurement science, neuropsychologists are beginning to discuss the ways in which neuropsychological concepts may be formalized into cognitive ontologies with formal designations of distinct cognitive entities in terms of hierarchical and/or spatiotemporal relations (Bilder, 2011; Jagaroo, 2009; Jagaroo & Santangelo, 2017). Neuropsychologists leading the initiative on cognitive phenomics have called for genome-to-phenome mapping with well-matched descriptions of neurocognitive constructs (Bilder et al., 2009a, 2009b; Jagaroo, 2009; Sabb et al., 2008).

A potential roadblock to developing analytics and data integration techniques that can be used to develop collaborative knowledgebases for neuropsychologists is that neuropsychological assessments involve measurement of hypothetical interdimensional constructs inferred from research findings (Burgess et al., 2006; Jagaroo & Santangelo, 2017). According to Dodrill (1997), poor test specificity may be revealed in the median correlations for common neuropsychological tests. For example, Dodrill asserts that while the median correlation within domain groupings on a test was .52, the median correlation between groupings was .44. From this, Dodrill extrapolates that the tests are not unambiguously domain specific, because the median correlations should be notably higher within groupings and lower between groupings. Consequently, the principal assessment measures used by practitioners may not be quantifying domains to a level of specificity that accounts for the covariance among the measures (Dodrill, 1997).

Jagaroo and Santangelo (2017) point out that the 'cognitive constructs' assessed by traditional neuropsychological tests encompass several latent constructs, compound constructs, and/or multivariate operations. While normative use of these traditional neuropsychological assessments in clinical practice (as well as commercial interests) has resulted in acceptance, they often have limited correspondence with distinct neural systems. An example can be seen in the neurocognitive domain of 'working memory.' Instead of being a unitary concept, working memory represents a latent construct that is appraised from neuropsychological measures (Burgess et al., 2006). Evidence can be seen in a review by Sabb and colleagues (2008) of over 478 articles to investigate relations among phenotypic estimates of executive functioning. Although some cognitive control measures had good consistency across studies in the use of a specific measure (e.g. Digit Span Backwards only used one indicator: correct recall of digits), considerable variation was noted in other measures (e.g. various measures of Go/No-go performance were used). As a result, inconsistency is introduced that may decrease the ability to pool data and interpret results across studies.
Work is needed to develop and consistently implement neuropsychological assessments that will generate reliable data for integration with other neuroscience disciplines. In addition to the previously discussed advantages, improved access to large databases will allow for refined assessment of relations within and between neuropsychological assessment domains. Novel stochastic approaches aimed at alleviating covariance among neuropsychological measures will also likely improve understanding of endophenotypes by linking cerebral activations with functional task performance.

While ontologies abound in other biomedical disciplines, neuropsychological assessment lags in its development of formal ontologies (Bilder, 2011; Jagaroo, 2009; Jagaroo & Santangelo, 2017; Parsons, 2016). The idea of 'ontologies' in neuroinformatics reflects the formal specification of entities that exist in a domain and the relations among them. A given ontology contains designations of separate entities along with a specification of ontological relations among entities that can include hierarchical relations (e.g. 'is-a' or 'part-of') or spatiotemporal relations (e.g. 'preceded-by' or 'contained-within'). These knowledge structures allow for consistent representations across models, which can facilitate communication among domains by providing an objective, concise, common, and controlled vocabulary. This consistency also allows for enhanced interoperability and provision of links among levels of analysis. Such ontologies have become central within many areas of neuroscience.

In the realm of neuropsychology, several projects have been initiated to develop cognitive ontologies at the Consortium for Neuropsychiatric Phenomics (www.phenomics.ucla.edu). This consortium aims to enable more effective collaboration and to facilitate the sharing of knowledge about cognitive phenotypes with other levels of biological knowledge. Also, applications like the Cognitive Atlas (http://www.cognitiveatlas.org/) have been developed to address the need for a collaborative knowledge base that captures the comprehensive collection of conceptual structures within neuropsychology (Poldrack et al., 2011). The Cognitive Atlas project describes the 'parts' and processes of cognitive functioning in a manner similar to descriptions of the cell's component parts and functions in gene ontology. While this project does offer promise, it is still in its infancy and requires development if it is to provide a solid basis for annotation of neuropsychological data (e.g. neuroimaging of cognitive processes). Further, like other collaborative knowledge bases (e.g. Wikipedia), its realization will depend on the involvement of a large number of interested neuropsychologists (Poldrack et al., 2011), as neuropsychology has previously been slow to embrace the call for advancement of formal cognitive ontologies (Bilder et al., 2009b; Jagaroo, 2009; Price & Friston, 2005).
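To make the relational structure concrete, the toy sketch below stores a handful of invented entities as subject–relation–object triples and walks the hierarchical ('is-a') backbone. The entries are illustrative assumptions, not terms drawn from the Cognitive Atlas.

```python
# Minimal ontology fragment as subject-relation-object triples,
# mixing hierarchical and temporal relations as described above.
triples = [
    ("digit-span-backwards", "is-a", "working-memory-task"),
    ("working-memory-task", "is-a", "cognitive-task"),
    ("maintenance", "part-of", "working-memory"),
    ("manipulation", "part-of", "working-memory"),
    ("encoding", "preceded-by", "stimulus-presentation"),
]

def related(entity: str, relation: str) -> list:
    """All objects linked to `entity` by `relation`."""
    return [o for s, r, o in triples if s == entity and r == relation]

def ancestors(entity: str) -> list:
    """Transitive closure of 'is-a' (the hierarchical backbone)."""
    out = []
    for parent in related(entity, "is-a"):
        out += [parent] + ancestors(parent)
    return out

print(ancestors("digit-span-backwards"))
# -> ['working-memory-task', 'cognitive-task']
```

Because every assertion is an explicit, machine-readable triple, datasets annotated against the same ontology can be pooled and queried consistently, which is the interoperability benefit described above.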

6. Conclusions

Clinical neuropsychology has reached a point in its development where it is ready to embrace measurement and technological advances. These changes are consistent with those occurring in other areas, such as computational psychiatry's attempt to bridge from neuroscience to clinical applications (Huys, Maia, & Frank, 2016). That said, there are a number of issues that will need to be addressed. This paper has attempted to discuss nascent and related advancements being endeavored and how those opportunities may apply to Neuropsychology 3.0 adherence to current NIH initiatives to advance scientific developments: (1) Integrating neuroscience into behavioral and social sciences; (2) Transformative advances in measurement science; (3) Digital intervention platforms; and (4) Large scale population cohorts and data integration.

While there are ways in which neuropsychology is making inroads into these areas (e.g. Au, Piers, & Devine, 2017), a great deal of development is needed to maintain relevance as a scientific discipline. For example, the neuropsychology of learning disabilities has a rich history (Fletcher & Grigorenko, 2017), and when a learning disability is suspected, a comprehensive neuropsychological evaluation has been considered necessary for identification of both specific disorder etiology and cognitive strengths to be leveraged for compensatory strategies and treatment options (Silver et al., 2008). However, new eye-tracking technology using machine learning algorithms has demonstrated both 95% sensitivity and specificity at identifying those at high and low risk of a reading disability within minutes (Benfatto et al., 2016), as well as the ability to function as an intervention platform. The software and hardware are being marketed to school districts at a low cost (approximately 15–20 dollars a student; Wood-Shapiro, 2018), potentially negating the need for an expensive neuropsychological evaluation where treatment is often deferred either to school special education staff or other providers (e.g. a speech and language pathologist).

Ominous examples aside, this brief summary is meant to be a prologue for future suggested advancements that retain the past successes that solidified the discipline of neuropsychology. As the longitudinal nature of the Framingham Heart Study clearly demonstrates, embracing technology has great potential for building upon neuropsychology's rich history of quantitative advancements, rather than disregarding it, so that we may continue to better understand brain–behavior relationships in the future. As Casaletto and Heaton (2017) contend, neuropsychological questions are limited only by the tools we use to assess brain performance. These authors encourage neuropsychologists to consider whether high dimensional technologies have demonstrated psychometric soundness commensurate with traditional low dimensional tools, which these authors posit to be the case. Following this premise, an important question when deciding whether to adopt high dimensional technologies is what additional data they offer the neuropsychologist; we hope we have delineated that this is a considerable, efficient, and diverse amount of brain–behavior information. From this appraisal it is then left to the volition of the individual neuropsychologist to either not adopt high dimensional technologies, to use them as an additional set of tools, or to fully transition to exclusive use of these novel assets.
These authors hope the field can avoid becoming entrenched in replacement arguments, which are neither helpful for the practice of neuropsychology nor the point of this narrative review.

That being said, if neuropsychologists update their methods and technologies, they may be able to take part in the NIH OBSSR strategic plan. To do so, neuropsychologists will need to enhance their focus upon the several emerging opportunities and challenges that may transform the behavioral and social sciences. First, neuropsychologists unquestionably need to embrace advances in measurement science. Second, neuropsychologists need to be open to the potential that neuropsychological concepts may evolve as increasing capabilities for data capture, computational modeling, neuroimaging, and physiologic recording in more real-world evaluation scenarios improve our understanding of brain–behavior relationships. Additionally, neuropsychologists should adopt approaches that allow for improved sharing of data in collaborative online knowledge bases to bolster these aforementioned opportunities.

Once neuropsychologists adopt these scientific priorities, they will better align themselves with the OBSSR's emphases upon communication, coordination, evaluation, and training. This last emphasis (i.e. training) has major implications for neuropsychology. Major changes to the methodology of a profession inevitably require changes to the core curriculum taught in graduate school and in continuing education. It may be that more advanced statistics and computational coursework must necessarily be taught in graduate school (or become a prerequisite for matriculation) to support the neuropsychological tools and methods of the twenty-first century. This would appear consistent with the American Psychological Association's (APA) Guidelines and Principles for Accreditation of Programs in Professional Psychology (1947); proper training on the most advanced practice parameters available may, in the not too distant future, even become a mandate of professional standards and ethics.

Disclosure statement

No potential conflict of interest was reported by the authors.

ORCID

Thomas D. Parsons http://orcid.org/0000-0003-0331-5019
Tyler Duffield http://orcid.org/0000-0003-3101-7027

References

Adams, K. M., Kvale, V. I., & Keegan, J. F. (1984). Relative accuracy of three automated systems for neuropsychological interpretation. Journal of Clinical and Experimental Neuropsychology, 6(4), 413–431. doi:10.1080/01688638408401232
Alberdi, A., Aztiria, A., & Basarab, A. (2016). On the early diagnosis of Alzheimer's disease from multimodal signals: A survey. Artificial Intelligence in Medicine, 71, 1–29.

Alhanai, T., Au, R., & Glass, J. (2017, December). Spoken language biomarkers for detecting cognitive impairment. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) (pp. 409–416). IEEE.
APA Committee on Training in Clinical Psychology. (1947). First report of the new accreditation process in psychology. American Psychologist, 2, 539–558.
Aramendi, A. A., Weakley, A., Schmitter-Edgecombe, M., Cook, D. J., Goenaga, A. A., Basarab, A., & Carrasco, M. B. (2018). Smart home-based prediction of multi-domain symptoms related to Alzheimer's disease. IEEE Journal of Biomedical and Health Informatics.
Armstrong, C. M., Reger, G. M., Edwards, J., Rizzo, A. A., Courtney, C. G., & Parsons, T. D. (2013). Validity of the virtual reality Stroop task (VRST) in active duty military. Journal of Clinical and Experimental Neuropsychology, 35, 113–123.
Au, R., Piers, R. J., & Devine, S. (2017). How technology is reshaping cognitive assessment: Lessons from the Framingham Heart Study. Neuropsychology, 31(8), 846.
Barkley, R. A. (2012). Executive functions: What they are, how they work, and why they evolved. New York, NY: Guilford Press.
Beauchamp, M. (2017). Neuropsychology's social landscape: Common ground with social neuroscience. Neuropsychology, 31, 981–1002.
Beaumont, J. G. (2008). Introduction to neuropsychology. New York, NY: Guilford Press.
Benfatto, M. N., Seimyr, G. Ö., Ygge, J., Pansell, T., Rydberg, A., & Jacobson, C. (2016). Screening for dyslexia using eye tracking during reading. PLoS One, 11(12), e0165508.
Benton, A. L., & Sivan, A. B. (2007). Clinical neuropsychology: A brief history. Disease-A-Month, 53(3), 142–147.
Berke, E. M., Choudhury, T., Ali, S., & Rabbi, M. (2011). Objective measurement of sociability and activity: Mobile sensing in the community. The Annals of Family Medicine, 9(4), 344–350.
Bigler, E. D. (2016). Systems biology, neuroimaging, neuropsychology, neuroconnectivity and traumatic brain injury. Frontiers in Systems Neuroscience, 10.
Bigler, E. D. (2017). Evidence-based integration of clinical neuroimaging findings in neuropsychology. In Neuropsychological assessment in the age of evidence-based practice: Diagnostic and treatment evaluations (pp. 183–210).
Bilder, R. M. (2011). Neuropsychology 3.0: Evidence-based science and practice. Journal of the International Neuropsychological Society, 17(1), 7–13.
Bilder, R. M., Poldrack, R. A., Stott, P. D., et al. (2009a). Cognitive phenomics. In The neuropsychology of mental illness (pp. 271–284). Cambridge: Cambridge University Press.
Bilder, R. M., Sabb, F. W., Parker, D. S., Kalar, D., Chu, W. W., Fox, J., … & Poldrack, R. A. (2009b). Cognitive ontologies for neuropsychiatric phenomics research. Cognitive Neuropsychiatry, 14(4–5), 419–450.
Boger, J., & Mihailidis, A. (2011). The future of intelligent assistive technologies for cognition: Devices under development to support independent living and aging-with-choice. NeuroRehabilitation, 28, 271–280.
Bohil, C. J., Alicea, B., & Biocca, F. A. (2011). Virtual reality in neuroscience research and therapy. Nature Reviews Neuroscience, 12, 752–762.
Boll, S., Heuten, W., Meyer, H. M., & Meis, M. (2010). Development of a multimodal reminder system for older persons in their residential homes. Informatics for Health and Social Care, 35, 104–124.
Burgess, P. W., Alderman, N., Forbes, C., Costello, A., Laure, M. C., Dawson, D. R., … & Channon, S. (2006). The case for the development and use of "ecologically valid" measures of executive function in experimental and clinical neuropsychology. Journal of the International Neuropsychological Society, 12(2), 194–209.
Bylsma, L. M., Taylor-Clift, A., & Rottenberg, J. (2011). Emotional reactivity to daily events in major and minor depression. Journal of Abnormal Psychology, 120(1), 155.
Cappelletti, M., Didino, D., Stoianov, I., & Zorzi, M. (2014). Number skills are maintained in healthy ageing. Cognitive Psychology, 69, 25–45.
Casaletto, K. B., & Heaton, R. K. (2017). Neuropsychological assessment: Past and future. Journal of the International Neuropsychological Society, 23(9–10), 778–790.

Cendes, F., Theodore, W. H., Brinkmann, B. H., Sulc, V., & Cascino, G. D. (2016). Neuroimaging of epilepsy. In Handbook of clinical neurology (Vol. 136, pp. 985–1014). Elsevier.
Cernich, A. N., Brennan, D. M., Barker, L. M., & Bleiberg, J. (2007). Sources of error in computerized neuropsychological assessment. Archives of Clinical Neuropsychology, 22, 39–48.
Chaytor, N., & Schmitter-Edgecombe, M. (2003). The ecological validity of neuropsychological tests: A review of the literature on everyday cognitive skills. Neuropsychology Review, 13, 181–197.
Collins, F. S., & Riley, W. T. (2016). NIH's transformative opportunities for the behavioral and social sciences. Science Translational Medicine, 8(366), 366ed14.
Cook, D. J., & Schmitter-Edgecombe, M. (2009). Assessing the quality of activities in a smart environment. Methods of Information in Medicine, 48, 480–485.
Cook, P. F., McElwain, C. J., & Bradley-Springer, L. A. (2010). Feasibility of a daily electronic survey to study prevention behavior with HIV-infected individuals. Research in Nursing & Health, 33(3), 221–234.
Crosson, B., Hampstead, B. M., Krishnamurthy, L. C., Krishnamurthy, V., McGregor, K. M., Nocera, J. R., … & Tran, S. M. (2017). Advances in neurocognitive rehabilitation research from 1992 to 2017: The ascension of neural plasticity. Neuropsychology, 31(8).
Cullum, C. M., Hynan, L. S., Grosch, M., Parikh, M., & Weiner, M. F. (2014). Teleneuropsychology: Evidence for video teleconference-based neuropsychological assessment. Journal of the International Neuropsychological Society, 20(10), 1028–1033. doi:10.1017/S1355617714000873
Dawadi, P. N., Cook, D. J., & Schmitter-Edgecombe, M. (2016). Automated cognitive health assessment from smart home-based behavior data. IEEE Journal of Biomedical and Health Informatics, 20(4), 1188–1194.
Dodrill, C. B. (1997). Myths of neuropsychology. The Clinical Neuropsychologist, 11(1), 1–17. doi:10.1080/13854049708407025
Duff, K. (2012). Evidence-based indicators of neuropsychological change in the individual patient: Relevant concepts and methods. Archives of Clinical Neuropsychology, 27(3), 248–261.
Fletcher, J. M., & Grigorenko, E. L. (2017). Neuropsychology of learning disabilities: The past and the future. Journal of the International Neuropsychological Society, 23(9–10), 930–940.
Franzen, M. D., & Wilhelm, K. L. (1996). Conceptual foundations of ecological validity in neuropsychological assessment. In R. J. Sbordone & C. J. Long (Eds.), Ecological validity of neuropsychological testing. Delray Beach, FL: GR Press/St. Lucie Press.
Galambos, C., Skubic, M., Wang, S., & Rantz, M. (2013). Management of dementia and depression utilizing in-home passive sensor data. Gerontechnology, 11(3), 457.
Gao, P., & Ganguli, S. (2015). On simplicity and complexity in the brave new world of large-scale neuroscience. Current Opinion in Neurobiology, 32, 148–155.
Genon, S., Reid, A., Langner, R., Amunts, K., & Eickhoff, S. B. (2018). How to characterize the function of a brain region. Trends in Cognitive Sciences.
Gershon, R. C., Cella, D., Fox, N. A., Havlik, R. J., Hendrie, H. C., & Wagster, M. V. (2010). Assessment of neurological and behavioral function: The NIH Toolbox. Lancet Neurology, 9(2), 138–139.
Gershon, R. C., Wagster, M. V., Hendrie, H. C., Fox, N. A., Cook, K. F., & Nowinski, C. J. (2013). NIH Toolbox for assessment of neurological and behavioral function. Neurology, 80(11 Suppl 3), S2–S6.
Gibbons, R. D., Weiss, D. J., Kupfer, D. J., Frank, E., Fagiolini, A., Grochocinski, V. J., … Immekus, J. C. (2008). Using computerized adaptive testing to reduce the burden of mental health assessment. Psychiatric Services, 59, 361–368.
Greher, M. R., & Wodushek, T. R. (2017). Performance validity testing in neuropsychology: Scientific basis and clinical application—A brief review. Journal of Psychiatric Practice, 23(2), 134–140.
Hayes, T. L., Abendroth, F., Adami, A., Pavel, M., Zitzelberger, T. A., & Kaye, J. A. (2008). Unobtrusive assessment of activity patterns associated with mild cognitive impairment. Alzheimer's & Dementia: The Journal of the Alzheimer's Association, 4(6), 395–405.

Henry, M., Joyal, C. C., & Nolin, P. (2012). Development and initial assessment of a new paradigm for assessing cognitive and motor inhibition: The bimodal virtual-reality Stroop. Journal of Neuroscience Methods, 210, 125–131.
Heron, K. E., & Smyth, J. M. (2010). Ecological momentary interventions: Incorporating mobile technology into psychosocial and health behaviour treatments. British Journal of Health Psychology, 15(1), 1–39.
Huys, Q. J., Maia, T. V., & Frank, M. J. (2016). Computational psychiatry as a bridge from neuroscience to clinical applications. Nature Neuroscience, 19(3), 404.
Insel, T. R., Landis, S. C., & Collins, F. S. (2013). Research priorities. The NIH BRAIN Initiative. Science, 340(6133), 687–688.
Iriarte, Y., Diaz-Orueta, U., Cueto, E., Irazustabarrena, P., Banterla, F., & Climent, G. (2016). AULA—Advanced virtual reality tool for the assessment of attention: Normative study in Spain. Journal of Attention Disorders, 20(6), 542–568.
Jagaroo, V. (2009). Neuroinformatics for neuropsychology (pp. 25–84). New York, NY: Springer.
Jagaroo, V., & Santangelo, S. L. (Eds.). (2017). Neurophenotypes: Advancing psychiatry and neuropsychology in the "omics" era. Springer.
Koller, D., & Friedman, N. (2009). Probabilistic graphical models: Principles and techniques. Cambridge, MA: MIT Press.
Larrabee, G. J. (2008). Flexible vs. fixed batteries in forensic neuropsychological assessment: Reply to Bigler and Hom. Archives of Clinical Neuropsychology, 23(7–8), 763–776.
MacDonell, K. E., Naar-King, S., Murphy, D. A., Parsons, J. T., & Huszti, H. (2011). Situational temptation for HIV medication adherence in high-risk youth. AIDS Patient Care and STDs, 25(1), 47–52.
Manchester, D., Priestley, N., & Jackson, H. (2004). The assessment of executive functions: Coming out of the office. Brain Injury, 18(11), 1067–1081.
Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503, 78–84.
Meehl, P. E. (1987). Foreword. In J. N. Butcher (Ed.), Computerized psychological assessment (pp. xv–xvi). New York, NY: Basic Books.
Miller, J. B., & Barr, W. B. (2017). The technology crisis in neuropsychology. Archives of Clinical Neuropsychology, 1–14.
Najafabadi, M. M., Villanustre, F., Khoshgoftaar, T. M., Seliya, N., Wald, R., & Muharemagic, E. (2015). Deep learning applications and challenges in big data analytics. Journal of Big Data, 2(1), 1.
National Institutes of Health (NIH). (2015). OBSSR strategic plan, fiscal years 2017–2021. Retrieved from https://obssr.od.nih.gov/2017-strategic-plan
Nilsen, W. J., & Pavel, M. (2013). Moving behavioral theories into the 21st century: Technological advancements for improving quality of life. IEEE Pulse, 4(5), 25–28.
Olson, K., & Jacobson, K. (2015). Cross-cultural considerations in pediatric neuropsychology: A review and call to attention. Applied Neuropsychology: Child, 4(3), 166–177.
Parsey, C. M., & Schmitter-Edgecombe, M. (2013). Applications of technology in neuropsychological assessment. The Clinical Neuropsychologist, 27, 1328–1361.
Parsons, T. D. (2016). Clinical neuropsychology and technology. New York, NY: Springer Press.
Parsons, T. D., & Barnett, M. (2017). Validity of a newly developed measure of memory: Feasibility study of the virtual environment grocery store. Journal of Alzheimer's Disease, 59, 1227–1235.
Parsons, T. D., Bowerly, T., Buckwalter, J. G., & Rizzo, A. A. (2007). A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods. Child Neuropsychology, 13, 363–381.
Parsons, T. D., & Carlew, A. R. (2016). Bimodal virtual reality Stroop for assessing distractor inhibition in autism spectrum disorders. Journal of Autism and Developmental Disorders, 46(4), 1255–1267.

Parsons, T. D., Carlew, A. R., Magtoto, J., & Stonecipher, K. (2017). The potential of function-led virtual environments for ecologically valid measures of executive function in experimental and clinical neuropsychology. Neuropsychological Rehabilitation, 27(5), 777–807.
Parsons, T. D., & Courtney, C. (2014). An initial validation of the virtual reality paced auditory serial addition test in a college sample. Journal of Neuroscience Methods, 222, 15–23. doi:10.1016/j.jneumeth.2013.10.006
Parsons, T. D., McMahan, T., & Kane, R. (2018). Practice parameters facilitating adoption of advanced technologies for enhancing neuropsychological assessment paradigms. The Clinical Neuropsychologist, 32(1), 16–41.
Parsons, T. D., & Rizzo, A. A. (in press). A virtual classroom for ecologically valid assessment of attention-deficit/hyperactivity disorder. In P. Sharkey (Ed.), Virtual reality technologies for health and clinical applications: Psychological and neurocognitive interventions. Germany: Springer-Verlag.
Parsons, T. D., & Rizzo, A. A. (2008). Initial validation of a virtual environment for assessment of memory functioning: Virtual reality cognitive performance assessment test. Cyberpsychology and Behavior, 11, 17–25. doi:10.1089/cpb.2007.9934
Piers, R. J., Devlin, K. N., Ning, B., Liu, Y., Wasserman, B., Massaro, J. M., … & Penney, D. L. (2017). Age and graphomotor decision making assessed with the digital clock drawing test: The Framingham Heart Study. Journal of Alzheimer's Disease, 60(4), 1611–1620.
Pitkow, X., & Angelaki, D. E. (2017). Inference in the brain: Statistics flowing in redundant population codes. Neuron, 94(5), 943–953.
Plancher, G., Barra, J., Orriols, E., & Piolino, P. (2013). The influence of action on episodic memory: A virtual reality study. Quarterly Journal of Experimental Psychology, 66(5), 895–909.
Plancher, G., Gyselinck, V., & Piolino, P. (2018). The integration of realistic episodic memories relies on different working memory processes: Evidence from virtual navigation. Frontiers in Psychology, 9, 47.
Poldrack, R. A., Kittur, A., Kalar, D., Miller, E., Seppa, C., Gil, Y., … & Bilder, R. M. (2011). The Cognitive Atlas: Toward a knowledge foundation for cognitive neuroscience. Frontiers in Neuroinformatics, 5, 17.
Ponsford, J. (2017). International growth of neuropsychology. Neuropsychology, 31(8), 921.
Price, C. J. (2018). The evolution of cognitive models: From neuropsychology to neuroimaging and back. Cortex.
Price, C. J., & Friston, K. (2005). Functional ontologies for cognition: The systematic definition of structure and function. Cognitive Neuropsychology, 22(3–4), 262–275.
Rabin, L. A., Paolillo, E., & Barr, W. B. (2016). Stability in test-usage practices of clinical neuropsychologists in the United States and Canada over a 10-year period: A follow-up survey of INS and NAN members. Archives of Clinical Neuropsychology, 31(3), 206–230.
Rabin, L. A., Spadaccini, A. T., Brodale, D. L., Grant, K. S., Elbulok-Charcape, M. M., & Barr, W. B. (2014). Utilization rates of computerized tests and test batteries among clinical neuropsychologists in the United States and Canada. Professional Psychology: Research and Practice, 45, 368–377.
Reece, A. G., & Danforth, C. M. (2017). Instagram photos reveal predictive markers of depression. EPJ Data Science, 6(1), 15.
Renison, B., Ponsford, J., Testa, R., Richardson, B., & Brownfield, K. (2012). The ecological and construct validity of a newly developed measure of executive function: The virtual library task. Journal of the International Neuropsychological Society, 18, 440–450.
Riley, W. T. (2016). A new era of clinical research methods in a data-rich environment. In B. W. Hesse, D. K. Ahern, & E. Beckjord (Eds.), Oncology informatics: Using health information technology to improve processes and outcomes in cancer (pp. 343–355). Cambridge, MA: Academic Press.
Rivera, D. E., & Jimison, H. B. (2013). Systems modeling of behavior change: Two illustrations from optimized interventions for improved health outcomes. IEEE Pulse, 4(6), 41–47.
Rizzo, A., Bowerly, T., Buckwalter, J., Klimchuk, D., Mitura, R., & Parsons, T. D. (2006). A virtual reality scenario for all seasons: The virtual classroom. CNS Spectrums, 11(1), 35–44.

Runyan, J. D., Steenbergh, T. A., Bainbridge, C., Daugherty, D. A., Oke, L., & Fry, B. N. (2013). A smartphone ecological momentary assessment/intervention "app" for collecting real-time data and promoting self-awareness. PLoS One, 8(8), e71325.
Sabb, F. W., Bearden, C. E., Glahn, D. C., Parker, D. S., Freimer, N., & Bilder, R. M. (2008). A collaborative knowledge base for cognitive phenomics. Molecular Psychiatry, 13(4), 350–360.
Saez, A., Rigotti, M., Ostojic, S., Fusi, S., & Salzman, C. D. (2015). Abstract context representations in primate amygdala and prefrontal cortex. Neuron, 87, 869–881.
Saranummi, N., Spruijt-Metz, D., Intille, S. S., Korhonen, I., Nilsen, W. J., & Pavel, M. (2013). Moving the science of behavior change into the 21st century: Novel solutions to prevent disease and promote health. IEEE Pulse, 4(5), 22–24.
Schmitter-Edgecombe, M., Cook, D., Weakley, A., & Dawadi, P. (2017). Using smart environment technologies to monitor and assess everyday functioning and deliver real-time intervention. In T. Parsons & R. Kane (Eds.), The role of technology in clinical neuropsychology (pp. 293–325). Oxford University Press.
Schultheis, M. T., Rebimbas, J., Mourant, R., & Millis, S. R. (2007). Examining the usability of a virtual reality driving simulator. Assistive Technology, 19(1), 1–10.
Silver, C. H., Ruff, R. M., Iverson, G. L., Barth, J. T., Broshek, D. K., Bush, S. S., … Planning Committee. (2008). Learning disabilities: The need for neuropsychological evaluation. Archives of Clinical Neuropsychology, 23(2), 217–219.
Singla, G., Cook, D. J., & Schmitter-Edgecombe, M. (2010). Recognizing independent and joint activities among multiple residents in smart environments. Journal of Ambient Intelligence and Humanized Computing, 1, 57–63.
Spring, B., Gotsis, M., Paiva, A., & Spruijt-Metz, D. (2013). Healthy apps: Mobile devices for continuous monitoring and intervention. IEEE Pulse, 4(6), 34–40.
Spruijt-Metz, D., Hekler, E., Saranummi, N., Intille, S., Korhonen, I., Nilsen, W., … & Sanna, A. (2015). Building new computational models to support health behavior change and maintenance: New opportunities in behavioral research. Translational Behavioral Medicine, 5(3), 335–346.
Sternberg, R. J. (1997). Intelligence and lifelong learning: What's new and how can we use it? American Psychologist, 52, 1134–1139.
Testolin, A., & Zorzi, M. (2016). Probabilistic models and generative neural networks: Towards an unified framework for modeling normal and impaired neurocognitive functions. Frontiers in Computational Neuroscience, 10, 73.
Thomas, M. L. (2011). The value of item response theory in clinical assessment: A review. Assessment, 18, 291–307.
Thomas, M. L., Brown, G. G., Gur, R. C., Moore, T. M., Patt, V. M., Risbrough, V. B., & Baker, D. G. (2018). A signal detection–item response theory model for evaluating neuropsychological measures. Journal of Clinical and Experimental Neuropsychology, 1–16.
Wade, J., Zhang, L., Bian, D., Fan, J., Swanson, A., Weitlauf, A., … & Sarkar, N. (2016). A gaze-contingent adaptive virtual reality driving environment for intervention in individuals with autism spectrum disorders. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(1), 3.
Weintraub, S., Dikmen, S. S., Heaton, R. K., Tulsky, D. S., Zelazo, P. D., Bauer, P. J., & Gershon, R. C. (2013). Cognition assessment using the NIH Toolbox. Neurology, 80(11 Suppl 3), S54–S64.
Weiss, D. (2004). Computerized adaptive testing for effective and efficient measurement in counseling and education. Measurement and Evaluation in Counseling and Development, 37(2), 70–84.
Weissberger, G. H., Strong, J. V., Stefanidis, K. B., Summers, M. J., Bondi, M. W., & Stricker, N. H. (2017). Diagnostic accuracy of memory measures in Alzheimer's dementia and mild cognitive impairment: A systematic review and meta-analysis. Neuropsychology Review, 1–35.
Wood-Shapiro, L. (2018). How technology helped me cheat dyslexia. Retrieved June 18, 2018, from https://www.wired.com/story/end-of-dyslexia/
Woodard, J. L. (2017). A quarter century of advances in the statistical analysis of longitudinal neuropsychological data. Neuropsychology, 31(8), 1020.

Wu, D., Courtney, C., Lance, B., Narayanan, S. S., Dawson, M., Oie, K., & Parsons, T. D. (2010). Optimal arousal identification and classification for affective computing: Virtual reality Stroop task. IEEE Transactions on Affective Computing, 1, 109–118.
Wu, D., Lance, B., & Parsons, T. D. (2013). Collaborative filtering for brain-computer interaction using transfer learning and active class selection. PLoS One, 1–18.
Zaki, J., & Ochsner, K. (2009). The need for a cognitive neuroscience of naturalistic social cognition. Annals of the New York Academy of Sciences, 1167(1), 16–30.