
Children’s Psychological Processes Scale (CPPS)

Professional Manual

Milton J. Dehn, Ed.D.

Published by Schoolhouse Educational Services, LLC


Published by Schoolhouse Educational Services, LLC

Copyright © 2012 by Schoolhouse Educational Services, LLC. All rights reserved. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying or by any information storage or retrieval system, without the prior written permission of Schoolhouse Educational Services, LLC. Address requests for permission to Schoolhouse Educational Services, LLC, N253 Johnson Road, Stoddard, WI 54658.

Printed in the United States of America.


Table of Contents

List of Tables and Figures ………………………………………………………………….. vi

Acknowledgements ………………………………………………………………………….. vii

About the Author and Measurement Consultant …………………………………………. viii

Chapter 1 Introduction ………………………………………………………………… 1
  The Needs for the CPPS ……………………………………………………………… 1
  Psychological Processes Measured by the CPPS …………………………………… 2
    Attention ……………………………………………………………………………… 3
    Auditory Processing ………………………………………………………………… 3
    Executive Functions ………………………………………………………………… 3
    Fine Motor …………………………………………………………………………… 3
    Fluid Reasoning ……………………………………………………………………… 4
    Long-Term Recall …………………………………………………………………… 4
    Oral Language ………………………………………………………………………… 4
    Phonological Processing ……………………………………………………………… 5
    Processing Speed ……………………………………………………………………… 5
    Visual-Spatial Processing …………………………………………………………… 5
    Working Memory ……………………………………………………………………… 5
    General Processing Ability …………………………………………………………… 6
  Psychological Processes and Academic Learning …………………………………… 6
    Basic Reading Skills ………………………………………………………………… 7
    Reading Fluency ……………………………………………………………………… 7
    Reading Comprehension ……………………………………………………………… 7
    Mathematics Calculation ……………………………………………………………… 8
    Mathematics Reasoning ……………………………………………………………… 8
    Written Language ……………………………………………………………………… 8
  Applications of the CPPS ……………………………………………………………… 8
    Diagnosis ……………………………………………………………………………… 8
    Identification of Student Strengths and Weaknesses ……………………………… 8
    Screening ……………………………………………………………………………… 9
    Measuring Change …………………………………………………………………… 9
    Caveats ………………………………………………………………………………… 9
    User Qualifications …………………………………………………………………… 10

Chapter 2 Development, Standardization, and Norming ……………………………… 11
  Item Development, Calibration, and Selection ……………………………………… 11
    Pilot Study …………………………………………………………………………… 11
    Item Tryout …………………………………………………………………………… 12
    Item Calibration Studies and Scaling ……………………………………………… 13
      Rasch Item Analysis and Scale Development Procedures ……………………… 13
      Incorporation of the W-Scale ……………………………………………………… 13
      Final Item Selection ………………………………………………………………… 14
  The CPPS Standardization Sample …………………………………………………… 14
    Socioeconomic Status ………………………………………………………………… 16
      Parent Education Level ……………………………………………………………… 17
      Academic Skills Rankings …………………………………………………………… 17
    Hispanic Status ………………………………………………………………………… 17
    Gender Differences …………………………………………………………………… 18
    Age Differences ………………………………………………………………………… 19
  Construction of Norms ………………………………………………………………… 21
    Subject Weighting ……………………………………………………………………… 21
    Calculation of CPPS Subscale and General Processing Ability Derived Scores … 21
    Calculation of CPPS Strength and Weakness Discrepancy Scores ………………… 23

Chapter 3 Reliability and Validity ……………………………………………………… 25
  Reliability ……………………………………………………………………………… 25
    Internal Consistency ………………………………………………………………… 25
      Standard Error of Measurement …………………………………………………… 26
    Inter-Rater Reliability ………………………………………………………………… 27
  Construct Validity ……………………………………………………………………… 28
  Evidence Based on Test Content ……………………………………………………… 28
    Expert Review ………………………………………………………………………… 28
    Developmental Evidence ……………………………………………………………… 29
  External Validity Evidence …………………………………………………………… 30
    CPPS Relations with Academic Skills ……………………………………………… 30
      Woodcock-Johnson III Tests of Achievement ……………………………………… 30
      Academic Skills Ranking …………………………………………………………… 32
    CPPS Relations with Parent Education Level ……………………………………… 33
    CPPS Relations with Cognitive Abilities …………………………………………… 33
    CPPS Relations with a Measure of Executive Functions …………………………… 35
    CPPS Diagnostic Utility for Students with Learning Disabilities ………………… 37
  Internal Structural Validity Evidence ………………………………………………… 38
    Principal Component Analysis ……………………………………………………… 39
    Exploratory Factor Analysis ………………………………………………………… 41
    Cluster Analysis ……………………………………………………………………… 44

Chapter 4 Use and Interpretation ……………………………………………………… 47
  Using the CPPS ………………………………………………………………………… 47
    Accessing the Online CPPS Teacher Rating Form ………………………………… 47
    Completing the Online CPPS Teacher Rating Form ……………………………… 48
    Generating the CPPS Report ………………………………………………………… 48
  Interpretation of CPPS Results ………………………………………………………… 48
    T-Scores and Percentiles ……………………………………………………………… 48
    Confidence Intervals ………………………………………………………………… 49
    W-Scores ……………………………………………………………………………… 50
    Discrepancy Scores …………………………………………………………………… 51
  Interpretive Steps ……………………………………………………………………… 52
    Step 1: Interpret General Processing Ability ……………………………………… 52
    Step 2: Evaluate and Interpret Clinical Groupings of Subscales ………………… 53
    Step 3: Interpret Individual Subscales ……………………………………………… 53
    Step 4: Interpret Intra-Individual Strengths and Weaknesses …………………… 54
    Step 5: Determine Base Rate ………………………………………………………… 55
    Step 6: Interpret Responses to Individual Items …………………………………… 55
    Step 7: Identify Psychological Processes for Intervention ………………………… 55
    Step 8: Identify Cognitive Processes for Additional Assessment ………………… 56
    Step 9: Compare Results from Multiple Raters …………………………………… 56
    Step 10: Consider Relations with Achievement Scores …………………………… 57
  Diagnosis of LD ………………………………………………………………………… 58
  Case Study ……………………………………………………………………………… 58
    Step 1: Interpret General Processing Ability ……………………………………… 60
    Step 2: Evaluate and Interpret Clinical Groupings of Subscales ………………… 60
    Step 3: Interpret Individual Subscales ……………………………………………… 60
    Step 4: Interpret Intra-Individual Strengths and Weaknesses …………………… 61
    Step 5: Determine Base Rate ………………………………………………………… 61
    Step 6: Interpret Responses to Individual Items …………………………………… 62
    Step 7: Identify Psychological Processes for Intervention ………………………… 63
    Step 8: Identify Cognitive Processes for Additional Assessment ………………… 63
    Step 9: Compare Results from Multiple Raters …………………………………… 63
    Step 10: Consider Relations with Achievement Scores …………………………… 63
    Case Study Diagnosis ………………………………………………………………… 63

Appendix A CPPS 121 Items Grouped by Process ………………………………………….. 65

Appendix B Communities and States in CPPS Standardization Sample ………………….. 68

Appendix C Z-Score Normal Probability Tables ……………………………………………. 70

Appendix D Sample CPPS Score Report …………………………………………………….. 72

References ………………………………………………………………………………………. 75


List of Tables and Figures

Tables

Table 1.1 Psychological Processes Measured by the CPPS ………………………… 2
Table 1.2 Psychological Processes Significantly Related With Types of Academic Learning ………………………… 7
Table 2.1 Demographic Characteristics of CPPS Standardization Sample with Weighting ………………………… 15
Table 2.2 CPPS Gender Differences Across All Age Groups ………………………… 18
Table 2.3 CPPS W-Scale Ability Range ………………………… 21
Table 2.4 CPPS Summary Statistics by Age Group ………………………… 23
Table 3.1 Cronbach’s alpha (α) Reliability Coefficients by Age ………………………… 26
Table 3.2 Cronbach’s alpha (α) Reliability Coefficients and SEM Values by Normative Age Groups ………………………… 27
Table 3.3 Inter-Rater Reliability Coefficients for 22 Subjects ………………………… 28
Table 3.4 CPPS W-Scores for WJ III Study ………………………… 31
Table 3.5 WJ III ACH W-Scores ………………………… 31
Table 3.6 Pearson Correlation Coefficients between CPPS and WJ III ACH W-Scores ………………………… 32
Table 3.7 Results of ANOVA Comparing Academic Skills Ranking with CPPS GPA Score ………………………… 33
Table 3.8 Results of ANOVA Comparing Estimated Parent Education Level with CPPS GPA Score ………………………… 33
Table 3.9 WJ III COG W-Scores ………………………… 34
Table 3.10 Pearson Correlation Coefficients between CPPS and WJ III COG W-Scores ………………………… 36
Table 3.11 Pearson Correlations between CPPS Subscales and BRIEF Scales ………………………… 37
Table 3.12 CPPS Scores of LD Subjects Compared with a Matched Sample ………………………… 38
Table 3.13 CPPS Subscale Intercorrelations ………………………… 39
Table 3.14 Subscale Loadings (sorted by median loading) on the CPPS General Component ………………………… 41
Table 3.15 Ages 5-6 Exploratory Factor Analysis Solutions ………………………… 41
Table 3.16 Ages 7-8 Exploratory Factor Analysis Solutions ………………………… 42
Table 3.17 Ages 9-10 Exploratory Factor Analysis Solutions ………………………… 42
Table 3.18 Ages 11-12 Exploratory Factor Analysis Solutions ………………………… 43
Table 4.1 Interpretive Guidelines for CPPS T-Scores ………………………… 49
Table 4.2 Example of Intra-Individual Strengths and Weaknesses Table ………………………… 52
Table 4.3 Definitions of Processes Measured by the CPPS ………………………… 54
Table 4.4 Example of T-Scores from Multiple Raters ………………………… 57
Table 4.5 Case Study WJ III Scores ………………………… 59
Table 4.6 Case Study CPPS Scores ………………………… 60
Table 4.7 Case Study Intra-Individual Strengths and Weaknesses Table ………………………… 61
Table 4.8 Case Study Responses to Individual Items ………………………… 62
Table 4.9 Case Study T-Score to Standard Score Conversion Table ………………………… 64

Figures

Figure 2.1 Median W-Score Plot of Working Memory Subscale ………………………… 20
Figure 3.1 Cluster Tree (Ward’s method: Ages 5-6) ………………………… 45
Figure 3.2 Cluster Tree (Ward’s method: Ages 7-8) ………………………… 45
Figure 3.3 Cluster Tree (Ward’s method: Ages 9-10) ………………………… 46
Figure 3.4 Cluster Tree (Ward’s method: Ages 11-12) ………………………… 46


Acknowledgements

I wish to thank the following people for their contributions to the development and publication of the CPPS: Kevin McGrew for his invaluable guidance and assistance with all aspects of the project (see About the Measurement Consultant); Paula A. Dehn for coordinating and assisting with the WJ III validity study; Benjamin Burns for collecting data for the WJ III validity study; Daniel Miller, Elaine Fletcher-Janzen, and Jocelyn Newton for reviewing the items for content validity; Pramila Srinivasan for programming the online materials; and Isabel Pratt for editing and formatting the manual.


About the Author and Measurement Consultant

Author

Milton J. Dehn earned his BA in Psychology from the University of Minnesota, his MS in School Psychology from Moorhead State University, and his doctorate in Educational Psychology from the University of South Dakota (1992). He has practiced as a school psychologist in four Midwestern states and is currently a private practice school psychologist in Wisconsin. Dr. Dehn taught for 13 years in the School Psychology Program at the University of Wisconsin-La Crosse, where he also served as program director. His interest in assessment of psychological processes began with his dissertation research. He later developed a selective, cross-battery model for psychological processing assessment, which is described in his 2006 book, Essentials of Processing Assessment. That publication was followed by Working Memory and Academic Learning: Assessment and Intervention; Long-Term Memory Problems in Children and Adolescents: Assessment, Intervention, and Effective Instruction; and Helping Students Remember: Exercises and Strategies to Strengthen Memory. In 2003, Dr. Dehn co-founded Schoolhouse Educational Services and Schoolhouse Tutoring®. At Schoolhouse Tutoring® in La Crosse, Wisconsin, he continues to conduct psychoeducational assessments, consult with parents, and provide memory training and interventions to students with memory problems. Dr. Dehn is a frequently requested speaker and has conducted 100+ state, national, and international presentations or workshops on processing assessment, working memory, and interventions for students with long-term memory impairments.

Measurement Consultant

Dr. Kevin McGrew served as measurement consultant, technical advisor, and statistician for the development of the CPPS. Dr. McGrew’s contributions to the scale’s development included: conducting the Rasch-based item analysis procedures at each stage of item development; constructing the CPPS norms; providing advice on selecting the standardization sample; constructing tables that allow intra-individual discrepancy analysis; analyzing all of the data gathered in reliability and validity studies; and assisting with the writing of the technical sections of the manual. Dr. McGrew is the Director of the Institute for Applied Psychometrics (IAP), a Visiting Professor in Educational Psychology (School Psychology Program) at the University of Minnesota, Associate Director for Measurement Learning Consultants (MLC), and Research Director for the Woodcock-Muñoz Foundation (WMF). He received his BA in Psychology (1974) and MS in School Psychology (1975) from Moorhead State University, Moorhead, MN. He received his doctorate in Educational Psychology at the University of Minnesota in 1989. Dr. McGrew, who is a co-author of the Woodcock-Johnson III (WJ III, 2001), has extensive experience in the development and psychometric analysis of nationally standardized norm-referenced psychological and educational assessment instruments. He was the primary measurement consultant (and first author of the technical manual) for the Woodcock-Johnson Psychoeducational Battery (WJ-R, 1991) and served in the same capacity as coauthor of the Woodcock-McGrew-Werder Mini-Battery of Achievement (MBA, 1993), Sharpe-McNear-McGrew Braille Assessment Inventory (BAI, 1996), Woodcock-Johnson Battery—III (WJ III, 2001), Woodcock-Johnson Diagnostic Supplement (WJ III DS, 2003), Batería III Woodcock-Muñoz (BAT III, 2005), Woodcock-Johnson III Normative Update (WJ III NU, 2007), and Woodcock-Johnson III—Australian Adaptation (2008).

Chapter 1 Introduction

The Children’s Psychological Processes Scale (CPPS) is an internet, web-based teacher rating scale designed to assess psychological processes related to academic learning in children ages 5 through 12 years. The main function of the CPPS is to facilitate the identification of psychological processing deficits in children referred for a learning disability (LD) evaluation. It is also suitable for screening and measuring response to ongoing interventions. The CPPS consists of 121 items divided among 11 process scales (see Table 1.1). The items describe academically related psychological processing difficulties that can readily be observed by classroom teachers. Most classroom teachers will complete the scale in 12 to 15 minutes. The ratings are compiled to generate a report that includes a brief narrative, a graph of T-scores, a table of scores, change-sensitive W-scores, and a discrepancy table for determining the pattern of intra-individual strengths and weaknesses (see Appendix D).

The CPPS was normed on a diverse sample of 1,121 children rated by 278 teachers from 128 communities in 30 states and the District of Columbia. The sample’s demographic characteristics closely approximate the U.S. Census percentages. The internal consistency reliability estimates for the subscales range from .88 to .98, with the majority in the mid-90s. Several sources of internal and external validity evidence support the construct validity of the CPPS. For example, the CPPS composite score discriminates well between students with and without a learning disability.

The Needs for the CPPS

Ever since Public Law 94-142 (1975) defined a learning disability as “a disorder in one or more of the basic psychological processes” (Federal Register, December 29, 1977, p. 65083), federal legislation has maintained this definition. The 2004 Amendment to the Individuals with Disabilities Education Act (IDEA) reaffirmed the definition by stating, “The term ‘specific learning disability’ means a disorder in one or more of the basic psychological processes involved in understanding or in using language, spoken or written, which disorder may manifest itself in the imperfect ability to listen, think, speak, read, write, spell, or do mathematical calculations” (United States Code 20 U.S.C. §1401 [30]). This federal definition creates the need to assess psychological processes whenever eligibility for learning disability services is considered. States have interpreted this federal mandate in various ways, with many states specifically requiring a psychological processing assessment component, even when a response-to-intervention (RTI) problem-solving approach has been adopted.

What constitutes a “basic psychological process” has been open to interpretation and debate. Some states have defined it in broad information processing terms. For example, the state of Wisconsin defined a processing deficit as “a pattern of severe problems with storage, organization, acquisition, retrieval, expression, or manipulation of information” (PI 11.36, 2000). More recently, states have included “processes” that are essentially a list of cognitive processes. For instance, Minnesota’s list of acceptable psychological processes includes phonological processing, working memory, executive functions, and long-term retrieval (retrieved July 15, 2011, from http://education.state.mn.us/MDE/Learning_Support/Special_Education/Categorical_Disability_Information/Specific_Learning_Disabilities/index.html). Even when a state’s criteria do not specify the presence of a psychological process deficit, many educators and psychological practitioners who conduct learning disabilities evaluations include an assessment of psychological processes or examine cognitive testing results for signs of psychological processing disorders or deficits (Hale, Flanagan, & Naglieri, 2008). This approach to the identification of learning disabilities is based on the premise that some type of psychological processing weakness or neuropsychological dysfunction is the reason for the student’s manifest academic difficulties (Flanagan, Fiorello, & Ortiz, 2010). The belief that learning disabilities are caused by an underlying deficit in cognitive or psychological processes is well supported in the literature (e.g., Hale et al., 2010).

In addition to the legal and professional needs for a psychological processes scale, the field of psychoeducational assessment has needed such a scale to supplement direct tests of processes and to reliably differentiate between students with and without psychological processing deficits and disorders. Educators have often had to resort to informal, non-standardized assessment methods, such as checklists, to evaluate psychological processes. Psychologists, school psychologists, and learning disability assessment professionals have often relied on time-consuming, cross-battery testing with cognitive scales to sample a broad range of psychological processes. The growing research base on cognitive and processing-achievement relations (e.g., Flanagan, Ortiz, Alfonso, & Mascolo, 2006; McGrew & Wendling, 2010) has also increased the demand for more assessment options. To date, there has been only one published teacher rating scale designed for psychological processing assessment—the Psychological Processing Checklist-Revised (PPC-R; Swerdlik, Swerdlik, Kahn, & Thomas, 2008). The PPC-R, which was originally designed to meet the state of Illinois’ criteria for learning disability placement, is a 35-item scale that samples six processing areas.

Psychological Processes Measured by the CPPS

Psychological processes are mental operations that perceive, transform, manipulate, store, retrieve, and express information. Psychological processes range from basic perceptual processes, such as recognizing distinct sounds or perceiving visual details, to higher-level cognitive processes that contribute to language and reasoning performance. It is difficult to identify all of the brain-based processes that contribute to a specific cognitive operation or performance of a skill. It is equally difficult to parse the contribution of each process. Multiple processes underlie performance on any given task, and any identified process can be decomposed into more specific components, functions, or operations. The complexity of psychological processing makes it difficult to identify discrete processes—they are what transpire in the “black box” of the human mind, hidden from direct observation. The psychological processing constructs identified for assessment by the CPPS are groupings or aggregates of specific processes, rather than discrete, isolated processes. The psychological processes on the CPPS scale (see Table 1.1) are best described as “broad” inclusive processes.
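The report scores mentioned in this chapter follow standard normal-curve conventions: a T-score has a mean of 50 and a standard deviation of 10. As a minimal illustration of those conventions only (not of the CPPS norming procedure itself, which is Rasch-based and described in Chapter 2), the following sketch converts a z-score to a T-score and its normal-curve percentile rank:

```python
from statistics import NormalDist


def t_score(z: float) -> float:
    """Convert a z-score to a T-score (mean 50, SD 10)."""
    return 50 + 10 * z


def percentile(z: float) -> float:
    """Normal-curve percentile rank corresponding to a z-score."""
    return 100 * NormalDist().cdf(z)


# One SD above the mean: T = 60, roughly the 84th percentile
print(t_score(1.0), round(percentile(1.0), 1))  # → 60.0 84.1
```

The same mapping runs in reverse for interpretation: a reported T-score of 60 implies z = 1.0, which places the rating at about the 84th percentile of the normative distribution.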

Table 1.1 Psychological Processes Measured by the CPPS

Attention (AT)
Auditory Processing (AP)
Executive Functions (EF)
Fine Motor (FM)
Fluid Reasoning (FR)
Long-Term Recall (LTR)
Oral Language (OL)
Phonological Processing (PP)
Processing Speed (PS)
Visual-Spatial Processing (VSP)
Working Memory (WM)
General Processing Ability (GPA)
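Each of the subscales above is reported with an internal consistency (Cronbach's alpha) coefficient in Chapter 3 (.88 to .98). For readers unfamiliar with the statistic, the sketch below computes Cronbach's alpha from a small matrix of hypothetical item ratings; the values are illustrative only and are not CPPS data:

```python
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (raters x items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


# Hypothetical 1-4 ratings from 6 raters on a 5-item subscale
ratings = np.array([
    [1, 2, 1, 1, 2],
    [3, 3, 4, 3, 3],
    [2, 2, 2, 3, 2],
    [4, 4, 4, 4, 3],
    [1, 1, 2, 1, 1],
    [3, 4, 3, 3, 4],
])
print(round(cronbach_alpha(ratings), 3))  # → 0.957
```

Alpha rises as the items covary more strongly; the made-up ratings above agree closely across items, so the coefficient lands in the same high range the manual reports for the actual subscales.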

3

Attention maintaining and judging rhythm, musical discrimination and judgment, absolute , and Attention is a complex, multi-faceted sound localization (Schneider & McGrew). psychological process that influences more than Children with auditory processing disorders behavior and performance (Chun, Golomb, & have difficulties recognizing and interpreting Turk-Browne, 2011). The efficiency of cognitive sounds, especially speech sounds, leading to operations, such as working memory, depends difficulties understanding verbal information. on attentional capacity and control (often called For example, such children will have difficulty controlled executive attention). The distinguishing between similar sounding words development of academic skills is similarly like “hat” and “bat”. Auditory processing dependent on the ability to sustain attention deficits may impair academic learning and have (Rabiner, Murray, & Schmid, 2004). The often been associated with learning disabilities functioning of attentional processes is highly (Flanagan et al., 2006; Gomez & Condon, 1999; related to the self-regulatory dimensions of McGrew & Wendling, 2010). Auditory executive functions. According to Barkley processing is closely related with phonological (1997), problems with sustaining attention are processing and language functions, especially caused by difficulties with inhibitory control, an receptive language. executive function. Consequently, the assessment of attention disorders usually Executive Functions involves a broad assessment of executive functions (e.g., Gioia, Isquith, Guy, & Of the psychological constructs assessed Kenworthy, 2000). In addition to evaluating the by the CPPS, executive processing is probably ability to control attention, the abilities to the broadest. 
Executive functions include an sustain, divide, and focus (also referred to as array of mental processes responsible for selective attention) need to be included in an regulating cognitive, emotional, and behavioral assessment of attention. An Attention subscale is functions, especially during purposeful, goal- included in the CPPS because students with directed, problem-solving behavior (Gioia, et al., learning disabilities often demonstrate deficits in 2000). The different executive functions, which attention (Kroesbergen, Van Luit, & Naglieri, are analogous to the brain’s board of directors, 2003). Also, contemporary Cattell-Horn-Carroll monitor and manage other cognitive functions. (CHC) intelligence theory is increasingly The complexity of executive functioning is recognizing the importance of attention, and illustrated by McCloskey, Perkins, and Van attentional control in particular, in the process of Divner (2009), who identify 23 different self- complex working memory (Schneider & regulation executive functions. Relations McGrew, in press). between broad executive functions (sometimes referred to as complex executive functions) and Auditory Processing both reading and mathematics have been reported by Best, Miller, and Naglieri (2011). Auditory processing is the ability to Deficits in executive functions have been perceive, analyze, synthesize, and discriminate associated with academic performance deficits auditory stimuli, but the construct of auditory (McCloskey et al., 2009) and with learning processing applies mainly to perceptual disabilities (Singer & Bashir, 1999). Because of processing. It is not the sensory input of auditory its location in the frontal lobes of the brain, stimuli, but rather what the brain does with the executive processing is one of the last information after sensory information is received psychological processes to fully develop. via the ear (Schneider & McGrew, in press). 
In the contemporary CHC theory of intelligence, Fine Motor auditory processing (Ga) subsumes such narrow abilities as phonetic coding, speech sound Fine motor functioning and skills are the discrimination, resistance to auditory stimulus product of psychological processes. Fine motor distortion, memory for sound patterns, skills involve the coordination of small muscle

4 movements that occur in the fingers. Although having difficulty mastering skills results from less cognitive than the other CPPS processes, the dysfunction of one or more of the primary fine motor processing has been included because memory processes: encoding, consolidation, it is related to academic learning and storage, and retrieval (Dehn, 2010). Deficits in performance, especially in the early elementary long-term memory processing can impair years (Pagani, Fitzpatrick, & Archambault, learning and are associated with learning 2010). For example, the ability to effectively disorders (Kramer, Knee, & Delis, 2000). The manipulate a pencil will have an impact on CPPS Long-Term Recall subscale includes items mastering writing skills. Difficulties with fine reflecting difficulties recalling both academic motor processing occur in developmental knowledge and non-academic experiences. dyspraxia and are often associated with learning Rapid automatic naming (RAN; also referred to disabilities and other childhood disorders, such as naming facility), an aspect of long-term recall as Attention Deficit Hyperactivity Disorder that is associated with early reading (ADHD; Jakobson & Kikas, 2007). Fine motor development, is included in the Long-Term performance is closely related to visual-spatial Recall subscale. The Long-Term Recall subscale processing abilities. The fine motor skills is most similar to the broad CHC domain of assessed by the CPPS Fine Motor subscale tap Long-Term Storage and Retrieval (Glr), which aspects of the CHC domains of psychomotor includes a wide array of learning and retrieval abilities (e.g., finger dexterity and control efficiency narrow abilities, such as associative precision) and psychomotor speed (e.g., writing memory, meaningful memory, free recall speed; Schneider & McGrew, in press). memory, ideational fluency, and naming facility (Schneider & McGrew, in press). 
CHC cognitive Fluid Reasoning processing-achievement relations research has implicated specific Glr narrow abilities Fluid reasoning, which primarily (associative memory and RAN) in reading and includes the CHC narrow fluid reasoning (Gf) math achievement (Flanagan et al., 2006). abilities of inductive, deductive, and quantitative reasoning, is the ability to reason and problem- Oral Language solve, especially when confronted with novel “on-the-spot” problems (Schneider & McGrew, The processing of oral language is a in press). Fluid reasoning has been closely broad processing ability that incorporates many associated with the construct of general basic psychological processes from phonological intelligence or g (Carroll, 1993), although the Gf encoding to word retrieval. Language = g conclusion is not a universal consensus development and proficiency is highly related among intelligence scholars. Fluid reasoning is a with academic learning, and language higher-level cognitive and psychological process impairments are often associated with learning that does not mature until adolescence. Fluid disabilities, especially reading disabilities (Catts, reasoning is included in the CPPS due to its 1996). Children with language disorders have strong relations with applied academic skills, difficulty with the structure of words, the such as reading comprehension and meaning of words, the relationships of words in mathematical reasoning (Flanagan et al., 2006; sentences, and the functional use of language. McGrew & Wendling, 2010). On the CPPS, The CPPS Oral Language items consist mostly quantitative reasoning items are included in the of expressive language items. The Oral Fluid Reasoning subscale. 
Language subscale items primarily measure a number of CHC abilities in the broad domain of Long-Term Recall Comprehension-Knowledge (Gc), such as general information, language development, Mastery of academic skills and lexical knowledge, and listening ability performance on classroom tests is highly (Schneider & McGrew, in press). The broad dependent on effective functioning of long-term domain of Gc, and the narrow abilities of lexical memory. Poor delayed recall of new learning or knowledge, listening ability, and general

5 information, have been differentially linked to CPPS Processing Speed subscale corresponds to achievement in reading, writing, and the CHC domain of Processing Speed (Gs), mathematics (Flanagan et al., 2006; McGrew & which subsumes such narrow abilities as Wendling, 2010). perceptual speed, rate of test-taking, number facility, and the academic abilities of writing and Phonological Processing reading fluency or speed (Schneider & McGrew, in press). Phonological processing is the manipulation of the phonemes that comprise Visual-Spatial Processing words (Gillon, 2004). Phonemes, the smallest units of speech, are combined to form syllables Visual-spatial processing refers to the and words. The English language consists of ability to perceive, analyze, synthesize, about 41 phonemes. Phonemic awareness—the manipulate, and transform visual patterns and understanding that words (spoken and written) images, including those generated internally can be divided into discrete sounds—is an (i.e., in the “mind’s eye”). The visual and spatial important dimension of phonological processing. dimensions are easily differentiated. The visual Normal development of basic reading skills is aspect involves processing of stimulus highly dependent on adequate phonological characteristics, such as shape and color. The processing (Flanagan et al., 2006; Kamhi & spatial dimension processes the location and Pollock, 2005; McGrew & Wendling, 2010; movement of visual stimuli; for example, mental National Reading Panel, 2000). Children who rotation of an image requires spatial processing. are better at detecting phonemes learn to decode This CPPS subscale has direct correspondence words more easily. A deficit in phonological to the CHC domain of Visual Processing (Gv), processing is a common occurrence among which includes the narrow abilities of individuals with reading disabilities (National visualization, speeded rotation, closure speed, Reading Panel, 2000). 
Also, children from lower flexibility of closure, visual memory, and spatial socioeconomic backgrounds have a higher risk scanning (Schneider & McGrew, in press). of phonemic awareness deficits (Brady, Fowler, Compared with most other processes assessed Stone, & Winbury, 1994). An early indication of by the CPPS, visual-spatial processing has a phonological processing deficit is difficulty relatively weak relations with academic recognizing words that rhyme. Later on, children learning, even with mathematics skills where with such a deficit may have difficulty significant relations have been reported (Geary, manipulating written phonemes and blending 1993). McGrew and Wendling (2010) them into complete words. According to CHC hypothesize that strong relations do not emerge theory, this CPPS subscale focuses primarily on because visual-spatial (Gv) processes “may the narrow auditory processing ability of function as a threshold ability—you need a phonetic coding. minimal amount to read and perform math, but beyond the minimal threshold level ‘more Gv’ Processing Speed does not improve performance” (p.666). McGrew and Wendling also suggest that this Processing speed refers to how quickly lack of Gv-math significance may be a function information is processed and how efficiently of contemporary intelligence tests not including simple cognitive tasks are executed over a measures of the specific Gv abilities important sustained period of time. Processing speed is for math (e.g., complex visual working typically tested with tasks requiring the memory). examinee to perform relatively easy overlearned procedures that require little reasoning or higher Working Memory level complex processing. Processing speed plays an important role in almost all aspects of Working memory is defined as the academic learning and performance (Flanagan et limited capacity to retain information while al., 2006; McGrew & Wendling, 2010). The simultaneously processing the same or other

information for a short period of time (Swanson, 2000). The widely accepted working memory model originally proposed by Baddeley and Hitch (1974) forms the basis of the working memory construct assessed by the CPPS. Specifically, auditory short-term memory (also referred to as the phonological loop or phonological short-term memory) and visual-spatial short-term memory (also referred to as the visual-spatial sketchpad) are viewed as subcomponents of working memory. There is also an episodic working memory subcomponent (Baddeley, 2006) that interfaces with long-term memory. In Baddeley’s model, the essence of working memory, referred to as executive working memory, consists of the self-regulatory and executive functions that control and allocate limited working memory resources. Because of its association with the frontal lobes, full development of the executive dimension occurs later than the auditory, visual-spatial, and episodic components (Gathercole, 1999). In contrast with Baddeley’s model, the CHC model currently subsumes working memory under the domain of Short-Term Memory (Gsm). The other primary Gsm narrow ability is simple memory span.

The strong relations between academic learning and working memory are well established (Dehn, 2008; Flanagan et al., 2006; McGrew & Wendling, 2010; Swanson & Berninger, 1996; Swanson & Jerman, 2006). Reading decoding, reading comprehension, mathematics calculation, mathematics reasoning, written expression, and most aspects of classroom performance depend heavily on the capacity and effective functioning of working memory. For example, adequate working memory capacity allows the reader to fluently decode words and to complete more complex cognitive processes, such as reading comprehension (McNamara & Scott, 2001). Research has consistently reported that students with learning difficulties display poor working memory performance, especially in the auditory and verbal dimensions of working memory (Swanson & Berninger, 1996). Also, working memory plays a critical, integral role in higher-level cognitive activities, such as fluid reasoning (Dehn, 2008; McNamara & Scott, 2001). Likewise, the development of other psychological processes, such as processing speed, accounts for some of the improvement in working memory performance (Henry & Millar, 1993).

General Processing Ability

All of the psychological processes assessed by the CPPS are highly interrelated. Moreover, performance of academic skills depends on the integrated functioning of numerous psychological processes. As discussed in Chapter 3, a general factor on which all of the subscales load significantly consistently emerges during factor analysis of the CPPS standardization data. This general factor is thought to represent general processing ability (GPA) and may reflect an underlying efficiency or automaticity-of-processing dimension. The CPPS GPA score is highly correlated with achievement scores and other markers of academic progress (see Chapter 3). It also may have significant relations with g (general intelligence); however, this hypothesis requires additional research.

Psychological Processes and Academic Learning

Collectively, psychological processes are highly related to the acquisition and development of academic skills. Some psychological, or cognitive, processes—processing speed, working memory, and long-term recall—are crucial for nearly all types of academic learning (Dehn, 2010), whereas the influence of others varies by type of achievement (see Table 1.2). For each academic area, psychological processing “aptitudes” have been identified empirically (Flanagan et al., 2006). When a processing aptitude is deficient, the type(s) of academic learning it supports may be impaired. In such instances, high IQ and processing strengths might not be enough to compensate for the impairment. Evidence of learning disability may exist when there is aptitude-achievement consistency (Flanagan, Fiorello, & Ortiz, 2010); that is, both the psychological process and the related area of achievement are similarly low.


Table 1.2 Psychological Processes Significantly Related With Types of Academic Learning

| Process | Basic Reading Skills | Reading Fluency | Reading Comprehension | Mathematics Calculation | Mathematics Reasoning | Written Language |
|---|---|---|---|---|---|---|
| Attention | | | | X | | |
| Auditory Processing | X | | X | | | X |
| Executive Functions | | | | | X | X |
| Fine Motor | | | | | | X |
| Fluid Reasoning | | | X | X | X | X |
| Long-Term Recall | X | X | X | X | X | X |
| Oral Language | X | | X | | X | X |
| Phonological Processing | X | X | | | | X |
| Processing Speed | X | X | | X | X | X |
| Visual-Spatial Processing | | | | X | | |
| Working Memory | X | | X | X | X | X |

Note: X indicates a significant relation. Sources: Flanagan et al., 2006; Dehn, 2006; McGrew & Wendling, 2010.
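For readers who want to work with these relations programmatically, the pattern of significant relations in Table 1.2 can be restated as a small lookup structure. The sketch below simply encodes the table's cells in Python; the variable name is illustrative and not part of the CPPS materials.

```python
# Table 1.2 as a lookup: which psychological processes have been reported as
# significantly related to each academic area (Flanagan et al., 2006; Dehn;
# McGrew & Wendling, 2010).
SIGNIFICANT_PROCESSES = {
    "Basic Reading Skills": {"Auditory Processing", "Long-Term Recall",
                             "Oral Language", "Phonological Processing",
                             "Processing Speed", "Working Memory"},
    "Reading Fluency": {"Long-Term Recall", "Phonological Processing",
                        "Processing Speed"},
    "Reading Comprehension": {"Auditory Processing", "Fluid Reasoning",
                              "Long-Term Recall", "Oral Language",
                              "Working Memory"},
    "Mathematics Calculation": {"Attention", "Fluid Reasoning",
                                "Long-Term Recall", "Processing Speed",
                                "Visual-Spatial Processing", "Working Memory"},
    "Mathematics Reasoning": {"Executive Functions", "Fluid Reasoning",
                              "Long-Term Recall", "Oral Language",
                              "Processing Speed", "Working Memory"},
    "Written Language": {"Auditory Processing", "Executive Functions",
                         "Fine Motor", "Fluid Reasoning", "Long-Term Recall",
                         "Oral Language", "Phonological Processing",
                         "Processing Speed", "Working Memory"},
}

# Intersecting the six sets shows which processes the table links to
# every academic area.
in_all_areas = set.intersection(*SIGNIFICANT_PROCESSES.values())
```

Intersecting the six areas confirms that long-term recall is the only process the table marks as significant for every type of academic learning, consistent with the surrounding text's claim that some processes are crucial for nearly all learning while others vary by achievement area.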

Basic Reading Skills

Basic reading skills consist primarily of phonetic decoding and word recognition skills. The literature is replete with evidence (reviewed by the National Reading Panel, 2000) documenting the causal link between phonological processing and the development of basic reading skills. Working memory, especially the phonological (auditory) short-term memory subcomponent, also plays a key role (Swanson, 2000). Executive aspects of working memory come into play when a reader blends phonemes into whole words. A number of narrow abilities under the broad domains of long-term recall, processing speed, and oral language have also been found to be consistently significant in the prediction of basic reading skills development (Flanagan et al., 2006; McGrew & Wendling, 2010). Auditory processing has also been implicated (Flanagan et al., 2006; Gomez & Condon, 1999).

Reading Fluency

Reading fluency is the rapid decoding and comprehension of text. It is usually assessed with measures of reading speed. Rapid processing of symbols and quick retrieval of words and their meanings are necessary for reading fluency (Manis, Seidenberg, & Doi, 1999). Accordingly, processing speed, long-term recall, and phonological processes are considered critical psychological processes for reading fluency (Bekebrede, van der Leij, & Share, 2009; Benson, 2008).

Reading Comprehension

Reading comprehension is an applied academic skill that involves the construction of meaning from text. Prior knowledge, an aspect of comprehension-knowledge or crystallized intelligence that is a product of psychological processing, not a psychological process itself, can greatly enhance reading comprehension


(Floyd, Bergeron, & Alfonso, 2006). Other than prior knowledge, memory processes—working memory and long-term recall—are important factors (Dehn, 2010), along with auditory processing and oral language (Flanagan et al., 2006; McGrew & Wendling, 2010). Fluid reasoning also serves a critical function, especially in later elementary years when more inferential reading comprehension is required.

Mathematics Calculation

Mathematics calculation skills, sometimes referred to as basic math skills, include knowledge of basic math facts, computation, and fluency. The composition of psychological processes essential for mathematics performance differs somewhat from the composition that aligns strongly with literacy skills. For example, auditory processing typically does not play a consistent, significant role in mathematics, whereas visual-spatial processing emerges as an influence (Geary, 1993, 2007). The development of mathematics skills certainly requires adequate functioning of all memory dimensions (Floyd, Evans, & McGrew, 2003; Geary et al., 2009), especially visual-spatial working memory (Geary, 2011). Moreover, processing speed, fluid reasoning (Flanagan et al., 2006; Geary, 2011; McGrew & Wendling, 2010), and attention (Kroesbergen et al., 2003) play important roles.

Mathematics Reasoning

Mathematics reasoning (problem-solving skills in math) is an applied academic skill that relies more on fluid reasoning, working memory, and oral language than its basic skills counterpart (Floyd et al., 2003). Similar to reading comprehension, background knowledge and long-term recall influence performance. Processing speed and fluid reasoning have been found to be consistently significant predictors of mathematics reasoning (Flanagan et al., 2006; Geary, 2011; McGrew & Wendling, 2010). Also, because of their identified relationship with planning (Naglieri & Gottling, 1997), executive functions are thought to underlie mathematics reasoning performance.

Written Language

Expressing ideas through written language is a challenging cognitive task for learners of all ages. In addition to verbal abilities and crystallized intelligence (Floyd, McGrew, & Evans, 2008), several higher-level psychological processes must function in an integrated fashion to accomplish written language goals. These include executive functions and oral language (Hooper, Costa, & McBee, 2011), as well as long-term recall, fluid reasoning (especially in adolescence), and working memory (Floyd et al., 2008). Also, the more basic psychological processes of processing speed, auditory processing, phonological processing (Floyd et al., 2008), and fine motor skills (Hooper et al., 2011) come into play.

Applications of the CPPS

Diagnosis

The CPPS can be used to describe the present status of a child’s development and difficulties in 11 domains of psychological processing, as well as in general processing ability. When used in conjunction with other assessment data (e.g., direct tests of cognitive abilities and psychological processes, classroom work samples), CPPS results can contribute to the diagnosis of psychological processing disorders and dysfunctions, such as a phonological processing disorder or an executive processing dysfunction. That is, the CPPS can facilitate a multisource diagnosis when used in accordance with professional judgment and adherence to the standards for test use outlined in the AERA, APA, and NCME (1999) Joint Test Standards. In learning disability evaluations, CPPS results can be used as one important source of evidence that may suggest the presence of a “disorder in one or more basic psychological process.”

Identification of Student Strengths and Weaknesses

The CPPS is designed to facilitate the reliable identification of intra-individual

strengths and weaknesses across psychological processes. Through the use of CPPS strength and weakness discrepancy analysis procedures (see Chapter 4), a child’s pattern of strengths and weaknesses can be evaluated for statistical and practical (i.e., base rate) significance. Identification of a pattern of relative processing strengths and weaknesses should improve the understanding of a student’s cognitive functioning and academic learning challenges. Furthermore, during the pre-referral or evaluation process, the teacher-supplied CPPS results can be used by assessment professionals to generate problem-solving hypotheses for possible pre-assessment interventions; construct a referral-focused, selective assessment battery of cognitive tests; seek confirmatory evidence for a processing disorder suggested by other assessment data; and recommend, design, and select evidence-based interventions (reviewed in Dehn, 2007) for psychological processing deficits.

Screening

The normal development of psychological processes is especially critical for the development of academic skills in reading, mathematics, and written language. Screening the psychological processes of children at an early age could facilitate the identification of children at risk for subsequent learning problems. For example, measures of phonological processing and working memory have been shown to be predictive of later academic performance, and such measures have consequently been employed as screeners with young children (e.g., Gathercole & Pickering, 2001). The CPPS would be a suitable screener for evaluating the development of a broad range of psychological processes, from individuals to large groups (e.g., classrooms) of children. For instance, the CPPS might be used to evaluate all children making the transition from kindergarten to first grade. Since CPPS administration, scoring, and reporting are web-based, it is relatively cost-effective and time-efficient.

Measuring Change

Because of the change-sensitive nature of W-scores (see Chapter 2), the CPPS is ideally suited for measuring change, progress, or development over time. Historically, standardized scores have been poor indicators of change because they primarily indicate how an examinee ranks relative to same-age peers instead of indicating how much change has occurred. W-scores (which are centered on the performance of the average 10-year-old) provide reliable data on change because they measure abilities in smaller, equal-interval increments. Thus, the CPPS would be an appropriate instrument for measuring a child’s response to intervention when that intervention addresses processing weaknesses (see Chapter 4 for details).

Caveats

As with any psychoeducational assessment instrument, CPPS results, and their interpretation, should be corroborated by other assessment methods and sources. Other methods might include informal assessment procedures, such as observations and interviews, as well as standardized cognitive testing. Discrepancies between assessment results might be due to the same constructs being measured in different ways, or they might be attributed to rater bias or a rater’s response set.

Rating scales, by their nature, have inherent weaknesses that warrant careful consideration. Specifically, raters using a response set can invalidate results for the individual being rated. A “response set” is the tendency to respond to each item in a characteristic manner regardless of the content of the item (Nunnally, 1978). For example, a rater with an acquiescence response set tends to agree with every statement. Another concern is that some raters may have a bias toward the subject that prevents them from responding in an objective manner. When CPPS results are not concordant with the results from other assessment procedures, the evaluator should consider biased responding or a response set as an explanation for the lack of consistency. Finally, it must be kept in mind that the CPPS

reflects a third-party informant’s perceptions and ratings of a student’s functioning. Thus, CPPS results should not be interpreted in a manner similar to scores obtained from direct standardized testing of cognitive abilities or psychological processes.

User Qualifications

The CPPS will be useful for a variety of educators and psychological practitioners involved with children who experience learning difficulties. Users of the CPPS should have a background in education or psychology and should have completed training in educational or psychological testing and measurement. Users who intend to interpret CPPS results or draw diagnostic conclusions (for example, diagnose learning disabilities) should have completed graduate-level training in testing and measurement that included supervised practicum or internship experience in educational or psychological testing. Additional training in child development, learning, and educational psychology is also recommended. Furthermore, only trained and knowledgeable professionals who are sensitive to the conditions that may compromise, or even invalidate, standardized rating scale results should make interpretations and critical decisions.

Many qualified examiners possess state, provincial, or professional certification or licensure in a field or profession that includes, as part of its formal training and code of ethics, the responsibility for rendering assessment and interpretation services regarding cognitive ability, learning disability, or psychological processing. Because professional titles, roles, and responsibilities vary among states (or provinces), or even from one school district to another, it is impossible to equate competency with professional titles. Consequently, the Joint Professional Standards (AERA et al., 1999) suggest that it is the responsibility of each school district to be informed by this statement of user qualifications and subsequently determine who, under its aegis, is qualified to use and interpret the CPPS.

Chapter 2 Development, Standardization, and Norming

Item Development, Calibration, and Selection

In developing the CPPS, the initial goal was to accumulate a pool of items describing academically related psychological processing difficulties that could readily be observed by classroom teachers. The ultimate goal was to arrive at a set of items that discriminated between students with and without a learning disability (LD). A primary design objective was to operationalize each measured CPPS psychological process with statements that required no inferences about internal cognitive or psychological processes. The item pool was derived from task analysis of construct definitions, a review of the processing literature, an examination of items from similar rating scales, an analysis of cognitive tests measuring specific processes, the author’s clinical experience with LD students, and input from experts in school neuropsychology. The litmus test for items was that they were readily observable and that the behavior had an identifiable relationship with the learning of one or more academic skills.

Pilot Study

An initial set of 75 items, divided among 10 subscales (fluid reasoning was not included), was piloted during the summer of 2010. A total of 37 regular education teachers from 15 school districts in three states (Minnesota, Wisconsin, and Texas) participated in the pilot study. Each teacher rated three subjects (students)—one with average academic abilities, one with low average academic abilities, and one with high average academic abilities. Raters were instructed to select students from their 2009-2010 (previous year’s) class and to select students who met certain specified demographics. A total of 111 subjects (47 males; 64 females) in kindergarten through 6th grade were rated. The sample included 14 subjects with learning disabilities, as well as Hispanics and all major U.S. races. Raters were asked to record the number of minutes needed to complete the scale and to comment on any items that needed clarification.

Statistical item analysis of the pilot data (see the Item Calibration Studies and Scaling section for a description of the procedures) revealed that most items possessed desirable item characteristics, such as fitting well with their assigned subscale. However, the results indicated that more items were needed to provide more systematic spacing of items across difficulty levels and to extend the range of most subscales. Subscales needing expansion at the top and bottom of the measurement scale were Attention, Executive Functions, Long-Term Recall, Oral Language, Processing Speed, and Visual-Spatial Processing. The Fine Motor subscale produced a highly skewed distribution, with little discrimination among subjects over 7 years of age. Despite the shortcomings of the pilot version, the pilot data analysis indicated that subjects with LD generally received higher processing problem ratings than the nondisabled subjects. It was concluded that the original limited pool of items had the potential to identify students with LD who have psychological processing difficulties.

In an effort to extend the range at both the lower and upper ability ends of many of the subscales, the item pool was nearly doubled, resulting in 147 items across 10 subscales. This expansion was directed at improving the reliability and discriminability of the manifest subscale scores at the bottom and top of the underlying latent trait of each subscale. Special attention was focused on improving the reliable discrimination of subjects at younger ages, when a learning disability is typically identified. In addition to generating new items, select items were reworded to reduce ambiguity and others were moved to a more appropriate subscale. All 75 original items were retained.
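The statistical item analyses used throughout the CPPS's development phases rely on the Andrich rating scale model (described further in the Item Calibration Studies and Scaling section). The sketch below shows, under that model, how a person ability, an item difficulty, and a set of shared category thresholds yield probabilities for the four CPPS response categories. The function name and all parameter values are illustrative assumptions, not estimates from the CPPS data.

```python
import math

def rsm_category_probs(theta, delta, taus):
    """Category probabilities under an Andrich-style rating scale model.

    theta: person ability (logits); delta: item difficulty (logits);
    taus: shared step thresholds, one per step between adjacent categories.
    Item difficulties vary by item, but the thresholds are common to all
    items within a subscale, as the Andrich model assumes.
    """
    # Cumulative logit for category x: sum over steps k <= x of (theta - delta - tau_k)
    cumulative = [0.0]
    for tau in taus:
        cumulative.append(cumulative[-1] + (theta - delta - tau))
    denominator = sum(math.exp(c) for c in cumulative)
    return [math.exp(c) / denominator for c in cumulative]

# The four CPPS response categories, scored 0 through 3.
CATEGORIES = ["Never", "Sometimes", "Often", "Almost Always"]

# A child with substantial reported difficulties (high ability on a
# problem-oriented scale) is predicted to draw the higher categories.
probs = rsm_category_probs(theta=1.5, delta=0.0, taus=[-1.0, 0.0, 1.0])
```

With these illustrative values the model assigns the largest probability to "Almost Always," mirroring the manual's statement that subjects with greater processing difficulties were predicted to receive more "Often" and "Almost Always" ratings, especially on more difficult items.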



Item Tryout

Tryout of the 147-item version of the CPPS occurred during the late fall of 2010. The data collection procedures were similar to those of the pilot study. The teacher raters all taught in the same Minnesota urban school district. Twenty-seven teachers rated either three or six current students each, resulting in a total sample of 96 subjects. The kindergarten through 7th grade sample included subjects with learning disabilities, as well as Hispanics and all major U.S. races.

Through the use of Rasch item response theory (IRT) methods (see the Item Calibration Studies and Scaling section for a description), the data from the original 75-item sample were calibrated together with the sample data from the 147-item CPPS. Two primary item and scale characteristics were evaluated. Item fit/misfit information was examined for possible items that did not fit the Rasch model. More importantly, the range of the person abilities measured by the items within each subscale was reviewed. The analysis revealed that the inclusion of the new items had extended the range of most of the subscales, with the W-score (see the Item Calibration Studies and Scaling section for a description of the W-score) range of the General Processing Ability score increasing 15 W-points, from 119 to 134. The measurement precision at the lower end of the person ability scale for all subscales increased due to the inclusion of more items measuring lower-level psychological processes.

When completing the paper-and-pencil instrument, raters were directed to “Circle the response that best describes how frequently that behavior occurs.” The four options were “Never,” “Sometimes,” “Often,” and “Almost Always.” When items were stated negatively, such as “Has difficulty with…,” the numerical values assigned to the responses were 0 for “Never,” 1 for “Sometimes,” 2 for “Often,” and 3 for “Almost Always.” When items were stated positively, such as “Manages time effectively,” the scoring was reversed.

During the item tryout, 32 of the 147 items were stated in a positive manner and were reverse scored relative to those stated negatively. The primary reason for originally wording items negatively was to be consistent with similar scales. Also, it was believed that raters were more likely to reliably discriminate among subjects on the basis of difficulties or poorly developed abilities rather than on the basis of average to well-developed abilities. Consistent with this thinking, the distribution of rater responses on negatively stated items was skewed, with “Never” being selected approximately 50 percent of the time and “Almost Always” usually being selected less than 10 percent of the time. This item response pattern was expected, since only a minority of children should be exhibiting psychological processing problems.

Closer examination of how the raters responded to each item led to the conclusion that the positively stated items had response distributions that were inconsistent with those of the negatively stated items, even after they were correctly reverse scored. The response distribution for positively stated items did not display the reverse skewed distribution. Apparently, the preponderance of negatively stated items created a response set in many raters such that they responded to positively stated items as if the items were worded negatively. This led to poor item fit statistics for some of the positively stated items. As a result, it was decided to rewrite all of the positively stated items so that all items were phrased negatively. For example, “Manages time effectively” was reworded to read “Has difficulty managing time effectively.”

In addition to reversing the wording of 38 positively stated items, the wording of three negatively stated items was altered. Twenty-four items were eliminated because they measured a process at the same level of difficulty as other items, duplicated the process measured by another item, were a poor fit within their specific subscale, or were judged difficult to observe. The majority of dropped items had been written after the initial pilot study; all but a couple of the original 75 items were retained. Finally, it was decided to add an 11th subscale that would measure fluid reasoning. Fifteen new items were created for the Fluid Reasoning subscale. The total number of final items for the CPPS norming edition was 138. Because of the numerous item and subscale changes, it was

decided to exclude the pilot and tryout calibration data from the normative sample item analysis.

Item Calibration Studies and Scaling

Because the CPPS is an internet-based assessment instrument, the collection of the national standardization data for the CPPS was completed via a web-based data collection program. A professional version of Survey Monkey™ (www.surveymonkey.com), for which participants needed a password, was used to collect ratings and other subject demographic data. The collection of data through Survey Monkey™ provided several advantages: it reduced computational errors, increased the speed of data collection, and allowed easy, ongoing monitoring of the demographic characteristics of the subjects. After entering rater information and subject demographics, each rater read the CPPS items and clicked on one of the four possible response ratings. A response to each item was required, and raters could change their responses before finalizing each rated norm subject.

Data were collected from January to June 2011. School psychologists across the United States were contacted by email and asked to recruit kindergarten through 7th grade classroom teachers in their respective school districts. Both the school psychologists and teacher raters received monetary compensation for their participation. A total of 278 teachers from 128 communities in 30 states (and the District of Columbia; see Appendix B) completed at least one online rating scale. Seventeen of the 278 teachers were male, and three were special education teachers. All other raters were female regular education teachers. Each teacher was allowed to rate a maximum of five subjects, and the majority of them did. The final normative sample consisted of 1,121 subjects (see The CPPS Standardization Sample section for demographic details).

Rasch Item Analysis and Scale Development Procedures

During all three phases of item development, Rasch IRT item analysis procedures were used to evaluate the characteristics of the scale’s items (de Ayala, 2009). A Rasch one-parameter logistic test model from Winsteps software (Version 3.69.0; Linacre, 2009) was employed. In particular, the Andrich “rating scale” model implemented in Winsteps was used, given the polytomous Likert rating scale nature of the item data (de Ayala, 2009). The Andrich rating scale model allows the item difficulties to vary across items but assumes the same relative distance between the rating scale categories for all items within a scale (Bond & Fox, 2001; de Ayala, 2009). The Rasch application of IRT allows a comparison of actual responses with the predicted responses for a given level of ability, known as “person ability.” That is, given how an item ranks in terms of difficulty, it can be predicted how the item should perform across varying levels of ability. For the CPPS items, subjects with low processing abilities (and higher total scores) were predicted to receive more ratings of “Often” and “Almost Always” than subjects with high processing abilities, especially on more difficult items. Moreover, each item (and the respective “steps” within the items) is assigned a difficulty value that allows the scaling of items from easiest to most difficult.

Once the difficulty levels of items are determined, the range of person abilities measured by each subscale is evaluated. The scaling of items allows test developers to evaluate how consistent item rankings are with developmental expectations. The Winsteps item-person ability mapping statistics also reveal which ability levels are best measured by a subscale’s items. Finally, Rasch item analysis allows test developers to assess whether there are too many items at the same level of difficulty, whether there are difficulty “gaps” along the latent ability trait scale that need to be filled with intermediate items, and whether specific items show poor fit with the underlying Rasch measurement model. For additional explanations of the Rasch model, the reader is referred to Bond and Fox (2001), de Ayala (2009), Embretson and Hershberger (1999), and Embretson and Reise (2000).

Incorporation of the W-Scale

During item calibration, the Rasch logit scores were transformed into W-scores. The W-

scale is centered on a value of 500, which is the approximate average performance of 10-year-olds. The W-scale uses the same metric for expressing both item difficulty and person ability (McGrew & Woodcock, 2001), thereby increasing the accuracy of measuring differences between item difficulty and person ability. The use of the W-scale is particularly beneficial because it converts ordinal data from a rating scale into an equal-interval metric that allows for mathematical and statistical computations, such as averaging. Another benefit of the W-scale is that it provides an understandable measure of the range of ability measured by subscales, as well as a means of assessing the ability distance between individual items arranged in order of difficulty. The W-scale is also particularly well suited to measuring growth across time (see Chapter 4).

The W-scale facilitates ongoing research and provides for the possibility of linking and equating the CPPS with other tests that use the same metric (e.g., the WJ III battery of tests; Woodcock, McGrew, & Mather, 2001b), and it facilitates the development of new CPPS items that can be integrated into the instrument via IRT equating-linking methods. In addition to their use during item analysis and scale development, the W-scores were the foundation of the construction of the CPPS norms, with eventual conversion to T-scores and percentile ranks (see the Construction of Norms section). Additional information regarding the derivation and uses of the W-scale can be found in Woodcock (1999), Woodcock and Dahl (1971), McGrew, Werder, and Woodcock (1991), and McGrew and Woodcock (2001).

Final Item Selection

The final selection of items was aided by the use of exploratory factor analysis (Mplus, v6.11; Muthén & Muthén, 2011) of the complete set of items across the entire norm sample. The results of the exploratory factor analysis provided insights into the empirical relations among items (i.e., how well items aligned with their respective subscales). After examining the exploratory factor analysis results, seven items were moved to a different subscale and nine items were eliminated due to weak loadings on their respective factors. Seven additional items were dropped because they were determined to duplicate the psychological process of other items. An additional item was deleted because it was deemed difficult to observe. Similar to the item selection in other rating-scale-based instruments (e.g., BASC-2; Reynolds & Kamphaus, 2004), not all items retained in each subscale necessarily demonstrated strong factor loadings on their respective factor. Critical indicators of a psychological process construct were retained even if their factor loadings were not salient, reflecting content validity considerations.

As reflected by the Rasch analysis of the final item pool, the remaining 121 items possessed desirable fit statistics and desirable item difficulty characteristics as per the Rasch model of measurement. (See Appendix A for a listing of the 121 CPPS publication items by subscale.) In order to create the raw-score-to-W-score scoring tables, 11 final Rasch analyses were completed, one with the official set of publication items within each subscale. As expected, “purifying” the subscales increased the W-ability range measured on several subscales: Attention increased by 4 W-units, Long-Term Recall by 13, Phonological Processing by 5, Visual-Spatial Processing by 3, and General Processing Ability by 5. In contrast, the Processing Speed subscale, which had been reduced to eight items, decreased in range of subscale measurement by 11 W-score points.

The CPPS Standardization Sample

The CPPS standardization sample consisted of 1,121 subjects from 128 communities in 30 states (and the District of Columbia; see Appendix B). The subject demographic variables that were controlled were age, grade, region of residence (there are four U.S. regions), rural versus urban residency, gender, Hispanic status, race, parent education level, academic skills ranking, disability status, and free and reduced lunch participation (see Table 2.1).

The primary goal of collecting a national standardization norm sample is to obtain a large, broad sample that is representative of the reference population of subjects against which an individual’s scores are

to be compared (AERA et al., 1999). This is best accomplished by gathering a norm sample that matches, as closely as possible, a nationally representative sampling plan defined by targeted percentages of subjects on specified demographic variables in the current U.S. population. When data collection is complete, the obtained sample demographics are compared to the U.S. targeted sampling plan. If the obtained and targeted values are relatively similar, the differences between the sample and target population percentages are adjusted, prior to the calculation of the final norms and derived scores, by weighting each subject's contribution during the calculation of the norms.
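The weighting adjustment described above can be illustrated with a small sketch. It assumes that each norming subweight is simply the ratio of the targeted U.S. census percentage to the obtained sample percentage for a category (the function name is illustrative, not part of the CPPS scoring software); the values below come from the Gender and Region rows of Table 2.1.

```python
# Sketch, assuming each norming subweight is the ratio of the targeted U.S.
# census percentage to the obtained sample percentage for that category.

def norming_subweight(census_pct: float, sample_pct: float) -> float:
    """Weight that scales a category's sample share toward its census share."""
    return census_pct / sample_pct

male = norming_subweight(51.1, 49.2)    # males slightly under-represented
female = norming_subweight(48.9, 50.8)  # females slightly over-represented
south = norming_subweight(37.0, 23.0)   # Southern region heavily under-represented

print(round(male, 2), round(female, 2), round(south, 2))  # prints 1.04 0.96 1.61
```

The resulting values reproduce the published subweights in Table 2.1 (1.04 for males, 0.96 for females, 1.61 for the Southern region), which is consistent with this ratio interpretation.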

Table 2.1 Demographic Characteristics of CPPS Standardization Sample with Weighting

Category                          Sample n   Sample %   U.S. Census %   Sample-Census Difference   Norming Subweights
Region
  Northeast                          170       15.2         18.0                -2.8                     1.18
  Midwest                            249       22.2         22.0                 0.2                     0.99
  South                              258       23.0         37.0               -14.0                     1.61
  West                               444       39.6         23.0                16.6                     0.58
Urban/Rural
  Urban                              908       81.0         76.5                 4.5                     0.94
  Rural                              213       19.0         23.5                -4.5                     1.24
Gender
  Male                               552       49.2         51.1                -1.9                     1.04
  Female                             569       50.8         48.9                 1.9                     0.96
Hispanic Status
  Yes                                182       16.2         22.8                -6.6                     1.41
  No                                 939       83.8         77.2                 6.6                     0.92
Race
  White                              887       79.1         76.2                 2.9                     0.96
  African American                   153       13.6         14.6                -1.0                     1.07
  American Indian                     21        1.9          1.2                 0.7                     0.63
  Asian                               46        4.1          4.4                -0.3                     1.07
  Mixed/Other                         14        1.3          3.6                -2.3                     2.77
Parent Education
  Not a high school graduate          78        7.0         11.4
  High school graduate               237       21.1         29.8
  Some college                       183       16.3         27.4
  College degree                     229       20.4         20.4
  Graduate/Professional degree        91        8.2         11.0
  Don't Know/Missing                 303       27.0
Free/Reduced Lunch
  Yes                                408       36.4         62.9a
  No                                 503       44.9         37.1
  Don't Know/Missing                 210       18.7
Academic Skills Ranking
  Above Average                      208       18.6
  Average                            712       63.5
  Below Average                      210       17.9
Disability Status
  None                              1026       91.5
  ADHD                                17        1.5
  LD                                  35        3.1
  Speech/Language                     17        1.5
  Autism/Asperger                      8         .7
  Emotional/Behavioral                 4         .4
  Other Health Impaired               10         .9
  Mental Retardation                   4         .4


Table 2.1 (continued)

Category          Sample n   Sample %
Grade
  K                  198       17.7
  1                  167       14.9
  2                  213       19.0
  3                  185       16.5
  4                  127       11.3
  5                  103        9.2
  6                   81        7.2
  7                   47        4.2
Age
  5                  122       10.88
  6                  136       12.13
  7                  179       15.97
  8                  205       18.29
  9                  157       14.00
  10                 122       10.88
  11                 112        9.99
  12                  88        7.77

a Source: U.S. Department of Agriculture

For standardization and norming of the CPPS, the most current U.S. population statistics from the U.S. Census Bureau were used (retrieved January 7 and July 7, 2011, from http://www.census.gov). Sex, race, and Hispanic status percentages were obtained from 2010 census data; regional population percentages from 2008 census data; parent education level and rural/urban breakdowns from 2009 data; and free and reduced lunch participation from 2009 U.S. Department of Agriculture data (retrieved January 7, 2011, from http://www.fns.usda.gov/cnd/lunch). For sex, race, and Hispanic status, U.S. population statistics for youth aged 5-14 were used. For parent education level, the selected age range was 25-64. For the distinction between urban and rural communities, the U.S. government's census definition was used: residents of a municipality with a population greater than 2,500 were classified as urban, whereas those residing in a municipality with a population under 2,500 were classified as rural, along with those who reside outside of any municipality. (See Table 2.1 to compare sample and population percentages.)

After a teacher agreed to participate in the data collection, each teacher was requested to rate five students who met prescribed subject demographic characteristics. The teacher was provided with a link and password for the online rating scale. Once online at the secure website, teachers entered the subject's gender, date of birth, race/ethnicity, and type of disability (if applicable). Teachers were asked to provide their best estimate of "the highest level of education attained by either parent," with an option of selecting "unable to determine." Raters also were asked to report whether or not the subject was receiving free and reduced lunch. Finally, the participants were asked to rank the subject's overall level of academic skills, with the options being "Above Average," "Average," and "Below Average."

Socioeconomic Status

The socioeconomic status (SES) of a child's parents is an important demographic variable due to its well-established relation with cognitive and academic abilities (Hackman & Farah, 2009; Sirin, 2005; White, 1982). Children from lower SES backgrounds typically perform significantly lower on tests of cognitive abilities and academic skills. Because the psychological processes measured by the CPPS are related to cognitive abilities and academic skills, an appropriate distribution of parent SES was an important consideration for ensuring a representative national sample.


A four-pronged sampling strategy was used to secure a sample that approximated the U.S. distribution of parent SES. First, teacher raters were selected from a wide range of U.S. communities; raters and subjects were secured from 128 communities in 30 states (and the District of Columbia; see Appendix B). Second, the number of rural communities was restricted so that the sample approximated the U.S. population rural/urban percentages (see Table 2.1). Third, teachers were asked to estimate the highest level of education attained by either parent, with an option of selecting "unable to determine." (Parent education level is often used as a proxy for SES.) Fourth, teachers were asked to rank each subject's overall level of academic skills.

Free and reduced student lunch program participation has often been used as another indicator of a subject's SES. However, the use of this variable as an SES marker has been criticized (Harwell & LeBeau, 2010). Also, in 2010-2011, many middle-SES families in the U.S. may have had children who qualified for free and reduced lunch because of that year's high unemployment rate. Consequently, the free and reduced lunch information was given less credibility and was not used to weight the subjects' rating data during the development of the CPPS norms.

Parent Education Level

When teacher estimates of parent education level are considered, the sample distribution of the CPPS cases approximates the U.S. population distribution of adult education levels. When the sample's missing and unknown parent education level cases are omitted, the sample percentages for less than high school graduate, high school graduate, and graduate/professional degree are within two points of the targeted U.S. census values. When the totals for some college and college degree are combined, the sample and population percentages are within three points. (This information needs to be regarded cautiously, as the correspondence between teacher-estimated parent education level and self-reported parent education level is unknown.)

Academic Skills Rankings

Knowledge of the subject's overall level of academic skills was considered a critical SES-related proxy variable, since the purpose of collecting SES information, such as parent education level, is to ensure that an appropriate distribution of academic skills and educationally related home stimulation is sampled. As reported in Table 2.1, the distribution of subjects' academic skills rankings closely approximates the distribution of measured academic skills. That is, the distribution of academic skills of U.S. students resembles that of the standard normal curve. Given that the average range of the standard normal curve encompasses 68% of a distribution, the ranking of 63.5% of the subjects as having average academic skills, along with roughly equal above- and below-average percentages, indicates that the rated subjects are representative of U.S. students in regard to academic skills, as defined by teacher judgment. (This interpretation must be regarded cautiously, as the correspondence between teacher judgments of academic skill and direct achievement measures is unknown.)

Hispanic Status

Currently, the U.S. census classifies all U.S. residents as Hispanic or Non-Hispanic. This is considered a classification of ethnicity and not race. Despite this distinction, some contemporary test manuals (e.g., BRIEF; Gioia et al., 2000, and SMALSI; Stroud & Reynolds, 2006) confound Hispanic status with race, such that the Hispanic percentage becomes part of the racial percentages. CPPS raters did classify each subject as Hispanic or Non-Hispanic but were not asked to determine the race of Hispanic subjects. According to the U.S. Census Bureau, the vast majority of Hispanic adults classify themselves as white, with "Other" being the second most common category they select (retrieved July 13, 2011, from http://www.census.gov/prod/cen2010). It was assumed that teacher raters most likely would have classified all but a few of their Hispanic subjects as white. Thus, for CPPS norming purposes, all Hispanic subjects were coded as white for the race demographic variable.


Table 2.2 CPPS Gender Differences Across All Age Groups

Subscale                     Sex      Mean W-Score   Standard Deviation   Mean Difference   p-Value
Attention                    Male        509.59            25.33               11.66          <.01
                             Female      497.93            22.70
Auditory Processing          Male        504.71            24.03                5.35          <.01
                             Female      499.37            22.38
Executive Functions          Male        507.49            22.29                9.59          <.01
                             Female      497.90            19.87
Fine Motor                   Male        509.84            21.16                7.22          <.01
                             Female      502.61            17.21
Fluid Reasoning              Male        503.75            30.42                2.02           .24
                             Female      501.73            27.28
Long-Term Recall             Male        504.92            28.94                3.57           .03
                             Female      501.36            25.96
Oral Language                Male        503.83            22.74                3.40          <.01
                             Female      500.43            20.31
Phonological Processing      Male        506.23            26.78                2.83           .07
                             Female      503.39            24.88
Processing Speed             Male        503.90            21.12                2.37          <.05
                             Female      501.53            18.48
Visual-Spatial Processing    Male        505.52            22.40                2.20           .08
                             Female      503.32            19.43
Working Memory               Male        506.99            25.31                5.60          <.01
                             Female      501.39            22.67
General Processing Ability   Male        506.07            22.06                5.07          <.01
                             Female      501.00            19.59
Note: Male n = 552; Female n = 569; p-values are from two-sample t tests.
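The entries in Table 2.2 can be checked from the summary statistics alone. The sketch below uses Welch's form of the two-sample t statistic (an assumption, since the manual does not state which variant was used) to reproduce the Attention comparison:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic computed from summary statistics (Welch form)."""
    standard_error = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / standard_error

# Attention subscale values from Table 2.2 (Male n = 552, Female n = 569)
t = welch_t(509.59, 25.33, 552, 497.93, 22.70, 569)
print(round(t, 1))  # prints 8.1 -- far beyond the critical value for p < .01
```

With samples this large, even the small mean differences in the table (e.g., Processing Speed's 2.37 points) can reach conventional significance levels.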

Gender Differences

With the exception of three subscales—Fluid Reasoning, Phonological Processing, and Visual-Spatial Processing—statistically significant differences were found across all age groups between the subscale means obtained by males and females (see Table 2.2). Male subjects, on average, obtained significantly higher mean ratings. Clearly, males as a group have more psychological processing problems than females. Gender-based differences in psychological processing difficulties were also reported by Swerdlik, Swerdlik, and Kahn (2003). This finding is consistent with higher incidence rates for males in all types of educational disabilities, such as LD and ADHD (Barkley, 1997).

Despite the CPPS differences between boys and girls, it was decided not to offer gender-specific norms. This decision was based on several considerations. First, gender-based differences also occur in some cognitive and achievement domains as measured by standardized direct assessment batteries. For example, robust gender differences continue to be reported between males and females in verbal abilities (favoring females), spatial abilities (favoring males), and math abilities (favoring males) (Ardila, Rosselli, Matute, & Inozemtseva, 2011). However, standardized cognitive and achievement batteries do not provide different gender norms for tests of these abilities. Second, frequently there are significant differences on cognitive and achievement measures between racial, ethnic, and SES groups. Yet, the standard is not to offer separate

norms for these groups. Third, and most important, gender-specific norms can make it more difficult to identify a child who has significant deficits. This phenomenon has been well explained by Reynolds and Kamphaus (2004), who recommend using combined-sex norms even when there are clear gender differences and even when gender-specific norms are available. For example, when gender-specific norms are used, the scores of a male with processing difficulties will be lower because he is being compared with a group (consisting of males only) that has more problems. In contrast, if the male is compared to all males and females, as is also typically the case when evaluating a male's classroom performance and academic skills, the male with difficulties will have a higher score relative to all students and thus will be more readily identifiable. In essence, combined-gender norms reflect individual differences and diagnostic score levels better than gender-specific norms; gender-specific norms reveal how rare a score is compared only to that gender, as opposed to the overall general reference population. Finally, the use of gender-specific norms has not been the prevailing practice when diagnosing children with learning disabilities. Providing gender-specific norms would therefore be inconsistent with the purpose of the CPPS.

Age Differences

Consistent with known developmental growth curves for cognitive abilities (e.g., see McGrew & Woodcock, 2001), analysis of CPPS data (see the Validity section of Chapter 3) indicates that most psychological processes develop rapidly during the early elementary years and then plateau in the later elementary years. Assessed from the problem or difficulty perspective taken by the CPPS, this means that younger subjects should display significantly more difficulties (higher CPPS scores) with psychological processes than older children. Although younger CPPS norm groups do have higher subscale scores than the older norm groups, the scores quickly level off as age increases. The observed, relatively flat developmental trend lines for the subscales are the result of the skewed response distributions of the individual CPPS items and of the subscale distributions (see the later discussion of the cause of the skewed distributions). Such skewed distributions have also been reported for similar rating scales (e.g., Gioia et al., 2000). A precise understanding of the developmental trend lines and skewed nature of the CPPS subscales requires an appreciation that there is a developmental by ability-level differential skewed-distribution interaction in the norm data, which produces differential levels of scale precision above or below the average scores for each subscale.

Figure 2.1 illustrates the developmental changes and skewed nature of the CPPS Working Memory distribution, which were also observed for all CPPS subscales. Briefly, the data presented in Figure 2.1 were obtained by (a) sorting the 1,121 norm subjects by chronological age (in months); (b) dividing the age-sorted subjects into successive blocks of 50 subjects each; (c) calculating the median age for each block; (d) calculating the median (50th percentile) and the 10th and 90th percentile W-scores for each block; and (e) plotting the parameters from step d by age and smoothing the data points with the distance-weighted least squares exploratory data smoother. The last step smoothes out the jitter present in the small-sample data (due to sampling error), with the resulting smoothed curves representing the best estimates of the population values.

A review of the top two sets of values in Figure 2.1 (the 90th and 50th percentiles) reveals steady developmental change from the earliest ages to approximately 120 months, after which both curves are relatively flat. Conversely, the 10th percentile plot shows only slight developmental change up to approximately 80 months, after which the curve is relatively flat. Plots for all CPPS subscales show similar patterns, although the point at which the different plotted values level out differs across subscales. The Working Memory plot reveals expected developmental growth (decreasing psychological processing problem W-scores) up to approximately 10 years of age (120 months) for CPPS norm subjects with average or above-average problem scores (those with higher psychological processing problems), while CPPS norm subjects with below-average problem scores show minimal developmental change.
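Steps (a) through (d) of the plotting procedure can be sketched as follows. The subject data are simulated, the percentile indices are crude, and the distance-weighted least squares smoothing of step (e) is omitted; this is only an illustration of the block structure, not the CPPS analysis code.

```python
import random
import statistics

random.seed(0)
# Simulated norm subjects: (chronological age in months, W-score)
subjects = [(random.randint(60, 155), random.gauss(505, 25)) for _ in range(1121)]

# (a) sort the 1,121 subjects by chronological age in months
subjects.sort(key=lambda s: s[0])

# (b) divide the age-sorted subjects into successive blocks of 50
blocks = [subjects[i:i + 50] for i in range(0, len(subjects), 50)]

# (c)-(d) for each block: median age, plus 10th/50th/90th percentile W-scores
points = []
for block in blocks:
    ages = sorted(s[0] for s in block)
    ws = sorted(s[1] for s in block)
    p10 = ws[len(ws) // 10]            # crude percentile index, fine for a sketch
    p50 = statistics.median(ws)
    p90 = ws[(9 * len(ws)) // 10]
    points.append((statistics.median(ages), p10, p50, p90))

# (e) would smooth and plot these (median age, percentile) curves by age
print(len(points), points[0][1] <= points[0][2] <= points[0][3])
```

With 1,121 subjects, this yields 22 full blocks of 50 plus one smaller final block, matching the block structure the manual describes.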


Figure 2.1 Median W-Score Plot of Working Memory Subscale

The Working Memory plot illustrates the skewed nature of this type of rating-scale data. It also suggests that the CPPS Working Memory T-score and percentile rank normative scores will cover a greater range of abilities (in a manner more consistent with developmental and normative curve expectations) for students with more psychological processing problems (higher scores) than for students experiencing fewer processing problems. The CPPS norm-based scores for subjects experiencing fewer processing problems (lower scores) will display a less discriminating, more truncated scale.

The "shifting standard reference point" inherent in teacher rating scales is the most likely explanation of the lack of clearly defined developmental curves (reported from direct assessment batteries) across the complete age range of the CPPS third-party informant rating scale. This phenomenon most likely occurs because a teacher rating a student uses the student's current grade as the reference point. That is, a teacher rates a student relative to the abilities and performance of the student's classmates. A first grade teacher uses other first grade students as the reference point; a third grade teacher uses typical third graders; and so on. Thus, the built-in "grade-specific" norms used by the teacher are for that specific grade, as opposed to the broader course of development.

This inherent bias partials out most of the true age and developmental variance that emerges with direct testing of students on cognitive and achievement tests. When this occurs at all grade levels, little if any age variance in scores will be present across age groups. Specifically, when teacher raters use a relative, shifting standard reference point for comparison, not much change will be observed when only the median scores are compared across age groups. However, plots of the CPPS score distributions do reveal evidence of developmental trends and age-related variance among those subjects who are exhibiting difficulties with various psychological processes. Thus, for students in the upper half of the problem-based distribution (those with lower psychological processing abilities), there will be noticeable variability of CPPS scores across age groups (see Figure 2.1). In contrast, the scores of students with higher abilities and no or few problems will not change much across age groups.

The analysis of CPPS scores as a function of age also revealed that the Fine Motor processing subscale has a more restricted range of ability and development than any other CPPS subscale. This characteristic of the fine motor response distribution was seen throughout the three rounds of item development. All three

samples had very few older subjects (above 7 years of age) who were rated as having any problems with fine motor processes and skills.

In spite of the skewed nature of the CPPS distributions, inspection of the Rasch data reveals that a broad range of development/ability is being sampled by each psychological processing subscale, typically a range that spans several standard deviations (see Table 2.3). For example, the Fluid Reasoning subscale samples a W-ability range of 135.28 points. With a standard deviation of 27.82, the subscale covers a range of 4.86 standard deviations.

Table 2.3 CPPS W-Scale Ability Range

Subscale                     Maximum    Minimum    W-Score              SD Units
                             W-Score    W-Score    Range      W-SD     of Ability
Attention                     573.93     469.47    104.46     23.27      4.49
Auditory Processing           596.28     475.46    120.82     20.86      5.79
Executive Functions           573.64     464.57    109.07     20.60      5.29
Fine Motor                    594.73     490.84    103.89     15.96      6.51
Fluid Reasoning               592.80     457.52    135.28     27.82      4.86
Long-Term Recall              603.84     469.49    134.35     25.84      5.20
Oral Language                 588.99     475.05    113.94     19.55      5.83
Phonological Processing       604.52     478.36    126.16     23.83      5.29
Processing Speed              567.06     476.72     90.34     17.69      5.10
Visual-Spatial Processing     602.20     486.68    115.52     17.51      6.59
Working Memory                590.14     467.62    122.52     22.99      5.33
General Processing Ability    604.56     460.36    144.20     19.30      7.47
Note: For entire CPPS sample; n = 1,121.
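The last two columns of Table 2.3 follow directly from the first two: the W-score range is the difference between the maximum and minimum W-scores, and the SD units of ability is that range divided by the subscale SD. A minimal check using the Fluid Reasoning row:

```python
# SD units of ability = (maximum W-score - minimum W-score) / subscale SD
# Fluid Reasoning values from Table 2.3
w_max, w_min, sd = 592.80, 457.52, 27.82

w_range = w_max - w_min       # width of the sampled W-ability range
sd_units = w_range / sd       # that width expressed in subscale SDs

print(round(w_range, 2), round(sd_units, 2))  # prints 135.28 4.86
```

The computed values match the table's published 135.28-point range and 4.86 SD units.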

Construction of Norms

Subject Weighting

A comparison of the sample demographic characteristics with the targeted U.S. population characteristics (see Table 2.1) indicates that, with the exception of two subcategories of the U.S. Census region variable, the CPPS sample percentages were reasonably close to the targeted U.S. population values. For example, the CPPS sample consisted of 49.2% males and 50.8% females, while the sampling plan target percentages for gender were 51.1% and 48.9%, respectively. To adjust for the discrepancy of ±1.9 percent, gender-specific sampling subweights of 1.04 and 0.96 were applied to the male and female scores in the norm data, respectively. The largest sample-target discrepancies were for the Southern and Western U.S. Census regions, where the CPPS sample was found to under-represent the Southern region by 14.0% and over-represent the Western region by 16.6%. As can be seen in Table 2.1, the sample-target discrepancies for U.S. region resulted in four region subweights, two that were relatively small and two that were relatively large (1.61 and 0.58 for the Southern and Western regions, respectively).

Sample-to-population target subweights were calculated for all subjects for five demographic variables—region, urban/rural residency, gender, Hispanic status, and race. Each subject's final contribution to the development of the CPPS norm tables was based on a single subject weight calculated by multiplying the subject's five subweights together. This subject weighting of the norm data increases the probability that the weighted CPPS norm data for the 1,121 sample subjects are closer to the target U.S. population than the raw, unweighted sample data would be.

Calculation of CPPS Subscale and General Processing Ability Derived Scores

The standardized scores reported by the CPPS online scoring and report program are traditional T-scores with a mean of 50 and a standard deviation (SD) of 10. The development of age-based norms for the CPPS was based on a multi-step sequence of traditional and

contemporary norm construction procedures. The description below is for one subscale; the process was repeated for all subscales.

Ordinal raw scores were transformed into equal-interval W-scores via the previously described Rasch-based item calibration and scaling procedures.¹ Next, the total sample was divided into four age groups (5-6 years, n = 258; 7-8 years, n = 384; 9-10 years, n = 279; 11-12 years, n = 200). Within each of the two-year age norm groups, the average (median) and SD of the W-score distribution were computed. The median was used to estimate average scores (instead of the mean), given the skewed nature of the obtained sample distributions (discussed earlier). The calculation of the median and SD W-score parameters included the subject weighting variables described previously. The median and SD W-values were then used in a linear transformation of the W-scores to T-scores.

A linear T-score transformation was selected (instead of an area curve or normalization transformation), given the skewed distributions of all subscales. Linearly transformed T-scores provide a measure (in standard deviation units) of how far a particular W-score is from the norm group average W-score. The presence of skewed raw-score and W-score distributions was not unexpected and is frequently found in the items of many rating scale instruments. As noted by Reynolds and Kamphaus (2004), "often, the content of these scales does not differentiate among people whose behavior is normal; thus, it would not be justified to stretch out the nonproblem end of the scale, such as would happen if a normalizing transformation were applied" (p. 128). The linear T-score transformation is similar to that used in other rating scales, such as the Behavior Assessment System for Children—Second Edition (BASC-2; Reynolds & Kamphaus, 2004), the Behavior Rating Inventory of Executive Function (BRIEF; Gioia et al., 2000), and the Psychological Processing Checklist (PPC; Swerdlik et al., 2003). The development of the CPPS T-scores differs from those of the BASC-2, BRIEF, and PPC via the intermediate step of transforming, via Rasch IRT methods, the subscale raw scores to the equal-interval W-score metric prior to the linear T-score transformation.

Finally, percentile ranks (PRs) were calculated for the T-scores in each of the four age-differentiated norm groups. Eight different methods for calculating percentiles from the W-score distributions were applied to all subscales within the four norm groups (SYSTAT, v13, 2009). Inspection of the T-score and PR correspondence results revealed only trivial differences between the eight methods. The Cleveland method (SYSTAT, 2009) was selected and used to calculate the percentile ranks that correspond to the T-scores within each norm group. The T-score/PR correspondence process was completed independently in each of the four age-differentiated norm groups. As a last step, the four sets of preliminary T-score/PR norm tables were inspected together and select values "smoothed" to reduce minor inconsistencies across the four age groups (due to sampling error), producing systematic age norms consistent with developmental expectations. For example, sometimes the same raw score did not produce the same or a higher T-score when switching to the next oldest age group. In most cases, these occurrences were small and due to sampling error. In such cases, "hand smoothing" was utilized to produce consistent decreases and increases, or at least the same score, when the original derived score moved in the opposite developmental direction.

It is important to recognize that the CPPS T-score means and standard deviations will not be exactly 50 and 10 when reported by age group (see Table 2.4). On the CPPS, the same T-scores across different age groups may have different percentile ranks, and percentile ranks that don't match a normalized T-scale distribution. This occurs because the distribution of CPPS scores was not normalized and also because the percentile ranks were derived from

the equal-interval W-scale metric, which has finer scale gradations than the T-scale. That is, there are 3-4 W-scores, each with a different percentile, for every 1-2 T-scores. Thus, more than one percentile rank can be associated with

¹ The General Processing Ability score was not based on a raw-score-to-W-score transformation. The General Processing Ability score is the arithmetic average of the W-scores from the 11 separate processing scales.

any given T-score. Nonetheless, on the CPPS the percentile ranks associated with higher T-scores align reasonably well with normalized-curve percentile ranks.

Finally, as reported in the section on reliability in Chapter 3, the reliability coefficients for each of the four age groups were used to calculate average standard error of measurement (SEM) values as per the following standard formula, where SD is the T-score SD of 10 and r is the reliability of the scale:

SEM = SD × √(1 - r)

The SEM values for each subscale for each normative age group allow for the bracketing of obtained T-scores and percentile ranks with 68%, 90%, and 95% confidence bands (see Chapter 4).

Calculation of CPPS Strength and Weakness Discrepancy Scores

The CPPS allows examiners to evaluate a student's pattern of strengths and weaknesses via the use of intra-individual discrepancy norms. Discrepancy norms are an appropriate metric for evaluating the significance of subscale score differences obtained by an individual. The procedures used to calculate the strength and weakness discrepancy norms, which utilize the standard error of the estimate (SEest), are described in Chapter 4. The CPPS intra-individual discrepancy norm procedures were modeled on those developed for, and used in, the Woodcock-Johnson Battery—Third Edition (McGrew & Woodcock, 2001).
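The linear W-to-T transformation and the SEM-based confidence bands described above can be combined into one minimal sketch. The median, SD, and reliability values below are illustrative only, not published CPPS norms; the function names are likewise hypothetical.

```python
import math

def w_to_t(w, group_median_w, group_sd_w):
    """Linear T-score transformation: distance from the norm-group median in SD units."""
    return 50 + 10 * (w - group_median_w) / group_sd_w

def sem_t(reliability, sd=10.0):
    """Standard error of measurement on the T-scale: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

# Illustrative values only -- not taken from the CPPS norm tables
t = w_to_t(w=530.0, group_median_w=505.0, group_sd_w=25.0)  # 60.0 (1 SD above average)
sem = sem_t(reliability=0.95)                               # about 2.24 T-score points

# A 95% confidence band brackets the obtained T-score by +/- 1.96 SEM
band = (t - 1.96 * sem, t + 1.96 * sem)
print(round(t, 1), round(sem, 2), round(band[0], 1), round(band[1], 1))
```

Because the transformation is linear rather than normalizing, the skewness of the W-score distribution carries through to the T-scores, which is why the T-score/percentile-rank correspondence is tabled separately rather than read from a normal curve.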

Table 2.4 CPPS Summary Statistics by Age Group

5-6 Years (n = 258)

Type of Score     AT       AP       EF       FM       FR       LTR      OL       PP       PS       VSP      WM       GPA
Raw Score
  Median           8.50     3.00    11.00     2.50    10.00     6.00     3.00     5.00     5.00     2.00    10.00
  Mean            10.43     4.91    13.58     6.71    11.66     7.16     6.24     8.14     6.86     4.11    12.31
  SD               9.31     5.50    11.52     8.33    10.19     7.36     7.18     8.92     6.46     4.65    11.45
W-Score
  Median         507.57   501.14   504.49   509.57   502.96   505.48   497.95   508.96   506.20   504.67   508.60   505.77
  Mean           507.42   505.12   504.75   515.14   504.77   506.32   504.51   512.75   507.47   513.02   508.77   508.19
  SD              26.37    27.50    23.11    23.10    32.29    31.62    25.47    29.60    21.80    25.68    26.92    24.73
T-Score
  Median          49.97    48.47    49.68    47.77    49.09    49.49    47.33    48.62    49.34    46.70    49.71    48.88
  Mean            49.91    49.89    49.79    50.14    49.63    49.74    49.85    49.88    49.91    49.87    49.77    49.83
  SD               9.96     9.83     9.96     9.86     9.64     9.63     9.78     9.78     9.71     9.76     9.71     9.75


7-8 Years (n = 384)

Type of Score     AT       AP       EF       FM       FR       LTR      OL       PP       PS       VSP      WM       GPA
Raw Score
  Median           8.00     3.00    10.00     1.00    10.00     5.00     4.00     3.00     4.00     1.00     8.00
  Mean             9.48     4.09    12.90     3.76    11.24     6.32     5.46     5.88     5.54     2.43    10.72
  SD               8.78     4.31    10.63     5.92     8.73     6.01     5.63     6.89     5.49     3.22     9.67
W-Score
  Median         506.20   501.14   502.48   499.82   502.96   501.07   502.39   500.92   502.57   496.19   503.72   501.34
  Mean           505.09   502.27   504.03   506.96   504.30   503.43   502.93   505.93   503.20   503.84   505.46   504.31
  SD              24.91    23.07    21.63    18.83    28.07    26.84    21.01    24.67    19.52    19.46    23.83    20.35
T-Score
  Median          50.58    49.50    49.36    46.33    49.85    49.05    49.77    47.98    50.00    46.20    49.49    48.68
  Mean            50.13    49.99    50.09    50.20    50.31    49.93    50.03    50.02    50.33    50.12    50.21    50.14
  SD              10.10    10.01    10.11    10.20     9.85     9.98     9.95    10.04    10.02     9.98     9.99     9.96

9-10 Years (n = 279)

Type of Score     AT       AP       EF       FM       FR       LTR      OL       PP       PS       VSP      WM       GPA
Raw Score
  Median           6.00     3.00     8.00     0.00     9.00     4.00     3.00     2.00     3.00     1.00     6.00
  Mean             7.75     3.65    11.07     2.24    10.03     5.70     4.81     4.80     4.77     1.92     8.83
  SD               7.72     3.78     9.60     4.83     8.24     5.64     5.14     6.71     4.89     2.94     8.41
W-Score
  Median         500.55   501.14   498.31   490.85   499.64   496.68   497.95   495.60   498.46   496.19   498.44   497.95
  Mean           500.45   500.42   500.49   500.68   500.24   500.86   500.55   501.08   500.31   500.39   501.03   500.59
  SD              23.10    20.87    20.24    15.98    27.22    25.58    19.78    24.32    18.63    18.11    21.64    18.62
T-Score
  Median          50.23    50.70    49.37    44.00    50.20    48.84    49.12    48.06    49.30    47.95    49.16    49.01
  Mean            50.19    50.36    50.39    50.34    50.41    50.41    50.43    50.32    50.26    50.27    50.31    50.38
  SD               9.85     9.76     9.55    10.29     9.52     9.64     9.99    10.04     9.72    10.01     9.63     9.67

11-12 Years (n = 200)

Type of Score     AT       AP       EF       FM       FR       LTR      OL       PP       PS       VSP      WM       GPA
Raw Score
  Median           6.00     2.50     9.50     0.00     8.00     5.00     3.00     1.00     3.00     1.00     7.00
  Mean             7.99     3.55    11.22     2.23    10.11     5.82     4.55     3.83     4.40     1.83     8.80
  SD               7.84     3.76     9.74     4.68     8.31     5.55     4.82     5.55     4.91     2.79     8.65
W-Score
  Median         500.55   497.78   501.46   490.85   496.42   501.07   497.95   487.67   498.46   496.19   501.15   496.35
  Mean           500.60   499.67   500.15   500.75   500.55   501.50   499.58   497.49   498.89   499.97   500.04   499.35
  SD              23.53    20.93    21.15    15.89    27.77    25.39    19.31    21.86    18.25    17.21    23.24    18.80
T-Score
  Median          49.80    48.92    50.61    43.74    48.03    49.62    48.57    44.91    49.41    47.71    50.37    47.78
  Mean            49.82    49.87    49.99    50.24    49.58    49.80    49.44    49.59    49.66    50.02    49.87    49.76
  SD              10.31    10.58    10.11    10.44    10.43    10.49    10.37    10.41    10.40    10.53    10.41    10.44

Note: AT= Attention, AP = Auditory Processing, EF = Executive Functions, FM = Fine Motor, FR = Fluid Reasoning, LTR = Long-Term Recall, OL = Oral Language, PP = Phonological Processing, PS = Processing Speed, VSP = Visual-Spatial Processing, WM = Working Memory, GPA = General Processing Ability.

Chapter 3 Reliability and Validity

Reliability

Reliability refers to the consistency and precision of measurement. As defined by the Standards for Educational and Psychological Testing (AERA et al., 1999; hereafter referred to as the Joint Test Standards), reliability is the "degree to which test scores for a group of test takers are consistent over repeated applications of a measurement procedure and hence are inferred to be dependable, and repeatable for an individual test taker; the degree to which scores are free of errors of measurement for a given group" (p. 180). Technically, reliability is the ratio of true score variance to observed score variance; that is, the proportion of variance that cannot be attributed to measurement error (Raykov & Marcoulides, 2011). This test characteristic is represented in the form of a reliability coefficient, which can be considered an indicator of the degree to which scores are free of measurement error (AERA et al., 1999). Theoretically, reliability coefficients can range from 0 to 1, where 1 indicates perfect reliability (which is not possible with manifest measures).

In order to place confidence in the scores produced by a psychological measurement scale, reliability coefficients should be at least .80 (Nunnally, 1978). When psychological test scores are used to make important decisions about individuals, a reliability coefficient of at least .90 is typically recommended, with .95 being even more desirable. In general, scales with more items tend to have higher reliability (Raykov & Marcoulides, 2011). Higher reliability coefficients are also achieved by sampling a broad range of ability, as a very restricted range of ability within a sample makes it difficult for any instrument to consistently discriminate between levels of ability among highly similar sample subjects.

There are four major methods for estimating reliability: internal consistency, parallel forms, test-retest, and split-half (Schweizer, 2011). With rating scales, inter-rater reliability can also be assessed.

Internal Consistency

Internal consistency reliability is the extent to which the items comprising a scale are interrelated (AERA et al., 1999). High internal consistency reliability provides greater confidence that the scale's items are measuring the same construct. Cronbach's alpha (α; Cronbach, 1951) was used to evaluate the reliability of the CPPS subscales (SYSTAT, 2009). Historically, Cronbach's α has been interpreted as an index of internal consistency. This interpretation has recently been challenged (see Raykov & Marcoulides, 2011, for an overview of the "myths" surrounding Cronbach's α). Regardless of these academic and statistical debates, Cronbach's α can still be considered a good estimate of reliability under most conditions, and it continues to be employed extensively in the evaluation of psychological measures (Raykov & Marcoulides, 2011; Schweizer, 2011). Mosier's formula for an equally weighted composite score (Mosier, 1943) was used to calculate the CPPS General Processing Ability (Total Score) reliability from the separate subscale reliabilities.

The reliability coefficients of the CPPS subscales are exceptionally high at all ages, with most in the mid-.90s (see Table 3.1). The 5-6 year old group has the highest reliability level for each subscale (see Table 3.2), but there is only a very slight decline in reliability as age increases. Only the Auditory Processing and Visual-Spatial Processing subscales have more than one reliability coefficient below .90. At .99 for all age groups, the reliability of the General Processing Ability score exceeds the highest recommended standard of .95.

However, the exceptionally high CPPS reliabilities reported here should be interpreted with slight caution, as they may overestimate (to an unknown degree) the true population reliabilities.
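The two computations described above can be sketched in a few lines of Python. This is illustrative only: the ratings below are hypothetical, the composite function is the standard equally weighted composite-reliability identity, and the published coefficients were produced with SYSTAT.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for one subscale.

    item_scores: one row per rated student, each row holding that
    student's 0-3 ratings on the subscale's k items.
    """
    k = len(item_scores[0])
    totals = [sum(row) for row in item_scores]            # subscale totals
    item_vars = [pvariance([row[i] for row in item_scores]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

def composite_reliability(subscale_vars, subscale_rels, composite_var):
    """Reliability of an equally weighted composite: one minus the sum of
    the subscales' error variances over the composite's total variance."""
    error = sum(v * (1 - r) for v, r in zip(subscale_vars, subscale_rels))
    return 1 - error / composite_var

# Hypothetical ratings: four students on a three-item subscale
ratings = [[0, 1, 0], [1, 2, 1], [2, 2, 3], [3, 3, 3]]
print(round(cronbach_alpha(ratings), 2))   # 0.94

# Correlated subscales make the composite more reliable than its parts
print(round(composite_reliability([10, 10], [.90, .90], 30), 2))   # 0.93
```

Note that the composite's reliability exceeds that of either subscale whenever the subscales covary positively, which is why a total score built from eleven correlated subscales can reach .99.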



Although Cronbach’s α is often interpreted as a administrations of a test (or parallel forms of a “lower bound” estimate of reliability, under test) under identical conditions. Because such certain conditions it can provide overestimates of data cannot generally be collected, the standard a scale’s reliability (Raykov & Marcoulides, error of measurement is usually estimated from 2011). Conversely, the high CPPS reliabilities are group data” (p. 182). Thus, the SEM is the not unexpected, given that ratings scales have standard deviation of the theoretical distribution “steps” that are analogous to different of a test taker, or rating scale subject, if he or she dichotomous (0/1) items. In the case of the CPPS had been tested (rated) multiple times (Raykov & rating scale (0-1-2-3), there are three different Marcoulides, 2011). The SEM can be used to steps (0-1; 1-2; 2-3) for each single item, which is estimate the range of scores within which the conceptually the equivalent to three different individual’s true score is likely to fall. For dichotomous items per CPPS rating scale item. example, if the SEM is 3 and the individual’s Given the step-nature of the 0-1-2-3 rating scale, obtained test score is 50, there is a 68% chance a CPPS subscale with only 11 “items” is that the individual’s true score lies within the equivalent to 33 dichotomously scored (0/1) test interval (referred to as the 68% confidence items (11 items X 3 steps = 33). Thus, it is not interval) of 47 – 53. Reliability coefficients are surprising that the CPPS General Processing used to compute SEMs; the higher the reliability Ability score reliability is reported as high as .99, coefficient, the lower the SEM. The CPPS SEM as all subjects were rated on all items comprising (+ 1 SEM; 68% confidence band values) T-score the CPPS, which results in 363 items or steps values are reported in Table 3.2. The CPPS SEM (121 items X 3 steps = 363). values were calculated via the standard formula:

Standard Error of Measurement √

According to the Joint Test Standards where SD is the scale standard deviation and (AERA et al., 1999), the standard error of r(11) is the subscale’s reliability. measurement (SEM) is “the standard deviation of an individual’s observed scores from repeated
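The SEM formula and the 68% confidence band can be verified with a short sketch (illustrative only; T-scores are assumed to have SD = 10):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def confidence_band(obtained, sem_value, z=1.0):
    """Score band of +/- z SEMs; z = 1.0 gives the 68% confidence interval."""
    return obtained - z * sem_value, obtained + z * sem_value

# The worked example above: SEM of 3 and an obtained score of 50
print(confidence_band(50, 3))          # (47.0, 53.0)

# A CPPS-style check: T-scores (SD = 10) and a reliability of .96
print(round(sem(10, 0.96), 2))         # 2.0
```

The second call reproduces the pattern visible in Table 3.2: reliabilities in the mid-.90s yield T-score SEMs of roughly 2 points.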

Table 3.1 Cronbach’s alpha (α) Reliability Coefficients by Age

                         Age
Subscale                 5      6      7      8      9      10     11     12
Attention                .97    .96    .96    .96    .96    .95    .95    .95
Auditory Processing      .94    .95    .91    .90    .88    .88    .92    .80
Executive Functions      .97    .97    .96    .96    .96    .95    .95    .96
Fine Motor               .95    .97    .96    .92    .95    .94    .95    .92
Fluid Reasoning          .98    .98    .96    .96    .96    .96    .96    .95
Long-Term Recall         .97    .97    .95    .94    .95    .94    .95    .93
Oral Language            .95    .96    .93    .93    .92    .93    .92    .89
Phonological Process.    .97    .97    .95    .95    .96    .96    .95    .93
Processing Speed         .94    .94    .92    .93    .89    .90    .91    .93
Visual-Spatial Proc.     .93    .94    .89    .89    .91    .90    .89    .84
Working Memory           .97    .97    .96    .96    .96    .95    .96    .95
n                        122    136    179    205    157    122    112    88


Table 3.2 Cronbach’s alpha (α) Reliability Coefficients and SEM Values by Normative Age Groups

Subscale                      Ages 5-6    Ages 7-8    Ages 9-10   Ages 11-12
                              (n = 258)   (n = 384)   (n = 279)   (n = 200)
Attention                     .97         .96         .96         .95
                              1.87        2.00        2.12        2.26
Auditory Processing           .94         .91         .88         .89
                              2.37        3.03        3.46        3.36
Executive Functions           .97         .96         .95         .95
                              1.84        1.97        2.17        2.14
Fine Motor                    .96         .94         .95         .94
                              2.02        2.39        2.35        2.51
Fluid Reasoning               .98         .96         .96         .96
                              1.55        1.95        1.97        2.00
Long-Term Recall              .97         .94         .94         .94
                              1.84        2.37        2.37        2.49
Oral Language                 .96         .93         .92         .91
                              2.12        2.72        2.81        3.08
Phonological Processing       .97         .95         .96         .95
                              1.73        2.17        2.07        2.30
Processing Speed              .94         .92         .89         .92
                              2.39        2.83        3.27        2.85
Visual-Spatial Processing     .94         .89         .90         .88
                              2.53        3.29        3.10        3.49
Working Memory                .97         .96         .95         .96
                              1.64        2.02        2.21        2.07
General Processing Ability    .99         .99         .99         .99
(Total Score)                 .89         .89         .95         .95
Note: The top number in each cell is Cronbach's alpha (α) coefficient and the bottom number is the SEM. SEMs are in T-score units.

Inter-Rater Reliability

Inter-rater reliability is the degree to which two independent respondents rate the same subject in a similar manner. Variability across raters is expected with a rating scale, and inter-rater reliability coefficients are usually significantly lower than other measures of reliability. For example, coefficients in the .5 to .7 range are common among behavior rating scales (Reynolds & Kamphaus, 2004), with some inter-rater coefficients between parents and teachers in the .3 range (Gioia et al., 2000). The inconsistencies may be due to raters' different response styles, different environments in which the student's behaviors are observed, or the properties of the scale's items. Knowing the inter-rater reliability of a rating scale is important because multiple raters are frequently used during psychoeducational evaluation. In an inter-rater reliability study of a teacher rating scale, such as the CPPS, two teachers independently rate the same student.

For the inter-rater reliability study, 22 teachers from 9 different schools in 6 states completed CPPS rating forms on a total of 22 student subjects, with the number of subjects rated by each teacher ranging from 1 to 7. The interval between teacher ratings ranged from the same day to 42 days, with the majority completed within a few days of each other. The demographic characteristics of the 22 subjects were: 11 males and 11 females; 4 Hispanics and 18 Non-Hispanics; 2 African-Americans and 18 Whites; and 2 subjects with ADHD, 1 with a learning disability, and the rest without any disabilities. The subjects came from grades 1 through 6, while their ages ranged from 6 years, 0 months to 11 years, 6 months, with a mean age of 9 years, 2 months.

To determine how consistently two teachers rated the same student, separate reliability coefficients were calculated for each pair of raters (see Table 3.3). The W-scores for the 11 CPPS subscales were used in computing the coefficients. The correlations ranged from .21 to .90, with a strong median coefficient of .765. Only two cases had exceptionally low correlations of .21 and .38.
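Each pair's coefficient reduces to a Pearson correlation over the 11 subscale W-scores, and the study is summarized by the median coefficient. A minimal sketch (the two raters' W-scores below are hypothetical; the coefficient list is taken from Table 3.3):

```python
from statistics import median

def pearson(xs, ys):
    """Pearson correlation between two raters' scores on the same subscales."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical W-scores from two teachers rating the same student
rater_a = [498, 510, 505, 490, 502, 515, 500, 495, 508, 512, 499]
rater_b = [501, 507, 509, 492, 498, 511, 503, 499, 505, 516, 495]
print(round(pearson(rater_a, rater_b), 2))

# The study's summary statistic is the median of the 22 pair coefficients
coefficients = [.77, .38, .70, .87, .77, .90, .89, .88, .76, .84, .86,
                .79, .63, .59, .72, .61, .75, .21, .85, .85, .56, .68]
print(median(coefficients))   # 0.765
```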

Table 3.3 Inter-Rater Reliability Coefficients for 22 Subjects

Rated Subjects    1     2     3     4     5     6     7     8     9     10    11
Age in Months     115   133   108   131   142   135   118   126   135   115   134
Coefficients      .77   .38   .70   .87   .77   .90   .89   .88   .76   .84   .86

Table 3.3 (continued)

Rated Subjects    12    13    14    15    16    17    18    19    20    21    22    Median
Age in Months     113   119   109   108   105   101   82    81    76    72    74    114
Coefficients      .79   .63   .59   .72   .61   .75   .21   .85   .85   .56   .68   .765
Note: Correlations are between subscale W-scores.

Construct Validity

The most important quality of any psychometric instrument is how well it measures the psychological construct it is intended to measure. This quality is known as construct validity, which the Joint Test Standards (AERA et al., 1999) define as the "term used to indicate that the test scores are to be interpreted as indicating the test taker's standing on the psychological construct measured by the test" (p. 174). In the case of the CPPS, construct validity evidence should indicate that the CPPS is measuring a broad latent trait known as psychological processing and that it also is measuring several specific psychological processes. Construct validity is not a narrow type of validity but rather a broad concept that encompasses all other types of validity (Cronbach & Meehl, 1955). Thus, support for construct validity comes from a network of validity evidence—content, concurrent, and predictive validity evidence, as well as diagnostic accuracy. The evidence reviewed in the remaining sections of this chapter supports the construct validity of the CPPS.

Evidence Based on Test Content

Evidence based on test content (historically called "content validity") refers to how adequately a test samples behavior representative of the construct domain the test is designed to measure (AERA et al., 1999). With the CPPS, evidence based on test content represents how well a subscale's items measure a specific psychological process and how well the total scale measures psychological processing in general. Traditional sources for this form of validity evidence—internal consistency (see the Reliability section), expert review, and developmental evidence—all support the content validity of the CPPS.

Expert Review

Expert review of the CPPS rating scale item content occurred prior to the pilot study and

during the final construction of the standardized scale. Prior to the pilot study, school psychologists with expertise in psychological test construction, psychological assessment, or school neuropsychology reviewed and commented on the initial pool of 75 items. The experts were asked to provide feedback regarding the clarity of the items and how well the items corresponded to the psychological process they were intended to measure. The experts suggested wording changes for a few items, questioned the subscale assignment of a few items, and also submitted ideas for new items. In general, there was high consensus regarding the content validity of the items. After revising the wording of a few items and reassigning a couple to different subscales, the 75 items were pilot tested. After piloting, experts were consulted again as the number of items was increased to 147. At this stage, experts suggested items that would expand the range of ability being sampled, and they also suggested items for the new Fluid Reasoning subscale.

After narrowing the 147 items used during item tryout to 138 norming-calibration items, two nationally recognized experts in school neuropsychology were provided with randomly arranged sets of the CPPS items. The two experts were instructed to independently assign each item to one of the 11 psychological processes they thought it best measured. The first expert's selections were in 75% agreement with the final placement of the items, and the second expert's selections were at a 67% level of agreement. Most of the disagreements occurred when two closely related subscales were involved, such as Auditory Processing and Oral Language or Executive Functions and Working Memory.

Developmental Evidence

As discussed in Chapter 2 (within the Age Differences subsection), developmental trends from 5 through 12 years of age are not clearly evident after a cursory review of the CPPS standardization sample descriptive statistics. This is because the developmental progression of W-scores is masked by the highly skewed subscale distributions and the tendency of teachers to rate a student's abilities/difficulties relative to the student's classmates. Closer inspection of the data reveals evidence of developmental progression, especially in younger students, who have less developed abilities, and in students at all ages who are experiencing processing difficulties. As illustrated in Figure 2.1 (and explained in the Age Differences section of Chapter 2), there is not much difference by chronological age for students at the 10th percentile (students who are having few or no processing difficulties), but once scores reach the 50th percentile and above, a developmentally decreasing slope is present. The downward trendline indicates the developmental progression of having fewer and fewer processing problems as chronological age increases. This example is consistent with the fact that most basic psychological processing abilities improve through the early elementary years but plateau by adolescence (Dehn, 2006; McGrew & Woodcock, 2001).

Additional developmental evidence is provided in Appendix A, where the items within each subscale are arranged in order of difficulty from easiest to most difficult. The average W-score ability values of the items were used to place the items within each subscale in a developmental sequence (see also Table 2.3 in Chapter 2 for the W-score range of ability tapped by each subscale). The sequencing of items by W-score values reveals the developmental progression of items. When one considers the ranking of the subscale items, the items appear to be generally consistent with developmental expectations. For example, for the Attention subscale items (Appendix A), 12 year olds are not normally "noisy and disruptive in class," nor do they have "difficulty staying seated" (the easiest items), but they might have difficulty "dividing attention between two tasks" or "concentrating on challenging tasks for extended periods of time" (the most difficult items). On the Working Memory subscale, for a 5 year old, "loses place when counting" (the easiest item) might be normal, whereas older children will seldom display this difficulty. Conversely, even children as old as 12 will experience "difficulty organizing information when writing" (the most difficult Working Memory item).


External Validity Evidence

External validity evidence refers to how well test scores relate to variables external to a measurement instrument. Significant relations between CPPS scores and other measures of psychological processes and academic learning would be important external validity evidence for the CPPS. External validity evidence also refers to how well test scores relate to other external criteria, such as a diagnosis or classification. External validity can be divided into concurrent and predictive types (Sattler, 2001). Concurrent validity refers to the relation between a test's scores and a currently available criterion measure, such as achievement test scores. In contrast, predictive validity refers to the relation between a test's scores and future performance on a relevant criterion; for example, a reading readiness test might be used to predict later reading performance.

Relations with Academic Skills

Woodcock-Johnson III Tests of Achievement

The Woodcock-Johnson III Tests of Achievement (WJ III ACH; Woodcock, McGrew, & Mather, 2001a) is an individually administered achievement battery consisting of 22 achievement tests organized into five broad curricular areas: reading, oral language, mathematics, written language, and academic knowledge. The WJ III ACH is commonly used to evaluate the academic skills of students referred for a learning disabilities evaluation. Under the traditional IQ-achievement discrepancy model of LD determination, students whose achievement scores are significantly lower than their overall level of intelligence (IQ) qualify for placement in a learning disabilities program. Unlike IQ, psychological process scores are expected to be more consistent with achievement scores (Dehn, 2006; Flanagan et al., 2010; Naglieri, 1999). Thus, significant relations between CPPS scores and WJ III ACH scores were expected, with subjects obtaining higher CPPS problem scores predicted to have lower achievement scores.

Forty subjects whose teachers completed CPPS ratings were tested with the WJ III ACH and the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III COG; Woodcock et al., 2001b). The subjects were recruited from six different schools located in a Midwestern community with a population of approximately 75,000. The student sample consisted of 22 males and 18 females. Subjects ranged in age from 5 years, 6 months to 12 years, 11 months, with a mean chronological age of 9 years, 6 months. Six of the 40 subjects had a disability: one with ADHD, one with Asperger's, one with a traumatic brain injury, and three with a learning disability. The subjects' overall academic skills, as ranked by their teacher raters, consisted of 18 with above average rankings, 16 with average rankings, and six with below average rankings. All of the students were non-Hispanic Whites except for one student identified as "Other."

W-scores (which have a mean of approximately 500 at age 10) from the CPPS and the WJ III ACH were used to compute the descriptive statistics (see Tables 3.4 and 3.5) and the correlations between the scales. Given that W-scores are developmental growth scores, it was first necessary to partial age variance from the CPPS and WJ III W-scores. The resulting residual scores, which have age variance removed and are conceptually similar to age-based standard scores, were used to calculate the reported correlations. The WJ III Normative Update Compuscore® and Profiles Program (Schrank & Woodcock, 2007) was used to compute the WJ III ACH W-scores. Only the WJ III clusters (comprised of two or more tests) were compared with the CPPS subscales and GPA score (see Table 3.6). Given that the CPPS is reverse-scored relative to the WJ III ACH, the coefficients should be negative, and all of them are except one. As expected, the majority of the pairings have moderate to strong and statistically significant correlations (mostly .3 to .7). The correlation coefficients between the CPPS and the WJ III ACH provide strong external validity evidence for the CPPS. Notable findings from Table 3.6 include:

1. The broader WJ III ACH clusters have statistically significant correlations with more CPPS subscales than less inclusive


Table 3.4 CPPS W-Scores for WJ III Study

          AP       AT       EF       FM       FR       LTR
Median    501.00   507.50   500.00   500.00   494.50   487.00
Mean      497.88   499.82   495.38   501.95   493.55   491.53
SD        20.37    23.57    22.80    14.37    28.23    19.57

Table 3.4 (continued)

          OL       PP       PS       VSP      WM       GPA
Median    506.00   498.50   500.50   496.00   504.00   497.00
Mean      502.57   502.85   500.30   499.40   500.65   498.80
SD        21.72    24.77    18.17    15.58    23.21    18.31
Note: n = 40. AP = Auditory Processing, AT = Attention, EF = Executive Functions, FM = Fine Motor, FR = Fluid Reasoning, LTR = Long-Term Recall, OL = Oral Language, PP = Phonological Processing, PS = Processing Speed, VSP = Visual-Spatial Processing, WM = Working Memory, GPA = General Processing Ability

Table 3.5 WJ III ACH W-Scores

         BRF      TOTAL    BRD      BRD      BRD      BRF      BRF
         ACH      ACH      READ     MATH     WL       READ     MATH
N        40       30       36       37       32       40       40
Median   499.50   499.00   501.50   497.00   498.50   500.50   495.50
Mean     488.88   495.73   495.11   495.41   493.59   486.78   493.03
SD       30.36    14.59    19.39    15.90    14.65    35.68    23.76

Table 3.5 (continued)

         MATH     BRF      WRIT     AC       AC       AC
         CALC     WRT      EXP      SKILLS   FLU      APPS
N        38       36       32       40       35       36
Median   495.00   496.50   497.00   500.00   494.00   497.00
Mean     492.74   489.67   492.34   489.35   493.14   490.36
SD       20.11    19.09    12.98    30.64    12.26    22.06
Note: BRF ACH = Brief Achievement, TOTAL ACH = Total Achievement, BRD READ = Broad Reading, BRD MATH = Broad Math, BRD WL = Broad Written Language, BRF READ = Brief Reading, BRF MATH = Brief Math, MATH CALC = Math Calculation Skills, BRF WRT = Brief Writing, WRIT EXP = Written Expression, AC SKILLS = Academic Skills, AC FLU = Academic Fluency, AC APPS = Academic Applications

clusters. Clusters that include contributions from reading, math, and written language (Brief Achievement and Total Achievement) have significant correlations with every CPPS subscale. The same is true for Broad Reading and Broad Written Language. With Broad Math, the CPPS Phonological Processing subscale is not significant, but phonological processing is required more for reading and writing than math.

2. The Academic Fluency cluster has some of the highest correlations (.55 to .81) with the CPPS subscales and General Processing Ability score. The Academic Fluency cluster consists of the three timed Fluency tests—Reading, Math, and Writing—on which the examinee must perform relatively easy basic academic tasks quickly. Apparently, such speeded demonstrations of basic skills mastery place high demands on psychological processes in general. This also suggests that the CPPS scales may reflect an underlying efficiency or automaticity of processing dimension. Written Expression


and Broad Written Language also have consistently high correlations with all of the CPPS processes. Written Expression consists of the Writing Fluency and Writing Samples tests; Broad Written Language includes these two tests along with Spelling.

3. Of all the clusters, Mathematics Calculation (composed of the Calculation and Math Fluency tests) has the fewest statistically significant relations with the CPPS psychological processes. However, those that are significant—Attention, Executive Functions, Fluid Reasoning, and Oral Language—are consistent with research findings (McGrew & Wendling, 2010).

Academic Skills Ranking

Additional support for the external validity of the CPPS is the high concurrent correlation of .66 between academic skills ranking and the CPPS General Processing Ability score. During standardization, teachers were asked to rank each subject's overall level of academic skills. Their global ranking of student academic skills was highly consistent with their overall perceptions of students' psychological processing skills, with students displaying lower academic skills receiving significantly more ratings of psychological processing difficulties than students with higher academic skills rankings (see Table 3.7). The W-score difference between the lowest and highest ability groups is even greater than those that occur across different parent education levels (see the next subsection).

Based on an ANOVA of all 1,121 cases, there was an F-ratio of 422.88 and a critical p-value of less than .001 (see Table 3.7 for the remaining ANOVA results). The high level of agreement between academic skills rankings and overall psychological processing abilities was expected, given that the same teachers, with the same perceptions of each student's school performance, provided the ratings for the independent (academic skills level) and dependent (CPPS scores) variables. These significant findings need replication in additional samples where more than one teacher rates the students on both the independent and dependent variables, and in studies that use a teacher-independent measure of academic skills levels (e.g., school grades, school work samples, standardized test scores).
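The reported F-ratio comes from a standard one-way ANOVA: the between-group mean square divided by the within-group mean square. A self-contained sketch (the group scores below are hypothetical, not the standardization data):

```python
def one_way_anova_f(groups):
    """F-ratio for a one-way ANOVA: between-group over within-group mean squares."""
    all_scores = [s for g in groups for s in g]
    n, k = len(all_scores), len(groups)
    grand = sum(all_scores) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((s - sum(g) / len(g)) ** 2 for g in groups for s in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical GPA W-scores for three small academic-skills ranking groups
below = [528, 534, 530]
avg   = [499, 503, 500]
above = [486, 489, 485]

# F is large when group means are far apart relative to within-group spread
print(round(one_way_anova_f([below, avg, above]), 1))
```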

Table 3.6 Pearson Correlation Coefficients between CPPS and WJ III ACH W-Scores

                                    WJ III ACH Clusters
CPPS       BRF    TOTAL  BRD    BRD    BRD    BRF    BRF    MATH   BRF    WRIT   AC     AC     AC
Subscales  ACH    ACH    READ   MATH   WL     READ   MATH   CALC   WRT    EXP    SKIL   FLU    APPS
AP         -.51   -.50   -.38   -.43   -.56   -.27   -.28   -.18   -.63   -.69   -.36   -.65   -.43
AT         -.52   -.53   -.39   -.52   -.56   -.40   -.45   -.33   -.50   -.63   -.43   -.68   -.48
EF         -.42   -.54   -.34   -.54   -.43   -.38   -.51   -.39   -.32   -.48   -.39   -.62   -.43
FM         -.50   -.62   -.46   -.44   -.63   -.23   -.27   -.19   -.64   -.71   -.34   -.55   -.50
FR         -.66   -.68   -.57   -.57   -.64   -.62   -.59   -.47   -.63   -.72   -.62   -.71   -.74
LTR        -.45   -.51   -.40   -.33   -.59   -.18   -.15    .00   -.67   -.70   -.28   -.62   -.35
OL         -.61   -.68   -.59   -.54   -.60   -.64   -.57   -.46   -.54   -.65   -.61   -.71   -.69
PP         -.53   -.64   -.46   -.31   -.73   -.23   -.11   -.10   -.71   -.75   -.37   -.69   -.36
PS         -.55   -.67   -.46   -.52   -.67   -.24   -.29   -.17   -.66   -.76   -.37   -.78   -.42
VSP        -.58   -.64   -.53   -.58   -.64   -.33   -.39   -.29   -.65   -.73   -.43   -.76   -.49
WM         -.62   -.67   -.56   -.55   -.67   -.49   -.49   -.31   -.63   -.74   -.53   -.77   -.60
GPA        -.66   -.71   -.56   -.58   -.73   -.44   -.44   -.32   -.72   -.82   -.53   -.81   -.60
Note: Bolded coefficients have a p-value of less than .05. AP = Auditory Processing, AT = Attention, EF = Executive Functions, FM = Fine Motor, FR = Fluid Reasoning, LTR = Long-Term Recall, OL = Oral Language, PP = Phonological Processing, PS = Processing Speed, VSP = Visual-Spatial Processing, WM = Working Memory, GPA = General Processing Ability. BRF ACH = Brief Achievement, TOTAL ACH = Total Achievement, BRD READ = Broad Reading, BRD MATH = Broad Math, BRD WL = Broad Written Language, BRF READ = Brief Reading, BRF MATH = Brief Math, MATH CALC = Math Calculation Skills, BRF WRT = Brief Writing, WRIT EXP = Written Expression, AC SKIL = Academic Skills, AC FLU = Academic Fluency, AC APPS = Academic Applications


Table 3.7 Results of ANOVA Comparing Academic Skills Ranking with CPPS GPA Score

Academic Skills Ranking    n     Least Squares     Standard Error
                                 W-Score Mean
Below Average              201   530.77            1.19
Average                    712   500.63            0.59
Above Average              208   486.93            1.10

Table 3.8 Results of ANOVA Comparing Estimated Parent Education Level with CPPS GPA Score

Parent Education Level          n     Least Squares     Standard Error
                                      W-Score Mean
Not a high school graduate      78    517.24            2.26
High school graduate            237   508.62            1.30
Some college                    183   503.77            1.48
College degree                  229   495.46            1.32
Graduate/Professional degree    91    496.15            2.10

CPPS Relations with Parent Education Level

There is a clear and significant relationship between teacher estimates of parent education level and the CPPS General Processing Ability score. As displayed in Table 3.8, the children of parents with estimated lower levels of education (and presumably a lower SES level; see the discussion of Socioeconomic Status in Chapter 2) have significantly more difficulties with psychological processes than children of parents with estimated higher levels of education. This finding is consistent with research on the relations between parent education levels and children's cognitive abilities and academic skills (Sirin, 2005). Since psychological processes correlate significantly with cognitive and academic abilities, a relationship between CPPS scores and estimated parent education levels was expected. On the CPPS, a moderate relationship is indicated by the multiple correlation coefficient of .33 between estimated parent education level and the General Processing Ability score. Based on an analysis of variance (ANOVA; SYSTAT, 2009) of the 818 cases with teacher-estimated parent education level information, the F-ratio of 25.42 was statistically significant (p < .001; see Table 3.8 for the other ANOVA results).

CPPS Relations with Cognitive Abilities

The same subjects who completed WJ III ACH testing (see the previous section on the Woodcock-Johnson III Tests of Achievement) also completed WJ III COG (Woodcock et al., 2001b) testing. The WJ III COG consists of 20 cognitive tests that are organized under three categories—verbal ability, thinking ability, and cognitive efficiency—and that are also paired to produce seven broad factors consistent with the Cattell-Horn-Carroll (CHC) taxonomy of cognitive abilities (Flanagan, Ortiz, & Alfonso, 2007). The WJ III COG battery used for this study also produces four clinical clusters—Phonemic Awareness, Broad Attention, Working Memory, and Cognitive Fluency. Individual WJ III COG tests included in the analysis were Visual-Auditory Learning-Delayed Recall, Rapid Picture Naming, and Pair Cancellation. (See Table 3.9 for descriptive statistics.) Similar to the CPPS/WJ III ACH correlations, the CPPS/WJ III COG correlations are based on the respective scales' age-partialled residual scores.
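The age-partialling step used for both the ACH and COG comparisons can be sketched with ordinary least squares: regress each W-score on age and keep the residuals. This is illustrative only (the ages and W-scores below are hypothetical; the study's analyses were run in SYSTAT):

```python
def age_partialled_residuals(ages, scores):
    """Residualize scores on age with simple least-squares regression,
    removing the developmental (age) variance from W-scores."""
    n = len(ages)
    ma, ms = sum(ages) / n, sum(scores) / n
    slope = (sum((a - ma) * (s - ms) for a, s in zip(ages, scores))
             / sum((a - ma) ** 2 for a in ages))
    intercept = ms - slope * ma
    return [s - (intercept + slope * a) for a, s in zip(ages, scores)]

# Hypothetical ages (in months) and W-scores for five students
ages = [70, 90, 110, 130, 150]
w_scores = [470, 485, 500, 512, 525]
residuals = age_partialled_residuals(ages, w_scores)
print([round(r, 1) for r in residuals])   # [-1.0, 0.3, 1.6, -0.1, -0.8]
```

The residuals sum to approximately zero: the age-related growth trend has been removed, leaving scores that behave conceptually like age-based standard scores.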


Table 3.9 WJ III COG W-Scores

         GIA      VER      THK      COG      CMP      LT       VST      AUD      FLD
         EXT                        EFF      KNW      RET                        REA
n        33       40       40       40       39       40       36       38       35
Median   503.00   502.50   503.00   495.00   502.00   498.50   503.00   505.00   500.00
Mean     499.00   500.18   501.18   494.73   499.69   498.13   502.06   503.74   499.57
SD       9.21     12.85    6.46     13.00    12.64    4.92     4.01     7.65     9.21

Table 3.9 (continued)

         PRO      STM      PHO      WM       BRD      COG      VAL      RPN      PR
         SPD               AW                ATT      FLU      DR                CAN
n        39       38       39       40       32       35       35       35       32
Median   494.00   498.00   503.00   497.50   500.00   496.00   499.00   486.00   498.00
Mean     494.26   494.13   503.26   495.20   496.47   494.29   501.86   486.09   496.22
SD       11.68    16.74    7.60     14.12    11.19    7.45     9.72     12.57    14.38
Note: GIA EXT = General Intellectual Ability (Extended), VER = Verbal Ability, THK = Thinking Ability, COG EFF = Cognitive Efficiency, CMP KNW = Comprehension-Knowledge, LT RET = Long-Term Retrieval, VST = Visual-Spatial Thinking, AUD = Auditory Processing, FLD REA = Fluid Reasoning, PRO SPD = Processing Speed, STM = Short-Term Memory, PHO AW = Phonemic Awareness, WM = Working Memory, BRD ATT = Broad Attention, COG FLU = Cognitive Fluency, VAL DR = Visual-Auditory Learning-Delayed Recall, RPN = Rapid Picture Naming, PR CAN = Pair Cancellation

The cognitive abilities measured by the WJ III COG include a range of broad and narrow abilities, many of which correspond to the psychological processes measured by the CPPS (both include scales with similar ability/processing labels, such as fluid reasoning and working memory). While these cognitive and processing abilities are measured directly with the WJ III COG, they are evaluated indirectly through teacher ratings with the CPPS.

In general, the CPPS processes have stronger and more statistically significant correlations with the WJ III ACH clusters than with the WJ III COG clusters and tests. One hypothesis for these differential ACH/COG results is that teacher raters more readily observe and report behaviors related to academic performance and achievement than behaviors associated with cognitive abilities. The correlation coefficients between pairings of W-scores are reported in Table 3.10. Some of the notable findings, and some possible interpretations, include:

1. The pattern of CPPS correlations with the General Intellectual Ability (GIA-Extended) score is consistent with the fact that the GIA is a weighted score, with CHC factors that load higher on general intelligence (g) receiving higher weightings. For example, the CPPS Fluid Reasoning and Oral Language subscales have the highest correlations with GIA, consistent with the high loadings that the WJ III Fluid Reasoning and Verbal Ability clusters have on the WJ III GIA factor (McGrew & Woodcock, 2001).

2. The WJ III COG Verbal Ability cluster appears to function similarly to the general processing ability factor identified with two-factor and three-factor solutions for the CPPS (see the section on Principal Component Analysis later in this chapter). That is, the eight CPPS subscales with the highest loadings on the CPPS general factor (see Table 3.14) have statistically significant correlations with Verbal Ability, whereas the three subscales with the lowest loadings on the CPPS general factor (Attention, Fine Motor, and Executive Functions) are not significantly related to the WJ III Verbal Ability factor.

3. All of the CPPS subscales have strong relations with Cognitive Fluency, defined as


the ability to quickly and fluently perform simple to complex cognitive tasks. This finding is similar to the strong relations all of the CPPS processes have with the Academic Fluency cluster from the WJ III ACH.

4. The CPPS subscales and WJ III clusters that measure similar or identical domains and that display statistically significant coefficients (see Table 3.10) are: Auditory Processing with Auditory Processing; Fine Motor with Visual-Spatial Thinking and Pair Cancellation; Executive Functions with Broad Attention; Fluid Reasoning with Fluid Reasoning; Long-Term Recall with Rapid Picture Naming; Oral Language with Verbal Ability; Phonological Processing with Auditory Processing and Phonemic Awareness; Visual-Spatial Processing with Visual-Spatial Thinking; Working Memory with Short-Term Memory; and General Processing Ability with GIA-Extended. These correlations provide convergent validity evidence for the CPPS.

5. The CPPS subscales and WJ III clusters that measure similar or identical domains but do not have corresponding statistically significant coefficients are: Attention with Broad Attention; Executive Functions with Pair Cancellation; Long-Term Recall with Long-Term Retrieval; Processing Speed with Processing Speed; and Working Memory with Working Memory.

6. The lack of relations between constructs that are not theoretically related, known as discriminant validity evidence, can also be gleaned from Table 3.10. CPPS and WJ III COG pairings that are not significantly related (as expected) include: Auditory Processing with Fluid Reasoning, Processing Speed, and Pair Cancellation; Executive Functions with Auditory Processing and Phonemic Awareness; Fine Motor with GIA-Extended, Verbal Ability, Thinking Ability, Fluid Reasoning, Phonemic Awareness, Working Memory, and Visual-Auditory Learning-Delayed Recall; Long-Term Recall with Processing Speed and Pair Cancellation; Oral Language with Visual-Spatial Thinking and Pair Cancellation; and Phonological Processing with Processing Speed and Pair Cancellation.

7. The WJ III COG score with the fewest significant correlations is Visual-Auditory Learning-Delayed Recall. Only the CPPS Oral Language subscale has a significant correlation with this WJ III COG test.

8. Of all the CPPS subscales, Fluid Reasoning and Oral Language have more statistically significant relations (13 each) with WJ III COG clusters and tests than any other CPPS subscales.

9. Of all the CPPS subscales, the Attention subscale has fewer statistically significant relations (only 4) with WJ III COG clusters and tests than any other CPPS subscale, indicating that the relations between Attention and the cognitive abilities measured by the WJ III COG are limited and weak.

CPPS Relations with a Measure of Executive Functions

The Behavior Rating Inventory of Executive Function (BRIEF®; Gioia et al., 2000) is a contemporary measure of executive functions in children ages 5-18. The BRIEF's teacher and parent rating forms consist of 86 items divided among eight scales that measure different aspects of executive functioning. Three scales (Inhibit, Shift, and Emotional Control) contribute to the Behavioral Regulation Index, which represents the ability to modulate behavior and emotions, and the remaining five scales comprise the Metacognition Index, which is interpreted as the ability to cognitively self-manage tasks and actively problem solve in a variety of contexts. The BRIEF's Global Executive Composite is a summary score that incorporates all eight scales. The measure's accompanying manual (Gioia et al.) reports that the BRIEF has high clinical utility for the diagnosis of ADHD. The manual also reports a study of 34 children with a reading disorder. The reading disabled group had significantly higher means on the BRIEF's Working Memory and Plan/Organize scales than a group of 77 normal control subjects. Although the BRIEF is not designed as a broad measure of psychological processes, at least three CPPS subscales (Attention, Executive Functions, and Working


Memory) appear to measure domains similar to those on the BRIEF.
For the CPPS/BRIEF concurrent validity study, 33 teachers from across the U.S. completed both a CPPS and a BRIEF on one of their students. In each case, the CPPS was completed online first, followed by completion of the paper-and-pencil BRIEF within 30 days. The student sample consisted of 17 males and 16 females, none of whom had a diagnosed disability. There were two subjects from each of the 5th, 6th, and 7th grades and 5-6 students each from kindergarten through 4th grade. The overall academic skills of the sample were also well balanced, with 22 students ranked as average, 6 ranked below average, and 5 ranked above average. The racial distribution was 24 white students, 6 African-American, 2 Asian, and 1 Native American. Of the 24 white students, 7 were Hispanic. The results of the correlation analyses (SYSTAT, 2009) based on the BRIEF and CPPS T-scores are reported in Table 3.11.
As expected, the CPPS Attention, Executive Functions, and Working Memory subscales have the strongest relations with the BRIEF scales. Each of these three CPPS subscales has a significant correlation with every BRIEF scale and composite score. The CPPS Attention and Executive Functions subscales appear to be measuring similar behaviors and constructs as the BRIEF, as evidenced by correlations consistently ranging from .45 to .86, with most at .70 or above. It appears that these two CPPS subscales adequately sample most aspects of executive functioning included in the BRIEF. There are also consistent significant relations between the other CPPS subscales and the BRIEF Metacognition Index. That is, the other CPPS scales do not correlate strongly with behavioral and emotional control factors as reflected by the BRIEF scales in these domains, but they do have strong relations with all of the self-management and problem-solving functions. The only CPPS scale that does not demonstrate consistently moderate to high correlations with the metacognitive scales is the Oral Language subscale.
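The manual’s correlational analyses were run in SYSTAT; the same Pearson coefficients and significance tests can be sketched in Python. The paired T-scores below are hypothetical illustrative values, not the study data:

```python
from scipy.stats import pearsonr

# Hypothetical paired T-scores for ten students rated on a CPPS subscale
# and a BRIEF scale (illustrative values only, not the study data).
cpps_t = [45, 52, 60, 38, 55, 49, 63, 41, 58, 47]
brief_t = [48, 55, 57, 40, 59, 46, 66, 44, 61, 45]

r, p = pearsonr(cpps_t, brief_t)
print(f"r = {r:.2f}, p = {p:.4f}")
```

With real data, the resulting r and p values are what populate entries such as those in Table 3.11, where coefficients flagged ** reached p < .01.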

Table 3.10 Pearson Correlation Coefficients between CPPS and WJ III COG W-Scores

                                              WJ III COG Clusters and Tests
CPPS      GIA-   VER    THK    COG-   CMP-   LT-    VST    AUD    FLD-   PRO-   STM    PHO-   WM     BRD-   COG-   VAL-   RPN    PR-
Subscale  EXT                  EFF    KNW    RET                  REA    SPD           AW            ATT    FLU    DR            CAN
AP        -.36   -.45    .00   -.27   -.52    .01   -.42   -.52    .09   -.12   -.31   -.44   -.06   -.13   -.65   -.04   -.76   -.22
AT        -.42   -.29   -.19   -.33   -.32   -.08    .04   -.22   -.23   -.29   -.29   -.15   -.27   -.30   -.51   -.12   -.44   -.04
EF        -.43   -.23   -.26   -.39   -.26   -.18   -.13   -.02   -.28   -.46   -.33    .06   -.40   -.48   -.37   -.31   -.23   -.13
FM        -.33   -.31    .03   -.31   -.47   -.14   -.47   -.44    .18   -.24   -.32   -.28   -.15   -.29   -.65   -.18   -.74   -.47
FR        -.68   -.51   -.48   -.42   -.55   -.43   -.26   -.31   -.40   -.35   -.42   -.26   -.41   -.47   -.55   -.32   -.44   -.22
LTR       -.16   -.46    .13   -.10   -.52    .12   -.47   -.50    .27    .00   -.12   -.44    .10    .07   -.61   -.04   -.80   -.12
OL        -.61   -.42   -.50   -.40   -.42   -.39   -.12   -.21   -.44   -.37   -.35   -.21   -.37   -.42   -.49   -.33   -.28   -.18
PP        -.27   -.52    .13   -.30   -.55    .08   -.58   -.62    .33   -.10   -.32   -.58    .01   -.01   -.69   -.08   -.88   -.31
PS        -.28   -.55    .07   -.28   -.62    .07   -.57   -.64    .30   -.16   -.28   -.59   -.08   -.11   -.67   -.06   -.82   -.27
VSP       -.46   -.45    .00   -.39   -.59   -.11   -.41   -.45    .08   -.26   -.42   -.35   -.25   -.38   -.59   -.14   -.71   -.42
WM        -.51   -.43   -.27   -.37   -.47   -.21   -.22   -.34   -.20   -.33   -.33   -.27   -.29   -.37   -.62   -.24   -.55   -.19
GPA       -.51   -.52   -.14   -.39   -.59   -.13   -.41   -.49   -.04   -.29   -.39   -.41   -.23   -.31   -.71   -.21   -.74   -.28
Note: Bolded coefficients have a p-value of less than .05.

AP = Auditory Processing, AT = Attention, EF = Executive Functions, FM = Fine Motor, FR = Fluid Reasoning, LTR = Long-Term Recall, OL = Oral Language, PP = Phonological Processing, PS = Processing Speed, VSP = Visual-Spatial Processing, WM = Working Memory, GPA = General Processing Ability.

GIAEXT = General Intellectual Ability (Extended), VER = Verbal Ability, THK = Thinking Ability, COG EFF = Cognitive Efficiency, CMP KNW = Comprehension-Knowledge, LT RET = Long-Term Retrieval, VST = Visual-Spatial Thinking, AUD = Auditory Processing, FLD REA = Fluid Reasoning, PRO SPD = Processing Speed, STM = Short-Term Memory, PHO AW = Phonemic Awareness, WM = Working Memory, BRD ATT = Broad Attention, COG FLU = Cognitive Fluency, VAL DR = Visual- Auditory Learning-Delayed Recall, RPN = Rapid Picture Naming, PR CAN = Pair Cancellation


Table 3.11 Pearson Correlations between CPPS Subscales and BRIEF Scales

                                          BRIEF Scales
CPPS      Inhibit  Shift  Emot.  BRI    Initiate  Work.  Plan/  Org.   Mon.   MI     GEC
Subscale                 Cont.                    Mem.   Org.   Mats.
AT        .69**    .45**  .57**  .61**  .76**     .81**  .83**  .81**  .67**  .83**  .85**
AP        .27      .09    .18    .15    .67**     .54**  .50**  .42*   .46**  .57**  .49**
EF        .71**    .48**  .58**  .61**  .80**     .81**  .85**  .77**  .71**  .85**  .86**
FM        .37*     .30    .42*   .32    .67**     .57**  .59**  .52**  .58**  .63**  .59**
FR        .29      .11    .10    .16    .71**     .59**  .58**  .40*   .56**  .63**  .52**
LTR       .33      .19    .22    .27    .77**     .65**  .64**  .50**  .56**  .70**  .60**
OL        .19      .06    .04    .06    .53**     .37*   .40*   .29    .39*   .44*   .36*
PP        .20      .09    .04    .08    .61**     .46**  .49**  .36*   .46**  .51**  .43*
PS        .28      .36*   .22    .28    .74**     .67**  .63**  .56**  .54**  .70**  .60**
VSP       .24      .26    .17    .17    .69**     .54**  .56**  .41*   .50**  .59**  .51**
WM        .53**    .35*   .39*   .39*   .86**     .81**  .81**  .69**  .71**  .84**  .78**
GPA       .41*     .28    .30    .31    .82**     .71**  .72**  .59**  .63**  .76**  .68**
Note: Correlations are between T-Scores. *Significant at <.05; **Significant at <.01.

AT= Attention, AP = Auditory Processing, EF = Executive Functions, FM = Fine Motor, FR = Fluid Reasoning, LTR = Long-Term Recall, OL = Oral Language, PP = Phonological Processing, PS = Processing Speed, VSP = Visual-Spatial Processing, WM = Working Memory, GPA = General Processing Ability.

Emot. Cont. = Emotional Control, BRI = Behavioral Regulation Index, Work. Mem. = Working Memory, Plan/Org. = Plan/Organize, Org. Mats. = Organization of Materials, Mon. = Monitor, MI = Metacognition Index, GEC = Global Executive Composite

CPPS Diagnostic Utility for Students with Learning Disabilities

The primary intended function of the CPPS is to serve as a psychological measurement instrument that can aid in the reliable discrimination between students with and without a learning disability (LD). Based on the premise that students with a learning disability have significantly more problems with one or more psychological processes, it was expected that subjects with LD would have significantly higher CPPS scores than a matched control group of subjects without LD.
To obtain a representative sample of students with LD from across the U.S., classroom teachers who participated in the national norming were randomly selected and directed to select one student with LD from their class on whom to complete the CPPS. If a teacher reported having no students with LD, another teacher was randomly selected. Teachers were not required to identify the specific type of learning disability, but some did provide disability-specific information. The types of learning disabilities specified included reading, math, and written language. The 37 subjects with LD consisted of: 23 males and 14 females; 8 students with an average academic skills ranking and 29 with a below average ranking; 2 Hispanic and 35 non-Hispanic students; and 27 white, 4 African-American, and 6 American-Indian students. The mean age of the subjects was 9 years, 6 months, and the mean grade was 3.27. A matched sample of 37 subjects was selected from the national standardization sample. A three-variable sequential matching strategy was employed—matching based first on gender, then chronological age, and then grade placement.
A paired-sample t-test (SYSTAT, 2009) was used to analyze the LD and matched normal groups’ CPPS T-scores. The LD group’s mean was significantly higher than the mean of the matched sample subjects on every CPPS subscale, as well as on the General Processing Ability score (see Table 3.12). All of the differences were significant at a critical

p-value of .001 or below. All but one of the subscales had a mean difference greater than the normal T-score standard deviation of 10 points. The Long-Term Recall, Phonological Processing, Visual-Spatial Processing, Working Memory, and General Processing Ability mean score differences were greater than 15 points.
To formally assess the ability of the CPPS General Processing Ability score to correctly classify students with LD and students without LD, a Classification and Regression Tree (CART) analysis was employed (SYSTAT, 2009). CART is an exploratory data mining procedure that determines the optimal cut-scores for classifying existing groups. The CART results indicate that the two groups were optimally split at a T-score of 57.49. Using a criterion cut-score of 57 correctly classified 85.14% of all 74 subjects, 35.14% better than classification based on chance. Still, these encouraging results need to be interpreted cautiously given the small sample size and lack of cross-validation. Also, applying the cut-score rule in the sample from which it was derived most likely inflates the reported classification accuracy.
Informal analysis of this study’s data revealed that only one of the matched subjects without LD had a CPPS General Processing Ability T-score higher than 60, whereas all but five of the subjects with LD had General Processing Ability scores of 60 or higher. If a Total T-Score of 60 had been used to classify the 74 students in this study, there would have been only one false positive and five false negatives. With the particular 74 students in this study, using a criterion of 60 on the CPPS General Processing Ability score would have correctly diagnosed 92% of the subjects. The five LD subjects with a Total T-Score less than 60 were, on average, 10 years, 9 months of age, suggesting that the CPPS may differentiate between students with LD and without LD better at younger elementary ages than older elementary ages. (See Chapter 4 for suggestions regarding the use of CPPS scores for LD diagnosis.)
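The approximately 92% figure follows directly from the reported error counts. A minimal sketch of the arithmetic (the helper name is hypothetical; the counts are those reported above):

```python
def cut_score_accuracy(n_ld, n_control, false_neg, false_pos):
    """Proportion of subjects correctly classified by a fixed T-score cutoff.

    A subject with LD scoring below the cutoff is a false negative; a
    subject without LD scoring at or above the cutoff is a false positive.
    """
    correct = (n_ld - false_neg) + (n_control - false_pos)
    return correct / (n_ld + n_control)

# Counts reported for this sample: 1 false positive, 5 false negatives.
acc = cut_score_accuracy(n_ld=37, n_control=37, false_neg=5, false_pos=1)
print(f"{acc:.1%}")  # 91.9%, the approximately 92% cited above
```

The same caveat applies to this arithmetic as to the CART result: a cutoff evaluated in the sample that produced it will tend to overstate accuracy in new samples.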

Table 3.12 CPPS Scores of LD Subjects Compared with a Matched Sample

Subscale                      LD Mean   Non-LD Mean   Difference       p-Value
                                                      Between Means
Attention                      61.03       49.84          11.19         <.001
Auditory Processing            64.68       50.03          14.65         <.001
Executive Functions            61.40       51.62           9.78         <.001
Fine Motor                     62.57       49.43          13.14         <.001
Fluid Reasoning                64.19       50.46          13.73         <.001
Long-Term Recall               65.59       49.40          16.19         <.001
Oral Language                  64.78       50.30          14.48         <.001
Phonological Process.          66.62       49.81          16.81         <.001
Processing Speed               64.95       50.57          14.38         <.001
Visual-Spatial Process.        66.40       48.81          17.59         <.001
Working Memory                 65.11       49.81          15.30         <.001
General Processing Ability     66.03       48.95          17.08         <.001
Note: There was an n of 37 in each group. p-values are from a paired-sample t-test.
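The comparisons in Table 3.12 are paired-sample t-tests on the matched subjects’ T-scores (run in SYSTAT for the manual). An equivalent sketch in Python, using hypothetical paired scores rather than the study data:

```python
from scipy.stats import ttest_rel

# Hypothetical T-scores for eight matched pairs (illustrative values only,
# not the study data); each position pairs an LD subject with a match.
ld_scores      = [61, 64, 62, 65, 63, 66, 64, 65]
matched_scores = [50, 49, 51, 50, 48, 52, 49, 50]

t_stat, p_value = ttest_rel(ld_scores, matched_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.6f}")
```

Because the test is paired, each LD subject’s score is compared with his or her own match, which is what the three-variable matching strategy makes possible.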

Internal Structural Validity Evidence

Exploratory principal component analysis, maximum-likelihood factor analysis, and cluster analysis (SYSTAT, 2009) of the 11 CPPS subscales were conducted to identify psychological processing dimensions that would be useful for clinical interpretation of the CPPS subscale scores. On most psychological tests that measure cognitive traits or cognitive


processing abilities, subscales often tend to cluster in groups, indicating that the grouped subscales are measuring, at least in part, a common trait. Typically, all of the subscales in cognitive/ability batteries contribute significantly to the measurement of a broad general factor. The CPPS is no exception.

Principal Component Analysis

Inspection of the eigenvalues (latent roots) from the principal components of a dataset is typically used to ascertain the number of statistically and potentially meaningful dimensions to retain in subsequent common-factor or maximum-likelihood factor analysis of the data. The principal components analysis for each of the four age groups revealed one very large eigenvalue (9.43, 8.64, 8.26, and 8.46 for ages 5-6 through 11-12), followed by a dramatically smaller second set of eigenvalues (0.56, 0.68, 0.89, and 0.89, respectively). These results suggest that a single-factor solution is the most parsimonious and plausible interpretation, although inspection of the scree plots suggested that a two-factor solution was worthy of exploration.
Regardless of the type of factor or principal component analysis conducted with the CPPS standardization data, a broad general dimension, which is typically interpreted as representing general intelligence (g) on cognitive scales, was the first dimension to emerge. On the CPPS this general factor is interpreted as representing general (psychological) processing ability (GPA). This finding is consistent with the high intercorrelations among all the CPPS subscales (see Table 3.13). Also, the factor analysis of third-party rating scales infrequently identifies more than one general factor, most likely due to the third-party response methodology. As reported in Table 3.14, this general component accounts for the majority of the CPPS scale’s total variance, more than 85% at ages 5-6. All 11 subscales consistently load high (.70 or above) on the general processing ability factor. (See Appendix A for the loadings of individual items on the general component.) Due to their high loadings on this general dimension, it is difficult for any of the CPPS subscales to demonstrate high subscale specificity; that is, it is difficult for any of the subscales to emerge as primarily independent measures.
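The eigenvalue inspection described above can be sketched with a small example. The matrix below is a four-subscale submatrix (AT, AP, EF, FM) of the ages 5-6 intercorrelations reported in Table 3.13; a single dominant latent root signals a strong general dimension:

```python
import numpy as np

# 4 x 4 submatrix of the ages 5-6 intercorrelations in Table 3.13;
# variable order is AT, AP, EF, FM.
R = np.array([
    [1.00, 0.79, 0.95, 0.75],
    [0.79, 1.00, 0.83, 0.79],
    [0.95, 0.83, 1.00, 0.79],
    [0.75, 0.79, 0.79, 1.00],
])

# Eigenvalues (latent roots) of a correlation matrix sum to the number of
# variables; one root far larger than the rest favors a single-factor model.
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
print(eigenvalues.round(2))
```

The manual’s analyses were run in SYSTAT on the full 11-subscale matrices; this four-variable excerpt only illustrates the computation, but the same dominance pattern appears (one root near the number of variables, the rest near zero).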

Table 3.13 CPPS Subscale Intercorrelations

Ages 5-6 years (n = 258)
       AT    AP    EF    FM    FR    OL    LTR   PP    PS    VSP   WM    GPA
AT    1.00
AP     .79  1.00
EF     .95   .83  1.00
FM     .75   .79   .79  1.00
FR     .79   .90   .85   .77  1.00
OL     .68   .90   .76   .75   .86  1.00
LTR    .79   .93   .84   .79   .93   .92  1.00
PP     .67   .86   .73   .76   .86   .86   .88  1.00
PS     .82   .86   .86   .79   .89   .84   .90   .84  1.00
VSP    .73   .88   .78   .84   .91   .84   .90   .89   .86  1.00
WM     .87   .92   .91   .82   .93   .87   .94   .84   .92   .89  1.00
GPA    .86   .95   .91   .86   .95   .91   .97   .91   .94   .94   .97  1.00
Note: AT = Attention, AP = Auditory Processing, EF = Executive Functions, FM = Fine Motor, FR = Fluid Reasoning, OL = Oral Language, LTR = Long-Term Recall, PP = Phonological Processing, PS = Processing Speed, VSP = Visual-Spatial Processing, WM = Working Memory, GPA = General Processing Ability.


Ages 7-8 years (n = 384)
       AT    AP    EF    FM    FR    OL    LTR   PP    PS    VSP   WM    GPA
AT    1.00
AP     .77  1.00
EF     .93   .78  1.00
FM     .61   .67   .64  1.00
FR     .72   .83   .76   .57  1.00
OL     .66   .84   .70   .61   .83  1.00
LTR    .77   .91   .79   .62   .90   .86  1.00
PP     .59   .79   .63   .66   .78   .80   .81  1.00
PS     .72   .75   .78   .61   .80   .73   .82   .72  1.00
VSP    .65   .80   .67   .73   .79   .75   .80   .81   .73  1.00
WM     .85   .89   .88   .65   .90   .83   .93   .77   .85   .78  1.00
GPA    .85   .93   .88   .74   .92   .89   .95   .86   .87   .87   .96  1.00

Ages 9-10 years (n = 279)
       AT    AP    EF    FM    FR    OL    LTR   PP    PS    VSP   WM    GPA
AT    1.00
AP     .74  1.00
EF     .91   .72  1.00
FM     .48   .62   .52  1.00
FR     .65   .85   .71   .50  1.00
OL     .62   .88   .64   .62   .82  1.00
LTR    .72   .88   .75   .55   .90   .84  1.00
PP     .51   .80   .55   .60   .77   .81   .77  1.00
PS     .58   .73   .66   .53   .74   .73   .77   .65  1.00
VSP    .56   .79   .60   .75   .76   .77   .79   .81   .69  1.00
WM     .82   .88   .85   .59   .87   .81   .91   .73   .79   .76  1.00
GPA    .80   .94   .84   .69   .91   .90   .94   .84   .82   .86   .95  1.00

Ages 11-12 years (n = 200)
       AT    AP    EF    FM    FR    OL    LTR   PP    PS    VSP   WM    GPA
AT    1.00
AP     .74  1.00
EF     .94   .73  1.00
FM     .51   .65   .52  1.00
FR     .70   .82   .73   .51  1.00
OL     .69   .85   .71   .62   .84  1.00
LTR    .74   .88   .74   .60   .90   .87  1.00
PP     .55   .82   .57   .67   .77   .81   .80  1.00
PS     .70   .79   .72   .55   .84   .81   .83   .73  1.00
VSP    .55   .79   .55   .74   .74   .74   .77   .81   .73  1.00
WM     .85   .88   .85   .62   .88   .84   .92   .74   .84   .75  1.00
GPA    .83   .93   .84   .71   .91   .91   .94   .85   .89   .84   .96  1.00
Note: AT = Attention, AP = Auditory Processing, EF = Executive Functions, FM = Fine Motor, FR = Fluid Reasoning, OL = Oral Language, LTR = Long-Term Recall, PP = Phonological Processing, PS = Processing Speed, VSP = Visual-Spatial Processing, WM = Working Memory, GPA = General Processing Ability.


Table 3.14 Subscale Loadings (sorted by median loading) on the CPPS General Component

Subscale                        Ages 5-6   Ages 7-8   Ages 9-10   Ages 11-12   Median
                                (n = 258)  (n = 384)  (n = 279)   (n = 200)
Working Memory                    .97        .95        .95         .95         .95
Long-Term Recall                  .97        .94        .94         .94         .94
Auditory Processing               .95        .94        .94         .93         .94
Fluid Reasoning                   .95        .91        .91         .91         .91
Oral Language                     .91        .90        .90         .91         .91
Visual-Spatial Processing         .94        .87        .87         .85         .87
Processing Speed                  .94        .83        .83         .89         .86
Phonological Processing           .90        .84        .84         .86         .85
Executive Functions               .91        .83        .83         .84         .83
Attention                         .87        .80        .80         .83         .81
Fine Motor                        .87        .70        .70         .72         .71
Total % of Variance Explained   85.71      78.55      75.13       76.99
Note: Values are loadings on first principal component extracted via principal components analysis.
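Loadings such as those in Table 3.14 are the first eigenvector of the intercorrelation matrix scaled by the square root of its eigenvalue, and the percentage of variance explained is that eigenvalue divided by the number of subscales. A sketch using the same four-subscale submatrix of Table 3.13 (ages 5-6) as in the earlier eigenvalue example:

```python
import numpy as np

# Submatrix of the ages 5-6 intercorrelations in Table 3.13: AT, AP, EF, FM.
R = np.array([
    [1.00, 0.79, 0.95, 0.75],
    [0.79, 1.00, 0.83, 0.79],
    [0.95, 0.83, 1.00, 0.79],
    [0.75, 0.79, 0.79, 1.00],
])

evals, evecs = np.linalg.eigh(R)              # eigh returns ascending order
first_root, first_vector = evals[-1], evecs[:, -1]

# Component loadings = eigenvector scaled by the square root of its root.
loadings = first_vector * np.sqrt(first_root)
if loadings.sum() < 0:                        # eigenvector sign is arbitrary
    loadings = -loadings

pct_variance = 100.0 * first_root / R.shape[0]
print(loadings.round(2), round(pct_variance, 1))
```

On this excerpt, as on the full 11-subscale matrices, every variable loads strongly on the first component and that component accounts for the large majority of the total variance.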

Exploratory Factor Analysis

Maximum-likelihood exploratory factor analysis, followed by oblique rotation of retained factors, was applied to the CPPS subscale correlation/covariance matrices at each of the four norm age groups. Given the eigenvalues reported from the principal component analysis (PCA), it was decided that two- and three-factor solutions would be extracted. Although the PCA results suggested the extraction of two factors at most, extracting one additional factor often helps clarify the primary factors of interest. Moreover, the extraction of more than one factor can often identify clinically meaningful dimensions. Given the unknown internal structure of the collection of the 11 CPPS subscales, this deliberately lenient factor extraction strategy was applied to the data for each of the four age groups. The results are reported in Tables 3.15 through 3.18.

Table 3.15 Ages 5-6 Exploratory Factor Analysis Solutions

Maximum-likelihood 2-factor oblique (oblimin) solution
Subscale                         1       2
Phonological Processing         1.03*   -0.15
Oral Language                   1.02*   -0.12
Visual-Spatial Processing       0.93     0.00
Long-Term Recall                0.91     0.08
Auditory Processing             0.87     0.10
Fluid Reasoning                 0.84     0.14
Processing Speed                0.68     0.30
Working Memory                  0.68     0.35
Fine Motor                      0.58     0.31
Attention                       0.03     0.95
Executive Functions             0.20     0.82
% Common Variance Explained     73.5    26.5
Correlations Between Factors
1    1.00
2     .73   1.00

Maximum-likelihood 3-factor oblique (oblimin) solution
Subscale                         1       2       3
Oral Language                   1.04*   -0.07   -0.05
Long-Term Recall                0.87     0.12    0.03
Auditory Processing             0.79     0.14    0.06
Phonological Processing         0.64    -0.10    0.38
Fluid Reasoning                 0.58     0.18    0.25
Working Memory                  0.58     0.38    0.09
Processing Speed                0.53     0.33    0.14
Attention                       0.01     0.93    0.05
Executive Functions             0.16     0.81    0.05
Visual-Spatial Processing       0.09     0.00    0.92
Fine Motor                      0.09     0.32    0.58
% Common Variance Explained     49.8    27.6    22.7
Correlations Between Factors
1    1.00
2     .76   1.00
3     .89    .71   1.00
Note: n = 258; Bold font designates primary subscale factor loadings > .50; Italic font designates secondary subscale factor loadings > .30 & < .50. *Denotes Heywood cases.


Table 3.16 Ages 7-8 Exploratory Factor Analysis Solutions

Maximum-likelihood 2-factor oblique (oblimin) solution
Subscale                         1       2
Phonological Processing         0.99    -0.17
Oral Language                   0.93    -0.05
Long-Term Recall                0.91     0.07
Visual-Spatial Processing       0.87    -0.02
Fluid Reasoning                 0.86     0.06
Auditory Processing             0.82     0.14
Working Memory                  0.67     0.36
Processing Speed                0.63     0.27
Attention                       0.05     0.92
Executive Functions             0.10     0.90
Fine Motor                      0.47     0.26
% Common Variance Explained     72.0    28.0
Correlations Between Factors
1    1.00
2     .78   1.00

Maximum-likelihood 3-factor oblique (oblimin) solution
Subscale                         1       2       3
Working Memory                  1.03*   -0.02   -0.06
Long-Term Recall                0.91     0.18   -0.01
Fluid Reasoning                 0.88     0.18   -0.03
Processing Speed                0.79     0.00    0.08
Executive Functions             0.76    -0.16    0.06
Attention                       0.75    -0.13    0.00
Auditory Processing             0.65     0.11    0.19
Oral Language                   0.62     0.22    0.20
Fine Motor                     -0.03    -0.10    0.90
Visual-Spatial Processing       0.21     0.17    0.66
Phonological Processing         0.27     0.27    0.54
% Common Variance Explained     69.7     5.7    24.6
Correlations Between Factors
1    1.00
2     .35   1.00
3     .83    .26   1.00
Note: n = 384; Bold font designates primary subscale factor loadings > .50; Italic font designates secondary subscale factor loadings > .30 & < .50; *Denotes Heywood cases.

Table 3.17 Ages 9-10 Exploratory Factor Analysis Solutions

Maximum-likelihood 2-factor oblique (oblimin) solution
Subscale                         1       2
Phonological Processing         1.00*   -0.19
Oral Language                   0.94    -0.04
Visual-Spatial Processing       0.91    -0.06
Fluid Reasoning                 0.85     0.10
Auditory Processing             0.83     0.15
Long-Term Recall                0.81     0.18
Processing Speed                0.69     0.15
Working Memory                  0.62     0.42
Fine Motor                      0.61     0.06
Attention                       0.04     0.92
Executive Functions             0.09     0.90
% Common Variance Explained     73.0    27.0
Correlations Between Factors
1    1.00
2     .70   1.00

Maximum-likelihood 3-factor oblique (oblimin) solution
Subscale                         1       2       3
Fluid Reasoning                 1.01*    0.03   -0.17
Long-Term Recall                0.92     0.13   -0.10
Phonological Processing         0.87    -0.17    0.18
Oral Language                   0.87    -0.02    0.09
Auditory Processing             0.78     0.15    0.07
Processing Speed                0.69     0.14    0.07
Working Memory                  0.67     0.38   -0.02
Visual-Spatial Processing       0.65    -0.03    0.41
Attention                       0.04     0.90    0.06
Executive Functions             0.08     0.88    0.08
Fine Motor                      0.08     0.18    0.76
% Common Variance Explained     63.6    24.5    11.9
Correlations Between Factors
1    1.00
2     .69   1.00
3     .56    .29   1.00
Note: n = 279; Bold font designates primary subscale factor loadings > .50; Italic font designates secondary subscale factor loadings > .30 & < .50; *Denotes Heywood cases.


Table 3.18 Ages 11-12 Exploratory Factor Analysis Solutions

Maximum-likelihood 2-factor oblique (oblimin) solution
Subscale                         1       2
Phonological Processing         0.99    -0.17
Visual-Spatial Processing       0.95    -0.14
Long-Term Recall                0.86     0.13
Oral Language                   0.85     0.09
Fluid Reasoning                 0.82     0.14
Auditory Processing             0.81     0.15
Processing Speed                0.73     0.20
Fine Motor                      0.65     0.04
Working Memory                  0.62     0.42
Attention                       0.06     0.93
Executive Functions             0.10     0.90
% Common Variance Explained     72.4    27.6
Correlations Between Factors
1    1.00
2     .71   1.00

Maximum-likelihood 3-factor oblique (oblimin) solution
Subscale                         1       2       3
Fluid Reasoning                 1.01*    0.02   -0.17
Long-Term Recall                0.96     0.05   -0.06
Oral Language                   0.84     0.05    0.05
Phonological Processing         0.82    -0.15    0.27
Processing Speed                0.79     0.14   -0.03
Auditory Processing             0.73     0.15    0.14
Working Memory                  0.67     0.37    0.00
Visual-Spatial Processing       0.66    -0.08    0.41
Attention                       0.03     0.93    0.07
Executive Functions             0.10     0.87    0.04
Fine Motor                      0.09     0.22    0.73
% Common Variance Explained     63.4    24.6    12.0
Correlations Between Factors
1    1.00
2     .72   1.00
3     .56    .27   1.00
Note: n = 200; Bold font designates primary subscale factor loadings > .50; Italic font designates secondary subscale factor loadings > .30 & < .50; *Denotes Heywood cases.

Across all age groups, the first factor extracted in a two-factor solution was consistently defined by high factor loadings for all CPPS subscales except Attention and Executive Functions. The first factor is interpreted as representing General Processing Ability (GPA). The composition of the second factor in the two-factor solution was consistently defined (across all age groups) by high loadings of the Attention and Executive Functions subscales, with secondary moderate loadings for Working Memory, and occasional, salient secondary loadings for Processing Speed. Working Memory’s loading on this second factor was only half the size of its loading on the GPA factor.
The second factor, which is thought to represent Self-Regulatory Processes (SRP), is defined primarily by the Attention and Executive Functions subscales. SRP includes the self-monitoring and self-regulatory processes involved in cognition and learning. The binding of the Attention subscale with the Executive Functions scale was expected, given that self-regulation, a primary characteristic of executive functioning (McCloskey et al., 2009), underlies all of the observable behaviors expressed by the Attention items. The high correlations that both subscales have with the BRIEF® provide additional external validity support for the interpretation of this factor.
In all age groups except the 7-8 year group, a third minor factor was suggested that was consistently defined by the Fine Motor subscale. Given the previously described age-related truncation of the Fine Motor subscale, it is possible that this third factor may be due to the restricted range of Fine Motor subscale scores (i.e., a method “arti-factor”). Thus, it may not represent a valid interpretable processing dimension. However, across all three-factor solutions, the Visual-Spatial subscale often displayed a salient factor loading on this third factor. Although cross-validation in independent samples is necessary, these results, particularly when supplemented with cluster analysis of the same data, suggest that a Visual-Motor processing dimension might be present when the Fine Motor and Visual-Spatial subscale scores group together and are discrepant from the subscales comprising the General Processing Ability and Self-Regulatory factors.
When the three-factor solutions are considered, there is evidence for the three major factors of General Processing Ability (GPA), Self-Regulatory Processes (SRP), and Visual-Motor Processes (VMP). However, some developmental exceptions must be noted (see Tables 3.15 to 3.18):


1. An interpretable three-factor solution emerged in all age groups except 7-8 years, where only two-factor solutions were interpretable (either GPA and SRP or GPA and VMP). In the three-factor solution at ages 7-8 years, the second factor (SRP) was considered an uninterpretable factor.
2. Whenever the SRP factor is clearly present, the Working Memory subscale has a significant secondary SRP factor loading. Its most significant loading is at ages 9-10.
3. With a three-factor solution, VMP always appears, but the association of Fine Motor with Visual-Spatial is stronger for the two younger age groups than for the two older groups. This most likely reflects the previously mentioned age-related truncation of the Fine Motor subscale at the older age groups.

Cluster Analysis

The exploratory factor analyses were supplemented with cluster analysis of the 11 CPPS subscales in the same four age-differentiated samples. Cluster analysis refers to an array of exploratory structure data analysis algorithms that are frequently employed in classification/taxonomic investigations (Hale, 1981; Kettenring, 2006; Lorr, 1983). Cluster analysis is one of the leading multivariate analysis procedures employed across a diverse range of scientific disciplines, and its use continues to steadily grow (Kettenring, 2006). The object of cluster analysis is to sort cases (e.g., people, things, events, tests, etc.) into homogeneous groups, or clusters, so that the degree of association is strong between members of the same cluster and weak between members of different clusters. Each cluster thus describes the class to which its members belong. Cluster analysis often helps confirm exploratory factor analysis and is particularly useful in allowing investigators to spatially represent the degree of similarity of tests measuring a common dimension. Its hierarchical sequential structure is often useful in suggesting higher-order dimensions or factors that are hidden by the strong linear data reduction emphasis of factor analysis methods.
The strength of cluster analysis (i.e., discovering structure in data with more relaxed statistical assumptions and mathematics than data reduction methods such as exploratory factor analysis) is also one of its major limitations. Cluster analysis will find groups or clusters in random data (Kettenring, 2006). The algorithms are designed to find any structure, even if structure is not present. As a result, the later clusters in a hierarchical approach are often “necessary evils or byproducts,” as the procedure must end with one grand cluster. Thus, in cluster analysis a point is reached where the further collapsing of meaningful groupings ceases to make substantive sense. It is important to recognize this in the cluster dendrogram figures presented here. Given these caveats, the results of the four cluster analyses (Ward’s algorithm; SYSTAT, 2009) are presented as cluster trees in Figures 3.1 to 3.4.
A review of the four cluster trees reinforces the results of the exploratory factor analysis. In general, interpretable and meaningful clusters emerged between the distance values (x-axis) of 0.2 to 0.4 (the point where “tree cutting” made the most sense; see Kettenring, 2006), after which interpretation was difficult. In addition to identifying the two main components that emerged from the exploratory factor analysis (the GPA and SRP dimensions), the cluster analysis suggests the following potentially meaningful groupings of subscales, which warrant independent cross-validation in new samples:
1. The third component continues to be defined by the Fine Motor subscale. However, the Fine Motor and Visual-Spatial association (VMP) does not emerge as it did in the exploratory factor analysis. Fine Motor is fairly independent, except for its clustering with the general factor.
2. A fourth cluster or dimension comprised of the Auditory Processing and Oral Language subscales may be present in the two older age groups. General auditory processing includes more than the processing of language or verbal information. However, the CPPS Auditory Processing items emphasize the processing of verbal stimuli. Thus, the combination of the Auditory Processing and Oral Language subscales


might be interpreted as broad verbal processing. This verbal processing dimension may be broader and stronger in the two older age groups. In the two younger age groups, the Auditory Processing subscale clustered more closely with the memory subscales.
3. The Phonological Processing subscale, although logically another aspect of verbal processing, did not cluster with the Auditory Processing or Oral Language subscales. Rather, it consistently clustered with the Visual-Spatial subscale. This combination might represent cross-modal perceptual processing.
4. There is another cluster comprised of the Working Memory and Long-Term Recall subscales. The Working Memory and Long-Term Recall dyad might be interpreted clinically as a general memory dimension. The Fluid Reasoning subscale consistently clustered with this general memory grouping, but only after the Working Memory and Long-Term Recall subscales first formed a cluster. The triad cluster (the addition of Fluid Reasoning) is not readily interpretable and should only be used if clinicians have case-specific additional non-CPPS information from which to generate a viable working hypothesis for a shared underlying ability or processing dimension. (See Chapter 4 for more advice on interpretation.)
5. The only subscale which displayed no clear association with other subscales is Processing Speed. However, Processing Speed does not stand independently from the general factor.
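The hierarchical agglomeration described above can be sketched with standard tools by treating 1 − r as the dissimilarity between subscales. The values below are four of the ages 5-6 intercorrelations from Table 3.13 (AT, EF, WM, LTR); note that Ward’s method formally assumes Euclidean distances, so this correlation-based use is heuristic, as is common in test-battery cluster analysis:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Ages 5-6 intercorrelations from Table 3.13 for AT, EF, WM, LTR.
labels = ["AT", "EF", "WM", "LTR"]
R = np.array([
    [1.00, 0.95, 0.87, 0.79],
    [0.95, 1.00, 0.91, 0.84],
    [0.87, 0.91, 1.00, 0.94],
    [0.79, 0.84, 0.94, 1.00],
])

# Convert similarity (r) to dissimilarity and cluster with Ward's method.
D = 1.0 - R
np.fill_diagonal(D, 0.0)
Z = linkage(squareform(D, checks=False), method="ward")

groups = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(labels, groups)))  # AT/EF form one cluster, WM/LTR the other
```

On this excerpt the first merge joins Attention with Executive Functions (the SRP-like pair) and the second joins Working Memory with Long-Term Recall (the memory dyad), mirroring the groupings discussed above.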

Figure 3.1 Cluster Tree (Ward's method: Ages 5-6)

Figure 3.2 Cluster Tree (Ward's method: Ages 7-8)


Figure 3.3 Cluster Tree (Ward's method: Ages 9-10)

Figure 3.4 Cluster Tree (Ward's method: Ages 11-12)

Chapter 4 Use and Interpretation

The Children's Psychological Processes Scale (CPPS) is an internet, web-based teacher rating scale designed to be completed and scored online. Norm tables that provide standard scores (T-scores, percentiles, etc.) are not provided in this manual. Rather, the scoring tables are embedded in the online scoring and report-writing program. Qualified users who purchase CPPS rating forms will receive free online automated scoring and a computer-generated report for each completed rating form. Although a teacher (rater) may elect to complete a free printed copy of the rating form, his or her responses must be entered online in order to access the computerized scoring and report-writing program.

The CPPS should only be used to rate students from 5 years, 0 months to 12 years, 11 months, 30 days of age (the age range covered by the norms). Teachers who complete a rating form should have been familiar with the student's school performance for at least six weeks. The CPPS is primarily intended for the student's regular education classroom teachers. However, other educational professionals who teach one or more academic skills to the student are also appropriate raters. These other raters include, but are not limited to, special education or learning disability teachers, Title I teachers, reading specialists, and educational paraprofessionals. The CPPS has not been standardized for ratings by parents or the students themselves.

When a student has more than one academic skills instructor, it is advisable to obtain CPPS ratings from two or more of the instructors. Ratings from multiple sources are advantageous as they provide information on the student's psychological processes across different academic settings. Information from multiple raters can also increase the reliability and validity of the interpretations and conclusions for an individual student. In situations where the CPPS is used to measure a student's progress, the same teacher may rate the same student at different intervals. In situations where the CPPS is used to screen an entire class of students, a teacher may complete separate rating forms on all of his or her students.

Using the CPPS

A teacher who completes a CPPS rating form is referred to as the "rater." In the typical screening or psychoeducational assessment process, a rater will complete the form after receiving a request from a qualified professional user (see Chapter 1 for user qualifications) who has submitted professional credentials and purchased the CPPS online rating forms and reports. The qualified user is referred to as the "administrator."

Accessing the Online CPPS Teacher Rating Form

To gain access to a CPPS online teacher rating form, the rater should use the internet link provided by the administrator or go to www.psychprocesses.com and click on "CPPS Online Assessment Center." The rater should then select "Complete a CPPS Teacher Rating Form" and enter the user name and password received from his or her administrator. Administrators will be provided with a user name and password upon purchase of a set of CPPS rating forms. The administrator will then create a password that allows raters to access rating forms. Raters will not have access to the student records or the report-writing program. If the rater prefers to complete a paper copy of the CPPS rating form, a free copy of the rating form may be printed by clicking on the appropriate option on the entry webpage. If the teacher elects to print and complete a paper copy, the CPPS administrator will need to enter the teacher's ratings before they can be scored and an online report generated.



Completing the Online CPPS Teacher Rating Form

In order to complete the online rating form, the rater must enter his or her complete name, the student's first and last name, and the student's gender, grade, and date of birth. After entering the required identifying information, the rating scale items become available on the user's computer screen. For each of the 121 items, the rater is directed to select the response that "best describes how frequently that behavior occurs." The options are "Never," "Sometimes," "Often," and "Almost Always." The numerical values assigned to the responses during the automated scoring are 0 for "Never," 1 for "Sometimes," 2 for "Often," and 3 for "Almost Always." The rater must respond to all items for the rating form to be accepted by the online program. The rater is directed to "Give your best estimate if you don't know or are unsure." The typical first-time rater will complete the CPPS rating form in 12 to 15 minutes. Those familiar with the requirements, items, and ratings may complete the rating form in less time. Responses may be changed at any time during completion of the rating form. After all required information is entered, the completed rating form will be stored alphabetically by the student's last name in the administrator's online file of student records. The rater's name will also be displayed in the listing of the completed rating forms. The administrator will be able to open the completed rating form and view (but not change) all of the rater's responses.

Generating the CPPS Report

To generate and complete the CPPS score report, the administrator must enter the CPPS Online Assessment Center and enter his or her user name and password. The administrator should then select "Completed CPPS Rating Forms" and then select the student for whom a report is to be generated. When the administrator selects "Generate Report," the computerized scoring and report program will automatically compute the student's age and utilize the appropriate scoring tables for that age. The report will include a brief narrative, a graph of T-scores, a table of scores, change-sensitive W-scores, a discrepancy table for determining the pattern of intra-individual strengths and weaknesses, and an optional T-score to standard score conversion table (see an example in Appendix D). The administrator must select which standard error of measurement (SEM) confidence interval to display: 68%, 90%, or 95%. The administrator also can elect to display the rater's response to each item, in which case the items are grouped by subscale with items arranged in the same sequence found in Appendix A. This sequence of items reflects the W-scale developmental progression of items from least frequently occurring difficulties to most frequently occurring difficulties (see the Developmental Evidence section in Chapter 3 for details). After the report is generated, it is automatically stored alphabetically by the student's last name in a file labeled "Student Reports." These reports can be downloaded and printed at any time. Unless the administrator elects to delete the reports, they will remain in online storage.

Schoolhouse Educational Services LLC, the publisher of the CPPS, is committed to protecting the confidentiality, privacy, and security of the student information transmitted to or stored on its website. The publisher cannot access, manage, update, or delete the information stored and controlled by the administrator. However, the publisher, when granted permission by the administrator, may receive, store, and utilize de-personalized (all identifying information removed) and aggregated data for research and statistical purposes. (For details, see the Terms and Conditions of Use found in the CPPS Online Assessment Center at www.psychprocesses.com.)

Interpretation of CPPS Results

T-Scores and Percentiles

The CPPS scoring tables transform raw scores into equal-interval, Rasch-based W-scores, age-based T-scores, and percentile ranks for each of the 11 subscales and General Processing Ability (GPA). The T-scores for the subscales were created through a linear transformation of


the equal-interval, change-sensitive W-scores, which were derived from the raw scores (see Chapter 2). The GPA T-score was derived from the arithmetic average of the W-scores from the 11 separate process subscales. The T-score scale has a mean of 50 and a standard deviation of 10. Because the CPPS items are worded and scored from a problem (negative) perspective, T-scores above the mean indicate processing problems or poorly developed processes, whereas T-scores below the mean indicate the absence of processing difficulties or well-developed processes (see Table 4.1).

For any T-score, the percentile rank indicates the percentage of subjects in the standardization sample who obtained a lower T-score. That is, the percentile rank indicates how the individual's T-score ranks relative to same-age peers in the standardization sample. Because the CPPS percentiles are based on a linear transformation, the percentile reported for any given T-score may not be identical to that found in a normalized T-scale distribution. Also, more than one percentile rank may be associated with any given CPPS T-score (see Chapter 2). On the CPPS, higher percentiles (the 84th percentile and above) are indicative of psychological processing difficulties, whereas lower percentiles (below the 84th percentile) indicate the absence of significant psychological processing difficulties and normal development of psychological processes.
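Because CPPS percentile ranks are empirical rather than derived from a normal curve, they amount to a simple count against the standardization sample. A minimal sketch (the function name and the toy T-score sample are ours, not CPPS data):

```python
# CPPS percentile ranks are empirical: the percentage of the
# standardization sample scoring LOWER than a given T-score. The function
# name and the toy T-score sample below are ours, for illustration only.
def percentile_rank(score, sample):
    below = sum(1 for s in sample if s < score)
    return round(100 * below / len(sample))

norm_sample = [38, 42, 45, 47, 50, 50, 53, 55, 60, 68]  # hypothetical
print(percentile_rank(55, norm_sample))  # 70
```

Note that ties in the sample are why more than one percentile rank can map onto the same T-score, as described above.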

Table 4.1 Interpretative Guidelines for CPPS T-Scores

T-Score Range    Classification        Processing Description
70 and above     Well above average    Severe processing difficulty
65 – 69          Above average         Moderate processing difficulty
60 – 64          Above average         Mild processing difficulty
40 – 59          Average               No significant processing difficulty
39 and below     Below average         No processing difficulty
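Table 4.1's guidelines are straightforward to apply programmatically. The following helper mirrors the table's ranges and labels (the function itself is ours, not part of the CPPS scoring program):

```python
# A helper mirroring Table 4.1 (ranges and labels taken from the table;
# the function name is ours, not part of the CPPS scoring program).
def classify_t_score(t):
    if t >= 70:
        return ("Well above average", "Severe processing difficulty")
    if t >= 65:
        return ("Above average", "Moderate processing difficulty")
    if t >= 60:
        return ("Above average", "Mild processing difficulty")
    if t >= 40:
        return ("Average", "No significant processing difficulty")
    return ("Below average", "No processing difficulty")

print(classify_t_score(72))  # ('Well above average', 'Severe processing difficulty')
```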

Confidence Intervals

The CPPS report will also display one of three standard error of measurement (SEM) confidence intervals (68%, 90%, or 95%) around all reported T-scores, depending on which level of measurement confidence the administrator selects. The confidence interval, also referred to as a confidence band, is a range of scores around the obtained T-score that likely includes the individual's true score. For example, the 95% confidence interval is the range of scores within which there is a 95% probability the individual's true score will lie. Stated differently, for a 95% confidence interval there is a 95% chance the individual will obtain a score within this range if rated again. The standard error of measurement (SEM) is used to calculate the confidence interval for each CPPS subscale and the GPA. For example, if the SEM for a subscale is 3, then the 68% confidence interval for a T-score of 50 on that subscale is 47 – 53 (the obtained score minus and plus 3), and the 95% confidence interval is 44 – 56 (the obtained score minus and plus 6). The CPPS confidence intervals are based on the obtained scores, without regression towards the mean taken into account.

In addition to identifying the range of scores within which the individual's true score will lie, confidence intervals can be used when comparing different subscale scores from the same rater or the same subscale score across multiple raters. Although complex statistical calculations and p-value-based decisions could be provided, such precision would require considerable calculations and tables and lends an "aura" of numerical precision that is not consistent with the technical sophistication of third-party rating scale methodology. Instead, commonly used and accepted professional guidelines are offered for users to evaluate the reliability and importance of subscale score differences (see McGrew et al., 1991 and


McGrew and Woodcock, 2001 for discussion).¹ The presence of a reliable difference (not due to chance) between any two CPPS subscale scores from within a single rating session, or the same subscale scores from different raters, can be determined by comparing the SEM confidence bands. If the SEM confidence bands of T-scores from two respective subscale scores touch or overlap when plotted, then the administrator can assume there is no statistically significant difference. For example, if the SEM band for a T-score of 44 is 41 – 47 and the SEM band for a T-score of 49 is 46 – 52, there is overlap from 46 to 47. In this instance, the administrator can conclude that the 5-point difference between the two T-scores is not significant. When the two respective SEM bands do not overlap, one can assume that the difference between the two scores is a reliable difference (not due to chance).

When using the lowest confidence band (68%), reliable subscale differences will be identified more frequently. In these situations, a difference should only be interpreted as meaningful when additional information verifies the probability that the difference is not due to chance factors. This can be ascertained via clinical interviewing of the rater(s) to determine the reasons for the rating decisions (e.g., in the case of multiple raters, different standards or different academic contexts). If valid reasons cannot be found to explain the subscale rating differences from within a session or across raters, it is recommended that the conservative position of "no real differences exist" be entertained. That is, the difference is most likely due to measurement error if it cannot be verified via other information. If the bands do not overlap when the more conservative SEM confidence bands (90% and 95%) are used to compare two different subscale scores from the same rater or the same subscale across different raters, one can assume that there is a strong probability of a real difference in perceived processing difficulty between the raters.

¹ Typically, these guidelines are used to compare scores from a single testing session or a single rater. The guidelines have been generalized here to compare scores across raters for the sake of simplicity. The additional degree of numerical calculations and statistical sophistication necessary to make precise subscale comparisons across raters was deemed unwarranted. The confidence band guidelines are offered to provide a structured procedural framework for evaluating subscale score differences across raters. The guidelines need to be augmented by clinical judgment.

W-Scores

W-scores are equal-interval, Rasch (IRT)-based ability growth scores that are centered on a mean of 500 to represent the average performance of 10-year-olds (see Chapter 2). On the CPPS subscales, the W-scale standard deviations range from 23.1 to 32.3 W-points for 5 – 6 year-olds and 15.9 to 27.8 for 11 – 12 year-olds (see Table 2.4). The GPA W-scale distribution has an SD of 24.7 for 5 – 6 year-olds and 18.8 for 11 – 12 year-olds (see Table 2.4). The purpose of the reported W-scores is not to determine an individual's ranking or strengths and weaknesses, but rather to provide a more sensitive score for assessing change over time. W-scores are more sensitive than T-scores because there are 3 – 4 W-scores for every 1 – 2 T-scores. Thus, the W-scores provided in the CPPS report should be examined when evaluating the degree of change over time. For example, when a student has received an intervention after the initial completion of the CPPS, one or more follow-up CPPS scales might be completed to assess progress. In such instances, a W-score change of one standard deviation (approximately 20 W-score points) can be interpreted as evidence of significant change (see Table 2.4 for standard deviations by subscale and age group). Alternatively, the CPPS might be used to track the development of psychological processes in the absence of an intervention. For example, a student might be evaluated with the CPPS at age 6, and then re-evaluated at a later time. In such cases, W-score changes of 20 or more points would be indicative of significant improvement or progress in the functioning of a psychological process.

The above guideline reflects, in general, a change in ratings of one standard deviation (approximately 20 W-points). Administrators may decide to use a more lenient rule of thumb


to determine change in rated processing, such as ½ SD of change (10 W-score points). The defensibility of the magnitude of W-score change required to be considered significant is the responsibility of the user/administrator. The CPPS author recommends the more conservative 1.0 SD rule of thumb (20 W-score points) as the indication of a significant change. A 10-point (approximately ½ SD) W-score change is suggested to be indicative of a possible significant change, and needs to be verified by other, non-CPPS sources of information.

Discrepancy Scores

The CPPS report includes an "Intra-Individual Strengths and Weaknesses Table." The purpose of this table is to evaluate the significance of within-person subscale score differences. That is, the table allows the determination of significant "within child" strengths and weaknesses. The CPPS Intra-Individual Strengths and Weaknesses information is consistent with "the pattern of strengths and weaknesses" that many states require examiners to consider during learning disability evaluations. The CPPS Intra-Individual Strengths and Weaknesses discrepancies were modeled on the same methodological procedures used in the development and implementation of the intra-individual comparisons in the WJ III battery (McGrew & Woodcock, 2001).

The meaning and computation of data in each column of the Intra-Individual Strengths and Weaknesses Table is as follows (for an example, see Table 4.2):

1. The "Obtained" column is the student's obtained T-score.

2. The "Predicted" column is the T-score that is predicted or expected for the student. The predicted score for each subscale is based on the average of the T-scores from the "other" 10 subscales. For example, in Table 4.2, the obtained score of 72 was not used in the computation of the predicted score for Attention. Thus, each predicted subscale score will be unique. The predicted score is calculated by first computing the arithmetic average of the "other" 10 subscales. Then, this average "other" score is used to statistically estimate (accounting for regression towards the mean) the predicted or expected subscale score. The final predicted or expected subscale score is then rounded to the nearest whole number.

3. The "Difference" column is the difference in T-score points between the predicted and obtained scores. It is calculated by subtracting the predicted (regression-to-the-mean adjusted) score from the obtained score. For example, in Table 4.2, the student's obtained score of 72 on the Attention subscale is 16 points higher than predicted.

4. The values in the "Discrepancy SD (SEE)" column are based on the distribution of the discrepancies (or difference scores) for each subscale. The values in this column are the number of standard deviations the difference scores are from the mean discrepancy score, which is always 0. This SD is actually the Standard Error of the Estimate (SEE). (The SEE is conceptually similar to the SEM, but conveys information about the distribution of difference or discrepancy scores.) For example, in Table 4.2, the SD for Attention is +2.79. This means that the +16 point difference is 2.79 standard deviations of the estimate above the predicted mean difference of 0. This value was determined by dividing the 16-point discrepancy by the SEE value (not shown in the report) and then rounding to the nearest hundredth.

5. The "Strength/Weakness" column reports whether the discrepancy and its corresponding SD meet a selected SD criterion. The administrator may select from the following SD levels: 1.00, 1.25, 1.50, 1.75, and 2.00 (see the z-score table in Appendix C for corresponding probabilities). A discrepancy may be considered significant at an SD as low as 1.00 because ±1 standard deviation is usually statistically significant. However, administrators might prefer a more stringent criterion and elect a higher SD value. In Table 4.2, the administrator has selected a criterion of 1.50 SD. This means the discrepancy must be equal to or greater than 1.50 SDs in either a negative or positive direction. For example, in Table 4.2, the


discrepancy SD for Attention is +2.79. This will be identified as a significant discrepancy in the final column since it exceeds 1.50 SD. Because the CPPS is reverse-scored from an ability perspective, with high scores indicative of processing difficulties (or weaknesses), the labels in the final column may at first appear counterintuitive, since they are the reverse of what one might expect. For example, in Table 4.2, +2.79 for Attention is labeled as a "Weakness," not a "Strength," because high CPPS scores and significantly higher discrepancies are indicative of processing weaknesses.

Table 4.2 Example of Intra-Individual Strengths and Weaknesses Table

Process                   Obtained   Predicted   Difference   Discrepancy   Strength/Weakness
                          T-Score    T-Score     (T-Score)    SD (SEE)      at + or – 1.50 SD (SEE)
Attention                 72         56          +16          +2.79         Weakness
Auditory Processing       52         58          -6           -1.45         --
Executive Functions       74         56          +18          +3.41         Weakness
Fine Motor                66         56          +10          +1.42         --
Fluid Reasoning           53         58          -5           -1.10         --
Long-Term Recall          51         58          -7           -2.05         Strength
Oral Language             48         59          -11          -2.16         Strength
Phonological Processing   48         58          -10          -1.79         Strength
Processing Speed          61         57          +4           +0.74         --
Visual-Spatial Process.   57         58          -1           -0.19         --
Working Memory            61         58          +3           +0.96         --
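The Difference and Discrepancy SD columns of Table 4.2, together with the base-rate lookup described in Step 5 later in this chapter, reduce to simple arithmetic. In this sketch the SEE of 5.73 for Attention is a hypothetical value back-calculated so the example reproduces the table's +2.79; the actual SEE values come from the standardization data and are not shown in the CPPS report.

```python
import math

def discrepancy_sd(obtained, predicted, see):
    # "Difference" column: obtained minus predicted (regression-adjusted)
    # T-score; "Discrepancy SD" column: that difference in SEE units,
    # rounded to the nearest hundredth.
    return round((obtained - predicted) / see, 2)

def base_rate(z):
    # Proportion of the norm sample expected to fall below a given z,
    # from the standard normal CDF (the programmatic equivalent of the
    # Appendix C z-score table lookup described in Step 5).
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Table 4.2's Attention row: obtained 72, predicted 56. The SEE of 5.73 is
# hypothetical, chosen so the sketch reproduces the table's +2.79.
print(discrepancy_sd(72, 56, 5.73))   # 2.79
print(round(base_rate(-1.45), 2))     # 0.07, the 7% base rate from Step 5
```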

Interpretative Steps

Step 1: Interpret General Processing Ability

All of the CPPS subscales have high factor loadings (.70 or above; see Table 3.14) on a general factor referred to as General Processing Ability (GPA). This general factor, which is similar to the "g" factor that emerges in the factor analysis of any intellectual and cognitive battery, occurs because of the strong interrelationships among the CPPS processes. GPA is thought to represent overall psychological processing ability and to reflect an underlying efficiency or automaticity of processing dimension. The CPPS GPA score provided is the arithmetic average of the W-scores of the 11 subscales.

CPPS users should examine the GPA T-score and percentile rank to determine the individual's level of general processing ability. A T-score of 60 or greater suggests the individual has significant difficulty in general processing ability as perceived by the rater. However, such a score does not mean that the individual has a significant weakness in all or the majority of processes measured by the CPPS. Practitioners should review the subscale T-scores and determine whether the majority of subscale scores are 60 or above. The greater the number of subscales with T-scores of 60 or above, the greater the risk of significant academic learning problems or a learning disability, and the increased probability that more than one academic domain will be affected. In some instances, an extremely high score on one or two subscales may produce a high GPA score while the majority of the remaining subscale scores are in the average or below-average range. In such instances, the GPA score may not be a valid representation of overall psychological processing ability, and should not be emphasized or should be cautiously interpreted. Conversely, a GPA T-score of less than 60 does not mean that the student does not have a specific processing


weakness or deficit that is impairing some aspect of academic learning. For example, some students may have only a phonological processing deficit. Yet, this isolated processing deficit can have a profound impact on the development of their basic reading skills.

Step 2: Evaluate and Interpret Clinical Groupings of Subscales

In addition to the GPA factor, two secondary-level factors were identified during factor analysis of the CPPS (see Chapter 3). The first of these is the Self-Regulatory Processes (SRP) factor, which is defined primarily by the Attention and Executive Functions subscales, with the Working Memory subscale sometimes contributing. SRP is thought to represent the self-monitoring and self-regulatory processes involved in cognition and learning. To informally assess the level of a student's SRP, the examiner should first evaluate the consistency of the student's Attention and Executive Functions T-scores. If the SEM confidence interval bands from these two subscales overlap, the scores can be considered unitary and interpretable as representing SRP. When these two subscale scores are unitary and are both 60 or above, there is an increased probability that a significant difficulty with self-regulatory processes may exist. In such cases, the student may be demonstrating processing problems consistent with executive dysfunctions, ADHD (see the section on the BRIEF validity study in Chapter 3), LD, or significant academic learning and performance problems.

The second mid-level clinical factor to consider is the association of the Fine Motor and Visual-Spatial processing subscales. When the SEM confidence interval bands from these two subscales overlap, they may represent Visual-Motor Processes (VMP). Interpretation of VMP is most relevant with younger elementary-aged children. Furthermore, the Fine Motor subscale score should always be examined alone, as the Fine Motor subscale has the lowest relation with other subscales (see Chapter 3).

Finally, cluster analysis of the CPPS subscales revealed other potential groupings of subscales (see Chapter 3) that might be used to generate clinical hypotheses about the functioning of the student's psychological processes. The following clinical pairings can be considered:

1. The Auditory Processing and Oral Language subscales might represent broad language processing, especially in the two older age groups.

2. The Phonological Processing and Visual-Spatial subscales might represent cross-modal perceptual processing.

3. The Working Memory and Long-Term Recall subscales might be interpreted as a general memory factor.

Step 3: Interpret Individual Subscales

Although the individual CPPS subscales are highly interrelated and consequently all load high on the GPA factor, each subscale may be examined and interpreted separately. A subscale with a T-score of 65 or higher (see Table 4.1) suggests that the student may be experiencing moderate to severe difficulty (relative to students with scores in the average range) with the psychological process measured by that subscale. A subscale score of 60 – 64 suggests mild difficulty. When scores are in the mild difficulty range, corroborating information from other sources and assessment methods is important. When reporting a subscale T-score, the practitioner also should report the associated percentile rank and confidence band, along with the classification and processing description (see Table 4.1). When discussing a subscale, it is advisable to define the psychological process (see Table 4.3) and to identify the specific academic skills with which it is highly related (see Table 1.2). It is also important to emphasize that the results portray the perceptions of a student's psychological processes by third-party informants (teachers), and not direct test performance of processing abilities as occurs in individual standardized cognitive or processing testing.
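The SEM-band comparisons used in Steps 2 and 3 (and in the Confidence Intervals section earlier in this chapter) can be sketched as a simple interval-overlap check. The helper names are ours, not part of the CPPS program, and the band half-widths use the conventional z multipliers for SEM-based intervals:

```python
# A sketch of the SEM confidence-band overlap rule (our helper names, not
# the CPPS program). Conventional z multipliers for SEM-based bands:
# 68% band = 1 SEM, 90% = 1.645 SEM, 95% = 1.96 SEM.
Z = {68: 1.0, 90: 1.645, 95: 1.96}

def band(t_score, sem, level=68):
    half = Z[level] * sem
    return (t_score - half, t_score + half)

def bands_overlap(t1, t2, sem, level=68):
    lo1, hi1 = band(t1, sem, level)
    lo2, hi2 = band(t2, sem, level)
    return hi1 >= lo2 and hi2 >= lo1

# The chapter's example: with SEM = 3, bands 41-47 and 46-52 overlap, so
# the 5-point gap between T-scores of 44 and 49 is not a reliable difference.
print(bands_overlap(44, 49, sem=3))  # True
```

Applied to Step 2, Attention and Executive Functions scores whose bands overlap would be treated as a unitary SRP estimate.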


Table 4.3 Definitions of Processes Measured by the CPPS

Attention: The self-inhibitory abilities that allow one to control, sustain, divide, and focus attention.
Auditory Processing: The ability to perceive, analyze, synthesize, and discriminate auditory stimuli.
Executive Functions: An array of mental processes responsible for regulating cognitive functions.
Fine Motor: The coordination of small muscle movements that occur in the fingers.
Fluid Reasoning: The ability to reason deductively, inductively, and quantitatively, especially when solving novel problems.
Long-Term Recall: Delayed recall of new learning and the long-term memory processes of encoding, consolidation, storage, and fluent retrieval.
Oral Language: The expressive language development dimensions of vocabulary, grammar, and functional communication.
Phonological Processing: The manipulation of phonemes, the smallest units of speech that are used to form syllables and words.
Processing Speed: How quickly information is processed and how efficiently simple cognitive tasks are executed over a sustained period of time.
Visual-Spatial Processing: The ability to perceive, analyze, synthesize, manipulate, and transform visual patterns and images, including those generated internally.
Working Memory: The limited capacity to retain information while simultaneously processing the same or other information for a short period of time.
General Processing Ability: General, or overall, psychological processing ability and the underlying efficiency or automaticity of processing.

Step 4: Interpret Intra-Individual Strengths and Weaknesses

The Strength/Weakness column of the Intra-Individual Strengths and Weaknesses table in the CPPS report flags significant within-child strengths and weaknesses. (See the Discrepancy Scores subsection earlier in this chapter for more interpretative advice.) When reporting a processing strength or weakness, it is important to specify that the strength or weakness is relative to the average of the student's scores on the other 10 subscales, as opposed to a strength or a weakness relative to the normative mean T-score of 50.

A profile that includes some significant intra-individual weaknesses is not unusual and should not, by itself, be taken as evidence of a learning impairment. Also, a process identified as a possible intra-individual weakness may not be impairing academic learning if the score for that process is within the average range. For example, a subscale score of 58 might be flagged as a significant discrepancy when the predicted score is 48. However, a score of 58 is within the average range and indicates that there is no significant processing difficulty. Therefore, when considering the potential impact of an intra-individual processing weakness, the range the score falls within needs to be considered. When the subscale score is above average (in the processing difficulty range) and identified as an individual weakness, the psychological processing difficulty measured by that subscale is more likely to be suggestive of impaired academic learning.


Step 5: Determine Base Rate

The SD values in the Intra-Individual Strengths and Weaknesses Table can be used to determine the base rate of occurrence, which is the frequency with which a particular Intra-Individual Strength or Weakness discrepancy occurred in the standardization sample. For example, if the base rate is 2%, this means that less than 2% of the standardization sample had a discrepancy of similar or greater magnitude. The base rates are not provided in the CPPS report. Practitioners who wish to determine a base rate should use the z-score table in Appendix C in this manual (z-scores are standard deviation units). For example, the following steps can be used to determine the base rate for the Discrepancy SD of -1.45 for the Auditory Processing subscale in Table 4.2:
1. In Appendix C, use the table with the negative z-score values.
2. In the first column, find -1.4.
3. In the -1.4 row, select the 0.05 column.
4. The value for -1.45 is .0735.
5. Round the number to the nearest hundredth. In this case, the rounded value would be .07 or 7%.
This percentage indicates the percent of the standardization sample that had a discrepancy of < -6 points between the obtained and predicted score. Another example is the +2.79 Discrepancy SD for the Attention subscale in Table 4.2. From the z-score table the rounded probability value is .99 (do not round these values up to 1.00). This means that the +16 point discrepancy between this individual’s Obtained score of 72 and Predicted score of 56 is extremely rare, as 99% of the standardization sample had a smaller (or negative) discrepancy. Or, from the base rate perspective, the probability of this +16 (or greater) discrepancy occurring was less than 1% in the norm sample.

Step 6: Interpret Responses to Individual Items

For each process subscale with a T-score greater than 60, or for each process that has been identified as a significant intra-individual weakness, the practitioner should review the rater’s responses to individual items. Clinical analysis of the items within subscales can help determine which specific items contributed most to the subscale score. Item responses of “Often” (2 points) or “Almost Always” (3 points) should be the items of primary interest. When generating a CPPS report, the user should select the option of displaying the rater’s response to each item. When the items are displayed, they will be grouped by subscale, with items arranged in the same developmental sequence found in Appendix A (that is, from least occurring to most frequently occurring difficulties).

The decision to provide item responses to parents, students, and teachers should be based on the usefulness of this information and the evaluation team participants’ abilities to understand psychological measurement principles. For example, when provided with the rater’s responses to individual items, some parents might dispute a specific rating and thus deny the existence of a processing deficit, not realizing that several items must be scored as problematic in order for a subscale score to reach the above average range. Also, it is not considered good practice to report responses to individual items, because they lack reliability and validity until they are grouped with similar items. Nonetheless, providing examples (e.g., not the exact CPPS items but conceptually similar statements) of high scoring items might, under unique circumstances, help participants better understand the constructs.

Also, the sharing of specific items should only be done judiciously and with extreme caution due to test security concerns. (Copies of individual items should not be given to others, as this would be a violation of Standard 12.11 in the APA/AERA/NCME Joint Test Standards.)

Step 7: Identify Psychological Processes for Intervention

In conjunction with Step 6, reviewing responses to individual items might also help identify undeveloped processing skills that might be targeted for intervention or accommodations. This step is particularly useful when the CPPS is used as a screener with an entire class or cohort. Similar to Step 6, the focus should be on subscales that have a T-score

greater than 60 or have been identified as significant intra-individual weaknesses. The arrangement of subscale items in developmental sequence, from least occurring to most frequently occurring difficulties, should allow users to fine-tune intervention strategies. The Phonological Processing items in Appendix A are discussed here to illustrate this approach. If the first five items are scored “0” or “1,” these aspects of phonological processing can be assumed to be fairly well developed. These mastered skills would likely not be the targets of intervention. However, if the items beginning with “Has difficulty blending sounds or syllables into words” are mostly scored “2” or “3,” then these items represent specific skill domains that might benefit from intervention.

It is important to note that the items included within each subscale are not intended to represent a set of criterion-referenced items that comprehensively represent a domain of interest. The individual item analysis described above only suggests approximate starting points for intervention within a much larger developmentally sequenced domain of specific skills. In situations where the student has multiple subscales with scores above 60, the processes with subscale scores in the severe processing difficulty range should be given priority for intervention planning.

Step 8: Identify Cognitive Processes for Additional Assessment

When the CPPS is used as a screener or is used during the initial stages of an evaluation, it can be a useful tool for constructing a selective, referral-focused assessment battery of cognitive tests. Processes with CPPS subscale T-scores in the mild to severe range of difficulty should be targeted for direct ability testing, especially when these scores are supported by other formal and informal assessment data. CPPS processes with subscale scores indicating no processing difficulties might also be included in direct testing. In situations where a CPPS assessment and direct testing are conducted concurrently, the practitioner should examine CPPS scores and corresponding cognitive ability scores for consistency. When scores that purport to measure the same type of cognitive process are inconsistent, hypotheses might be generated to account for the inconsistencies, or additional assessment of the domain in question might be conducted.

Step 9: Compare Results from Multiple Raters

When CPPS rating forms have been completed by multiple raters, a separate report will need to be generated for each completed rating form. The practitioner should then compare results across raters. Even when there is high inter-rater consistency, the CPPS scores will vary across raters. The first question to answer is: “Does the student have a similar profile of scores with each rater even though the scores differ in magnitude?” The easiest way to answer this question is to examine the graphs of T-scores in the CPPS reports. In cases where the profiles of scores are very similar, the raters are generally in agreement about the student’s relative pattern of processing difficulties and strengths; they just differ in the absolute magnitude of perceived processing difficulties. For example, if subscale scores from the first rater are 70 for Attention, 50 for Auditory Processing, and 65 for Executive Functions, and subscale scores from the second rater are 65 for Attention, 47 for Auditory Processing, and 59 for Executive Functions, then both raters are reporting similar profiles of strengths and weaknesses, but are rating the problems at different levels of severity, perhaps because the second rater is a more lenient rater of problems than the first.

The second question to consider is: “Are the student’s CPPS subscale T-scores from different raters reliably different from each other?” To determine this, follow the guidelines provided in the Confidence Intervals section earlier in this chapter. When the T-scores from different raters on the same subscale are determined to reflect statistically confirmed differences, the differences should not be attributed to error. In such instances, examine the responses to individual items and consider why the raters’ scores are discrepant. A typical explanation for score discrepancies is that the raters observe the student in different


environments where the processing demands are different. For example, a reading class does not require the same set of processes as a mathematics class (see Table 1.2). Thus, the demands placed on various processes will differ. When there are three raters, the practitioner might place more weight on the rater’s subscale scores that tend to be in the middle (similar to reporting a median score). Also, different raters may have different internal standards regarding what they perceive as problematic.

An example of results from multiple raters is provided in Table 4.4. The data from this case study is an appropriate example of multiple ratings because the correlation between the two sets of subscale ratings is .77, which is very similar to the median inter-rater reliability coefficient of .76 reported in the inter-rater reliability study described in Chapter 3. An informal comparison of scores from the two raters reveals a similar profile for nine of the eleven subscales, with Rater 2 tending to rate the student similarly or lower (more leniently) on all of the scales except Auditory Processing and Fine Motor. Oral Language is the only subscale where there is a significant difference between raters. Thus, the practitioner might question the raters to ascertain why or how they are viewing the student’s oral language difficulties differently. An alternative is to examine both raters’ responses to the Oral Language items to determine which items they disagree on most. From the data in Table 4.4 it appears that the student does not have any processing difficulties, because even in the higher set of ratings there are no scores of 60 or above. In this case, obtaining data from multiple raters was beneficial, as there was “agreement” between raters that the student had no significant psychological processing difficulties.

Table 4.4 Example of T-Scores from Multiple Raters

Processing Scale             Rater 1 T-Score   Rater 1 Band (95%)   Rater 2 T-Score   Rater 2 Band (95%)
Attention                    55                49-61                53                47-59
Auditory Processing          38                32-44                48                42-54
Executive Functions          57                53-61                55                51-59
Fine Motor                   50                44-56                54                48-60
Fluid Reasoning              46                42-50                46                42-50
Long-Term Recall             38                32-44                38                32-44
Oral Language                51                47-55                42                38-46
Phonological Processing      50                46-54                45                41-49
Processing Speed             51                45-57                43                37-49
Visual-Spatial Processing    59                53-65                53                47-59
Working Memory               45                39-51                44                38-50
General Processing Ability   48                46-50                47                45-59
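Step 9’s second question, whether two raters’ T-scores are reliably different, reduces to checking whether the raters’ confidence bands overlap. The sketch below is illustrative Python, not part of the CPPS software; the bands are transcribed from Table 4.4, and the simple no-overlap rule stands in for the fuller guidelines in the Confidence Intervals section.

```python
# Illustrative sketch (not the CPPS software): flag subscales where the two
# raters' 95% confidence bands from Table 4.4 fail to overlap, suggesting a
# difference that should not be attributed to measurement error.

# (Rater 1 band, Rater 2 band), transcribed from Table 4.4
BANDS = {
    "Attention": ((49, 61), (47, 59)),
    "Auditory Processing": ((32, 44), (42, 54)),
    "Executive Functions": ((53, 61), (51, 59)),
    "Fine Motor": ((44, 56), (48, 60)),
    "Fluid Reasoning": ((42, 50), (42, 50)),
    "Long-Term Recall": ((32, 44), (32, 44)),
    "Oral Language": ((47, 55), (38, 46)),
    "Phonological Processing": ((46, 54), (41, 49)),
    "Processing Speed": ((45, 57), (37, 49)),
    "Visual-Spatial Processing": ((53, 65), (47, 59)),
    "Working Memory": ((39, 51), (38, 50)),
}

def bands_overlap(a, b):
    """True when two (low, high) confidence bands share at least one value."""
    return a[0] <= b[1] and b[0] <= a[1]

def reliably_different(bands):
    """Names of subscales whose raters' bands do not overlap."""
    return [name for name, (r1, r2) in bands.items()
            if not bands_overlap(r1, r2)]

print(reliably_different(BANDS))  # only Oral Language fails to overlap
```

Consistent with the discussion of Table 4.4 above, only the Oral Language bands (47-55 versus 38-46) fail to overlap.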

Step 10: Consider Relations with Achievement Scores

Given that the development of specific academic skills is dependent on the adequate functioning of psychological processes (see Chapter 1 for details and citations), CPPS process scores and associated achievement scores should be examined for consistency. Evidence-based connections between specific psychological processes and specific achievement areas can be found in Table 1.2. The probability of a processing impairment increases when CPPS process scores and related areas of achievement are similarly low/consistent (Flanagan et al., 2010).

To check for consistency, the CPPS user should select the T-score to standard score conversion option when generating a report. This option converts the CPPS T-scores into standard scores with a mean of 100 and SD of 15. Because the CPPS is reverse scored from the scoring in ability and achievement tests, the score conversion program also reverses the derived CPPS standard score’s distance from the mean. For example, a CPPS T-score of 60 is 1 SD above the mean. When initially converted to a standard score it will be 115, also 1 SD above the mean. To correct for the reverse scoring, 15 points is subtracted from the mean of 100, resulting in a standard score of 85. Thus, the score conversion table in the report will allow direct comparison between CPPS scores and standardized scores from such batteries as the WJ III Tests of Achievement. For example, if the converted CPPS score for Phonological Processing is 85 and the WJ III ACH Basic Reading Skills score is also 85, or close to 85, then there is evidence of consistency between the psychological process and the related academic skill.

Diagnosis of LD

The diagnosis of a specific learning disability or disorder should never be based only on the results of the CPPS. However, CPPS results can be used to support such a diagnosis, especially when determining whether the referred student has a disorder in one or more psychological processes. A CPPS validity study (see Chapter 3) reported that the CPPS GPA score discriminates well between students with and without a specific learning disability. The results of the validity study suggested a high likelihood of LD when an individual’s GPA T-score is 60 or above. A more conservative “cutoff” score for diagnostic purposes would be a GPA T-score of 65. (With a T-score of 65, approximately 7% of the population would obtain a higher score.) However, students with GPA T-scores of less than 65 may still be appropriately classified/diagnosed as LD, provided they have at least one psychological process score that is in the moderate to severe range (a T-score of 65 or above) of processing difficulty.

Another important consideration in the diagnosis of learning disability is evidence for a pattern of processing strengths and weaknesses. This evidence can be gathered from the CPPS Intra-Individual Strengths and Weaknesses Table. (Use and interpretation of this table has been discussed earlier in this chapter.) Finally, processing aptitude-achievement consistency should be considered (see Step 10 in the previous section). When both the academic skill and a related process score are low (consistent), there is additional evidence suggestive of an LD diagnosis, especially when the low aptitude-achievement consistency exists within an otherwise normal or average cognitive ability profile (Flanagan et al., 2010). As with any diagnostic decision, the practitioner should use professional judgment and multiple sources of assessment data and information, and should not base a diagnosis on any single test score or any specific profile. The CPPS cannot diagnose a learning disability in isolation. It is one source of standardized information (teacher perceptions of psychological processing) that can be included in a multi-faceted, professional, comprehensive assessment.

Case Study

The case study of “John” is used to illustrate the CPPS interpretation procedures described above. John, a 12 year old with a summer birthday, had just completed 5th grade when his classroom teacher completed the CPPS. He was also administered tests from the WJ III COG and WJ III ACH.

This was not John’s first evaluation. At the end of 3rd grade he had been referred and evaluated for a learning disability. The presenting concerns were: below average grades; difficulty completing assignments; difficulty sustaining attention; and difficulties with mathematics, reading comprehension, and written expression. According to his parents, John’s early developmental milestones were normal and he had no health concerns. It was also reported that he had good social skills and did not display any behavior problems at school.

That first evaluation found John’s general intellectual ability to be within the average range. Consistent with the referral concerns, his achievement test scores in reading


comprehension, mathematics calculation, and written expression were below average, whereas his other achievement scores were in the average range. No formal evaluation of John’s psychological processes was conducted as part of this 3rd grade assessment. The school’s evaluation team concluded that John did not qualify for placement in a learning disabilities program, since the discrepancies between his IQ and all of his WJ III achievement scores were not severe enough to meet criteria. However, the school did agree to provide John with informal assistance in the academic areas in which he was struggling. That summer John received private tutoring in math, reading comprehension, and written expression. Also, a medical evaluation found that John did not have ADHD.

According to his parents, John was doing better academically by the end of 5th grade when he was re-evaluated. However, John continued to display difficulties with math calculation and written expression, and he seemed to lack study skills. Also, he did not perform well on classroom exams. For the re-evaluation, John’s 5th grade teacher completed the CPPS online, and within 30 days John was administered the WJ III battery (see Table 4.5 for his WJ III scores).

Table 4.5 Case Study WJ III Scores

Cluster                        Standard Score   Confidence Interval (95%)   Percentile
General Intellectual Ability   93               89-97                       32
Verbal Ability                 97               89-104                      41
Thinking Ability               97               91-102                      41
Cognitive Efficiency           86               78-95                       18
Comprehension Knowledge        97               89-104                      41
Long-Term Retrieval            86               76-96                       17
Visual-Spatial Thinking        93               84-102                      33
Auditory Processes             97               86-109                      42
Fluid Reasoning                103              95-111                      58
Processing Speed               82               73-91                       11
Short-Term Memory              90               80-101                      26
Phonemic Awareness             103              92-114                      59
Working Memory                 87               78-96                       19
Broad Attention                90               82-97                       25
Cognitive Fluency              85               80-91                       17
Brief Achievement              89               85-94                       24
Total Achievement              89               85-92                       22
Broad Reading                  91               86-96                       28
Broad Math                     89               83-95                       22
Broad Written Language         86               80-93                       18
Brief Reading                  92               87-97                       29
Brief Math                     90               84-97                       26
Math Calculation Skills        90               82-99                       26
Brief Writing                  85               78-92                       15
Written Expression             85               76-94                       16
Academic Skills                91               87-96                       28
Academic Fluency               90               83-97                       25
Academic Applications          85               79-91                       16


Table 4.6 Case Study CPPS Scores

Process                      Raw Score   W-Score   T-Score   Confidence Interval (95%)   Percentile
Attention                    18          529       62        58-66                       87
Auditory Processing          7           521       61        55-67                       86
Executive Functions          14          510       55        51-59                       70
Fine Motor                   5           519       63        57-69                       85
Fluid Reasoning              22          539       64        60-68                       91
Long-Term Recall             11          528       61        57-65                       85
Oral Language                9           519       61        55-67                       84
Phonological Processing      4           505       53        49-57                       66
Processing Speed             7           513       58        52-64                       77
Visual-Spatial Processing    1           496       48        42-54                       55
Working Memory               20          529       63        59-67                       87
General Processing Ability   --          519       60        58-62                       84

Step 1: Interpret General Processing Ability

John’s GPA of 60 (84th percentile) indicates that he is experiencing a mild level of difficulty in overall processing and that there is a possibility that he has a specific learning disability. Seven of his eleven CPPS subscale scores are above 60, indicating that he has difficulty with several psychological processes, not just one or two. John’s GPA T-score of 60 also indicates that he has mild difficulty with general efficiency and automaticity of processing. This finding is supported by John’s low average WJ III Cognitive Fluency score of 85.

Step 2: Evaluate and Interpret Clinical Groupings of Subscales

The scores from the two subscales that contribute to the SRP clinical factor—Attention and Executive Functions—are somewhat discrepant, with Executive Functions in the average range and Attention in the above average range. It appears that John’s overall executive functions are normally developed, but he has mild processing difficulties with attention. However, there is a slight overlap in their 95% SEM confidence intervals of 58 – 66 for Attention and 51 – 59 for Executive Functions, suggesting that the difference between the scores is not statistically reliable.

When the VMP clinical factor is considered, the two subscales involved—Fine Motor and Visual-Spatial Processing—are not unitary (the SEM score bands do not overlap), with the Fine Motor T-score of 63 (SEM band = 57 – 69) indicating mild difficulty and the Visual-Spatial T-score of 48 (SEM band = 42 – 54) indicating no processing difficulty.

When other potential clinical clusters are considered, the Auditory Processing and Oral Language subscales pair up (with identical T-scores and SEM bands), indicating that John has mild difficulty with general language processing. Also, John’s Phonological Processing and Visual-Spatial scores are both in the average range (with SEM bands of 49 – 57 and 42 – 54, respectively), indicating that John’s basic perceptual processes are rated as being normally developed. Finally, John’s Working Memory and Long-Term Recall scores are unitary (with SEM bands of 59 – 67 and 57 – 65, respectively) and both above 60, suggesting that he has general memory processing problems.

Step 3: Interpret Individual Subscales

John does not have any CPPS subscale scores in the moderate or severe processing difficulty range. However, he does have seven in the mild processing difficulty range: Attention, Auditory Processing, Fine Motor, Fluid Reasoning, Long-Term Recall, Oral Language, and Working Memory. The remaining subscales

have scores within the average range. John’s CPPS Attention score of 62 is supported by the initial referral concern regarding attention, and the score is consistent with his WJ III low average Processing Speed score of 82 (a T-score of 62 is equivalent to a standard score of 82; see Table 4.9). John’s Fine Motor score is supported by his below average WJ III score in Written Expression and by the observation that his handwriting is slow and messy. His Fluid Reasoning score is consistent with reported difficulties in mathematics and reading comprehension, but the score is not supported by his WJ III Fluid Reasoning score, which is in the mid-average range. His problematic scores in Long-Term Recall and Working Memory are both supported by below average WJ III scores on the Long-Term Retrieval and Working Memory clusters (see Table 4.5). Finally, his CPPS Auditory Processing score is not consistent with his mid-average WJ III Auditory Processing standard score, and there is no corroboration for his Oral Language score, as oral language was not assessed during direct WJ III testing.

Step 4: Interpret Intra-Individual Strengths and Weaknesses

As reported in Table 4.7, John has significant individual weaknesses (relative to his functioning in other processes) on the Fluid Reasoning and Working Memory subscales. His scores on both of these subscales are not only individual weaknesses (at the + or – 1.00 SD level), but they are also normative weaknesses because both scores are above average and in the mild processing difficulty range. Given that his functioning in fluid reasoning and working memory is outside of the average range, and that both are individual weaknesses, there is a possibility that his difficulties in these two areas are impairing academic learning, especially with regard to mathematics and reading comprehension. The intra-individual discrepancy analysis also reveals that John has a significant individual strength in visual-spatial processing.

Table 4.7 Case Study Intra-Individual Strengths and Weaknesses Table

Process                      Obtained T-Score   Predicted T-Score   Difference   Discrepancy SD (SEE)   Strength/Weakness at + or – 1.00 SD (SEE)
Attention                    62                 57                  +5           +0.81                  --
Auditory Processing          61                 58                  +3           +0.73                  --
Executive Functions          55                 57                  -2           -0.34                  --
Fine Motor                   63                 56                  +7           +0.94                  --
Fluid Reasoning              64                 59                  +5           +1.07                  Weakness
Long-Term Recall             61                 58                  +3           +0.79                  --
Oral Language                61                 58                  +3           +0.66                  --
Phonological Processing      53                 58                  -5           -0.90                  --
Processing Speed             58                 58                  0            0.00                   --
Visual-Spatial Processing    48                 58                  -10          -1.72                  Strength
Working Memory               63                 58                  +5           +1.46                  Weakness
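The Strength/Weakness column in Table 4.7 follows directly from the Discrepancy SD column: a discrepancy at or beyond + or – 1.00 SD (SEE) is flagged, and because higher CPPS T-scores mean greater rated difficulty, positive discrepancies are relative weaknesses and negative ones relative strengths. The function below is a minimal illustrative sketch of that flagging rule, not the CPPS scoring program.

```python
# Illustrative sketch of the +/-1.00 SD (SEE) flagging rule used in the
# Intra-Individual Strengths and Weaknesses table. Higher CPPS T-scores
# indicate greater rated difficulty, so a positive discrepancy beyond the
# threshold is a relative weakness and a negative one a relative strength.

def flag(discrepancy_sd, threshold=1.0):
    """Return the Strength/Weakness label for one subscale's Discrepancy SD."""
    if discrepancy_sd >= threshold:
        return "Weakness"
    if discrepancy_sd <= -threshold:
        return "Strength"
    return "--"

# Discrepancy SD values from Table 4.7
print(flag(+1.07))  # Fluid Reasoning -> Weakness
print(flag(-1.72))  # Visual-Spatial Processing -> Strength
print(flag(+0.81))  # Attention -> --
```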

Step 5: Determine Base Rate

Base rate can be checked (see Appendix C) to better understand the severity of John’s processing difficulties. The base rate for John’s Discrepancy SD of +1.07 on the Fluid Reasoning subscale (see Table 4.7) is .86 (see Appendix C). This means that 86% of the standardization sample had a discrepancy of fewer points, and only 14% had a discrepancy of 5 or greater points. The base rate for John’s Working Memory SD discrepancy of +1.46 is .93, meaning that only 7% of the standardization sample had a discrepancy of greater magnitude. Also, for the Visual-Spatial discrepancy of -1.72, the base rate is .04, indicating that this is a rather significant individual strength, as only 4%

of the standardization sample had a greater discrepancy.

Step 6: Interpret Responses to Individual Items

The items from the two intra-individual weaknesses—Fluid Reasoning and Working Memory—should be reviewed (see Table 4.8). An examination of the rater’s responses to these individual items may provide some insights into the specific processing difficulties John is experiencing. A review of the teacher ratings reveals that John’s working memory difficulties are consistent with developmental progression, as the least frequently occurring difficulties (at the top of each list) have lower scores than the most frequently occurring difficulties (at the bottom of each list). Also, based on the “0” ratings on the first two Working Memory subscale items, it can be hypothesized that John’s visual-spatial working memory is stronger than his auditory or verbal working memory. From the Fluid Reasoning subscale ratings, it appears that John has difficulties in nearly all aspects of fluid reasoning.
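The Appendix C lookups used in Step 5 are cumulative probabilities from the standard normal distribution, so the base rates quoted above can be reproduced with a normal CDF. The sketch below is an illustrative shortcut, not a replacement for the published Appendix C table; the .99 cap mirrors the manual’s instruction not to round values up to 1.00.

```python
# Illustrative reproduction of the Appendix C base-rate lookups using the
# standard normal CDF (Discrepancy SD values are z-scores).
from statistics import NormalDist

def base_rate(discrepancy_sd):
    """Proportion of the standardization sample at or below this Discrepancy
    SD, rounded to the nearest hundredth and capped at .99 (the manual says
    not to round values up to 1.00)."""
    p = NormalDist().cdf(discrepancy_sd)
    return min(round(p, 2), 0.99)

# Reproducing the values cited for John's case (Table 4.7)
print(base_rate(+1.07))  # 0.86 -> 86% had a smaller discrepancy
print(base_rate(+1.46))  # 0.93 -> only 7% had a larger discrepancy
print(base_rate(-1.72))  # 0.04 -> a strength shared by only 4%
```

The same function reproduces the examples from the general procedure earlier in the chapter: a Discrepancy SD of -1.45 yields .07, and +2.79 yields .99.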

Table 4.8 Case Study Responses to Individual Items

Fluid Reasoning                                                     Rating
When given clues, does poorly at guessing the answer                1
Has difficulty identifying what’s missing in a sequence             2
Has difficulty understanding how concepts are the same or different 1
Has difficulty comparing and contrasting ideas                      2
Has difficulty estimating quantities                                1
Has difficulty understanding more than one logical rule at a time   2
Has difficulty with logical reasoning                               2
Has difficulty discovering underlying rules in new problems         1
Has difficulty solving arithmetic story problems                    2
Has difficulty understanding analogies                              2
Has difficulty comprehending abstract ideas                         2
Has difficulty with inferential reading comprehension               2
Has difficulty solving unfamiliar problems                          2

Working Memory                                                      Rating
Loses place when counting                                           0
Has difficulty remembering what he or she just saw                  0
Has trouble remembering information for just a few seconds          1
In the middle of an activity, forgets how to continue or finish it  2
Has difficulty repeating what was just said to him or her           2
Forgets what he/she was doing                                       1
Forgets the necessary information before completing a mental task   1
Has difficulty thinking and retaining information at the same time  1
Forgets what he/she was going to say when called on in class        1
Has difficulty multi-tasking without forgetting information         2
Requires frequent repetition and reminders                          2
Has difficulty with mental arithmetic                               1
Has difficulty listening and taking notes at the same time          2
Forgets some of the steps when given multi-step directions          2
Has difficulty organizing information when writing                  2


Step 7: Identify Psychological Processes for Intervention

Based on John’s CPPS results, it would be a reasonable evidence-based recommendation to target fluid reasoning and working memory for intervention. Fluid reasoning was addressed by John’s math tutor, who taught John problem-solving strategies. The internet-based Cogmed® training program (www.cogmed.com) was the intervention selected for John’s deficit in working memory. Cogmed® is an evidence-based program, especially designed for children with working memory deficits (Holmes, Gathercole, & Dunning, 2009). Over a six-week period, John completed 30 Cogmed® training exercises of about 35 minutes each. John’s performance on the Cogmed® exercises improved significantly (as measured by improved working memory span on the Cogmed® tasks), and his parents reported that John had an improved ability to complete homework. However, his teacher did not respond to requests to complete another CPPS rating form, which could have been used to assess change.

Step 8: Identify Cognitive Processes for Additional Assessment

In John’s case, concurrent cognitive testing was conducted with the WJ III. The correspondence between the CPPS subscales and the WJ III clusters measuring similar domains was discussed previously under Step 3. When there are inconsistencies, the need for further assessment should be considered. In John’s case, there is a discrepancy between his CPPS Fluid Reasoning and WJ III Fluid Reasoning scores. One hypothesis for this difference is that the WJ III primarily measures non-verbal fluid reasoning, whereas several of the CPPS Fluid Reasoning subscale items refer to verbal fluid reasoning. Further assessment might include administration of the Similarities subtest from the Wechsler Intelligence Scale for Children (WISC®-IV; Wechsler, 2003), since the Similarities subtest is believed to assess verbal fluid reasoning. Because of John’s above average CPPS Attention T-score of 62, more assessment data on his attention difficulties might be gathered with a behavior rating scale, such as the BRIEF®. Finally, his Oral Language development and functioning will need to be investigated further, as it was not assessed during the WJ III administration.

Step 9: Compare Results from Multiple Raters

In John’s case there was only one rater. However, given his borderline GPA score of 60 and some inconsistencies between his CPPS scores and WJ III scores, it would have been beneficial if at least one more of his teachers had completed a CPPS Teacher Rating Form.

Step 10: Consider Relations with Achievement Scores

John’s CPPS psychological processing deficits in fluid reasoning and working memory are consistent with difficulties in reading comprehension, mathematics calculation, mathematics reasoning, and written language (see Table 1.2). Of these, only his written language skills (see Table 4.5) are low enough to be considered an area of academic weakness. To check for consistency, John’s CPPS T-scores were converted into standard scores (see Table 4.9). Both the Fluid Reasoning transformed score of 79 and the Working Memory transformed score of 80 are consistent with John’s Written Expression score of 85. Thus, there is evidence of consistency between a low area of achievement and related “low” psychological processes. If further assessment uncovers processing weaknesses in oral language and verbal fluid reasoning, this would be even more evidence that John’s low achievement in written language can be associated with difficulties in related psychological processes.

Case Study Diagnosis

John’s CPPS GPA T-score of 60, and the fact that seven of his eleven subscale T-scores are above 60, indicates that there is a higher than average probability that he is experiencing mild difficulties with psychological processes in general, as reflected


Table 4.9 Case Study T-Score to Standard Score Conversion Table

Processing Scale             T-Score   Standard Score
Attention                    62        82
Auditory Processing          61        83
Executive Functions          55        92
Fine Motor                   63        80
Fluid Reasoning              64        79
Long-Term Recall             61        83
Oral Language                61        83
Phonological Processing      53        95
Processing Speed             58        88
Visual-Spatial Processing    48        103
Working Memory               63        80
General Processing Ability   60        85
Note: Standard Scores have a mean of 100 and a standard deviation of 15.

in his primary teacher’s judgment. Thus, there is a strong possibility that psychological processing weaknesses are impairing his learning. Although John does not have a single process T-score of 65 or higher, a case could be made for a diagnosis of LD because he does display a pattern of processing strengths and weaknesses, and because he has several scores in the above average range of difficulty. Furthermore, John’s weakness in working memory and probable weakness in fluid reasoning are consistent with his academic weakness in written language. Thus, there is adequate documentation of a psychological processing disorder. However, John will not qualify for a learning disability placement, both because he does not have a significant discrepancy between his general intellectual ability and any of his achievement scores, and because he does respond to intervention when it is provided (his math skills improved after tutoring and in-school interventions).
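The conversions in Table 4.9 can be expressed as a single formula: scale the T-score’s distance from 50 onto the standard-score metric (multiply by 15/10) and subtract it from 100, which also reverses the direction of scoring so that high CPPS difficulty maps to a low standard score. The sketch below is illustrative; the truncation of half-point results (e.g., T = 55 mapping to 92 rather than 93) is an assumption inferred from the published table, not a documented rounding rule.

```python
# Illustrative sketch of the reverse-scored T-score to standard-score
# conversion shown in Table 4.9 (not the CPPS conversion program).

def t_to_standard(t_score):
    """Convert a CPPS T-score (mean 50, SD 10) to a reverse-scored standard
    score (mean 100, SD 15). Half-point results are truncated downward, an
    inferred rule that matches the published Table 4.9 values."""
    return int(100 - (t_score - 50) * 1.5)

# Values from Table 4.9
print(t_to_standard(62))  # 82  (1.2 SD of difficulty -> 1.2 SD below 100)
print(t_to_standard(48))  # 103 (slightly below-average difficulty -> above 100)
print(t_to_standard(60))  # 85  (John's GPA)
```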

Appendix A

CPPS 121 Items Grouped by Process

Attention (11 items)  GPA Loading
Is noisy and disruptive in class  .44
Has difficulty staying seated  .53
Has poor listening skills  .73
Has difficulty completing a simple, but time-consuming task  .78
Has difficulty changing his/her focus of attention  .76
Has difficulty paying attention to instruction unless interested in the topic  .73
Has difficulty listening attentively to directions  .75
Has a short attention span  .76
Is easily distracted by sounds or activities  .72
Has difficulty concentrating on challenging tasks for extended periods of time  .78
Has difficulty dividing attention between two tasks  .81

Auditory Processing (8 items)
Has difficulty associating a voice with the correct person  .61
Has difficulty correctly identifying different sounds in the environment  .74
Misunderstands spoken words  .69
Has difficulty discriminating between similar speech sounds  .77
Has difficulty understanding oral communication  .73
Has difficulty processing verbal information  .84
Has difficulty remembering orally presented information  .84
Has difficulty understanding instruction when there is background noise  .75

Executive Functions (15 items)
Has difficulty adapting to changes, such as a change in routine  .69
Needs constant adult supervision  .64
Misjudges his/her own academic abilities, skills, and performance  .68
Lacks self-control  .51
Has difficulty shifting from one task to another  .78
Does not learn from his/her mistakes  .70
Does not complete assignments and projects on time  .71
Has difficulty planning  .80
Has difficulty getting started on assignments  .76
Acts without forethought  .59
Does not change strategies when first attempt to solve a problem fails  .77
Has difficulty with organization  .70
Has difficulty managing time effectively  .75
Has difficulty multitasking  .82
Does not notice errors in school work  .73

Fine Motor (12 items)
Has difficulty picking up small objects with thumb and forefinger  .53
Has difficulty using a computer mouse and keyboard  .62
Has difficulty properly holding a pencil  .53
Has difficulty cutting with scissors  .63

65

66

Hand movements are not smooth, quick, and precise .64 Has difficulty coloring simple shapes within the lines .65 Has difficulty drawing a straight line with a ruler .69 Has difficulty with eye-hand coordination .69 Has difficulty drawing figures or pictures .67 Has difficulty copying or drawing shapes .70 Has difficulty correctly shaping letters when printing or writing .63 Has difficulty staying between the lines when printing or writing .63 Fluid Reasoning (13 items) When given clues, does poorly at guessing the answer .82 Has difficulty identifying what’s missing in a sequence .82 Has difficulty understanding how concepts are the same or different .82 Has difficulty comparing and contrasting ideas .81 Has difficulty estimating quantities .78 Has difficulty understanding more than one logical rule at a time .80 Has difficulty with logical reasoning .78 Has difficulty discovering underlying rules in new problems .81 Has difficulty solving arithmetic story problems .73 Has difficulty understanding analogies .74 Has difficulty comprehending abstract ideas .76 Has difficulty with inferential reading comprehension .71 Has difficulty solving unfamiliar problems .78 Long-Term Recall (10 items) Has difficulty remembering nursery rhymes or stories .72 Has difficulty rapidly naming well-known objects .72 Is slow to recall names of people, objects, or events .75 Has difficulty remembering the sequence of events .83 Has difficulty remembering details .81 Has difficulty memorizing and retaining new facts .82 Needs prompts and cues in order to recall what he/she knows .83 Has difficulty remembering information presented in class .83 Has difficulty remembering step-by-step procedures .83 Has difficulty recalling information during tests .79 Oral Language (10 items) Substitutes or omits vowel and consonant sounds when speaking .63 Omits part of the sentence when speaking .69 Has difficulty putting sentences together when speaking .71 Does not ask questions beginning with when, 
what, where, who, or why .70 Has poorly developed oral vocabulary .72 Has difficulty using proper grammar when speaking .70 Uses only short, simple expressions when speaking .67 Has difficulty comprehending oral directions .82 Has difficulty orally expressing thoughts and ideas .69 Has difficulty paraphrasing when speaking .78 Phonological Processing (12 items) Has difficulty identifying the first sound in a word .71 Has difficulty associating letters of the alphabet and their sounds .71 Does not recognize when two words rhyme .70

67

Has difficulty discriminating between different speech sounds .75 Doesn’t seem to hear all the sounds in a word .75 Has difficulty blending sounds or syllables into words .72 Has difficulty manipulating the sounds that make up words .76 Has difficulty correctly pronouncing words .71 Has difficulty dividing a word into sounds and syllables .78 Has difficulty spelling phonetically regular words .75 Has difficulty correctly pronouncing new words .72 Has difficulty sounding out unknown words when reading .75 Processing Speed (8 items) Is slow to perform relatively easy, well-learned tasks .80 Has difficulty quickly making simple decisions .74 Is slow to complete basic arithmetic calculations .73 Reads slowly even when the words are known .71 Is slow at writing basic sentences .80 Is one of the last students to finish a classroom test .63 Takes a long time to think before responding .69 Takes a long time to complete a worksheet .74 Visual-Spatial Processing (7 items) Has difficulty matching things that look alike .73 Has difficulty distinguishing between printed letters of the alphabet .66 Has difficulty putting puzzles together .70 Has difficulty recognizing visual patterns .77 Has difficulty recognizing letters when they are printed in an unusual font .73 Has difficulty seeing the details in a picture .76 Becomes confused when figures are rotated or reversed .76 Working Memory (15 items) Loses place when counting .74 Has difficulty remembering what he/she just saw .79 Has trouble remembering information for just a few seconds .82 In the middle of an activity, forgets how to continue or finish it .81 Has difficulty repeating what was just said to him or her .79 Forgets what he/she was doing .76 Forgets the necessary information before completing a mental task .86 Has difficulty thinking and retaining information at the same time .84 Forgets what he/she was going to say when called on in class .67 Has difficulty multitasking without forgetting information .83 Requires 
frequent repetition and reminders .80 Has difficulty with mental arithmetic .74 Has difficulty listening and taking notes at the same time .81 Forgets some of the steps when given multistep directions .82 Has difficulty organizing information when writing .80 Note: The items within each subscale are arranged in order of W-score progression, beginning with the lowest (easiest) items at the top. The GPA loading is the item’s loading on the CPPS general factor (see Chapter 3).

Appendix B

Communities and States in CPPS Standardization Sample

Alabama: Arab, Huntsville, Ozark
Arizona: Camp Verde, Marana, Scottsdale, Tucson
California: Anaheim, Azusa, Barstow, Chatsworth, Coronado, Fort Irwin, Long Beach, Los Angeles, Modesto, Oxnard, Patterson, San Diego, San Fernando, San Francisco, San Jose, Santa Clara, Sonora, Sunnyvale, Tracy, Vallejo
Colorado: Englewood, Ft. Collins, Timnath
Connecticut: Norwich, Taftville
District of Columbia: Washington
Florida: Coral Springs, Kissimmee, Valrico
Georgia: Columbus, Marietta, Pooler, Snellville
Idaho: Coeur d'Alene, Hayden, Idaho Falls
Illinois: Auburn, Divernon, Lombard, Richmond, Romeoville, Springfield, Tilden
Kansas: Manhattan
Kentucky: Morehead, Nicholasville
Maine: Brownville, Milo, Raymond, Windham
Maryland: Emmitsburg, Frederick, Montgomery Village
Massachusetts: Lawrence
Michigan: Jackson, Lake Orion
Minnesota: Bena, Eagan, Faribault, Minneapolis, Mound, Winona, Woodbury
New Jersey: Newark
New Mexico: Gallup


New York: Bronx, Brooklyn, Rochester, Yorkshire
North Carolina: Cary, Cherryville, Durham, Greensboro, Henderson, Snow Camp, Sylvan
Ohio: Broadview Heights, Cleveland, Lakewood, Lorain, Lawton
Oregon: Eugene, Fairview, Medford, Portland, Rogue River
Pennsylvania: Harrisburg, Mountain Top, New Castle, Prospect, Slippery Rock, Wexford
South Carolina: Greenville
Tennessee: Memphis
Texas: Austin, Cedar Park, Fort Worth, Houston, Jasper, McKinney, Richland Springs, San Antonio, Sugar Land, Three Rivers, Tivoli
Virginia: Woodbridge
Washington: Camas, Renton, Vancouver, Westport, Yakima
Wisconsin: La Crosse, McFarland, Milwaukee, Neenah, New Munster, Wausau


Appendix C

Z-Score Normal Probability Tables

Z-Score  0.09    0.08    0.07    0.06    0.05    0.04    0.03    0.02    0.01    0.00
-3.9     0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000
-3.8     0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001
-3.7     0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001
-3.6     0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0001  0.0002  0.0002
-3.5     0.0002  0.0002  0.0002  0.0002  0.0002  0.0002  0.0002  0.0002  0.0002  0.0002
-3.4     0.0002  0.0003  0.0003  0.0003  0.0003  0.0003  0.0003  0.0003  0.0003  0.0003
-3.3     0.0003  0.0004  0.0004  0.0004  0.0004  0.0004  0.0004  0.0005  0.0005  0.0005
-3.2     0.0005  0.0005  0.0005  0.0006  0.0006  0.0006  0.0006  0.0006  0.0007  0.0007
-3.1     0.0007  0.0007  0.0008  0.0008  0.0008  0.0008  0.0009  0.0009  0.0009  0.0010
-3.0     0.0010  0.0010  0.0011  0.0011  0.0011  0.0012  0.0012  0.0013  0.0013  0.0013
-2.9     0.0014  0.0014  0.0015  0.0015  0.0016  0.0016  0.0017  0.0018  0.0018  0.0019
-2.8     0.0019  0.0020  0.0021  0.0021  0.0022  0.0023  0.0023  0.0024  0.0025  0.0026
-2.7     0.0026  0.0027  0.0028  0.0029  0.0030  0.0031  0.0032  0.0033  0.0034  0.0035
-2.6     0.0036  0.0037  0.0038  0.0039  0.0040  0.0041  0.0043  0.0044  0.0045  0.0047
-2.5     0.0048  0.0049  0.0051  0.0052  0.0054  0.0055  0.0057  0.0059  0.0060  0.0062
-2.4     0.0064  0.0066  0.0068  0.0069  0.0071  0.0073  0.0075  0.0078  0.0080  0.0082
-2.3     0.0084  0.0087  0.0089  0.0091  0.0094  0.0096  0.0099  0.0102  0.0104  0.0107
-2.2     0.0110  0.0113  0.0116  0.0119  0.0122  0.0125  0.0129  0.0132  0.0136  0.0139
-2.1     0.0143  0.0146  0.0150  0.0154  0.0158  0.0162  0.0166  0.0170  0.0174  0.0179
-2.0     0.0183  0.0188  0.0192  0.0197  0.0202  0.0207  0.0212  0.0217  0.0222  0.0228
-1.9     0.0233  0.0239  0.0244  0.0250  0.0256  0.0262  0.0268  0.0274  0.0281  0.0287
-1.8     0.0294  0.0301  0.0307  0.0314  0.0322  0.0329  0.0336  0.0344  0.0351  0.0359
-1.7     0.0367  0.0375  0.0384  0.0392  0.0401  0.0409  0.0418  0.0427  0.0436  0.0446
-1.6     0.0455  0.0465  0.0475  0.0485  0.0495  0.0505  0.0516  0.0526  0.0537  0.0548
-1.5     0.0559  0.0571  0.0582  0.0594  0.0606  0.0618  0.0630  0.0643  0.0655  0.0668
-1.4     0.0681  0.0694  0.0708  0.0721  0.0735  0.0749  0.0764  0.0778  0.0793  0.0808
-1.3     0.0823  0.0838  0.0853  0.0869  0.0885  0.0901  0.0918  0.0934  0.0951  0.0968
-1.2     0.0985  0.1003  0.1020  0.1038  0.1056  0.1075  0.1093  0.1112  0.1131  0.1151
-1.1     0.1170  0.1190  0.1210  0.1230  0.1251  0.1271  0.1292  0.1314  0.1335  0.1357
-1.0     0.1379  0.1401  0.1423  0.1446  0.1469  0.1492  0.1515  0.1539  0.1562  0.1587
-0.9     0.1611  0.1635  0.1660  0.1685  0.1711  0.1736  0.1762  0.1788  0.1814  0.1841
-0.8     0.1867  0.1894  0.1922  0.1949  0.1977  0.2005  0.2033  0.2061  0.2090  0.2119
-0.7     0.2148  0.2177  0.2206  0.2236  0.2266  0.2296  0.2327  0.2358  0.2389  0.2420
-0.6     0.2451  0.2483  0.2514  0.2546  0.2578  0.2611  0.2643  0.2676  0.2709  0.2743
-0.5     0.2776  0.2810  0.2843  0.2877  0.2912  0.2946  0.2981  0.3015  0.3050  0.3085
-0.4     0.3121  0.3156  0.3192  0.3228  0.3264  0.3300  0.3336  0.3372  0.3409  0.3446
-0.3     0.3483  0.3520  0.3557  0.3594  0.3632  0.3669  0.3707  0.3745  0.3783  0.3821
-0.2     0.3859  0.3897  0.3936  0.3974  0.4013  0.4052  0.4090  0.4129  0.4168  0.4207
-0.1     0.4247  0.4286  0.4325  0.4364  0.4404  0.4443  0.4483  0.4522  0.4562  0.4602
-0.0     0.4641  0.4681  0.4721  0.4761  0.4801  0.4840  0.4880  0.4920  0.4960  0.5000


Z-Score  0.00    0.01    0.02    0.03    0.04    0.05    0.06    0.07    0.08    0.09
0.0      0.5000  0.5040  0.5080  0.5120  0.5160  0.5199  0.5239  0.5279  0.5319  0.5359
0.1      0.5398  0.5438  0.5478  0.5517  0.5557  0.5596  0.5636  0.5675  0.5714  0.5753
0.2      0.5793  0.5832  0.5871  0.5910  0.5948  0.5987  0.6026  0.6064  0.6103  0.6141
0.3      0.6179  0.6217  0.6255  0.6293  0.6331  0.6368  0.6406  0.6443  0.6480  0.6517
0.4      0.6554  0.6591  0.6628  0.6664  0.6700  0.6736  0.6772  0.6808  0.6844  0.6879
0.5      0.6915  0.6950  0.6985  0.7019  0.7054  0.7088  0.7123  0.7157  0.7190  0.7224
0.6      0.7257  0.7291  0.7324  0.7357  0.7389  0.7422  0.7454  0.7486  0.7517  0.7549
0.7      0.7580  0.7611  0.7642  0.7673  0.7704  0.7734  0.7764  0.7794  0.7823  0.7852
0.8      0.7881  0.7910  0.7939  0.7967  0.7995  0.8023  0.8051  0.8078  0.8106  0.8133
0.9      0.8159  0.8186  0.8212  0.8238  0.8264  0.8289  0.8315  0.8340  0.8365  0.8389
1.0      0.8413  0.8438  0.8461  0.8485  0.8508  0.8531  0.8554  0.8577  0.8599  0.8621
1.1      0.8643  0.8665  0.8686  0.8708  0.8729  0.8749  0.8770  0.8790  0.8810  0.8830
1.2      0.8849  0.8869  0.8888  0.8907  0.8925  0.8944  0.8962  0.8980  0.8997  0.9015
1.3      0.9032  0.9049  0.9066  0.9082  0.9099  0.9115  0.9131  0.9147  0.9162  0.9177
1.4      0.9192  0.9207  0.9222  0.9236  0.9251  0.9265  0.9279  0.9292  0.9306  0.9319
1.5      0.9332  0.9345  0.9357  0.9370  0.9382  0.9394  0.9406  0.9418  0.9429  0.9441
1.6      0.9452  0.9463  0.9474  0.9484  0.9495  0.9505  0.9515  0.9525  0.9535  0.9545
1.7      0.9554  0.9564  0.9573  0.9582  0.9591  0.9599  0.9608  0.9616  0.9625  0.9633
1.8      0.9641  0.9649  0.9656  0.9664  0.9671  0.9678  0.9686  0.9693  0.9699  0.9706
1.9      0.9713  0.9719  0.9726  0.9732  0.9738  0.9744  0.9750  0.9756  0.9761  0.9767
2.0      0.9772  0.9778  0.9783  0.9788  0.9793  0.9798  0.9803  0.9808  0.9812  0.9817
2.1      0.9821  0.9826  0.9830  0.9834  0.9838  0.9842  0.9846  0.9850  0.9854  0.9857
2.2      0.9861  0.9864  0.9868  0.9871  0.9875  0.9878  0.9881  0.9884  0.9887  0.9890
2.3      0.9893  0.9896  0.9898  0.9901  0.9904  0.9906  0.9909  0.9911  0.9913  0.9916
2.4      0.9918  0.9920  0.9922  0.9925  0.9927  0.9929  0.9931  0.9932  0.9934  0.9936
2.5      0.9938  0.9940  0.9941  0.9943  0.9945  0.9946  0.9948  0.9949  0.9951  0.9952
2.6      0.9953  0.9955  0.9956  0.9957  0.9959  0.9960  0.9961  0.9962  0.9963  0.9964
2.7      0.9965  0.9966  0.9967  0.9968  0.9969  0.9970  0.9971  0.9972  0.9973  0.9974
2.8      0.9974  0.9975  0.9976  0.9977  0.9977  0.9978  0.9979  0.9979  0.9980  0.9981
2.9      0.9981  0.9982  0.9982  0.9983  0.9984  0.9984  0.9985  0.9985  0.9986  0.9986
3.0      0.9987  0.9987  0.9987  0.9988  0.9988  0.9989  0.9989  0.9989  0.9990  0.9990
3.1      0.9990  0.9991  0.9991  0.9991  0.9992  0.9992  0.9992  0.9992  0.9993  0.9993
3.2      0.9993  0.9993  0.9994  0.9994  0.9994  0.9994  0.9994  0.9995  0.9995  0.9995
3.3      0.9995  0.9995  0.9995  0.9996  0.9996  0.9996  0.9996  0.9996  0.9996  0.9997
3.4      0.9997  0.9997  0.9997  0.9997  0.9997  0.9997  0.9997  0.9997  0.9997  0.9998
3.5      0.9998  0.9998  0.9998  0.9998  0.9998  0.9998  0.9998  0.9998  0.9998  0.9998
3.6      0.9998  0.9998  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999
3.7      0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999
3.8      0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999  0.9999
3.9      1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
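The tabulated probabilities are values of the standard normal cumulative distribution function, Φ(z). When a probability is needed for a z-score that falls between tabled entries, Φ(z) can be computed directly; the following minimal Python sketch uses only the standard library:

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function, P(Z <= z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Agrees with the table to four decimal places, e.g.:
print(round(phi(0.00), 4))   # 0.5
print(round(phi(1.00), 4))   # 0.8413
print(round(phi(-1.96), 4))  # 0.025
```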


Appendix D

Sample CPPS Score Report

Children’s Psychological Processes Scale (CPPS) Report

Student’s Name: Doe, John
Date of Birth: 06/15/1999
Gender: Male
Date of Ratings: 07/16/2011
School: Middletown
Age: 12 years
Grade: 6
Rater’s Name: Doe, Jane

SUMMARY

When John’s T-scores are compared with those of other children of the same age (see Table of Scores), his level of difficulty in General Processing Ability is in the above-average range. Based on ratings from John's teacher, he has average difficulties in Attention, Executive Functions, Fluid Reasoning, Long-Term Recall, Processing Speed, and Working Memory. He has above-average difficulties in Auditory Processing, Fine Motor, Oral Language, Phonological Processing, and Visual-Spatial Processing. Above-average difficulties indicate poorly developed psychological processes, which are likely to impair John's academic learning and performance.

To determine John’s intra-individual psychological processing strengths and weaknesses, his obtained T-score is compared with his predicted score for each process (see Intra-Individual Strengths and Weaknesses Table). Based on ratings from John’s teacher, he has a significant individual strength in Fluid Reasoning. In contrast, he has significant individual weaknesses in Fine Motor and Visual-Spatial Processing.

TABLE OF SCORES

Process                       Raw Score  W-Score  T-Score  95% Confidence Interval  Percentile
Attention                     11         514      56       52-60                    73
Auditory Processing           8          526      63       56-70                    88
Executive Functions           15         512      56       52-60                    73
Fine Motor                    12         535      73       68-78                    95
Fluid Reasoning               13         514      55       51-59                    69
Long-Term Recall              10         524      59       54-64                    82
Oral Language                 11         525      64       58-70                    89
Phonological Processing       12         531      66       61-71                    91
Processing Speed              8          516      59       53-65                    81
Visual-Spatial Processing     7          533      70       63-77                    93
Working Memory                15         520      59       55-63                    80
General Processing Ability               523      63       61-65                    88

Note: These T-Scores are based on age norms. T-Scores have a mean of 50 and a standard deviation of 10. Higher scores indicate relatively more difficulties and lower abilities.

GRAPH OF 95% CONFIDENCE BANDS

INTRA-INDIVIDUAL STRENGTHS AND WEAKNESSES TABLE

Process                       Obtained T  Predicted T  Difference  Discrepancy SD (SEE)  Strength/Weakness at +/- 1.5 SD (SEE)
Attention                     56          59           -3          -0.49
Auditory Processing           63          61            2           0.49
Executive Functions           56          60           -4          -0.67
Fine Motor                    73          57           16           2.16                  Weakness
Fluid Reasoning               55          62           -7          -1.50                  Strength
Long-Term Recall              59          61           -2          -0.53
Oral Language                 64          61            3           0.66
Phonological Processing       66          59            7           1.26
Processing Speed              59          60           -1          -0.20
Visual-Spatial Processing     70          59           11           1.90                  Weakness
Working Memory                59          61           -2          -0.59
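The decision rule behind this table can be sketched in a few lines: a process is flagged when the obtained-minus-predicted difference, expressed in standard error of estimate (SEE) units, reaches the ±1.5 criterion. The per-scale SEE values are not printed in the report, so the value used in the example below is an illustrative assumption back-calculated from the Fine Motor row:

```python
def classify(obtained: int, predicted: int, see: float) -> str:
    """Flag an intra-individual strength or weakness at +/- 1.5 SEE units.

    Higher CPPS T-scores indicate more difficulty, so a large positive
    discrepancy is a weakness and a large negative one a strength.
    """
    ratio = round((obtained - predicted) / see, 2)
    if ratio >= 1.5:
        return "Weakness"
    if ratio <= -1.5:
        return "Strength"
    return "Neither"

# Fine Motor row: obtained 73, predicted 57; the SEE of 7.4 is an assumed,
# back-calculated value (16 / 7.4 rounds to 2.16).
print(classify(73, 57, 7.4))  # Weakness
```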

T-SCORE TO STANDARD SCORE CONVERSION TABLE

Processing Scale              T-Score  Standard Score
Attention                     56       91
Auditory Processing           63       80
Executive Functions           56       91
Fine Motor                    73       65
Fluid Reasoning               55       92
Long-Term Recall              59       86
Oral Language                 64       79
Phonological Processing       66       76
Processing Speed              59       86
Visual-Spatial Processing     70       70
Working Memory                59       86
General Processing Ability    63       80

Note: Standard Scores have a mean of 100 and a standard deviation of 15. Because high CPPS T-Scores represent relatively more difficulty and lower abilities, the standard score’s distance from the mean has been reversed so that low standard scores represent relatively more difficulties and lower abilities, making the converted standard scores consistent with standard scores from cognitive and achievement tests.
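Given the definitions in the note (T-scores: mean 50, SD 10, high scores indicate more difficulty; standard scores: mean 100, SD 15, low scores indicate more difficulty), the conversion places the score the same number of standard deviations from the mean but on the opposite side. The arithmetic can be sketched as follows; the exact rounding rule is an inference from the sample report, not a documented procedure:

```python
import math

def t_to_standard_score(t: int) -> int:
    """Convert a CPPS difficulty T-score to a reversed standard score."""
    z = (t - 50) / 10.0  # distance from the mean in SD units
    # Rounding halves down reproduces the sample report (e.g., T = 73 -> 65);
    # this is an assumption about the rounding rule, not a documented one.
    return math.floor(100 - 15 * z)

print(t_to_standard_score(56))  # 91
print(t_to_standard_score(70))  # 70
```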

ITEM RESPONSES

(A sample of the rater’s response to each item is not displayed here. If this option is selected, the items will be grouped and displayed in the same developmental sequence as in Appendix A. The responses will be reported numerically, with 0 for “Never,” 1 for “Sometimes,” 2 for “Often,” and 3 for “Almost Always.”)
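Given the 0-3 item coding described above, a subscale raw score is presumably the sum of its item ratings (an assumption here; the manual's scoring procedures govern). A hypothetical example for an 8-item subscale:

```python
# Hypothetical ratings for the 8 items of an 8-item subscale
# (0 = Never, 1 = Sometimes, 2 = Often, 3 = Almost Always).
ratings = [1, 1, 0, 2, 1, 1, 1, 1]
raw_score = sum(ratings)
print(raw_score)  # 8
```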

References

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Ardila, A., Rosselli, M., Matute, E., & Inozemtseva, O. (2011). Gender differences in cognitive development. Developmental Psychology, 47(4), 984-990.
Baddeley, A. D. (2006). Working memory: An overview. In S. J. Pickering (Ed.), Working memory and education (pp. 1-31). Burlington, MA: Academic Press.
Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. A. Bower (Ed.), Recent advances in learning and motivation, Vol. 8 (pp. 47-89). New York: Academic Press.
Barkley, R. A. (1997). ADHD and the nature of self-control. New York: Guilford Press.
Bekebrede, J., van der Leij, A., & Share, D. L. (2009). Dutch dyslexic adolescents: Phonological-core variable-orthographic differences. Reading and Writing: An Interdisciplinary Journal, 22, 133-165.
Benson, N. (2008). Cattell-Horn-Carroll cognitive abilities and reading achievement. Journal of Psychoeducational Assessment, 26, 27-41.
Best, J. R., Miller, P. H., & Naglieri, J. A. (2011). Relations between executive function and academic achievement from ages 5 to 17 in a large, representative national sample. Learning and Individual Differences, 21, 327-336.
Bond, T. G., & Fox, C. M. (2001). Applying the Rasch model: Fundamental measurement in the human sciences. Mahwah, NJ: Lawrence Erlbaum Associates.
Brady, S., Fowler, A., Stone, B., & Winbury, N. (1994). Training phonological awareness: A study with inner-city kindergarten children. Annals of Dyslexia, 44, 26-59.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge: Cambridge University Press.
Catts, H. W. (1996). Defining dyslexia as a developmental language disorder: An expanded view. Topics in Language Disorders, 16, 14-29.
Chun, M. M., Golomb, J. D., & Turk-Browne, N. B. (2011). A taxonomy of internal and external attention. Annual Review of Psychology, 62, 73-101. doi:10.1146/annurev.psych.093008.100427
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.
de Ayala, R. J. (2009). The theory and practice of item response theory. New York: Guilford Press.
Dehn, M. J. (2006). Essentials of processing assessment. Hoboken, NJ: Wiley.
Dehn, M. J. (2007). Cognitive processing deficits. In R. J. Morris & N. Mather (Eds.), Evidence-based interventions for students with learning and behavioral challenges (pp. 258-287). Mahwah, NJ: Lawrence Erlbaum.
Dehn, M. J. (2008). Working memory and academic learning: Assessment and intervention. Hoboken, NJ: Wiley.
Dehn, M. J. (2010). Long-term memory problems in children and adolescents: Assessment, interventions, and effective instruction. Hoboken, NJ: Wiley.
Dehn, M. J. (2011). Helping students remember: Exercises and strategies to strengthen memory. Hoboken, NJ: Wiley.
Embretson, S. E., & Hershberger, S. L. (1999). The new rules of measurement: What every psychologist and educator should know. Mahwah, NJ: Lawrence Erlbaum Associates.
Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
Flanagan, D. P., Fiorello, C. A., & Ortiz, S. O. (2010). Enhancing practice through application of Cattell-Horn-Carroll theory and research: A "third method" approach to specific learning disability identification. Psychology in the Schools, 47, 739-760. doi:10.1002/pits.20497
Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2007). Essentials of cross-battery assessment (2nd ed.). Hoboken, NJ: Wiley.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., & Mascolo, J. T. (2006). The achievement test desk reference: A guide to learning disability identification (2nd ed.). Hoboken, NJ: Wiley.
Floyd, R. G., Bergeron, R., & Alfonso, V. C. (2006). Cattell-Horn-Carroll cognitive ability profiles of poor comprehenders. Reading and Writing, 19, 427-456.
Floyd, R. G., Evans, J. J., & McGrew, K. S. (2003). Relations between measures of Cattell-Horn-Carroll (CHC) cognitive abilities and mathematics achievement across the school-age years. Psychology in the Schools, 40, 155-171.
Floyd, R. G., McGrew, K. S., & Evans, J. J. (2008). The relative contributions of the Cattell-Horn-Carroll cognitive abilities in explaining writing achievement during childhood and adolescence. Psychology in the Schools, 45, 132-144. doi:10.1002/pits.20284
Gathercole, S. E. (1999). Cognitive approaches to the development of short-term memory. Trends in Cognitive Sciences, 3, 410-419.
Gathercole, S. E., & Pickering, S. J. (2001). Working memory deficits in children with special educational needs. British Journal of Special Education, 28, 89-97.
Geary, D. C. (1993). Mathematical disabilities: Cognitive, neuropsychological, and genetic components. Psychological Bulletin, 114, 345-362.
Geary, D. C. (2007). An evolutionary perspective on learning disability in mathematics. Developmental Neuropsychology, 32, 471-519.
Geary, D. C. (2011, September 26). Cognitive predictors of achievement growth in mathematics: A 5-year longitudinal study. Developmental Psychology. Advance online publication. doi:10.1037/a0025510
Geary, D. C., Bailey, D. H., Littlefield, A., Wood, P., Hoard, M. K., & Nugent, L. (2009). First-grade predictors of mathematical learning disability: A latent class trajectory analysis. Cognitive Development, 24, 411-429. doi:10.1016/j.cogdev.2009.10.001
Gillon, G. T. (2004). Phonological awareness. New York: The Guilford Press.
Gioia, G. A., Isquith, P. K., Guy, S. C., & Kenworthy, L. (2000). Behavior Rating Inventory of Executive Function. Lutz, FL: Psychological Assessment Resources.
Gomez, R., & Condon, M. (1999). Central auditory processing ability in children with ADHD with and without learning disabilities. Journal of Learning Disabilities, 32, 150-158.
Hackman, D. A., & Farah, M. J. (2009). Socioeconomic status and the developing brain. Trends in Cognitive Sciences, 13(2), 65-73.
Hale, J. B., Alfonso, V., Berninger, V., Bracken, B., Christo, C., Clark, E., et al. (2010). Critical issues in response-to-intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert white paper consensus. Learning Disability Quarterly, 33, 223-236.
Hale, J. B., Flanagan, D. P., & Naglieri, J. A. (2008). Alternative research-based methods for IDEA 2004 identification of children with specific learning disabilities. Communique, 36(8), 1-17.
Hale, R. L. (1981). Cluster analysis in school psychology: An example. Journal of School Psychology, 19(1), 51-56.
Harwell, M., & LeBeau, B. (2010). Student eligibility for a free lunch as an SES measure in education research. Educational Researcher, 39, 120-131.
Henry, L. A., & Millar, S. (1993). Why does memory span improve with age? A review of the evidence for two current hypotheses. European Journal of Cognitive Psychology, 5, 241-287.
Holmes, J., Gathercole, S. E., & Dunning, D. L. (2009). Adaptive training leads to sustained enhancement of poor working memory in children. Developmental Science, 12, F9-F15.
Hooper, S. R., Costa, L-J., & McBee, M. (2011). Concurrent and neuropsychological contributors to written language expression in first and second grade students. Reading and Writing: An Interdisciplinary Journal, 24, 221-252.
Jakobson, A., & Kikas, E. (2007). Cognitive functioning in children with and without attention-deficit/hyperactivity disorder with and without comorbid learning disabilities. Journal of Learning Disabilities, 40, 194-202.
Kamhi, A. G., & Pollock, K. E. (2005). Phonological disorders in children: Clinical decision making in assessment and intervention. Baltimore: Brookes Publishing.
Kettenring, J. R. (2006). The practice of cluster analysis. Journal of Classification, 23, 3-30.
Kramer, J. H., Knee, K., & Delis, D. C. (2000). Verbal memory impairments in dyslexia. Archives of Clinical Neuropsychology, 15, 83-93.
Kroesbergen, E. H., Van Luit, J. E., & Naglieri, J. A. (2003). Mathematical learning disabilities and PASS cognitive processes. Journal of Learning Disabilities, 36, 574-582.
Linacre, J. M. (2009). A user's guide to Winsteps Ministep Rasch-model computer programs: Program manual 3.69.0. Winsteps.com: Author.
Lorr, M. (1983). Cluster analysis for social scientists. San Francisco, CA: Jossey-Bass.
Manis, F. R., Seidenberg, M. S., & Doi, L. M. (1999). See Dick RAN: Rapid naming and the longitudinal prediction of reading subskills in first and second graders. Scientific Studies of Reading, 3, 129-157.
McCloskey, G., Perkins, L. A., & Van Divner, B. (2009). Assessment and intervention for executive functions. New York: Taylor and Francis.
McGrew, K. S., & Wendling, B. J. (2010). Cattell-Horn-Carroll cognitive-achievement relations: What we have learned from the past 20 years of research. Psychology in the Schools, 47, 651-675. doi:10.1002/pits.20497
McGrew, K. S., Werder, J., & Woodcock, R. W. (1991). WJ-R technical manual. Chicago, IL: Riverside Publishing.
McGrew, K. S., & Woodcock, R. W. (2001). Technical manual. Woodcock-Johnson III. Itasca, IL: Riverside Publishing.
McNamara, D. S., & Scott, J. L. (2001). Working memory capacity and strategy use. Memory & Cognition, 29, 10-17.
Mosier, C. I. (1943). On the reliability of a weighted composite. Psychometrika, 8, 161-168.
Muthén, L. K., & Muthén, B. O. (2011). Mplus user's guide (6th ed.). Los Angeles, CA: Muthén & Muthén.
Naglieri, J. A. (1999). Essentials of CAS assessment. Hoboken, NJ: Wiley.
Naglieri, J. A., & Gottling, S. H. (1997). Mathematics instruction and PASS cognitive processes: An intervention study. Journal of Learning Disabilities, 30, 513-520.
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development (NICHD).
Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.
Pagani, L. S., Fitzpatrick, C., & Archambault, I. (2010). School readiness and later achievement: A French Canadian replication and extension. Developmental Psychology, 46, 984-994.
Rabiner, D. L., Murray, D. W., & Schmid, L. (2004). An exploration of the relationship between ethnicity, attention problems, and academic achievement. School Psychology Review, 33, 498-509.
Raykov, T., & Marcoulides, G. A. (2011). Introduction to psychometric theory. New York: Routledge.
Reynolds, C. R., & Kamphaus, R. W. (2004). Behavior Assessment System for Children (2nd ed.). Circle Pines, MN: AGS Publishing.
Sattler, J. M. (2001). Assessment of children: Cognitive applications (4th ed.). San Diego, CA: Author.
Schneider, W. J., & McGrew, K. S. (in press). The Cattell-Horn-Carroll (CHC) model of intelligence. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment (3rd ed.). New York: Guilford Press.
Schrank, F. A., & Woodcock, R. W. (2007). WJ III normative update Compuscore® and profiles program. Itasca, IL: Riverside Publishing.
Schweizer, K. (2011). Editorial: On the changing role of Cronbach's α in the evaluation of the quality of a measure. European Journal of Psychological Assessment, 27(3), 143-144.
Singer, B. D., & Bashir, A. S. (1999). What are executive functions and self-regulation and what do they have to do with language-learning disorders? Language, Speech, and Hearing Services in Schools, 30, 265-273.
Sirin, S. R. (2005). Socioeconomic status and academic achievement: A meta-analytic review of research. Review of Educational Research, 75(3), 417-453.
Stroud, K. C., & Reynolds, C. R. (2006). School Motivation and Learning Strategies Inventory. Los Angeles: Western Psychological Services.
Swanson, H. L. (2000). Are working memory deficits in readers with learning disabilities hard to change? Journal of Learning Disabilities, 33, 551-566.
Swanson, H. L., & Berninger, V. W. (1996). Individual differences in children's working memory and writing skill. Journal of Experimental Child Psychology, 63, 358-385.
Swanson, H. L., & Jerman, O. (2006). Math disabilities: A selective meta-analysis of the literature. Review of Educational Research, 76, 249-274.
Swerdlik, M. E., Swerdlik, P., & Kahn, J. H. (2003). Technical manual: Psychological Processing Checklist. North Tonawanda, NY: Multi-Health Systems.
Swerdlik, M. E., Swerdlik, P., Kahn, J. H., & Thomas, T. (2008). Psychological Processing Checklist-Revised. North Tonawanda, NY: Multi-Health Systems.
SYSTAT 13 Statistics I-IV. Chicago, IL: SYSTAT Software, Inc.
Wechsler, D. (2003). Wechsler Intelligence Scale for Children (4th ed.). San Antonio, TX: The Psychological Corporation.
White, K. R. (1982). The relationship between socioeconomic status and academic achievement. Psychological Bulletin, 91(3), 461-481.
Woodcock, R. W. (1999). What Rasch-based scores convey about a person's test performance. In S. E. Embretson & S. L. Hershberger (Eds.), The new rules of measurement: What every psychologist and educator should know (pp. 105-127). Mahwah, NJ: Lawrence Erlbaum Associates.
Woodcock, R. W., & Dahl, M. N. (1971). A common scale for the measurement of person ability and test item difficulty (AGS Paper No. 10). Circle Pines, MN: American Guidance Service.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001a). Woodcock-Johnson III Tests of Achievement. Itasca, IL: Riverside Publishing.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001b). Woodcock-Johnson III Tests of Cognitive Abilities. Itasca, IL: Riverside Publishing.