2017–2018 Technical Manual Update
Year-End Model
December 2018

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission provided the source is cited as:

Dynamic Learning Maps Consortium. (2018, December). 2017–2018 Technical Manual Update—Year-End Model. Lawrence, KS: University of Kansas, Center for Accessible Teaching, Learning, and Assessment Systems (ATLAS).

Acknowledgements

The publication of this technical manual update builds upon the documentation presented in the 2014–2015 Technical Manual—Year-End Model and annual technical manual updates. This document represents further contributions to a body of work in the service of supporting a meaningful assessment system designed to serve students with the most significant cognitive disabilities. Hundreds of people have contributed to this undertaking. We acknowledge them all for their contributions.

Many contributors made the writing of this technical manual update possible. Dynamic Learning Maps® (DLM®) staff who made significant writing contributions to this technical manual update are listed below with gratitude.

Amy Clark, Ph.D., Associate Director for Operational Research
Brooke Nash, Ph.D., Associate Director for Psychometrics
W. Jake Thompson, Ph.D., Psychometrician
Brianna Beitling, Psychometrician Assistant

The authors wish to acknowledge Zachary Hopper, Teri Millar, Chelsea Nehler, Noelle Pablo, Courtney Rodgers, and Michelle Shipman for their contributions to this update. For a list of project staff who supported the development of this manual through key contributions to design, development, or implementation of the Dynamic Learning Maps Alternate Assessment System, please see the 2014–2015 Technical Manual—Year-End Model and the subsequent annual technical manual updates.

We are also grateful for the contributions of the members of the DLM Technical Advisory Committee, who graciously provided their expertise and feedback. Members of the Technical Advisory Committee during the 2017–2018 operational year include:

Russell Almond, Ph.D., Florida State University
Greg Camilli, Ph.D., Rutgers University
Karla Egan, Ph.D., Independent Consultant
Robert Henson, Ph.D., University of North Carolina-Greensboro
James Pellegrino, Ph.D., University of Illinois-Chicago
Edward Roeber, Ph.D., Assessment Solutions Group/Michigan Assessment Consortium
David Williamson, Ph.D., Educational Testing Service
Phoebe Winter, Ph.D., Independent Consultant

Contents

1 Introduction
1.1 Background
1.2 Assessment
1.3 Technical Manual Overview
2 Map Development
3 Item and Test Development
3.1 Items and Testlets
3.1.1 Item Writers
3.2 External Reviews
3.3 Operational Assessment Items for Spring 2018
3.4 Field Testing
3.4.1 Description of Field Tests
4 Test Administration
4.1 Overview of Key Administration Features
4.1.1 Test Windows
4.2 Administration Evidence
4.2.1 Adaptive Delivery
4.2.2 Administration Incidents
4.3 Implementation Evidence
4.3.1 User Experience With the DLM System
4.3.2 Accessibility
4.4 Conclusion
5 Modeling
5.1 Overview of the Psychometric Model
5.2 Calibrated Parameters
5.2.1 Probability of Masters Providing Correct Response
5.2.2 Probability of Non-Masters Providing Correct Response
5.3 Mastery Assignment
5.4 Model Fit
5.5 Conclusion
6 Standard Setting
7 Assessment Results
7.1 Student Participation
7.2 Student Performance
7.2.1 Overall Performance
7.2.2 Subgroup Performance
7.2.3 Linkage Level Mastery
7.3 Data Files
7.4 Score Reports
7.5 Quality Control Procedures for Data Files and Score Reports
7.6 Conclusion
8 Reliability
8.1 Background Information on Reliability Methods
8.2 Methods of Obtaining Reliability Evidence
8.2.1 Reliability Sampling Procedure
8.3 Reliability Evidence
8.3.1 Performance Level Reliability Evidence
8.3.2 Subject Reliability Evidence
8.3.3 Conceptual Area Reliability Evidence
8.3.4 EE Reliability Evidence
8.3.5 Linkage Level Reliability Evidence