Tilburg University Research Portal

Using Scalability Coefficients and Conditional Association to Assess Monotone Homogeneity
Straat, J. H.
Publication date: 2012

Citation for published version (APA):
Straat, J. H. (2012). Using scalability coefficients and conditional association to assess monotone homogeneity. Ridderprint.

Using Scalability Coefficients and Conditional Association to Assess Monotone Homogeneity

Printed by Ridderprint BV, Ridderkerk
ISBN/EAN: 978-90-5335-598-5
Copyright © 2012 J. Hendrik Straat. All rights reserved.

Dissertation submitted to obtain the degree of doctor at Tilburg University, under the authority of the rector magnificus, prof. dr. Ph. Eijlander, to be defended in public before a committee appointed by the college for doctoral degrees, in the auditorium of the University, on Friday 23 November 2012 at 14:15, by Johannes Hendrikus Straat, born on 24 July 1983 in Eindhoven.

Doctoral committee:
Supervisor: prof. dr. K.
Sijtsma
Co-supervisor: dr. L. A. van der Ark
Other members: prof. dr. J. K. L. Denollet, prof. dr. W. J. Heiser, prof. dr. R. R. Meijer, dr. A. A. Béguin, dr. W. H. M. Emons

Contents

1 Introduction
  1.1 Outline of the thesis
2 Comparing Optimization Algorithms for Item Selection in Mokken Scale Analysis
  2.1 Introduction
  2.2 Mokken's Nonparametric Item Response Theory
  2.3 New Item Selection Methods
  2.4 Comparing Three Item Selection Methods
  2.5 Real Data Example
  2.6 Discussion
  Appendix 1
  Appendix 2
3 Multi-Method Analysis of the Internal Structure of the Type D Scale-14 (DS14)
  3.1 Introduction
  3.2 Factor Analysis Results for Type D
  3.3 Mokken Scale Analysis
  3.4 Method
  3.5 Results
  3.6 Discussion
4 Methodological Artifacts in Dimensionality Assessment of the Hospital Anxiety and Depression Scale (HADS)
  4.1 Introduction
  4.2 Hierarchical Structure of Psychological Attributes
  4.3 Method
  4.4 Results
  4.5 Discussion
5 Minimum Sample Size Requirements for Mokken Scale Analysis
  5.1 Introduction
  5.2 Monotone Homogeneity Model
  5.3 Method
  5.4 Results
  5.5 Discussion
  Appendix
6 Using Conditional Association to Identify Locally Independent Item Sets
  6.1 Introduction
  6.2 Conditional Association
  6.3 A Procedure to Identify a Locally Independent Item Set
  6.4 Comparison of Methods Assessing Fit to UMLVM
  6.5 Empirical Example: The Type D Scale-14
  6.6 Discussion
  Appendix
7 Epilogue
References
Summary
Samenvatting (Summary in Dutch)
Dankwoord (Acknowledgements)

Chapter 1

Introduction

In the social and behavioral sciences, researchers commonly use tests and questionnaires to measure attributes such as cognitive abilities (including aspects of intelligence), personality traits, and attitudes.
Measurement of these attributes is liable to more random measurement error than measurement of, for example, temperature or distance, and also to systematic but undesirable influences such as social desirability, tiredness, and cheating. Consequently, a single item does not provide a reliable and valid measurement of the attribute, and researchers have to construct multiple items to control for random measurement error and to cover the different aspects of the attribute well. The responses to the items contain information about the attribute that a researcher intends to measure. If the items have been constructed adequately, test takers with a higher attribute score (e.g., being more verbally intelligent, more extraverted, or having a more positive attitude towards euthanasia) are expected to score higher on an item than test takers with lower attribute scores. Researchers are predominantly interested in the positions of test takers on the scale for the attribute of interest rather than their scores on single items. Psychometric measurement models summarize a pattern of item scores into a score on a latent variable that represents the attribute. In validity research, researchers have to evaluate whether the latent variable is a valid representation of the attribute of interest and whether the latent variable covers all relevant aspects of the attribute. Measurement models restrict the relation between the item scores and the latent variable. A general class of measurement models is item response theory (IRT; Embretson & Reise, 2000; Van der Linden & Hambleton, 1997). The evaluation of the nonparametric IRT model known as the monotone homogeneity model (Mokken, 1971; Molenaar, 1997) in real data is the central theme of this study. The monotone homogeneity model is also known as the unidimensional monotone latent variable model (Holland & Rosenbaum, 1986). Three assumptions define the monotone homogeneity model: unidimensionality, local independence, and monotonicity.
The unidimensionality assumption posits that the items in the test measure only one latent variable. This assumption reflects the ideal that items should measure one attribute so as to simplify the interpretation of test performance. The local independence assumption posits that the unidimensional latent variable is the only source of association between the items, so that the multivariate distribution of the item scores conditional on the latent variable equals the product of the univariate conditional distributions. The local independence assumption implies that for test takers with the same score on the latent variable, the item scores are independent; hence, local independence. In combination with unidimensionality, local independence ascertains that the test measures one latent variable and nothing else. The monotonicity assumption encompasses the intuitively appealing idea that the expected item score is a monotone nondecreasing function of the latent variable. This means that as the latent-variable value increases, for each item the expected score remains the same or increases. Many well-known parametric IRT models assume unidimensionality and local independence and, in addition, assume a parametric function such as the logistic to describe the relation between the expected item score and the latent variable. These parametric IRT models are special cases of the nonparametric monotone homogeneity model, which only restricts the relation between the expected item score and the latent variable to be nondecreasing. Examples of parametric IRT models for dichotomous items are the Rasch model (Rasch, 1960), the 2-parameter logistic model, and the 3-parameter logistic model (Birnbaum, 1968); examples of parametric IRT models for polytomous items are the partial credit model (Masters, 1982) and the graded response model (Samejima, 1969). Parametric IRT models thus reject items whose relation with the latent variable is monotone but not logistic.
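The monotonicity and local independence assumptions can be illustrated with a small simulation. The following is a minimal Python sketch, not part of the thesis: it assumes Rasch items with illustrative difficulty values, checks that the logistic item response function is nondecreasing in the latent variable, and shows that two item scores are (nearly) uncorrelated in a group of test takers who all share the same latent-variable value.

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_prob(theta, b):
    """P(X = 1 | theta) under the Rasch model: a logistic, hence monotone, item response function."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Monotonicity: the expected item score is nondecreasing in the latent variable.
thetas = np.linspace(-3, 3, 13)
probs = rasch_prob(thetas, b=0.5)
assert np.all(np.diff(probs) >= 0)

# Local independence: conditional on theta, item scores are independent,
# so the sample covariance of two items at a fixed theta is close to zero.
theta_fixed = 0.0
n = 200_000
x1 = rng.random(n) < rasch_prob(theta_fixed, b=-0.5)
x2 = rng.random(n) < rasch_prob(theta_fixed, b=0.5)
cov = np.cov(x1, x2)[0, 1]
print(f"within-theta covariance: {cov:.4f}")  # close to 0
```

In observed (marginal) data the two items would correlate positively, because both depend on the latent variable; the near-zero covariance appears only after conditioning on it.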
Because the monotone homogeneity model is less stringent with respect to the relation between the item scores and the latent variable, if different models could select items from a large pool, the monotone homogeneity model would select more items than the parametric IRT models that are special cases of it. The ability to retain many items in a scale may be considered a desirable property of a measurement model. Moreover, because the unidimensional parametric IRT models are special cases of the monotone homogeneity model, assessment of the fit of the monotone homogeneity model to the data also provides information about a parametric model's data fit. An important question is how the latent variable can summarize a person's pattern of item scores. The monotone homogeneity model justifies the use of the easily interpretable total score, which is the unweighted sum of the item scores, as an ordinal estimator of the latent variable. Thus, test takers with a higher total score on average have a higher score on the latent variable than test takers with a lower total score. Grayson (1988; also, Huynh, 1994) showed that the ordering of the latent variable by the total score holds for dichotomous items, but Hemker, Sijtsma, Molenaar, and Junker (1997) proved that for polytomous items the total score is, strictly speaking, not an ordinal estimator of the latent variable. However, Van der Ark (2005) demonstrated that violations of ordinal measurement of the latent variable are rare. Moreover, Van der Ark and Bergsma (2010) proved that for polytomous items a weaker form of the ordering of the latent variable by the total score holds. Hence, the monotone homogeneity model suffices as a measurement model when the measurement purpose requires ordinal measurement, for example, to identify the most capable applicants in personnel selection (ranking of total scores) or patients who need a particular treatment more than others (dichotomization of the total-score scale).
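The ordinal-estimation result for dichotomous items can be made concrete with a simulation. Below is a Python sketch under assumed conditions (Rasch items with illustrative difficulties and a standard normal latent variable; the sample size is arbitrary): it simulates item scores that satisfy the monotone homogeneity model and shows that groups with higher total scores have, on average, higher latent-variable values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 100_000, 10
theta = rng.standard_normal(n_persons)        # latent variable
b = np.linspace(-1.5, 1.5, n_items)           # illustrative Rasch difficulties

# Dichotomous item scores generated under the Rasch model (a special case
# of the monotone homogeneity model).
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
scores = (rng.random((n_persons, n_items)) < p).astype(int)
total = scores.sum(axis=1)

# Mean latent value per total-score group: higher total scores go with
# higher latent values, illustrating the stochastic ordering result.
group_means = [theta[total == s].mean() for s in range(n_items + 1)]
assert group_means[0] < group_means[5] < group_means[10]
print([round(m, 2) for m in group_means])
```

The printed group means increase from clearly negative to clearly positive values, which is exactly the ordinal (not interval) information the total score is claimed to carry.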
This thesis discusses methods that use observable consequences of the monotone homogeneity model to assess the fit of the model to the data and the measurement quality of items. Observable consequences provide necessary but not sufficient conditions for the measurement model. Hence, observable consequences are particularly useful to investigate whether one or more items are inconsistent with the monotone homogeneity model. The property of conditional association (Holland & Rosenbaum, 1986; Rosenbaum, 1984) defines a large set of observable consequences of the monotone homogeneity model.
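As a preview of how such an observable consequence can be checked in data, the following Python sketch illustrates one instance of conditional association: under the monotone homogeneity model, the covariance of any item pair, conditional on the rest score computed from the remaining items, is nonnegative. The data-generating model and all numerical values are illustrative assumptions, not the thesis's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 50_000, 6
theta = rng.standard_normal(n_persons)
b = np.linspace(-1, 1, n_items)               # illustrative Rasch difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random((n_persons, n_items)) < p).astype(int)

# Conditional association (Holland & Rosenbaum, 1986): under the monotone
# homogeneity model, Cov(X_i, X_j | R) >= 0, where R is the rest score
# on the items other than i and j.
i, j = 0, 1
rest = X[:, 2:].sum(axis=1)
covs = []
for r in range(n_items - 2 + 1):
    mask = rest == r
    if mask.sum() > 100:                      # skip near-empty rest-score groups
        covs.append(np.cov(X[mask, i], X[mask, j])[0, 1])
print([round(c, 3) for c in covs])            # all nonnegative up to sampling error
```

A markedly negative conditional covariance in real data would flag the item pair as inconsistent with the monotone homogeneity model, which is how such observable consequences are used diagnostically.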