Analyzing Test Content Using Cluster Analysis and Multidimensional Scaling

Stephen G. Sireci and Kurt F. Geisinger, Fordham University

Applied Psychological Measurement, Vol. 16, No. 1, March 1992, pp. 17-31

A new method for evaluating the content representation of a test is illustrated. Item similarity ratings were obtained from content domain experts in order to assess whether their ratings corresponded to item groupings specified in the test blueprint. Three expert judges rated the similarity of items on a 30-item multiple-choice test of study skills. The similarity data were analyzed using a multidimensional scaling (MDS) procedure followed by a hierarchical cluster analysis of the MDS stimulus coordinates. The results indicated a strong correspondence between the similarity data and the arrangement of items as prescribed in the test blueprint. The findings suggest that analyzing item similarity data with MDS and cluster analysis can provide substantive information pertaining to the content representation of a test. The advantages and disadvantages of using MDS and cluster analysis with item similarity data are discussed. Index terms: cluster analysis, content validity, multidimensional scaling, similarity data, test construction.

The term "content validity" traditionally has been used to refer to how well the items on a test represent the underlying domain of skill or knowledge tested (Thorndike, 1982). However, use of this term has been criticized by several theorists who describe validity as a unitary concept (Fitzpatrick, 1983; Guion, 1977; Messick, 1975, 1989a, 1989b; Tenopyr, 1977). Content validity has become controversial because many current test specialists define validity in terms of inferences derived from test scores, but studies of content validation rarely employ test score data. Rather, a test's content is usually "validated" through more subjective methods, such as ratings of test items by content experts (Osterlind, 1989).

In order to retain the importance of ensuring content domain representation and at the same time to avoid use of the term "validity," several test specialists have suggested replacing "content validity" with a more technically correct term. Messick (1975, 1980) suggested the terms "content relevance" and "content representation," Guion (1978) offered the term "content domain sampling," and Fitzpatrick (1983) recommended use of the term "content representativeness." Thus, regardless of the terminology employed, the ability of a test to represent its underlying content domain continues to be an issue of paramount importance in test construction.

The present paper presents and explores a new approach designed to evaluate the content representation of a test. Because item response data are not used in this approach, the terms "content representation" and "content representativeness" are used to encompass the psychometric concerns previously attributed to content validity.

Previous Methods of Evaluating Test Content

Evaluations of test content can be classified as either subjective or empirical. Subjective methods employ subject matter experts (SMEs) to evaluate and rate the relevance and representativeness of test items to the domain of knowledge tested. Examples of subjective methods for evaluating the content representation of a test are provided by Hambleton (1980, 1984), Lawshe (1975), and Morris and Fitz-Gibbon (1978). Hambleton's (1980, 1984) method provides an item-objective congruence index that is appropriate for criterion-referenced tests in which each item is linked to a single objective. This index reflects an averaging of SME ratings regarding the extent to which items measure their specified objective. Lawshe's (1975) procedure results in a content validity index that represents an averaging of SME item relevance ratings for items retained on a test after the review process. The procedure developed by Morris and Fitz-Gibbon (1978) provides several indices that reflect an averaging of SME judgments regarding the objectives measured by each item, the relevance of these objectives to the purpose of the testing, the appropriateness of the item formats, and the appropriateness of the expected item difficulties.
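For concreteness, the index yielded by Lawshe's procedure can be written out; the formula below is standard background supplied here for illustration and does not appear in the passage above. Each SME classifies each item as essential or not essential to the domain, and an item's content validity ratio is

\[ \text{CVR} = \frac{n_e - N/2}{N/2}, \]

where n_e is the number of SMEs rating the item essential and N is the total number of SMEs. The content validity index reported for the test is then the mean CVR of the items retained after the review process.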
Rather than relying on subjective evaluations of test items, some test theorists recommend empirical analyses of item response data to discover the underlying content structure. Empirical studies of test content include applications of multidimensional scaling (MDS) and cluster analysis (Napior, 1972; Oltman, Stricker, & Barrows, 1990) and applications of factor analysis (Cattell, 1957). These analytic procedures result in dimensions, factors, and clusters that are presumed to be relevant to the content domains measured by the test. Davison (1985) reviewed applications of factor analysis and MDS to test intercorrelations. His review, accompanied by Monte Carlo comparisons of the two procedures, revealed that MDS often led to a more parsimonious representation of test structure than did factor analysis. This finding was explained by the presence of item difficulty factors that appeared in the factor analyses but did not emerge as dimensions in the MDS solutions.

The subjective methods of evaluating test content have been criticized severely due to their lack of practicality. Crocker, Miller, and Franks (1989) and Thorndike (1982) point out that these techniques are rarely used in practice. One reason for this lack of application is that subjective methods tend to support the content structure of a test implicitly, because presenting judges with the content objectives of the test may bias their judgments by imposing an external structure on their ratings. Indeed, both Crocker et al. (1989) and Osterlind (1989) recommended that the judges not be informed of the item-objective specifications of the test blueprint. Furthermore, Crocker et al. pointed out that item ratings often differ due to minor changes in the wording of the directions to the judges. Clearly, it would be beneficial to modify these procedures to avoid imposing the test blueprint on the content domain experts' judgments and to gain economy of time, money, and personnel.

The empirical methods of content assessment have also been criticized. These criticisms stem from the presence of content-irrelevant factors associated with item response data (Green, 1983). The ability of a test to represent its corresponding content domain is a quality inherent in the test, independent of examinee responses to the items. Item difficulty, the ability level and variability of the examinee population, motivation, guessing, differential item functioning, and social desirability are variables that may affect the results of item response analyses but are irrelevant to assessment of content representation. Analyses employing test response data allow the performance of the tested population to determine the relationships between the test items while ignoring inherent item characteristics. Although such analyses may be relevant in evaluations of construct validity or criterion-related studies, they are not central to evaluations of test content (Messick, 1989b).
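The kind of item-response-based analysis discussed above can be made concrete with a brief sketch. It is illustrative only: the simulated 0/1 response matrix, the use of Pearson inter-item correlations, and the conversion of correlations to dissimilarities as 1 - r are assumptions made for the example, not details drawn from the studies cited.

```python
import numpy as np
from sklearn.manifold import MDS

# Illustrative scored responses: rows = examinees, columns = items (0/1).
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(200, 30))

# Inter-item Pearson correlations computed from the response data.
corr = np.corrcoef(responses, rowvar=False)

# One common (assumed) conversion of correlations to dissimilarities.
dissim = 1.0 - corr
np.fill_diagonal(dissim, 0.0)

# Two-dimensional MDS of the inter-item dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords.shape)  # (30, 2): one point per item
```

Because the input here is examinee response data, any configuration such an analysis recovers reflects the examinee-dependent influences noted above (e.g., item difficulty and ability variability), which is precisely the criticism raised against empirical approaches to content assessment.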
A pseudo-empirical study conducted by Tucker (1961) used factor analysis to evaluate test content, yet avoided the use of item response data. Tucker factor analyzed the ratings provided by the SMEs of the relevance of test items to the content domain tested. Two factors were identified: The first factor was interpreted as "a measure of general approval of the sample items" (p. 584). The second factor was interpreted as a measure of the differences between two groups of judges regarding which item types they considered more relevant (recognition items or reasoning items). Through this procedure, Tucker avoided the problems associated with factor analyzing dichotomous test data. However, his study did not directly evaluate the content areas comprising the test. The items were not rated in terms of their relevance to the specific content areas of the test, and the arrangement of items in the test blueprint was not directly evaluated.

The present study borrows from Tucker (1961) in that subjective data obtained from SMEs were employed to evaluate test content. It deviates from Tucker's method by altering the instructions given to the SMEs and analyzing the data using MDS and cluster analysis.

Three judges were selected to evaluate the test items. Two of the judges had formerly taught a Study Skills course and were selected for their knowledge of the subject domain. The third judge, also familiar with the subject domain, was a psychometrician with many years of experience in the construction and evaluation of educational tests. All the judges were ignorant of the test blueprint.

Procedure
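By way of illustration, the analysis pipeline named in the abstract — MDS of the judges' pooled item similarity ratings, followed by a hierarchical cluster analysis of the resulting MDS stimulus coordinates — might be sketched as follows. The simulated ratings, the averaging across judges, the similarity-to-dissimilarity conversion, the use of nonmetric MDS in three dimensions, and the Ward linkage are all assumptions made for this sketch, not the authors' reported choices.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

n_items = 30

# Illustrative data: three judges' pairwise similarity ratings for the
# 30 items, e.g., on a 1 (very dissimilar) to 9 (very similar) scale.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 10, size=(3, n_items, n_items)).astype(float)

# Symmetrize each judge's matrix, then average across judges.
ratings = (ratings + ratings.transpose(0, 2, 1)) / 2
mean_similarity = ratings.mean(axis=0)

# Convert pooled similarities to dissimilarities (an assumed linear conversion).
dissim = mean_similarity.max() - mean_similarity
np.fill_diagonal(dissim, 0.0)

# Nonmetric MDS of the item dissimilarities (dimensionality chosen for illustration).
mds = MDS(n_components=3, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

# Hierarchical cluster analysis of the MDS stimulus coordinates.
tree = linkage(coords, method="ward")
clusters = fcluster(tree, t=4, criterion="maxclust")  # e.g., four content areas
print(clusters)
```

The item clusters obtained in this way would then be compared with the item groupings specified in the test blueprint, which is the comparison the study reports.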
