Psychological Testing: Basic Concepts and Common Misconceptions, by Anne Anastasi
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
- Assessment Center Structure and Construct Validity: A New Hope
University of Central Florida, STARS Electronic Theses and Dissertations, 2004-2019 (2015). Christopher Wiese, University of Central Florida. Part of the Psychology Commons. Find similar works at: https://stars.library.ucf.edu/etd (University of Central Florida Libraries, http://library.ucf.edu). This doctoral dissertation (open access) is brought to you for free and open access by STARS; it has been accepted for inclusion in Electronic Theses and Dissertations, 2004-2019 by an authorized administrator of STARS. For more information, please contact [email protected]. STARS citation: Wiese, Christopher, "Assessment Center Structure and Construct Validity: A New Hope" (2015). Electronic Theses and Dissertations, 2004-2019, 733. https://stars.library.ucf.edu/etd/733. A dissertation by Christopher W. Wiese (B.S., University of Central Florida, 2008), submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Psychology in the College of Sciences at the University of Central Florida, Orlando, Florida, Summer Term 2015. Major Professor: Kimberly Smith-Jentsch. © 2015 Christopher Wiese.
Abstract: Assessment Centers (ACs) are a fantastic method to measure behavioral indicators of job performance in multiple diverse scenarios. Based upon a thorough job analysis, ACs have traditionally demonstrated very strong content and criterion-related validity. However, researchers have been puzzled for over three decades by the lack of evidence concerning construct validity. ACs are designed to measure critical job dimensions across multiple situational exercises, yet research has consistently revealed that different behavioral ratings within the same exercise are more strongly related to one another (exercise effects) than ratings of the same dimension across exercises (dimension effects).
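The exercise-effect versus dimension-effect comparison described in this abstract is usually documented by contrasting correlations among ratings that share an exercise with correlations among ratings that share a dimension. A minimal, hypothetical sketch of that comparison on simulated ratings (invented column names, not the dissertation's data or analysis):

```python
# Hypothetical illustration of exercise effects vs. dimension effects:
# ratings from 3 exercises x 3 dimensions, simulated so the exercise factor dominates.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200  # hypothetical number of assessees

exercise_factor = rng.normal(size=(n, 3))          # one latent factor per exercise
dimension_factor = rng.normal(size=(n, 3)) * 0.3   # weaker latent factor per dimension
cols, data = [], []
for e in range(3):
    for d in range(3):
        cols.append(f"ex{e+1}_dim{d+1}")
        data.append(exercise_factor[:, e] + dimension_factor[:, d] + rng.normal(size=n) * 0.5)
ratings = pd.DataFrame(np.column_stack(data), columns=cols)

corr = ratings.corr()
same_exercise, same_dimension = [], []
for a in cols:
    for b in cols:
        if a >= b:                      # visit each unordered pair once
            continue
        ea, da = a.split("_")
        eb, db = b.split("_")
        if ea == eb and da != db:
            same_exercise.append(corr.loc[a, b])    # heterotrait-monomethod (same exercise)
        elif da == db and ea != eb:
            same_dimension.append(corr.loc[a, b])   # monotrait-heteromethod (same dimension)

print("mean same-exercise r:", round(float(np.mean(same_exercise)), 2))
print("mean same-dimension r:", round(float(np.mean(same_dimension)), 2))
```

With data simulated this way, the same-exercise correlations come out far higher than the same-dimension correlations, which is the pattern the construct-validity literature has puzzled over.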
- Donor-Advised Fund
WELCOME. The New York Community Trust brings together individuals, families, foundations, and businesses to support nonprofits that make a difference. Whether we're celebrating our commitment to LGBTQ New Yorkers, as this cover does, or working to find promising solutions to complex problems, we are a critical part of our community's philanthropic response. (2018 Annual Report.) A word from our donors: Why The Trust? In 2018, we asked our donors, why us? Here's what they said. On simplicity and flexibility: "I value my ability to use appreciated equities to fund gifts to many different charities." "My accountant suggested The Trust because of its excellent tools for administering donations. Although my interest was driven by practical considerations, I eventually realized what an important role it plays in the City." "The Trust simplified our charitable giving." "A donor-advised fund at The Trust was the ideal solution for me and my family." On family, friends, and community: "I chose The Trust because I wanted to support my community: New York City. My parents set an example of supporting charity and teaching me to save, which led me to having appreciated stock, which I used to start my donor-advised fund." "The need to fulfill the charitable goals of a dear friend at the end of his life sent me to The Trust. It was a great decision." "Philanthropy is a family tradition and priority. My parents communicated to us the imperative, reward, and pleasure in it." "I wanted to give back, so I opened a fund in memory of my grandmother and great-grandmother."
- Construct Validity and Reliability of the Work Environment Assessment Instrument WE-10
International Journal of Environmental Research and Public Health. Article: Construct Validity and Reliability of the Work Environment Assessment Instrument WE-10. Rudy de Barros Ahrens 1,*, Luciana da Silva Lirani 2, and Antonio Carlos de Francisco 3. 1 Department of Business, Faculty Sagrada Família (FASF), Ponta Grossa, PR 84010-760, Brazil; 2 Department of Health Sciences Center, State University Northern of Paraná (UENP), Jacarezinho, PR 86400-000, Brazil, [email protected]; 3 Department of Industrial Engineering and Post-Graduation in Production Engineering, Federal University of Technology - Paraná (UTFPR), Ponta Grossa, PR 84017-220, Brazil, [email protected]. * Correspondence: [email protected]. Received: 1 September 2020; Accepted: 29 September 2020; Published: 9 October 2020.
Abstract: The purpose of this study was to validate the construct and reliability of an instrument to assess the work environment as a single tool based on quality of life (QL), quality of work life (QWL), and organizational climate (OC). The methodology tested the construct validity through Exploratory Factor Analysis (EFA) and reliability through Cronbach's alpha. The EFA returned a Kaiser-Meyer-Olkin (KMO) value of 0.917, which demonstrated that the data were adequate for the factor analysis, and a significant Bartlett's test of sphericity (χ2 = 7465.349; df = 1225; p ≤ 0.000). After the EFA, the varimax rotation method was employed, and a commonality analysis reduced the 14 initial factors to 10. Only question 30 presented commonality lower than 0.5; the other questions returned values higher than 0.5 in the commonality analysis. Regarding the reliability of the instrument, all of the questions presented reliability, with values varying between 0.953 and 0.956.
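A minimal sketch of the kind of workflow this abstract reports (KMO, Bartlett's test of sphericity, a varimax-rotated EFA with a communality check, and Cronbach's alpha), written with the factor_analyzer package on simulated item responses; this is an assumed illustration, not the authors' code or data:

```python
# Hypothetical EFA + reliability workflow resembling the one described in the abstract.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

rng = np.random.default_rng(1)
n_respondents, n_items = 300, 50
latent = rng.normal(size=(n_respondents, 10))               # 10 hypothetical factors
loadings = rng.uniform(0.4, 0.8, size=(10, n_items))
responses = latent @ loadings + rng.normal(scale=1.0, size=(n_respondents, n_items))
items = pd.DataFrame(responses, columns=[f"q{i+1}" for i in range(n_items)])

# Sampling adequacy and sphericity, the two checks quoted in the abstract.
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_model = calculate_kmo(items)
print(f"Bartlett chi2={chi_square:.1f}, p={p_value:.4f}, KMO={kmo_model:.3f}")

# Varimax-rotated EFA; items with communality below 0.5 would be flagged for review.
fa = FactorAnalyzer(n_factors=10, rotation="varimax")
fa.fit(items)
communalities = pd.Series(fa.get_communalities(), index=items.columns)
print(communalities[communalities < 0.5])

# Cronbach's alpha for the full item set, computed directly from its definition.
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha = {alpha:.3f}")
```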
- Psychology of School Learning: Views of the Learner
DOCUMENT RESUME ED 104 859 SP 009 097. AUTHOR: Bart, William M., Ed.; Wong, Martin R., Ed. TITLE: Psychology of School Learning: Views of the Learner. Volume I: Environmentalism. INSTITUTION: MSS Information Corp., New York, N.Y. PUB DATE: 74. NOTE: 249p. AVAILABLE FROM: MSS Information Corporation, 655 Madison Avenue, New York, New York 10021 ($6.25 paper, $12.00 cloth, 20 percent discount on orders of $200.00 or more). EDRS PRICE: MF-$0.76 HC-$12.05 PLUS POSTAGE. DESCRIPTORS: Behavioral Objectives; Computer Assisted Instruction; Educational Philosophy; *Educational Psychology; *Environmental Influences; Higher Education; *Learning; *Learning Specialists; Reinforcement; Teacher Education. ABSTRACT: This document is the first of three volumes presenting essays from three schools of thought regarding learning. Volume One consists of readings from psychologists, philosophers, and learning theorists concerning the view that the learner is a product primarily of environmental factors. The list of essays includes the following: (a) "Ideas and Their Origin," (b) "The Free and Happy Student," (c) "The Technology of Teaching," (d) "Treatment of Nonreading in a Culturally Deprived Juvenile Delinquent: An Application of Reinforcement Principles," (e) "Production and Elimination of Disruptive Classroom Behavior by Systematically Varying Teacher's Behavior," (f) "Learning Theory Approaches to Classroom Management: Rationale and Intervention Techniques," (g) "A Token Reinforcement Program in a Public School: A Replication and Systematic Analysis," (h) "Educational ...
- Reading Comprehension of Materials Written with Select Oral Language Patterns: A Study at Grades 2 and 4
DOCUMENT RESUME ED 036 405 24 RE 002 386. AUTHOR: Tatham, Susan Masland. TITLE: Reading Comprehension of Materials Written with Select Oral Language Patterns: A Study at Grades 2 and 4. INSTITUTION: Wisconsin Univ., Madison. Research and Development Center for Cognitive Learning. SPONS AGENCY: Office of Education (DHEW), Washington, D.C., Bureau of Research. REPORT NO: TR-86. BUREAU NO: BR-5-0216. PUB DATE: Jul 69. CONTRACT: OEC-5-10-154. NOTE: 142p. EDRS PRICE: MF-$0.75 HC-$7.20. DESCRIPTORS: Grade 2, Grade 4, *Language Patterns, *Language Research, Language Role, *Oral Expression, *Reading Comprehension, *Reading Research. ABSTRACT: To determine whether or not students in grades 2 and 4 comprehended materials written with patterns that appear frequently in their speech better than materials written with patterns that appear infrequently, two reading comprehension tests were devised by the investigator. Subjects were all second- and fourth-grade classrooms from two similar schools: 163 grade-2 students (81 girls, 82 boys) and 137 grade-4 students (69 girls, 68 boys). Test A used frequently occurring patterns from the oral language of second and fourth graders, and Test B used infrequently occurring patterns in the oral language of students from the same grades. Patterns were selected from Strickland's study (1962). Chi-square and t tests were used to analyze the data. Results indicated that (1) significantly more grade-2 and grade-4 students obtained higher scores on Test A than on Test B (p .091); (2) grade-4 students performed significantly better than grade-2 students on both tests (p .01); and (3) in general, there were no significant sex differences on either test within or across grades.
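The abstract names two analyses, a chi-square test and t tests. As a hedged illustration only, here is how those two tests might be applied to data of this shape in Python, using made-up score vectors rather than the study's data:

```python
# Hypothetical sketch of the two analyses named in the abstract (chi-square and t test).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
test_a = rng.normal(loc=32, scale=5, size=163)   # hypothetical Test A scores, grade 2
test_b = rng.normal(loc=29, scale=5, size=163)   # hypothetical Test B scores, same children

# Chi-square on counts of children scoring higher on Test A vs. Test B,
# against the 50/50 split expected if the two pattern types were equally easy.
higher_on_a = int(np.sum(test_a > test_b))
chi2, p = stats.chisquare([higher_on_a, len(test_a) - higher_on_a])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Independent-samples t test comparing a hypothetical grade-4 group with grade 2.
grade4 = rng.normal(loc=36, scale=5, size=137)
t, p = stats.ttest_ind(grade4, test_a)
print(f"t = {t:.2f}, p = {p:.4f}")
```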
- Construct Validity in Psychological Tests
CONSTRUCT VALIDITY IN PSYCHOLOGICAL TESTS. L. J. Cronbach and P. E. Meehl. ...validity seems only to have stirred the muddy waters. Portions of the distinctions we shall discuss are implicit in Jenkins' paper, "Validity for What?" (33), Gulliksen's "Intrinsic Validity" (27), Goodenough's distinction between tests as "signs" and "samples" (22), Cronbach's separation of "logical" and "empirical" validity (11), Guilford's "factorial validity" (25), and Mosier's papers on "face validity" and "validity generalization" (49, 50). Helen Peak (52) comes close to an explicit statement of construct validity as we shall present it. Validation of psychological tests has not yet been adequately conceptualized, as the APA Committee on Psychological Tests learned when it undertook (1950-54) to specify what qualities should be investigated before a test is published. In order to make coherent recommendations the Committee found it necessary to distinguish four types of validity, established by different types of research and requiring different interpretation. The chief innovation in the Committee's report was the term construct validity.* This idea was first formulated by a subcommittee (Meehl and R. ... Four Types of Validation: The categories into which the Recommendations divide validity studies are: predictive validity, concurrent validity, content validity, and construct validity. The first two of these may be considered together as criterion-oriented validation procedures. The pattern of a criterion-oriented study is familiar. The investigator is primarily interested in some criterion which he wishes to predict. He administers the test, obtains an independent criterion measure on the ...
- Questionnaire Validity and Reliability
Questionnaire Validity and Reliability. Department of Social and Preventive Medicine, Faculty of Medicine. Outline: introduction; what is validity and reliability?; types of validity and reliability; how do you measure them?; types of sampling methods; sample size calculation; G*Power (power analysis). Research is the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions. In the broadest sense of the word, research includes the gathering of data in order to generate information and establish facts for the advancement of knowledge. Step 1: Define the research problem. Step 2: Develop a research plan and research design. Step 3: Define the variables and instrument (validity and reliability). Step 4: Sampling and collecting data. Step 5: Analysing data. Step 6: Presenting the findings. A questionnaire is a technique for collecting data in which a respondent provides answers to a series of questions; it is the vehicle used to pose the questions that the researcher wants respondents to answer. The validity of the results depends on the quality of these instruments. Good questionnaires are difficult to construct; bad questionnaires are difficult to analyze. Identify the goal of your questionnaire: What kind of information do you want to gather? What is your main objective? Is a questionnaire the best way to go about collecting this information? How to obtain valid information: ask purposeful questions; ask concrete questions; use time periods ...
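The outline above closes with sample size calculation via G*Power. A minimal sketch of the same kind of a-priori power analysis in Python, using statsmodels rather than G*Power and an assumed effect size and error rates chosen purely for illustration:

```python
# Hypothetical a-priori sample size calculation, analogous to a G*Power analysis:
# respondents per group needed to detect a medium effect (d = 0.5) with
# alpha = 0.05 and 80% power in a two-sample t test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative="two-sided")
print(f"required sample size per group: {n_per_group:.1f}")  # roughly 64 per group
```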
- Psychology 230: History, Systems, & Theories
Psychology 230: History, Systems, & Theories, Fall 2015. Class meets on Monday and Wednesday from 11:45 am to 12:55 pm in PPHAC 235. Overview: historical origins of contemporary psychology, including structuralism, associationism, functionalism, behaviorism, Gestalt, and psychoanalysis, as well as recent developments in the field. Prerequisite: Psychology 120. Dana S. Dunn, Ph.D., Department of Psychology, Hurd Academic Complex Room 231. Office phone: (610) 861-1562. E-mail: [email protected]. Fall 2015 office hours: Monday by appointment; Tuesday 1:30-3 pm; Wednesday by appointment; Thursday 1:30-3 pm; Friday 8:30-10:30 am. Course goals: 1. To introduce you to the historical development of the scientific study of psychology. 2. To show you where psychology fits in the history of ideas in Western thought. 3. To understand key issues, themes, and controversies that shaped (and continue to shape) the contemporary discipline. Required books: Freud, S. (1989). On dreams. New York: Norton. Leahey, T. H. (2013). A history of psychology: From antiquity to modernity (7th ed.). New York, NY: Pearson. Skinner, B. F. (1976). Walden Two. New York: Macmillan. Course requirements: 1. Class participation and attendance. This course requires constant attendance, active participation, and critical discussion of the readings. I expect that you will attend each and every class, and that you will come prepared to talk about, and question, what you read. Class participation is worth 15% of your final course grade. Please note that I will be taking roll, so your absence from class will affect your participation grade (i.e., if you are not in class, you cannot contribute to discussion).
- How to Create Scientifically Valid Social and Behavioral Measures on Gender Equality and Empowerment
April 2020. EMERGE Measurement Guidelines Report 2: How to Create Scientifically Valid Social and Behavioral Measures on Gender Equality and Empowerment. Background on the EMERGE project: EMERGE (Evidence-based Measures of Empowerment for Research on Gender Equality), created by the Center on Gender Equity and Health at UC San Diego, is a project focused on the quantitative measurement of gender equality and empowerment (GE/E) to monitor and/or evaluate health and development programs, and national or subnational progress on UN Sustainable Development Goal (SDG) 5: Achieve Gender Equality and Empower All Girls. For the EMERGE project, we aim to identify, adapt, and develop reliable and valid quantitative social and behavioral measures of GE/E based on established principles and methodologies of measurement, with a focus on 9 dimensions of GE/E: psychological, social, economic, legal, political, health, household and intra-familial, environment and sustainability, and time use/time poverty. Social and behavioral measures across these dimensions largely come from the fields of public health, psychology, economics, political science, and sociology. (Box 1, defining gender equality and empowerment (GE/E): gender equality is a form of social equality in which one's rights, responsibilities, and opportunities are not affected by gender considerations; gender empowerment is a type of social empowerment geared at improving one's autonomy and self-determination within a particular culture or context.) Objective of this report: to provide guidance on the creation and psychometric testing of GE/E scales. There are many good GE/E measures, and we highly recommend using previously validated measures when possible.
- Ross A. McFarland Papers, Vol. 1
Ross A. McFarland Collection in Aerospace Medicine and Human Factors Engineering: Catalog of the Library. Mary Ann Hoffman, Fordham Health Sciences Library, Wright State University School of Medicine, Dayton, Ohio, 1987. Fordham Library Publication No. 2. ©1987. Ross A. McFarland, 1901-1976. Contents: Preface, vi; Introduction, vii; Acknowledgements, ix; Catalog, 1; Videocassettes, 113; Journals, 114; Technical Reports Series, 117; Name Index, 119; Subject Index, 146. Preface: The Ross A. McFarland Collection in Aerospace Medicine and Human Factors Engineering at the Fordham Health Sciences Library, Wright State University School of Medicine, provides an unparalleled scientific resource and data base for physicians, life scientists, engineers, and others working at the leading edge of human progress, especially those in the areas of aviation, space, and advanced ground transportation. The Collection is regularly consulted by those currently pioneering these fields and is an invaluable source of information constituting the base upon which future progress is being constructed. I met Dr. McFarland in 1958 and came to know him well. I observed first-hand his pioneering concepts in human factors, enhanced immeasurably by his articulate communications. Starting in the 1930's, he almost singlehandedly launched the human factors effort in aviation, directly collecting data on airline pilot fatigue and other major operational flight safety aspects. Following Dr. McFarland's untimely death in 1976, an event widely recognized as taking from us the father of aerospace human factors, his wife, Mrs. Emily McFarland, decided to deed his library and scientific papers to Wright State University School of Medicine, Fordham Health Sciences Library. This gift consisted of more than 6,000 print items and approximately 400 linear feet of scientific manuscripts, unpublished reports, research data, and correspondence, covering 50 years of professional work and research by Dr. ...
- Reliability & Validity
Another bit of jargon: Reliability & Validity. Topics: more is better; properties of a "good measure" (standardization, reliability, validity); reliability (inter-rater, internal, external); validity (criterion-related, face, content, construct). In sampling, we are trying to represent a population of individuals: we select participants, and the resulting sample of participants is intended to represent the population. In measurement, we are trying to represent a domain of behaviors: we select items, and the resulting scale/test of items is intended to represent the domain. For both, "more is better": more gives greater representation. Whenever we've considered research designs and statistical conclusions, we've always been concerned with sample size. We know that larger samples (more participants) lead to more reliable estimates of the mean and standard deviation, r, F, and χ2, and to more reliable statistical conclusions, quantified as fewer Type I and II errors. The same principle applies to scale construction: "more is better," but now it applies to the number of items comprising the scale. More (good) items lead to a better scale: they more adequately represent the content/construct domain and provide a more consistent total score (a respondent can change more items before the total changes much). Desirable properties of psychological measures: reliability (agreement or consistency) and interpretability of individual and group scores. Inter-rater or inter-observer reliability: do multiple observers/coders score an item the same way? Critical whenever using subjective measures ...
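A small sketch of why "more (good) items" yields a more reliable scale: the Spearman-Brown prophecy formula projects how reliability rises as a test is lengthened with comparable items. The numbers below are illustrative, not from the slides:

```python
# Spearman-Brown prophecy: projected reliability of a test lengthened by a factor k,
# given the reliability of the original test. Illustrates "more items is better".
def spearman_brown(reliability: float, k: float) -> float:
    """Projected reliability when test length is multiplied by k."""
    return k * reliability / (1 + (k - 1) * reliability)

base = 0.60  # hypothetical reliability of a short 10-item scale
for k in (1, 2, 3, 4):
    print(f"{10 * k:>2} items -> projected reliability {spearman_brown(base, k):.2f}")
# 10 items -> 0.60, 20 -> 0.75, 30 -> 0.82, 40 -> 0.86
```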
- An Investigation of the Air Force Officer Qualifying Test (Warren B. Shull)
AN INVESTIGATION OF THE AIR FORCE OFFICER QUALIFYING TEST. Warren B. Shull. A dissertation submitted to the Graduate School of Bowling Green State University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, August 1975.
Abstract: The Air Force Reserve Officers' Training Corps (AFROTC) currently administers the Air Force Officer Qualifying Test (AFOQT) to all applicants for AFROTC scholarships and candidates for commissioning. This test contains five composites: pilot, navigator-technical, officer quality, verbal, and quantitative. If a significant relationship exists between these scores and other indicators, AFROTC could conduct a less extensive aptitude testing program at a substantial savings. Research was conducted to test the null hypotheses that there is no significant correlation between the various composite scores of the AFOQT as dependent variables and the various subscores and total/composite scores of the Scholastic Aptitude Test (SAT) and the American College Test (ACT) as independent variables. The group of persons who applied for four-year AFROTC scholarships during the 1973-74 academic year was used as the population from which to sample. The data collected and analyzed in this study indicated a significant correlation between each composite of the AFOQT and both the verbal and mathematics subscores and the total score of the SAT, as well as one or more subscores and the composite score of the ACT. Each AFOQT composite except the pilot composite can be adequately predicted from either the individual's SAT or ACT score.
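The abstract above describes simple correlational hypothesis tests between AFOQT composites and SAT/ACT scores. As a hedged sketch only, this is how such correlations and their significance might be computed on made-up data (not the dissertation's data or variable names):

```python
# Hypothetical sketch of the correlational analysis the abstract describes:
# Pearson r between each made-up AFOQT composite and an SAT total score.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 150                                          # hypothetical number of applicants
sat_total = rng.normal(1100, 120, size=n)
composites = {
    "verbal": 0.7 * sat_total + rng.normal(0, 80, size=n),
    "quantitative": 0.6 * sat_total + rng.normal(0, 90, size=n),
    "pilot": 0.1 * sat_total + rng.normal(0, 110, size=n),  # weakly related, like the pilot composite
}
for name, scores in composites.items():
    r, p = stats.pearsonr(scores, sat_total)
    print(f"{name:>12}: r = {r:.2f}, p = {p:.4f}")
```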