Intelligence Testing and Cultural Diversity: Concerns, Cautions, and Considerations
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Assessment Center Structure and Construct Validity: A New Hope
Christopher W. Wiese (B.S., University of Central Florida, 2008). Doctoral dissertation, Department of Psychology, College of Sciences, University of Central Florida, Orlando, Florida, Summer Term 2015. Major Professor: Kimberly Smith-Jentsch. STARS citation: Wiese, Christopher, "Assessment Center Structure and Construct Validity: A New Hope" (2015). Electronic Theses and Dissertations, 2004-2019, 733. https://stars.library.ucf.edu/etd/733
Abstract: Assessment Centers (ACs) are a method for measuring behavioral indicators of job performance across multiple, diverse scenarios. Because they are built on a thorough job analysis, ACs have traditionally demonstrated strong content and criterion-related validity. However, researchers have been puzzled for over three decades by the lack of evidence for their construct validity. ACs are designed to measure critical job dimensions across multiple situational exercises, yet research has consistently found that ratings of different dimensions within the same exercise correlate more strongly with one another (exercise effects) than ratings of the same dimension across exercises (dimension effects).
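The exercise-effect pattern described in this abstract can be illustrated with a small, hypothetical calculation: given a matrix of ratings for each dimension within each exercise, compare the average correlation among different dimensions rated in the same exercise with the average correlation of the same dimension rated across exercises. The sketch below is purely illustrative; the column names and simulated data are invented and do not come from Wiese's study.

```python
# Minimal, hypothetical illustration of "exercise effects" vs. "dimension effects".
# Columns follow an invented "Exercise_Dimension" naming scheme; none of this
# data comes from the dissertation itself.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_candidates = 200

# Simulate ratings where shared exercise variance outweighs shared dimension variance.
exercises, dimensions = ["E1", "E2", "E3"], ["Communication", "Planning", "Influence"]
exercise_factor = {e: rng.normal(size=n_candidates) for e in exercises}
dimension_factor = {d: rng.normal(size=n_candidates) for d in dimensions}

ratings = pd.DataFrame({
    f"{e}_{d}": 0.7 * exercise_factor[e] + 0.3 * dimension_factor[d]
               + 0.5 * rng.normal(size=n_candidates)
    for e in exercises for d in dimensions
})

corr = ratings.corr()

def mean_corr(pairs):
    return np.mean([corr.loc[a, b] for a, b in pairs])

same_exercise = [(f"{e}_{d1}", f"{e}_{d2}")
                 for e in exercises
                 for i, d1 in enumerate(dimensions) for d2 in dimensions[i + 1:]]
same_dimension = [(f"{e1}_{d}", f"{e2}_{d}")
                  for d in dimensions
                  for i, e1 in enumerate(exercises) for e2 in exercises[i + 1:]]

print("Mean correlation, different dimensions within one exercise (exercise effect):",
      round(mean_corr(same_exercise), 2))
print("Mean correlation, same dimension across exercises (dimension effect):",
      round(mean_corr(same_dimension), 2))
```

With these simulated weights the within-exercise correlations come out substantially higher than the same-dimension correlations, which is the puzzle the dissertation addresses.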
Construct Validity and Reliability of the Work Environment Assessment Instrument WE-10
International Journal of Environmental Research and Public Health, article. Rudy de Barros Ahrens (Department of Business, Faculty Sagrada Família (FASF), Ponta Grossa, PR 84010-760, Brazil), Luciana da Silva Lirani (Department of Health Sciences Center, State University Northern of Paraná (UENP), Jacarezinho, PR 86400-000, Brazil), and Antonio Carlos de Francisco (Department of Industrial Engineering and Post-Graduation in Production Engineering, Federal University of Technology - Paraná (UTFPR), Ponta Grossa, PR 84017-220, Brazil). Received: 1 September 2020; accepted: 29 September 2020; published: 9 October 2020.
Abstract: The purpose of this study was to validate the construct validity and reliability of an instrument that assesses the work environment as a single tool based on quality of life (QL), quality of work life (QWL), and organizational climate (OC). Construct validity was tested through Exploratory Factor Analysis (EFA) and reliability through Cronbach's alpha. The EFA returned a Kaiser-Meyer-Olkin (KMO) value of 0.917, indicating that the data were adequate for factor analysis, and a significant Bartlett's test of sphericity (χ² = 7465.349; df = 1225; p ≤ 0.000). After the EFA, the varimax rotation method was employed and a communality analysis reduced the 14 initial factors to 10. Only question 30 presented a communality lower than 0.5; the other questions returned values higher than 0.5. Regarding the reliability of the instrument, all of the questions were reliable, with values varying between 0.953 and 0.956.
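For readers who want to reproduce this kind of analysis, the sketch below shows how the reported statistics (KMO, Bartlett's test of sphericity, a varimax-rotated EFA with a communality cutoff, and Cronbach's alpha) can be computed in Python with the factor_analyzer package. The `responses` data frame is a placeholder respondents-by-items matrix, not the WE-10 dataset, and the exact numbers will differ from the paper's.

```python
# Sketch of the psychometric workflow described above, assuming a respondents-by-items
# DataFrame called `responses` (placeholder data here, not the WE-10 dataset).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

rng = np.random.default_rng(1)
responses = pd.DataFrame(rng.integers(1, 6, size=(300, 50)),
                         columns=[f"q{i + 1}" for i in range(50)])

# Sampling adequacy and sphericity (the paper reports KMO = 0.917 and a significant Bartlett test).
kmo_per_item, kmo_total = calculate_kmo(responses)
chi_square, p_value = calculate_bartlett_sphericity(responses)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

# Varimax-rotated EFA; items with communality below 0.5 would be flagged for removal.
fa = FactorAnalyzer(n_factors=10, rotation="varimax")
fa.fit(responses)
low_communality = responses.columns[fa.get_communalities() < 0.5]
print("Items with communality < 0.5:", list(low_communality))

# Cronbach's alpha computed directly from its definition.
def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")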
Writing Culture from Within
Writing Culture from Within: Reflections on Endogenous Ethnography. Rob van Ginkel, University of Amsterdam.
The days when anthropology was automatically associated with accounts of 'primitive peoples' in faraway lands are gone. Though the confrontation and dialogue with others deemed 'exotic' constituted anthropology's initial raison d'etre, today it is not in the least exceptional for anthropologists to study an aspect or segment of the society of which they themselves are members. Thus, instead of studying ourselves through the detour of studying others (occasionally with the justification that perceiving others as 'exotic' will ultimately lead to the recognition of our own peculiarities; cf. Leach 1982:127), many ethnographers nowadays take a short cut. This fact notwithstanding, participant observation has remained the much-heralded trademark of ethnographic research. In this respect, anthropology differs from all other disciplines in the humanities and social sciences. It is probably the distinct method of participant observation that explains why anthropologists doing fieldwork at home reflect upon the implications of their position as natives for their research and its results. At least, historians and sociologists conducting research in their own society do not seem to be as reflexive. When anthropologists predominantly went to foreign countries far afield, they considered themselves 'strangers and friends', 'marginal natives' or 'professional strangers'. These terms are an indication of their role as outsiders. …
Menstrual Justice
Margaret E. Johnson.* Menstrual injustice is the oppression of menstruators (women, girls, transgender men and boys, and nonbinary persons) simply because they …
* Copyright © 2019 Margaret E. Johnson. Professor of Law, Co-Director, Center on Applied Feminism, Director, Bronfein Family Law Clinic, University of Baltimore School of Law. My clinic students and I have worked with the Reproductive Justice Coalition on legislative advocacy for reproductive health care policies and free access to menstrual products for incarcerated persons since fall 2016. In 2018, two bills became law in Maryland requiring reproductive health care policies in correctional facilities as well as free access to products: Maryland HB 787/SB 629 (reproductive health care policies) and HB 797/SB 598 (menstrual products). I want to thank the Coalition members and my students who worked so hard on these important laws and are currently working on their implementation and continued reforms. I also want to thank the following persons who reviewed and provided important feedback on drafts and presentations of this Article: Professors Michele Gilman, Shanta Trivedi, Virginia Rowthorn, Dr. Nadia Sam-Agudu, Audrey McFarlane, Lauren Bartlett, Carolyn Grose, Claire Donohue, Phyllis Goldfarb, Tanya Cooper, Sherley Cruz, Naomi Mann, Marcia Zug, Courtney Cross, and Sabrina Balgamwalla. I want to thank Amy Fettig for alerting me to the breadth of this issue. I also want to thank Bridget Crawford, Marcy Karin, Laura Strausfeld, and Emily Gold Waldman for collaborating and thinking about issues relating to periods and menstruation. And I am indebted to Max Johnson-Fraidin for his insight into the various critical legal theories discussed in this Article and Maya Johnson-Fraidin for her work on menstrual justice legislative advocacy.
Construct Validity in Psychological Tests
Construct Validity in Psychological Tests, by L. J. Cronbach and P. E. Meehl.
Validation of psychological tests has not yet been adequately conceptualized, as the APA Committee on Psychological Tests learned when it undertook (1950-54) to specify what qualities should be investigated before a test is published. In order to make coherent recommendations the Committee found it necessary to distinguish four types of validity, established by different types of research and requiring different interpretation. The chief innovation in the Committee's report was the term construct validity.* This idea was first formulated by a subcommittee (Meehl and R. …
…validity seems only to have stirred the muddy waters. Portions of the distinctions we shall discuss are implicit in Jenkins' paper, "Validity for What?" (33), Gulliksen's "Intrinsic Validity" (27), Goodenough's distinction between tests as "signs" and "samples" (22), Cronbach's separation of "logical" and "empirical" validity (11), Guilford's "factorial validity" (25), and Mosier's papers on "face validity" and "validity generalization" (49, 50). Helen Peak (52) comes close to an explicit statement of construct validity as we shall present it.
Four Types of Validation. The categories into which the Recommendations divide validity studies are: predictive validity, concurrent validity, content validity, and construct validity. The first two of these may be considered together as criterion-oriented validation procedures. The pattern of a criterion-oriented study is familiar. The investigator is primarily interested in some criterion which he wishes to predict. He administers the test, obtains an independent criterion measure on the …
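As an illustration of the criterion-oriented pattern Cronbach and Meehl describe (administer the test, obtain an independent criterion measure, and examine the correspondence), the sketch below computes a simple validity coefficient as the Pearson correlation between test scores and a criterion. The data are simulated for illustration; nothing here comes from the paper itself.

```python
# Minimal criterion-oriented (predictive) validity check: correlate test scores
# with an independently obtained criterion measure. Simulated data only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 150
test_scores = rng.normal(50, 10, size=n)                   # scores on the new test
criterion = 0.6 * test_scores + rng.normal(0, 12, size=n)  # e.g., a later performance measure

validity_coefficient, p_value = pearsonr(test_scores, criterion)
print(f"Validity coefficient r = {validity_coefficient:.2f} (p = {p_value:.3g})")
```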
Questionnaire Validity and Reliability
Questionnaire Validity and Reliability. Department of Social and Preventive Medicine, Faculty of Medicine.
Outline: introduction; what is validity and reliability?; types of validity and reliability; how do you measure them?; types of sampling methods; sample size calculation; G*Power (power analysis).
Research is the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions. In the broadest sense of the word, research includes the gathering of data in order to generate information and establish facts for the advancement of knowledge.
Step 1: Define the research problem. Step 2: Develop a research plan and research design. Step 3: Define the variables and instrument (validity and reliability). Step 4: Sample and collect data. Step 5: Analyse the data. Step 6: Present the findings.
A questionnaire is a technique for collecting data in which a respondent provides answers to a series of questions; it is the vehicle used to pose the questions that the researcher wants respondents to answer. The validity of the results depends on the quality of these instruments. Good questionnaires are difficult to construct; bad questionnaires are difficult to analyze.
Identify the goal of your questionnaire: What kind of information do you want to gather? What is your main objective? Is a questionnaire the best way to collect this information?
How to obtain valid information: ask purposeful questions; ask concrete questions; use time periods …
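The outline above mentions sample size calculation with G*Power. The same kind of a priori power analysis can be done in Python with statsmodels, as in the sketch below; the effect size, alpha, and power values are illustrative choices, not figures prescribed by these course notes.

```python
# A priori sample-size calculation analogous to what G*Power does,
# here for an independent-samples t-test. The inputs are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # Cohen's d (medium effect)
                                    alpha=0.05,       # two-sided significance level
                                    power=0.80)       # desired statistical power
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64 per group
```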
How to Create Scientifically Valid Social and Behavioral Measures on Gender Equality and Empowerment
April 2020. EMERGE Measurement Guidelines Report 2: How to Create Scientifically Valid Social and Behavioral Measures on Gender Equality and Empowerment.
Background on the EMERGE project: EMERGE (Evidence-based Measures of Empowerment for Research on Gender Equality), created by the Center on Gender Equity and Health at UC San Diego, is a project focused on the quantitative measurement of gender equality and empowerment (GE/E) to monitor and/or evaluate health and development programs, and national or subnational progress on UN Sustainable Development Goal (SDG) 5: Achieve Gender Equality and Empower All Girls. For the EMERGE project, we aim to identify, adapt, and develop reliable and valid quantitative social and behavioral measures of GE/E based on established principles and methodologies of measurement, with a focus on 9 dimensions of GE/E: psychological, social, economic, legal, political, health, household and intra-familial, environment and sustainability, and time use/time poverty. Social and behavioral measures across these dimensions largely come from the fields of public health, psychology, economics, political science, and sociology.
Box 1. Defining Gender Equality and Empowerment (GE/E): Gender equality is a form of social equality in which one's rights, responsibilities, and opportunities are not affected by gender considerations. Gender empowerment is a type of social empowerment geared at improving one's autonomy and self-determination within a particular culture or context.
Objective of this report: The objective of this report is to provide guidance on the creation and psychometric testing of GE/E scales. There are many good GE/E measures, and we highly recommend using previously validated measures when possible.
Influence of Implicit-Bias Training on the Cultural Competency of Police Officers
Marvin Whitfield, Walden University. ScholarWorks, Walden Dissertations and Doctoral Studies Collection, 2019.
Walden University, College of Social and Behavioral Sciences. This is to certify that the doctoral dissertation by Marvin Whitfield has been found to be complete and satisfactory in all respects, and that any and all revisions required by the review committee have been made. Review Committee: Dr. Melanye Smith, Committee Chairperson, Criminal Justice Faculty; Dr. Tony Gaskew, Committee Member, Criminal Justice Faculty; Dr. Joseph Pascarella, University Reviewer, Criminal Justice Faculty. Chief Academic Officer: Eric Riedel, Ph.D. Walden University, 2019.
Influence of Implicit-Bias Training on the Cultural Competency of Police Officers, by Marvin Whitfield (MBA, Columbia Southern University, 2014; MA, Columbia Southern University, 2013; BS, Columbia Southern University, 2011; AA, Faulkner University, 2005). Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Criminal Justice, Walden University, August 2019.
Abstract: Highly publicized media events involving African American men and the use of deadly force by police officers occurred between 2013 and 2014. These events have emphasized the need to examine the influence of implicit-bias training on police officers' decision-making processes.
Reliability & Validity
Another bit of jargon: reliability and validity.
In sampling, we are trying to represent a population of individuals; we select participants; and the resulting sample of participants is intended to represent the population. In measurement, we are trying to represent a domain of behaviors; we select items; and the resulting scale or test of items is intended to represent the domain. For both, "more is better": more gives greater representation.
Properties of a "good measure": standardization, reliability, and validity. Reliability: inter-rater, internal, external. Validity: criterion-related, face, content, construct.
Whenever we have considered research designs and statistical conclusions, we have always been concerned with sample size. We know that larger samples (more participants) lead to more reliable estimates of the mean and standard deviation, r, F, and χ², and to more reliable statistical conclusions, quantified as fewer Type I and Type II errors.
The same principle applies to scale construction: "more is better," but now it applies to the number of items comprising the scale. More (good) items lead to a better scale: they more adequately represent the content/construct domain and provide a more consistent total score (a respondent can change more items before the total changes much).
Desirable properties of psychological measures: reliability (agreement or consistency) and interpretability of individual and group scores. Inter-rater or inter-observer reliability asks whether multiple observers/coders score an item the same way; this is critical whenever using subjective measures …
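The "more (good) items" claim has a standard quantitative form, the Spearman-Brown prophecy formula: lengthening a test by a factor k raises its reliability from r to kr / (1 + (k - 1)r). The short sketch below illustrates this relationship; the starting reliability of 0.60 is an arbitrary example, not a value taken from these notes.

```python
# Spearman-Brown prophecy formula: predicted reliability when a test is
# lengthened by factor k, starting from reliability r. Illustrative values only.
def spearman_brown(r: float, k: float) -> float:
    return (k * r) / (1 + (k - 1) * r)

baseline = 0.60  # arbitrary example reliability of the original scale
for k in (1, 2, 3, 4):
    print(f"{k}x as many comparable items -> predicted reliability "
          f"{spearman_brown(baseline, k):.2f}")
```

Running it shows reliability climbing from 0.60 toward 0.86 as the number of comparable items quadruples, which is the quantitative sense in which "more is better" for items.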
Intelligence Testing and Cultural Diversity: Concerns, Cautions, and Considerations
The National Research Center on the Gifted and Talented (NRC/GT), Senior Scholars Series. University of Connecticut, University of Virginia, Yale University.
Intelligence Testing and Cultural Diversity: Concerns, Cautions, and Considerations. Donna Y. Ford, Vanderbilt University, Nashville, Tennessee. December 2004. RM04204.
The National Research Center on the Gifted and Talented (NRC/GT) is funded under the Jacob K. Javits Gifted and Talented Students Education Act, Institute of Education Sciences, United States Department of Education. The Directorate of the NRC/GT serves as an administrative and a research unit and is located at the University of Connecticut. The participating universities include the University of Virginia and Yale University, as well as a research unit at the University of Connecticut.
University of Connecticut: Dr. Joseph S. Renzulli, Director; Dr. E. Jean Gubbins, Associate Director; Dr. Sally M. Reis, Associate Director. University of Virginia: Dr. Carolyn M. Callahan, Associate Director. Yale University: Dr. Robert J. Sternberg, Associate Director.
Copies of this report are available from: NRC/GT, University of Connecticut, 2131 Hillside Road Unit 3007, Storrs, CT 06269-3007. Visit us on the web at: www.gifted.uconn.edu
The work reported herein was supported under the Educational Research and Development Centers Program, PR/Award Number R206R000001, as administered by the Institute of Education Sciences, U.S. Department of Education. The findings and opinions expressed in this report do not reflect the position or policies of the Institute of Education Sciences or the U.S. Department of Education.
Linking Global Consumer Culture and Ethnocentric Consumerism to Global Citizenship: Exploring the Mediating Effect of Cultural Intelligence
Aluisius Hery Pratono (Faculty of Business and Economics, Universitas Surabaya, Surabaya, Indonesia) and Denni Arli (Labovitz School of Business and Economics, University of Minnesota Duluth, Duluth, Minnesota, USA). Received 18 October 2019; revised 19 February 2020 and 17 March 2020; accepted 17 March 2020.
Abstract. Purpose: This article attempts to understand the impact of global consumer culture and ethnocentric consumerism on global citizenship by identifying the mediating effect of cultural intelligence. Design/methodology/approach: The proposed structural equation model explains the relationship between global consumer culture, ethnocentric consumerism, and global citizenship. The empirical analysis involves an online survey targeting young people in the Indonesian context. Findings: The empirical evidence broadly supports the view that cultural intelligence strengthens the impact of global consumer culture and ethnocentric consumerism on global citizenship. There is a strong tendency in this study to suggest that global consumerism will not be able to contribute to global citizenship unless cultural intelligence serves as a mediating variable. However, the results do not support the mainstream literature, which suggests that …
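The "mediating effect" in this abstract refers to a standard mediation model: cultural intelligence is treated as the path through which global consumer culture (and ethnocentric consumerism) influences global citizenship. A minimal product-of-coefficients sketch is shown below with simulated data; the variable names mirror the constructs, but none of the numbers come from the study, which estimates a full structural equation model rather than the simple OLS regressions used here.

```python
# Simple mediation sketch (X -> M -> Y) with simulated data, estimated by OLS.
# X = global consumer culture, M = cultural intelligence, Y = global citizenship.
# This is an illustration of the idea, not the paper's SEM analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                      # global consumer culture
m = 0.5 * x + rng.normal(size=n)            # cultural intelligence (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # global citizenship

# Path a: X -> M
a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
# Paths b (M -> Y) and c' (direct X -> Y), estimated jointly
bc = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params
b, c_prime = bc[1], bc[2]

print(f"Indirect (mediated) effect a*b = {a * b:.3f}")
print(f"Direct effect c' = {c_prime:.3f}")
```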
Construct Validity in Psychological Tests – the Case of Implicit Social Cognition
European Journal for Philosophy of Science (2020) 10:4. https://doi.org/10.1007/s13194-019-0270-8. Paper in the Philosophy of the Sciences of Mind and Brain. Uljana Feest. Received: 17 February 2019; accepted: 27 November 2019. © Springer Nature B.V. 2020.
Abstract: This paper looks at the question of what it means for a psychological test to have construct validity. I approach this topic by way of an analysis of recent debates about the measurement of implicit social cognition. After showing that there is little theoretical agreement about implicit social cognition, and that the predictive validity of implicit tests appears to be low, I turn to a debate about their construct validity. I show that there are two questions at stake: First, what level of detail and precision does a construct have to possess such that a test can in principle be valid relative to it? And second, what kind of evidence needs to be in place such that a test can be regarded as validated relative to a given construct? I argue that construct validity is not an all-or-nothing affair. It can come in degrees, because (a) both our constructs and our knowledge of the explanatory relation between constructs and data can vary in accuracy and level of detail, and (b) a test can fail to measure all of the features associated with a construct. I conclude by arguing in favor of greater philosophical attention to processes of construct development. Keywords: implicit bias, …