AN EVALUATION OF TEST-TAKING PERFORMANCE IN A SELECTION CONTEXT THROUGH MOTIVATIONAL MECHANISMS AND JOB RELATEDNESS PERCEPTIONS
Shelby Wise
A Thesis
Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the degree of
MASTER OF ARTS
December 2017
Committee:
Clare Barratt, Advisor
Eric Dubow
Scott Highhouse

© 2017
Shelby Wise
All Rights Reserved

ABSTRACT
Clare Barratt, Advisor
Applicant reactions researchers have posited that perceptions of test fairness may have a direct effect on applicant motivation and test performance, thus diminishing test validity, as proposed in the Justice Theory Framework Perspective. The Justice Theory Framework
Perspective describes how situational characteristics inform perceptions of ten procedural justice
rules which, in turn, contribute to fairness perceptions and more distally reactions during hiring,
reactions after hiring, and applicant self-perceptions. Despite the commentary pertaining to this
potential model, at this time, few researchers have empirically tested the aforementioned
proposition regarding applicant reactions and test validity. To address this gap, this study
investigated the relationship between the perceived job-relatedness of selection assessments and
test performance through changes in test-taking self-efficacy and test motivation. Additionally, this model was tested while controlling for individual differences related to achievement (i.e.
Conscientiousness and generalized self-efficacy). Results of this study indicate that perceptions of job-relatedness vary by selection test used (i.e. personality test, cognitive ability test, and situational judgment test) and that job-relatedness is significantly related to perceptions of overall process fairness. The model evaluated was also supported wherein job-relatedness predicted test performance through changes in self-efficacy and more distally test motivation.
The findings suggest that the applicant reactions-performance relationship may be important in terms of testing proficiency and selection outcomes. Implications of these findings and recommendations for future research are further described.
This work is dedicated to my friends and family for their extensive support throughout this
process.

ACKNOWLEDGMENTS
I would first and foremost like to thank my thesis advisor, Clare Barratt, for her help throughout this process. Her support in both the writing and the analyses was instrumental in the completion of this project. Additionally, I would like to thank my committee members for their constructive feedback and help in improving my research idea.
I would also like to acknowledge my friends and classmates for their support. Graduate school would not be the same without you.
TABLE OF CONTENTS
Page
INTRODUCTION .......................................................................... 1
    Applicant Reactions and Organizational Justice ................... 3
    Procedural Justice Frameworks .............................................. 4
    Job-Relatedness ...................................................................... 6
    Facets of Job-Relatedness ...................................................... 8
    Types of Selection Tests ......................................................... 11
    Job-Relatedness and Self-Perceptions ................................... 16
    Motivation .............................................................................. 17
    Test Performance .................................................................... 19
    Self-Efficacy ........................................................................... 19
    Control Variables and Individual Differences ....................... 22
METHOD ..................................................................................... 26
    Participants ............................................................................. 26
    Procedure ................................................................................ 26
    Measures ................................................................................. 27
RESULTS ..................................................................................... 31
DISCUSSION ............................................................................... 42
    Implications ............................................................................ 44
    Limitations .............................................................................. 46
    Conclusion .............................................................................. 46
REFERENCES ............................................................................. 48
APPENDIX A. JOB DESCRIPTION ........................................... 65
APPENDIX B. SITUATIONAL JUDGMENT TEST .................. 66
APPENDIX C. COGNITIVE ABILITY TEST ............................ 67
APPENDIX D. PERSONALITY TEST ........................................ 68
APPENDIX E. STUDY MEASURES .......................................... 69
APPENDIX F. HSRB APPROVAL .............................................. 71
viii
LIST OF TABLES
Table Page
1   Procedural Justice Constructs by Category with Definition ............................... 7
2   Means and Standard Deviations for Study Variables by Test Condition ........... 32
3   Post-Hoc ANOVA Comparison Between Test Conditions ................................ 33
4   Means, Standard Deviations, Correlations, and Reliability Estimates for Aggregated Study Variables ............................... 35
5   Summary of Exploratory Factor Analysis Results Based on a Principal Components Analysis with Oblimin Rotation ............................... 37
6   Standardized Path Estimates for Control Variables as Regressed onto Model Variables ............................... 39
LIST OF FIGURES
Figure Page
1   Hypothesized Model Visual ........................................................ 22
2   Hypothesized Model with Standardized Path Estimates ............ 39
3   Full Mediation Model with Standardized Path Estimates .......... 40
4   Complex Model with Standardized Path Estimates ................... 41

Running head: MODEL OF PERFORMANCE, MOTIVATION, AND TEST PERCEPTIONS
INTRODUCTION
More than one third of Americans report negative reactions to pre-employment testing methods (Schmit & Ryan, 1997) despite research suggesting that such tests can effectively predict job performance (Barrick & Mount, 1991; Schmidt & Hunter, 2000). These numbers are staggering and suggest a large disconnect between the way organizations utilize selection tests and the preferences of job applicants.
The study of applicant reactions, defined as the thoughts, emotions, and feelings in relation to, and resulting from, the selection process, has received substantial empirical attention
(Bauer, Truxillo, Sanchez, Craig, Ferrara, & Campion, 2001), and this has not ceased over the last fifteen years. In a recent meta-analysis by McCarthy, Bauer, Truxillo, Anderson, Costa, and
Ahmed (2017), it was noted that, since the year 2000, at least 145 primary studies have been conducted on this topic. Many of these empirical works have evaluated the perceived likability
of various testing methods, related the impressions of testing to important organizational
consequences, and assessed intervention methods for increasing overall fairness perceptions
(e.g., Bakhshi, Kummar, & Rani, 2009; Cohen-Charash & Spector, 2001; Hausknecht, Day, &
Thomas, 2004; Konradt, Garbers, Boge, Erdogan, & Bauer, 2017). However, research also supports the notion that applicant reactions may impact test performance (e.g., Chan et al., 1997;
Smither et al., 1993) potentially reducing the validity of selection measures (e.g., Arvey,
Strickland, Drauden, & Martin, 1990; McCarthy et al., 2009).
Specifically, applicant reactions researchers have posited that perceptions of test fairness may have a direct effect on applicant motivation and, in turn, test performance, thus diminishing test validity (e.g., Arvey et al., 1990; Cascio, 1987; Ryan & Ployhart, 2000; Robertson & Kandola, 1982). In support of this position, research confirms that characteristics specific to a
situation are instrumental in the development of motivation and task specific self-efficacy (e.g.,
Bandura & Locke, 2003; Martocchio, 1994; Strecher, DeVellis, Becker, & Rosenstock, 1986). Both constructs have been found to be critical to how one performs across a variety of situations (e.g., Chemers & Garcia, 2001; Locke, Frederick, Lee, & Bobko, 1984; Stajkovic & Luthans, 1988), including testing (Bouffard-Bouchard, 1990). In regard to selection specifically,
Bandura (1997) suggests that negative reactions to tests and testing methods may be detrimental
to an applicant’s testing self-efficacy thus impacting test-taking motivation and impeding test
performance.
Although many studies have evaluated the relationship between reactions, test-taking self-efficacy (Bauer, Maertz, Dolen, & Campion, 1998), test motivation (Chan et al., 1997), and
performance (McCarthy, Van Iddekinge, Lievens, Kung, Sinar, & Campion, 2013), few studies have systematically evaluated the process by which reactions to selection assessments directly impact test performance (i.e., Chan et al., 1997). The current study strives to expand the understanding of applicant reactions by evaluating competing models to determine if and how various selection methods impact perceptions of job-relatedness as well as how reactions influence performance through test motivation and self-efficacy beliefs specific to applicant testing. This study also goes beyond past applicant reactions research by considering the perception/performance relationship when controlling for individual differences related to achievement (i.e., general self-efficacy, Conscientiousness). In sum, this study aims to assess the relationship between test reactions and test performance through changes in test-taking self-efficacy and test motivation while controlling for individual differences related to test achievement (i.e., Conscientiousness, general self-efficacy) (see Figure 1).
Applicant Reactions and Organizational Justice
Applicant reactions, defined as the feelings and thoughts directly resulting from the selection procedure, have been examined primarily through organizational justice theories, which seek to identify and describe how impressions of fairness and equity impact organizational and individual outcomes. Consideration of organizational justice has been called a vital and rudimentary requirement for the effective operation of an organization (Greenberg, 1990).
A majority of organizational justice research within industrial/organizational psychology has focused on distributive and procedural justice (Gilliland, 1993). Both types of justice perceptions significantly correlate with several critical organizational outcomes such as organizational commitment (Folger & Konovsky, 1989; Martin & Bennett, 1996), intentions to quit (Dailey & Kirk, 1992), and subordinate ratings of supervisor effectiveness (McFarlin &
Sweeney, 1992). Originally, organizational justice research centered on the behaviors and attitudes resulting from resource allocation and the outcomes of high-stakes decisions (e.g., salary, hiring decisions, promotions) with no regard for the perceived fairness of the process by which these decisions were made (Gilliland & Beckstein, 1996; Kluger & Rothstein, 1993; Magner, Welker, & Johnson, 1996). However, the study of process fairness (i.e., procedural justice) has become increasingly prominent since its theoretical inception in the 1970s (Bobocel & Gosse,
2015) and its integration into the workplace literature in the late 1980s (Lind & Lissak, 1985;
Sheppard & Lewicki, 1987).
Procedural justice perceptions have been linked with many pre- and post-hire behaviors that are of utmost importance to the success of organizations, such as increased job performance, job satisfaction, organizational commitment, and trust toward the organization, as well as decreased withdrawal behavior (Colquitt, Conlon, Wesson, Porter, & Ng, 2001). In relation to selection, applicant perceptions of process fairness have been found to better predict job attitudes than other justice dimensions, including distributive justice (Bauer et al., 1998). Research supports the notion that, regardless of hiring or promotion decisions, aspects of the testing process are notable contributors to post-hire organizational outcomes.
While improved perceptions of both distributive and procedural justice have been shown to positively affect applicant attitudes and organizational attractiveness, there is little an organization can do to alter hiring and promotion decisions without sacrificing job performance.
Organizations can, however, design selection tools and use hiring procedures that will be positively perceived by applicants thus improving organizational reputation (Smither et al.,
1993) and fairness perceptions (Hausknecht et al., 2004). Because of the evident utility to organizations and the empirical support for this justice construct, the present study focuses on the applicant reactions and perceptions related to procedural justice.
Procedural Justice Frameworks
The concept of procedural justice emerged in the mid to late 1970s (Bobocel & Gosse,
2015) with two formative theories developed by Thibaut and Walker (1975) and Leventhal
(1980), both of which focused on voice effects. Voice effects refer to the finding that fairness perceptions are enhanced when individuals feel they have had an opportunity to express their opinion about the process before a decision is made; this effect has been corroborated in numerous research studies across a variety of disciplines (Lind, Kanfer, & Earley, 1990).
As research on procedural justice increased and evolved, so did the development of models and theoretical frameworks related to these justice perceptions. However, over the past few decades, the Justice Theory Framework Perspective, as coined by Stephen Gilliland (1993), has largely dominated applicant reactions research in regard to organizational justice in a
selection context (e.g., Chan, Schmitt, Jennings, Clause, & Delbridge, 1988; McCarthy et al.,
2017). Gilliland (1993) hypothesized that situational characteristics (e.g., test type, human
resource personnel) inform perceptions for ten procedural justice rules. The degree to which
these rules are met contributes to fairness perceptions, and more distally, reactions during hiring
(i.e., motivation), reactions after hiring (e.g., job performance), and applicant self-perceptions
(i.e., self-efficacy). The ten justice rules are grouped into three major categories: formal
characteristics, explanation, and interpersonal treatment.
Formal characteristics are those fairness perceptions directly related to process
characteristics which, in selection, are the formal qualities of the tests or hiring procedure. This
includes perceptions of test content and consistency of testing methods across applicants.
Explanation describes the quality and quantity of information provided to the applicant
throughout the process. Within selection this entails providing evidence supporting test validity, being open and clear about hiring procedures, and providing quality feedback in a timely
manner. The last major justice rule category, interpersonal treatment, describes the way in which
organizations interact with applicants. This is the degree to which applicants feel well treated
interpersonally and are allowed freedom to communicate concerns throughout the process.
Subsequent research has continued to show support for Gilliland's (1993) model as well as the justice rules' relationship to fairness perceptions (Bauer et al., 1998; Chapman et al., 2005;
Gilliland, 1994; McCarthy et al., 2017; Ployhart & Ryan, 1998; Smither et al., 1993; Truxillo,
Bauer, & Sanchez, 2001). A comprehensive list of the justice rules and their definitions created by Gilliland (1993) is located in Table 1 below.
Job Relatedness
Of the justice rules created by Gilliland (1993), job relatedness is one of the
most impactful characteristics when fostering positive test reactions (Hausknecht et al., 2004;
Rynes & Connerley, 1993; Schleicher, Venkataramani, Morgeson, & Campion, 2006; Steiner &
Gilliland, 1996). As mentioned in Table 1, job relatedness is described as the extent to which the tests and items appear related to the job of interest (Gilliland, 1993). Gilliland believed that job relatedness had the greatest influence on procedural justice perceptions and many other researchers have empirically supported this (Bauer et al., 1998; Hausknecht et al., 2004; Rynes
& Connerley, 1993; Truxillo, Bauer, & Sanchez, 2001; Truxillo, Bodner, Bertolino, Bauer, &
Yonce, 2009; Zibarras & Patterson, 2015).
Table 1. Procedural Justice Constructs by Category with Definition (Gilliland, 1993)

Formal Characteristics
    Job Relatedness: The extent to which a test appears to be valid and measure relevant job information
    Opportunity to Perform: Having the opportunity to show your knowledge, skills, and abilities during the testing process
    Reconsideration Opportunity: Opportunity to question the decision and have it modified if an error was made
    Consistency: Ensuring that decisions are consistent across individuals over time

Explanation
    Feedback: Timely and informative feedback regarding testing
    Selection Information: Information regarding the validity and effectiveness of these tests
    Honesty: Honesty and openness from those doing the recruiting

Interpersonal Treatment
    Interpersonal Effectiveness: The degree to which applicants are treated with warmth and respect
    Two-Way Communication: Opportunity for applicants to offer input and have their views considered during the process
    Propriety of Questions: Avoiding improper and prejudicial questions during recruitment
The construct of job-relatedness was designed with consideration of seminal procedural justice rules, including both the accuracy and resource rules. The accuracy rule, created by Leventhal (1980), states that for the process of decision making to be perceived as fair, decisions need to be based on as much good information as possible. Similarly, the resource rule, empirically studied and developed by Sheppard and Lewicki (1987), outlines how decisions should be made using the appropriate expertise and sources deemed as accurate by the individual impacted by the decided outcome. Procedural fairness perceptions in selection are violated when hiring outcomes are based on poor information or when the test content is deemed irrelevant to the position. These justice rules have since been linked to an extensive array of empirical works supporting the connection between job-relatedness and fairness perceptions (Kluger & Rothstein, 1991;
Schmitt, Gilliland, Landis, Devine, 1993; Schmitt, Oswald, Kim, Gillespie, & Ramsay, 2004;
Smither et al., 1993; Truxillo et al., 2001; Truxillo et al., 2009). Meta-analytic work confirms that applicants prefer selection procedures in which the test content is visibly linked to skills and abilities required by the job (Hausknecht et al., 2004). For the lay individual, it seems that job-related content is perceived as more valid, meaning an applicant can more easily rationalize how the test content relates to future job performance (Gilliland, 1993).
Facets of Job Relatedness
Empirically, job relatedness has been measured using both perceptions of face validity and perceived predictive validity (Bauer et al., 2001; Honkaniemi, Feldt, Metsäpelto, &
Tolvanen, 2013; Smither et al., 1993). As mentioned previously, job-relatedness measures the degree to which a test appears to be valid and content relevant (Anastasi, 1988). Whereas face validity encompasses content relevance and is defined as the appearance of a test as practical and measuring what is intended to be measured (Gilliland, 1993), perceived predictive validity
describes the degree to which the test material seems valid in that it is relevant to the prediction
of job performance (Smither et al., 1993). Although some research has examined face validity
and predictive validity separately, several studies have found them to be distinct facets of the overarching construct of job-relatedness (Hausknecht et al., 2004).
Face validity. Academically, face validity has been defined in a variety of ways such as the degree to which a test looks to measure what it intends to (Aronson et al., 1990), the suitability and relevance of test content to the purported test use (Nevo, 1985), and the degree to which a test seems job-related (Smither et al., 1993). However, the generic research methods
definition says that the test “on its face” measures the construct adequately in that the superficial
impressions relate to the construct (Crano, Brewer, & Lac, 2014, p. 65). Essentially, if a test is
intended to be face valid, the meaning and utility should be outwardly apparent to any individual
(Nevo, 1985). In this study, face validity will be measured and defined as "the extent to which applicants perceive the content of the selection procedure to be related to the content of the job" (Smither et al., 1993, p. 54).
While some researchers have regarded face validity as trivial, the importance in relation
to selection and applicant reactions cannot be ignored. Nevo (1985) argues that, although many
researchers discount or deride the importance of face validity as a test characteristic, face validity
is a necessary test quality, especially in situations where stakeholder acceptance is critical such
as selection (Mosier, 1947). This argument has been empirically substantiated, especially in
applicant reactions research (Chan & Schmitt, 1997; Ployhart, Ziegert, & McFarland, 2003).
Face validity has been linked to positive applicant reactions through perceived job-relatedness in
a number of highly cited, academic works (Hausknecht et al., 2004; Kluger & Rothstein, 1993;
Rynes & Connerley, 1993) and has been found to contribute to the psychometric validity of
testing procedures (Chan & Schmitt, 1997; Smither et al., 1993). It has been posited that face validity positively contributes to an applicant's motivation to perform, thus increasing psychometric test properties as compared to a testing procedure with a less salient intent (Chan
& Schmitt, 1997). Overall, research supports the notion that, through the implementation of face valid selection procedures, organizations can increase applicant perceptions of both procedural and distributive justice, attitudes towards tests, and attitudes toward selection more generally
(Hausknecht et al., 2004).
Predictive validity. In addition to face validity, job-relatedness also comprises perceived predictive validity, or predictive job relatedness (Bauer et al., 2001). Perceived predictive validity describes the extent to which a test is believed to adequately predict future job performance. Unlike face validity, which describes the test as superficially appearing to relate to job content, perceived predictive validity evaluates the degree to which an applicant believes the
test can accurately differentiate between high and low performers. The resource rule, as defined by Sheppard and Lewicki (1987), says that hiring decisions bolster justice perceptions when their sources are perceived as expert and accurate. Sources, or selection strategies, seen by
applicants as predictive of work success are much more likely to be accepted as fair and suitable
for hiring procedures (Hausknecht et al., 2004; Smither et al., 1993).
It is important to note that perceived predictive validity is not a psychometric property
and differs from a test’s actual ability to predict job performance. Whereas predictive validity is
a subfacet of criterion validity and is used to determine whether a scale can accurately predict
some type of performance or differentiate degrees of ability, perceived predictive validity is
assessed through self-report responses regarding the extent to which an applicant believes scores
on a specific test look as though they will relate to performance on the job rather than if the
scores actually do relate to future job performance.
Types of Selection Tests
As stated, the job-relatedness of a test or selection procedure has become synonymous with positive applicant reactions; however, these reactions have been found to vary across selection techniques (Anderson & Witvliet, 2008; Hausknecht et al., 2004; Ryan & Ployhart,
selection techniques (Anderson & Witvliet, 2008; Hausknecht et al., 2004; Ryan & Ployhart,
2000). This means that some selection practices are inherently deemed as more or less job-
related based on their content and implementation method alone. Research has confirmed that
hiring practices using work simulations or directly measured work behaviors are favored over
those tests assessing for abstract characteristics, even if those abstract characteristics are
predictive of work performance (Dodd, 1977; Robertson & Kandola, 1982; Schmidt, Greenthal,
Hunter, Berner, & Seaton, 1977). Even some of the most commonly utilized selection
techniques, general mental ability tests, work sample tests, personality tests, and situational
judgment tests (SJTs), vary widely in likability in relation to fairness perceptions (e.g.,
Anderson, Salgado, & Hulsheger, 2010; Hausknecht et al., 2004; Kluger & Rothstein, 1991;
Smither et al., 1993). This section aims to outline some of the most utilized testing methods and
describe their standing in applicant reactions research.
Measures of general mental ability. At this time, the most effective selection tool in the prediction of job success is the measurement of cognitive ability or general mental ability (GMA;
Hunter, 1986; Hunter & Schmidt, 1996; Ree & Earles, 1992; Schmidt & Hunter, 1981). These tests are used to evaluate intelligence, defined as one's ability to understand and reason through abstract concepts as well as problem solve effectively. Individuals high in cognitive ability are able to quickly acquire and apply job knowledge necessary to their success within an organization. Some jobs, those that are more complex, necessitate extensive job knowledge, but selection assessments measuring cognitive ability successfully predict job performance even for those jobs requiring less cognitive complexity (Schmidt & Hunter, 2000).
Cognitive ability tests come in varying formats (i.e., paper and pencil tests, computer based assessments) with a variety of content (i.e., math problems, vocabulary items). In general, cognitive ability tests are well perceived by applicants in regard to fairness (e.g., Hausknecht et al., 2004; Kluger & Rothstein, 1991; Smither et al., 1993). However, perceived predictive validity is usually higher for measures of GMA than face validity perceptions (Smither et al.,
1993) meaning that applicants believe that these tests do predict performance to some extent but the test intent is somewhat concealed. Supporting job-relatedness research, fairness perceptions of cognitive ability tests are higher when math skills, for a job requiring basic math, are evaluated rather than abstract assessments like the Raven Matrices Test (Raven, 1936), which measures fluid intelligence using shapes and complex patterns (Smither et al., 1993).
Personality tests. Another commonly employed selection technique is personality testing, which is utilized by roughly 40% of Fortune 100 companies and 30% of all American companies (Erickson, 2004). Personality tests measure individual differences that describe one’s interactions, behaviors, and characteristics. Over time, personality has been operationalized in many different ways, but the Big Five seems to be the most commonly utilized framework for trait classification (Gatewood, Field, & Barrick, 2010). The Big Five consists of
Conscientiousness, Openness to Experience, Emotional Stability, Extraversion, and
Agreeableness. In a large meta-analysis, Barrick and Mount (1991) found that Conscientiousness and Emotional Stability were the most predictive personality traits for all jobs. However, other
Big-Five personality traits can also be important predictors depending on the specifics of that
job, such as Extraversion for customer service positions. While personality is not as predictive of
job performance as general mental ability, personality does add some incremental prediction
above GMA. Further, hiring managers overwhelmingly believe that personality is virtually as important as GMA making it a popular assessment method within organizations (Dunn, Mount,
Barrick, & Ones, 1995).
Organizational perceptions of personality measurement are clearly positive, but the
use of personality tests in the hiring process remains one of the most contentious issues in the
area of applicant reactions as these tests, in the eyes of the test taker, seem to lack relation to the
job and unnecessarily infringe upon applicant privacy (McFarland, 2003; Smither et al., 1993).
When compared to other selection measures, personality is often rated low in fairness, job-
relatedness, face validity, and perceived ability to predict job performance (e.g., Anderson et al., 2010; Hausknecht et al., 2004; Rosse, Miller, & Stecher, 1994; Smither et al.,
1993). These findings have been generalized across testing contexts and in a variety of countries
(Anderson et al., 2010). Personality tests are meant to assess general attitudes, preferences, and
behavioral tendencies of an applicant (Rosse et al., 1994). This is often operationalized using brief
trait statements like “I use others to get ahead” or “I am often worried.” These items are rarely
contextualized, meaning they do not ask about behavior directly related to the job but about behavior
more generally. However, research shows that when items are contextualized and made work-
relevant, perceptions of face validity and predictive validity are improved (Holtrop, Born, de
Vries, & de Vries, 2014).
It has been argued that negative reactions might not come from the assessment of
personality exactly, as it is often assessed in interviews (Huffcutt, Conway, Roth, & Stone, 2001)
which are generally well-liked by applicants (Hausknecht et al., 2004). Rather, negative
reactions might stem from the way in which these tests assess personality (McFarland, 2003). In interviews, personality is measured with questions contextualized to work (Huffcutt et al., 2001), while personality tests use general statements. Essentially, interviews measure personality in a job-related way while personality tests generally do not, likely contributing to the difference in applicant reactions. Personality tests, overall, appear unrelated to the job, are poorly perceived by applicants, and are often deemed as overly intrusive.
Situational judgment tests. The selection techniques discussed previously have a long history within the area of selection (Lievens, Peeters, & Schollaert, 2008). More recently, however, applied psychologists and human resource professionals in the United States have more frequently integrated situational judgment tests (SJTs) into the process of employee selection
(Ployhart, 2006). SJTs describe a testing method in which participants are presented with a variety of situations directly related to the job and asked which actions, from a provided list, are the most appropriate. These assessments are designed to measure one's ability to make successful judgments within the workplace (Cabrera & Nguyen, 2001). SJTs are best characterized as a measurement method (Chan & Schmitt, 1997; Weekley & Jones, 1999) in that they can be designed to emphasize specific predictors like personality or cognitive ability.
Generic SJTs do have correlational overlap with cognitive ability and personality (Clevenger,
Pereira, Wiechmann, Schmitt, & Harvey, 2001); however, SJTs, through the measurement of interpersonal skills, explain incremental variance in the prediction of job performance, meaning that when used with other effective predictors, like personality and cognitive ability tests, SJTs provide some unique predictive ability (e.g., Chan & Schmitt, 2002; Jackson, LoPilato, Hughes,
Guenole, & Shalfroshan, 2017; Lievens & Sackett, 2006; McDaniel, Hartman, Whetzel, &
Grubbs, 2007).
This testing medium is particularly appealing to practitioners because implementation is
usually inexpensive and interpersonally related skills can be measured quickly at any point in the
selection process (Roth, Bobko, & McFarland, 2005). SJTs are valid and incrementally helpful in predicting job performance, while also limiting racial subgroup differences as compared to generic cognitive ability tests (Motowidlo, Dunnette, & Carter, 1990). A drawback, however, is that the creation and validation of SJTs can be difficult and more time consuming than those of other tests, as SJTs are usually designed for a specific job (Lievens, Peeters, & Schollaert, 2008).
As mentioned, SJTs are a more recent development in selection research and practice, having not been formally designed until 1990 (Motowidlo et al., 1990). Because of their late addition to selection testing, this method has been largely neglected in prominent applicant reactions research (Kanning, Grewe, Hollenberg, & Hadouch, 2006). A few studies, however, have evaluated applicant perceptions of SJTs independently and, overall, applicant reactions to SJTs appear generally positive (Chan & Schmitt, 1997; Kanning et al., 2006; Richman-Hirsch, Olson-Buchanan, & Drasgow, 2000). These positive applicant reactions have been attributed to the salient relationship of test items to the job of interest (Lievens et al., 2008).
Essentially, because SJTs are usually created for each job and use scenarios specific to work tasks, they inherently appear more job-related than other tests.
As supported by many empirical studies, the job-relatedness of selection procedures is closely tied to applicant perceptions of fairness (e.g., Bauer et al., 1998; Rynes & Connerley, 1993;
Steiner & Gilliland, 1996), but not all tests are created equal. Overall, research has shown that work sample tests and interviews are perceived to be the most job-related, followed by cognitive ability tests, and lastly, personality measures (e.g., Hausknecht et al., 2004; Smither et al., 1993).
Situational judgment tests have, at this time, not been compared to other measures in regard to job-relatedness and fairness perceptions. However, based on the test content and research relating SJTs to high fairness perceptions, it would be expected that job-relatedness perceptions for SJTs would also be high, even when compared to other testing methods. One goal of this study is to replicate findings in previous applicant reactions research (e.g., Hausknecht et al., 2004; Smither et al., 1993) showing that job-relatedness perceptions vary by test. However, this study also aims to extend previous applicant reactions research through the incorporation of SJTs.
Hypothesis 1: Job-relatedness perceptions will be highest for (a) situational judgment
tests, followed by (b) cognitive ability tests and (c) personality tests.
Hypothesis 2: Job-relatedness perceptions will positively relate to overall fairness
perceptions.
Job-Relatedness and Self-Perceptions
As mentioned previously, the Justice Theory Framework explains how specifics of the situation (e.g., test type) inform evaluations of several justice rules which drive fairness perceptions and, more distally, various outcomes including self-perceptions (e.g., motivation, test-taking self-efficacy, performance). This theoretical model coincides with other models derived across disciplines, such as the model of achievement behavior, which states that situational factors significantly contribute to one’s self-perception and intent to effectively orient effort (i.e., motivation; Schunk, 1995). These theories suggest that, within a selection context, the situational factors inherent to the testing situation (i.e., tests used) and the resulting fairness perceptions contribute to an applicant’s evaluation of personal competency, motivation to perform, and degree of perseverance (Bandura, 1997). In support of these theoretical models, research has shown that job-relatedness and its subfacets impact an applicant’s self-perceptions toward testing through changes in self-efficacy (Bauer et al., 1998), self-esteem (Chan & Schmit, 1994), and test motivation (Chan et al., 1997).
Motivation
Motivation, most generally, is the psychological and situational mechanism that drives someone to act (Ryan & Deci, 2000) in both intensity and direction of behavior (Humphreys &
Revelle, 1984). Motivation largely contributes to the degree to which someone orients cognitive resources, feels efficacious towards a given task, and can persevere when presented with difficulty (Bandura & Locke, 2003), which is likely why motivation has been suggested as a primary driver of performance across a variety of situations (Chemers, Hu, & Garcia, 2001;
Locke, Frederick, Lee, & Bobko, 1984; Stajkovic & Luthans, 1998), including applicant assessment (Chan et al., 1997). As was posited by applicant reactions researchers (Arvey et al., 1990), positive test reactions, specifically from job-related test materials, have been found to incite test-taking motivation (Hausknecht et al., 2004), thus improving performance on selection tests (e.g., Chan et al., 1997; McCarthy, Van Iddekinge, Lievens, Kung, Sinar, & Campion, 2013).
A variety of theories have been used to explain the relationship between test perceptions
and motivation including process efficiency theory (Eysenck, Derakshan, Santos, & Calvo,
2007), interference theory (Sarason, Sarason, & Pierce, 1990), expectancy theory (Sanchez,
Truxillo, & Bauer, 2000), applicant attribution-reaction theory (AART; Ployhart & Harold,
2004), and social cognitive theory (Stajkovic & Luthans, 1979). Both process efficiency theory
and interference theory argue that perceptions of fairness orient an applicant’s allocation of
cognitive and attentional resources (Arvey et al., 1990, Bandura, 1982). Therefore, when few
extraneous factors cloud cognitive and attentional resources (i.e., test anxiety, perceived
injustice), applicants can more appropriately focus on the task at hand, orient their attention, and reduce peripheral processing, thus increasing motivation and self-efficacy (Arvey et al., 1990; Bandura, 1982; Chan, Schmitt, Sacco, & DeShon, 1998).
Another potential explanation is expectancy theory, which describes how one’s expectations about the future impact motivation (Derous, Born, & De Witte, 2004). In a selection
context, expectations regarding procedural and distributive justice have the potential to play into
the formation of test reactions (Geenen, Proost, Schreurs, van Dam, & von Grumbkow, 2012), test motivation, and self-efficacy (Bell, Wiechmann, & Ryan, 2006; Chapman, Uggerslev, &
Webster, 2003; Derous et al., 2004; Schreurs, Derous, Proost, & De Witte, 2010). According to expectancy theory, if individuals expect the test to be related to the job and, upon review, believe otherwise, they are likely to have reduced perceptions of ability and motivation (Ryan et al.,
2000). Within the conceptual model by Bell, Ryan, and Wiechmann (2004), past test experiences, beliefs about tests, and general reactions to taking tests may also drive expectations, thus impacting motivation. Similarly, applicant attribution-reaction theory (AART; Ployhart & Harold, 2004) posits that situations specific to selection are generally neutral. It is the attributional processing that informs the creation of perceptions and behavioral outcomes, including test performance and motivation. In other words, in an attempt to understand novel situations, people define and categorize situation-specific information (i.e., interpersonal treatment, tests), which is used to inform self-perceptions related to ability and motivation.
No one theory fully explains motivation, as behavior is a function of many variables
(Schunk, 1991). However, because of research supporting these theories in relation to applicant reactions (e.g., Bauer et al., 1998; McCarthy et al., 2013; Ployhart, Ehrhart, & Hayes, 2005), as well as the theoretical basis from which they were drawn, it is likely that all contribute to the reaction-motivation relationship to some extent. The attributions assigned to a situation (AART) are likely derived from the associated expectations (expectancy theory), which inform applicant focus, attention orientation, and peripheral processing (interference theory). Based on theory, as well as the consistent finding that test-taking motivation relates to job-relatedness (Hausknecht et al., 2004) and perceived fairness (Bauer et al., 2001; Hausknecht et al., 2004), it is hypothesized that job-relatedness will predict test-taking motivation.
Hypothesis 3: Job-relatedness perceptions will be positively related to test motivation.
Test Performance
As mentioned, motivation drives someone to orient cognitive effort and persevere in times of difficulty (Bandura & Locke, 2003) thus generating performance (Chan et al., 1997;
Chemers et al., 2001; Locke et al., 1984; Stajkovic & Luthans, 1998). Because motivation is so strongly linked to behavior, it has been stated that when reactions deplete test motivation and decrease test performance, those who would be the best applicants are erroneously selected out of the process, thus harming the proficiency of the selection system and the organization (e.g.,
Arvey et al., 1990; Ryan & Ployhart, 2000). Employee performance is of utmost importance to the success of an organization and the more we know about the mechanisms that may alter performance, the better we can make our hiring processes to enhance organizational achievement. Performance is widely considered an outcome of both ability and motivation (Chan et al., 1997); therefore, it is hypothesized that test motivation will play an important role in the prediction of test performance.
Hypothesis 4: Test motivation will be positively related to test performance.
Self-Efficacy
As mentioned, the reactions-motivation relationship has been explained through a variety of theories (e.g., process efficiency theory, expectancy theory). However, one of the most
commonly utilized and empirically tested motivational theories is social cognitive theory (SCT),
which suggests that motivation is derived from both environmental and cognitive factors
(Stajkovic & Luthans, 1979). In essence, people cognitively integrate environmental cues to
make a determination about their ability to succeed thus driving their motivation to perform. This
defines a major feature of SCT, self-efficacy, which describes the degree to which a person feels capable of affecting the environment and producing desired outcomes (Bandura, 1977). According to
SCT, test content provides the applicant with environmental cues that must be integrated, using
individual cognition, to determine one’s test-taking self-efficacy or the degree to which they feel confident in their ability to be successful on a specific measure (Bandura, 1997).
Self-efficacy is a motivational construct described as a person’s belief that they are able to complete a task or group of tasks to attain specific performance goals (Bandura, 1977). Self-
efficacy contributes to how one feels about a task, how they choose to motivate themselves, the
goals they set specific to that task, and whether or not they can persevere when presented with
difficulties (Bandura & Locke, 2003). Stronger perceptions of self-efficacy are also related to
higher personal goal challenges and goal commitment (Bandura, 1991). Those with high, task-
oriented self-efficacy visualize success on the specific task, while those who do not experience increased self-doubt and dwell on the possibility of failure (Bandura, 1993).
Central to SCT is the idea that self-efficacy is derived from both the situation and the way that situation is appraised (Schunk, 1995). Within selection, the testing situation, tests utilized,
and the general appraisal of fairness inform one’s self-evaluation of competency, thus impacting
test-taking motivation and test performance (Bandura, 1997). The relationship between test
reactions and self-efficacy can potentially be explained through the accuracy rule which states
that for decisions to be deemed fair, they should be made using as much credible information as possible (Leventhal, 1980). In the testing situation, however, where the test seems unrelated to
the job, it is difficult for applicants to make a judgment regarding a test’s accuracy in predicting
performance. This distrust in the testing process reduces one’s feelings of competency and ability
to succeed, as the decision criteria are unclear.
It has been posited that fairness perceptions aid in one’s development of self-efficacy
through attribution theory (Truxillo, Bauer, & Sanchez, 2001; Weiner, 1985). When the test is
deemed unfair and unrelated to the job of interest, applicants are more likely to attribute their performance to external causes in order to reduce threats to the self. Essentially, tests that are not job-related, and thus evaluated as unfair, reduce perceptions of control, thereby diminishing
one’s internal efficacy.
In alignment with Gilliland’s (1993) Justice Theory Framework perspective, the
relationship between testing perception and self-efficacy has been tested on numerous occasions
and found to be substantial (Hausknecht et al., 2004; McCarthy et al., 2013; Truxillo et al.,
2002). In summary, situational factors can contribute to the degree to which an individual is
motivated and feels efficacious about their test-taking ability (Schunk, 2005). Situational factors in selection, characteristics of the tests and the resulting fairness perceptions, lead an applicant to feel competent, therefore motivated to put in the necessary effort to be successful and persevere through difficult items (Bandura, 1997). As noted, motivation is a function of many variables
(Schunk, 1991); thus, it is hypothesized that job-relatedness will positively relate to motivation
partially through self-efficacy beliefs derived from the job-relatedness perceptions of the
selection procedures. A model containing all hypothesized relationships can be found in Figure
1.
Hypothesis 5: Self-efficacy will partially mediate the relationship between job-
relatedness perceptions and test motivation.
Control Variables: Individual Differences
Lastly, it is important to consider not only how job-relatedness perceptions of varied selection techniques influence test performance through motivational mechanisms, but also the extent to which this process is impacted by individual
differences. Recently, a meta-analysis was conducted wherein the authors reviewed the current
state of applicant reactions research and provided commentary outlining important next steps for
research in the applicant reactions domain (McCarthy et al., 2017). One relatively novel and
valuable research area mentioned was the topic of individual differences.
In a previous study, Chan and colleagues (1997) empirically examined the directional
relationship between face validity, test-taking motivation, and test performance. In support of the proposed hypotheses, face validity did significantly predict performance through changes in test-taking motivation. In the discussion, the researchers noted the findings could be evidence for a potential reduction in test validity based on test reactions, but it was also plausible that these
differences reflected trait motivation which could, in turn, predict motivation on the job
therefore, bolstering rather than hindering test validity. This study begins to address this concern
by evaluating a similar model while also controlling for individual differences related to test and
job achievement: Conscientiousness (Barrick & Mount, 1991) and generalized self-efficacy
(GSE; Judge & Bono, 2001).
Conscientiousness. Conscientiousness is a Big Five personality trait that encompasses how a person controls and regulates daily impulses, their degree of responsibility, and a general predilection toward achievement striving. Conscientiousness comprises several facets, including orderliness, self-discipline, and cautiousness. Many meta-analytic studies have shown strong evidence for the relationship between Conscientiousness and job performance across a variety of settings (Barrick & Mount, 1991; Chen et al., 2001), and Conscientiousness has been declared the primary dispositional predictor of job performance by Mount and Barrick (1995).
It has been recommended that organizations hire on Conscientiousness because of its
significant predictive ability (Barrick & Mount, 1991) and its relationship to productivity
measures, performance ratings, job tenure, and goal setting motivation (Chen et al., 2001; Judge
& Ilies, 2002). Highly conscientious individuals are more inclined to set positive goals, remain
resilient in difficult times, and strive to achieve (Barrick & Mount, 1991; Brown, Lent, Telander,
& Tramayne, 2011; Chen et al., 2001; Judge & Ilies, 2002), which is what makes them particularly successful in a work setting. These tendencies also make a person more inclined to succeed in a testing setting. In a meta-analysis by Hausknecht and colleagues (2004), Conscientiousness correlated with procedural justice perceptions at only .08, but correlated with test motivation at .20, suggesting that Conscientiousness might not alter test perceptions but rather the effect test perceptions have on motivation and performance.
Generalized Self-Efficacy. Self-efficacy was originally defined as the “beliefs in one’s
capabilities to mobilize the motivation, cognitive resources, and courses of action needed to meet
given situational demands” (Wood & Bandura, 1989, p. 408). Because the definition included
“situational demands,” research focused on task-specific self-efficacy, which is related to many positive organizational outcomes (Judge & Bono, 2001; Stajkovic & Luthans, 1998). However, in the late 1980s and early 1990s, researchers began to study the trait level of self-efficacy, termed generalized self-efficacy (GSE; Eden, 1988, 1996; Gardner & Pierce, 1998; Judge, Erez, &
Bono, 1998; Judge, Locke, & Durham, 1997). GSE has been defined as one’s perceptions of competence and perceived ability across various situations (Judge et al., 1998). The assessment of GSE allows researchers to determine differences in the way in which people view themselves and the degree to which they believe they can succeed in a variety of contexts (Chen, Gully, &
Eden, 2001).
Empirically, GSE and task-related self-efficacy have been shown to be separate constructs (Stajkovic & Luthans, 1998); however, they share many outcomes and antecedents
(Eden, 1988; Gardner & Pierce, 1998; Judge et al., 1997). Generalized self-efficacy, however, refers to a broader scope than task-specific self-efficacy and is much more resistant to transient influences (Eden, 1988). This suggests that those high in GSE will be less impacted by testing perceptions and more inherently efficacious and motivated to succeed. Given the scope of this project, evaluating the relationship between job-relatedness perceptions and test performance through motivational mechanisms, it is also important to consider how achievement-related individual differences modify the utility of the proposed model. If the proposed model has poor fit when considering these individual differences, it might indicate that testing perceptions are less important in regard to selection utility, as those who are likely to perform well and be hired are resilient to testing reactions.
It has been stated that when reactions deplete test motivation and thereby test performance, the proficiency of the selection system may suffer (Arvey et al., 1990; Ryan
& Ployhart, 2000); however, it has also been noted that these dispositional or trait differences in motivation may reflect real candidate ability (McCarthy et al., 2013). Employee performance is of utmost importance to the success of an organization and the more we know about the mechanisms that may alter test proficiency as well as the boundary conditions under which proficiency remains, the better we can make our hiring processes to enhance organizational achievement.
METHOD
Participants
For the current study, participants were recruited through Amazon’s Mechanical Turk
(MTurk). MTurk is a crowdsourcing internet marketplace that reaches a diverse group of
individuals interested in participating in research projects as well as completing other online
jobs. Past research suggests that MTurk is an appropriate and reliable way to obtain participants
as MTurk workers act similarly to those participants from other commonly used sampling
sources (Buhrmester, Kwang, & Gosling, 2011; Paolacci, Chandler, & Ipeirotis, 2010).
In an effort to ensure data quality, only those participants who had previously completed 50 HITs, had an approval rate of at least 95%, and lived in the United States were eligible. Participants were also required to work in customer service at least 24 hours a week. The original sample consisted of 466 participants, but 41 respondents were removed for failing data quality metrics (e.g., missing more than one quality check, completing the survey in less than 4 minutes, or scoring 2.5 or more standard deviations below the mean on test performance). The average age of the 425 participants retained was
34.6 (SD = 9.7). Seventy-six percent were white, 10% were African American, 6% were
Hispanic, 7% were Asian, and 51% were male.
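As a rough illustration, the screening rules above can be expressed as a simple filter. The thresholds come from the text; the field names, function name, and data below are hypothetical:

```python
from statistics import mean, stdev

def passes_quality(resp, perf_mean, perf_sd):
    """Screening rules described in the text; labels are illustrative."""
    return (resp["checks_missed"] <= 1                       # at most one quality check missed
            and resp["minutes"] >= 4                         # survey not completed in under 4 minutes
            and resp["score"] >= perf_mean - 2.5 * perf_sd)  # not 2.5+ SDs below the mean

# hypothetical respondents
sample = [
    {"checks_missed": 0, "minutes": 12.0, "score": 30},
    {"checks_missed": 2, "minutes": 10.0, "score": 28},  # fails: missed two checks
    {"checks_missed": 0, "minutes": 3.5,  "score": 27},  # fails: under 4 minutes
]
scores = [r["score"] for r in sample]
m, s = mean(scores), stdev(scores)
retained = [r for r in sample if passes_quality(r, m, s)]
```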
Procedure
First, eligible participants were asked to complete the items for the control variables (i.e., generalized self-efficacy, Conscientiousness). Similar to the procedure used by Holtz and colleagues (2005), participants were then provided a job description outlining a desirable (e.g., competitive salary, benefits) customer service position for which they were applying. This job description can be found in Appendix A. This type of research design has been shown to have good generalizability to the field when real job incumbents are not available (Hunthausen,
Truxillo, Bauer, & Hammer, 2003; Schmit et al., 1995). Participants were also told that the top
10% of performers on the assessment would receive additional compensation (i.e., 50% of the promised payment) to increase motivation to perform, as seen in similar work with non-applicants (e.g., Chan et al., 1998; Holtz et al., 2005).
Next, participants were randomly assigned to one of three conditions (personality test, cognitive ability test, or situational judgment test) and asked to review and complete five items from that test. This method allowed us to capture perceptions independent of reactions stemming from actual test performance. After completion of the small test sample, participants were asked to respond to the
items for face validity, perceived predictive validity, test-taking self-efficacy, and test motivation. All participants completed these items in that order. Lastly, participants were asked to complete the full selection assessment and then respond to items assessing perceptions of overall fairness.
The assessments had been previously piloted in undergraduate classes to ensure time for completion was not a confound. Each test was expected to take roughly 8 minutes. The cognitive ability test was timed as it reflected the qualities of a speed test as described by Mead and
Drasgow (1993). Cognitive ability tests for customer service jobs usually contain relatively easy items that, given an unlimited amount of time, would be correctly answered by most individuals.
The time limit, which is usually employed in selection situations for assessments of general mental ability, is necessary in this given situation to differentiate high and low performers on commonly used test items.
Measures
Selection assessments. All selection tests used in this study are the intellectual property of two consulting companies that specialize in the creation and implementation of selection materials. Because of the proprietary nature of these tests, test materials and scoring keys cannot be published.
Situational judgment test. The SJT contains 17 items meant to evaluate a person’s ability to judge appropriate behavior in specific situations. This is a validated measure used for selection in customer service positions by Work Skills First, Inc. After being given a situation, participants were provided with 2-6 possible behaviors. They were then asked to endorse how effective each behavior is on a 6-point Likert scale. Points are awarded for correctly identifying the behaviors as effective or ineffective. Higher scores indicate better performance. The directions and an example of a test item can be found in Appendix B.
Cognitive ability test. Forty items were aggregated from a larger multi-hurdle selection system used by Polaris Assessments Incorporated specific to customer service positions within a large organization. Participants were given an 8-minute time limit to complete as many test items as they could. The forty items were meant to assess competencies related to basic math, reasoning skills, and attention to detail. Example test items can be found in Appendix C. The test and scoring system has been validated by Polaris Assessments Inc. For test scoring, points are awarded for correct answers and higher scores indicate better performance.
Personality test. Sixty-five personality items were taken from a larger selection system used to hire for customer service jobs within a large, well-known organization. These items reflect competencies related to flexibility, communication, creativity, sociability, helping others, reliability, initiative, and customer service demeanor. Personality items were randomly presented with responses ranging from (1) strongly disagree to (5) strongly agree. The scoring system has been validated by Polaris Assessments Inc. Five points are assigned to the best answer and 1 point to the worst. Higher scores indicate better performance. Example test items can be found in
Appendix D.
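Because the proprietary keys cannot be published, the key-based Likert scoring described above (5 points for the best answer down to 1 for the worst) can only be illustrated schematically. In the sketch below, the linear step between the poles, the assumption that keyed answers sit at a scale endpoint, and all names are illustrative, not the vendor's actual scoring:

```python
def score_personality(responses, key):
    """Hypothetical key-based Likert scoring: 5 points for the best answer,
    1 point for the worst, stepping down linearly in between.

    responses: item -> chosen answer (1-5); key: item -> keyed best answer (1 or 5).
    """
    total = 0
    for item, answer in responses.items():
        distance = abs(answer - key[item])  # 0 (best answer) .. 4 (worst answer)
        total += 5 - distance               # 5 points down to 1
    return total
```

With this scheme, endorsing the keyed answer on one item and the opposite pole on another yields 5 + 1 points.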
Face validity. Face validity was measured using a five-item survey created by Smither and colleagues (1993) in one of the seminal works on applicant reactions. These items are part of a larger questionnaire also evaluating perceived predictive validity, likelihood of improvement, affect, perceived knowledge of results, and organizational attractiveness. Example items include, “I did not understand what the examination had to do with the job” and “it would be obvious to anyone that the examination is related to the job.” Participants were asked to respond on a five-point Likert scale ranging from strongly disagree to strongly agree. The alpha reported by Smither and colleagues (1993) was .86. A full list of items for this measure can be found in Appendix E.
Perceived predictive validity. The items being used to measure perceived predictive validity were also from the larger survey created by Smither et al. (1993). This measure contains five items where participants were asked to respond on a 5-point Likert scale ranging from
strongly disagree to strongly agree. This measure was found to have an alpha coefficient of .83 in
the original study by Smither and colleagues (1993). Sample items include, “failing to pass the
examination clearly indicates that you can’t do the job” and “my performance on the
examination was a good indicator of my ability to do the job.”
Testing self-efficacy. The measure for self-efficacy was created based on recommendations by Bandura (2006). These questions were created to assess self-efficacy for this specific task rather than general self-efficacy. Participants were asked to, “rate your degree
of confidence in each statement by responding appropriately to the scale below,” for each of the
items. Responses were reported on a five-point Likert scale where 1 is strongly disagree and 5 is strongly agree. Sample items include, “I believe I can do well on this test” and “I have the ability to be successful on this test.” This method is similar to that used by Gilliland (1994).
Test motivation. Test motivation was measured using five items from a ten-item scale originally created by Arvey, Strickland, Drauden, and Martin (1990). In the original survey, the 10 items had a coefficient alpha of .85. Participants were asked to respond to these items as if they were taking the selection test as an applicant for the specific job outlined, on a 5-point Likert scale ranging from strongly disagree to strongly agree. Example items include, “I want to be among the top scorers on this test” and “doing well on this test is important to me.” The scale was abbreviated in an effort to eliminate redundancy and decrease time strains on the participants. Two items from the larger 10-item version were removed because they were negatively worded. The other three were dropped because of substantial overlap in item content. For example, the item “I tried my best on this test” was excluded, while “I tried to do the very best I could on this test” was retained in this abbreviated version.
Generalized self-efficacy. GSE was measured using the New General Self-Efficacy
Scale (NGSE) developed by Chen, Gully, and Eden (2001). The measure consists of 8 items.
Participants were asked to rate the extent to which each statement describes them on a 5-point Likert scale where 1 is strongly disagree and 5 is strongly agree. An example item is “I will be able to achieve most of the goals that I have set for myself.”
Conscientiousness. Conscientiousness was measured using the 10-item version of the NEO IPIP adapted by Goldberg (1992). Participants were asked to report the degree to which they agreed with each statement on a 5-point Likert scale. Example Conscientiousness items include “[I] get chores done right away” and “[I] waste my time.” The NEO IPIP is a shortened, 50-item personality inventory adapted by Goldberg (1992). This measure has been found to have high internal consistency in previous work (α = .86) and to correlate highly with the appropriate scales.
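The internal-consistency estimates reported for these scales (e.g., α = .85, .86) follow the standard coefficient alpha formula, α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ). A minimal sketch, assuming complete item-level responses (the function name is illustrative):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Coefficient alpha: k/(k-1) * (1 - sum(item variances) / variance(total scores)).

    items: list of k item-response columns of equal length (one column per item).
    Population variances are used throughout; the (n-1) corrections cancel.
    """
    k = len(items)
    n = len(items[0])
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]  # per-person scale scores
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))
```

Perfectly redundant items yield α = 1.0, the theoretical ceiling of internal consistency.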
RESULTS
The means, standard deviations, sample sizes, and reliability estimates by experimental group can be found in Table 2. To test Hypothesis 1, that SJTs would be rated highest in job-relatedness, followed by cognitive ability tests and then personality tests, a one-way analysis of variance (ANOVA) was conducted. Levene’s test indicated that the assumption of homogeneity of variance was not met for these data (F = 15.6, p < .001), so a Welch-adjusted F ratio was obtained. The main effect was significant, indicating substantial differences between at least two of the three test conditions in terms of job-relatedness, F(2, 268.12) = 43.2, p < .001, η² = .20. To determine how each experimental condition varied by perceived job-relatedness, Dunnett’s T3
post hoc procedure was implemented (Table 3). This analysis determined that those in the cognitive ability condition believed the test to be less job-related (M = 3.07, SD = 1.0) than those in either the personality condition (M = 3.73, SD = 0.73) or the SJT condition (M = 4.02, SD = 0.90). The difference in job-relatedness perceptions between the personality and SJT conditions was also significant, wherein the SJT was viewed as more job-related. As a result, Hypothesis 1 was partially supported: SJTs were viewed as the most job-related, but cognitive ability tests, rather than personality tests, were deemed the least job-related. To determine whether job-relatedness perceptions were significantly related to fairness perceptions, the correlation was evaluated and found to be significant, r(416) = .60, p < .001, supporting Hypothesis 2.
Table 2. Means and Standard Deviations for Study Variables by Test Condition

                                Cognitive Ability    Personality        Situational Judgment
                                Test (n = 144)       Test (n = 140)     Test (n = 141)
                                M (SD)               M (SD)             M (SD)
Job-Relatedness                 3.1 (1.0)bc          3.7 (0.7)ac        4.0 (0.6)ab
Face Validity                   3.2 (1.1)bc          4.0 (0.8)ac        4.4 (0.7)ab
Perceived Predictive Validity   2.9 (1.0)bc          3.5 (0.9)a         3.6 (0.9)a
Test-Taking Self-Efficacy       4.0 (0.7)b           4.3 (0.6)a         4.2 (0.7)
Test Motivation                 4.4 (0.7)            4.5 (0.7)          4.5 (0.7)
Test Performance                25.4 (9.39)          259.6 (35.58)      28.5 (3.82)
Conscientiousness               4.1 (0.8)            4.1 (0.7)          4.0 (0.7)
Generalized Self-Efficacy       4.2 (0.7)            4.2 (0.7)          4.1 (0.8)
Fairness                        3.58 (1.1)bc         4.03 (0.9)a        4.14 (0.8)a

Note. Superscripts indicate a significant difference between conditions, where a is the cognitive ability test, b is the personality test, and c is the situational judgment test.
Table 3. Dunnett T3 Comparisons for Test Type

Test (I)            Test (J)            Mean Diff (I-J)   Std. Error   95% CI [Lower, Upper]
Cognitive Ability   Personality         -0.66*            0.11         [-0.92, -0.41]
Cognitive Ability   SJT                 -0.95*            0.10         [-1.20, -0.71]
Personality         Cognitive Ability    0.66*            0.11         [0.41, 0.92]
Personality         SJT                 -0.29*            0.08         [-0.49, -0.09]
SJT                 Cognitive Ability    0.95*            0.10         [0.71, 1.20]
SJT                 Personality          0.29*            0.08         [0.09, 0.49]

* The mean difference is significant at the .05 level.
Correlations, means, and standard deviations for all study variables, aggregated across conditions, can be found in Table 4. To test Hypotheses 3-5, structural equation modeling was used. First, a confirmatory factor analysis (CFA) was conducted in Mplus (Muthén & Muthén, 1998-2012) to assess the distinctness of the latent constructs. The indicators for job-relatedness were a sum score for the construct of face validity and a separate sum score for the construct of perceived predictive validity. All other latent constructs were indicated by their individual test items. Test performance was entered as an observed variable.

Model fit was assessed using several fit indices. The chi-square statistic, for which a non-significant value indicates good fit, was examined first, though it is highly sensitive to sample size. To triangulate fit across methods, other fit statistics were also evaluated: the comparative fit index (CFI; Hu & Bentler, 1995), where a value of .90 or above indicates acceptable fit; the standardized root mean square residual (SRMR), which evidences moderate fit at .08 or below, with .05 or below as a more conservative cutoff (Hu & Bentler, 1999); and the root mean square error of approximation (RMSEA), which indicates moderate fit when below .08 (Browne & Cudeck, 1993).

Using these fit statistics and predetermined cutoffs, the original CFA evidenced poor fit (χ²(391) = 1465.11, p < .001; CFI = .863; SRMR = .059; RMSEA = .08, 90% CI [.076, .085]). Modification indices suggested that substantial misfit stemmed from measurement error within the test-taking self-efficacy measure, as two questions shared residual error. Both items appear to measure a similar portion of the construct space (i.e., "I can do well/achieve on this test"), so in the next CFA their residuals were allowed to correlate. This model evidenced significantly better fit (∆χ²(1) = 124.87, p < .05), but overall fit was still poor (χ²(390) = 1340.24, p < .001; CFI = .879; SRMR = .065; RMSEA = .076, 90% CI [.071, .080]).
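The RMSEA point estimate and the nested-model chi-square difference test above follow directly from the reported chi-square values. As a hedged sketch in Python (assuming the standard Steiger-Lind point-estimate formula for RMSEA and a total N of 425, the sum of the condition sizes in Table 2):

```python
import math
from scipy.stats import chi2

def rmsea(chisq, df, n):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (N - 1)))."""
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Chi-square (likelihood-ratio) difference test for nested ML models.

    The restricted model is the one with more degrees of freedom."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    return d_chisq, d_df, chi2.sf(d_chisq, d_df)

# Recovering the reported values: original CFA vs. CFA with one residual correlation
r = rmsea(1465.11, 391, 425)                              # ~ .080, as reported
d, ddf, p = chisq_diff_test(1465.11, 391, 1340.24, 390)   # ∆χ²(1) ~ 124.87
```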
Table 4. Means, Standard Deviations, Correlations, and Reliability Estimates

Variable                          M      SD    1     2     3     4     5     6          7     8     9
1. Job-Relatedness                3.61   0.9   (.92)
2. Face Validity                  3.86   1.0   .90*  (.91)
3. Perceived Predictive Validity  3.36   1.0   .89*  .61*  (.87)
4. Test-Taking Self-Efficacy      4.44   0.7   .50*  .40*  .50*  (.82)
5. Test Motivation                4.31   0.8   .37*  .38*  .27*  .56*  (.87)
6. Test Performance               -      -     .18*  .25*  .06   .34*  .46*  (.69-.97)
7. Conscientiousness              4.07   0.7   .20*  .20*  .16*  .37*  .47*  .30*       (.90)
8. Generalized Self-Efficacy      4.16   0.7   .19*  .15*  .19*  .52*  .46*  .22*       .68*  (.93)
9. Fairness                       3.58   1.1   .60*  .47*  .60*  .52*  .38*  .22*       .29*  .35*  (.90)

Note: Reliability estimates appear on the main diagonal; the alpha coefficient for test performance is a range because of the three test conditions: cognitive ability test (α = .87), SJT (α = .69), and personality test (α = .97). * p < .01.
Although the measures utilized in this study have been shown to have good internal consistency (Arvey et al., 1990; Goldberg, 1992; Smither et al., 1993), an exploratory factor analysis (EFA) was conducted to further investigate the sources of model misfit. A principal components analysis with oblimin rotation was used, as previous work has shown many of these constructs to be correlated (Cheng & Ickes, 2009; Hausknecht et al., 2004; Zimmerman, 2000).
Because the indicators for job-relatedness were sum scores, those items were not entered into the EFA, nor were the observed performance scores. Interpretation of the EFA and scree plot (Cattell, 1966) indicated five substantial factors with eigenvalues above 1, together accounting for 68% of the variance. The Kaiser-Meyer-Olkin measure of sampling adequacy was .94 and Bartlett's test of sphericity was significant (χ²(351) = 7336.25, p < .05), indicating the data were suitable for factor analysis.
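Bartlett's test of sphericity checks whether the correlation matrix departs from an identity matrix, i.e., whether the items are intercorrelated enough to factor at all. A minimal sketch of the standard test statistic, using simulated data in place of the study's item responses, might look like:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity for an (n x p) data matrix.

    H0: the population correlation matrix is the identity.
    Statistic: -(n - 1 - (2p + 5)/6) * ln|R|, with p(p-1)/2 df."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    _, logdet = np.linalg.slogdet(R)          # log-determinant of R
    stat = -(n - 1 - (2 * p + 5) / 6) * logdet
    df = p * (p - 1) // 2
    return stat, df, chi2.sf(stat, df)

# Illustrative data: four indicators sharing one latent factor, so H0 should be rejected
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
data = latent + 0.5 * rng.normal(size=(200, 4))
stat, df, p = bartlett_sphericity(data)       # df = 6; p is essentially zero here
```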
The first factor explained 42% of the variance, factor 2 explained 10%, factor 3 explained 8%, and factors 4 and 5 explained roughly 4% each. Three of the measures loaded as expected, with primary loadings above .5 and cross-loadings below .3. However, the Conscientiousness items split into positively and negatively worded factors, and two of the positively worded Conscientiousness items also loaded substantially onto the generalized self-efficacy factor. Because of this cross-loading, the item "make plans and stick to them" was removed and the EFA was run again.
The next EFA produced similar loadings; the Kaiser-Meyer-Olkin measure of sampling adequacy was still .94 and Bartlett's test of sphericity remained significant (χ²(325) = 6901.83, p < .05). Again, all measures loaded as expected except that one Conscientiousness item still showed a substantial cross-loading and the negatively worded Conscientiousness items still formed a separate factor. The item "carry out my plans" was removed and the EFA was run a final time. This final EFA consisted of five factors accounting for 69% of the variance, with all items loading as expected except that Conscientiousness retained separate positively and negatively worded factors. Summary results of the final EFA can be found in Table 5.
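The eigenvalue-greater-than-1 retention rule applied above, along with percent variance explained per component, can be illustrated with a small principal components sketch. This is a simplified stand-in for the rotated EFA reported here (no oblimin rotation, and simulated two-factor data rather than the study's item responses):

```python
import numpy as np

def kaiser_retention(data):
    """Eigenvalues of the correlation matrix, % variance per component,
    and the number of components retained under the eigenvalue > 1 rule."""
    R = np.corrcoef(data, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending eigenvalues
    pct = 100.0 * eig / eig.sum()                # eig.sum() equals the number of items
    return eig, pct, int(np.sum(eig > 1.0))

# Simulated responses: items 0-2 load one factor, items 3-5 another
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(300, 1)), rng.normal(size=(300, 1))
noise = 0.5 * rng.normal(size=(300, 6))
data = np.hstack([f1.repeat(3, axis=1), f2.repeat(3, axis=1)]) + noise
eig, pct, n_keep = kaiser_retention(data)        # two components clear the cutoff
```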
Research confirms that negatively worded items can diverge in factor analyses because of differences in response patterns (DiStefano & Motl, 2006). Because the EFA suggested an additional method factor, and the Conscientiousness measure used (i.e., the IPIP) has been shown to have good psychometric properties (Donnellan, Oswald, Baird, & Lucas, 2006), this method factor was integrated into the CFA: the latent Conscientiousness factor was created using a composite score for the positively worded items and a separate composite for the negatively worded items. The final CFA evidenced good fit (χ²(194) = 570.27, p < .001; CFI = .931; SRMR = .051; RMSEA = .068, 90% CI [.061, .074]).
Table 5. Summary of Exploratory Factor Analysis Results Based on a Principal Components Analysis with Oblimin Rotation for 4 Measures (N = 379)

Item           Generalized      Conscientious-   Conscientious-   Motivation   Test-Taking
               Self-Efficacy    ness (+)         ness (-)                      Self-Efficacy
GSE1           .847
GSE2           .814
GSE3           .871
GSE4           .898
GSE5           .809
GSE6           .807
GSE7           .655
GSE8           .716
ConPos1                         .700
ConPos2                         .622
ConPos3                         .714
ConNeg1                         .322             .681
ConNeg2                                          .788
ConNeg3                                          .757
ConNeg4                                          .774
ConNeg5                                          .764
Motivation1                                                       .701
Motivation2                                                       .757
Motivation3                                                       .835
Motivation4                                                       .757
Motivation5                                                       .659
TTE1                                                                           .584
TTE2                                                                           .873
TTE3                                                                           .812
TTE4                                                                           .610
Eigenvalues                     1.03             2.04             1.17         2.61
% of variance  41.1             4.1              8.2              4.7          10.5

Note: Factor loadings < .3 have been suppressed.
All structural models were estimated using maximum likelihood with bootstrapping (5,000 resamples) to estimate standard errors and 95% confidence intervals. Next, the hypothesized model was tested (see Figure 1) and found to have acceptable fit (χ²(198) = 594.96, p < .001; CFI = .928; SRMR = .075; RMSEA = .069, 90% CI [.062, .075]). Although not shown in Figure 1, the control variables were allowed to correlate in all structural models and were regressed onto test performance, test-taking motivation, and test-taking self-efficacy. All relationships are reported as standardized parameter estimates.
Job-relatedness perceptions significantly predicted test-taking self-efficacy (β = .60, p < .001) but did not significantly predict test-taking motivation (β = -.04, p > .05). Therefore, Hypothesis 5 was supported while Hypothesis 3 was not. In addition, test-taking self-efficacy predicted test-taking motivation (β = .59, p < .01), further supporting the proposed mediation and Hypothesis 5. Test motivation also significantly predicted test performance (β = .45, p < .001), supporting Hypothesis 4. The indirect effect of job-relatedness on performance through motivation alone was not significant (β = -.02, 95% CI [-.16, .12], p > .05). However, the serial indirect effect of job-relatedness on test performance through test-taking self-efficacy and then motivation was significant (β = .17, 95% CI [.02, .28], p < .05). These results held when controlling for Conscientiousness, which did not significantly predict test-taking self-efficacy (β = -.08, p > .05) or test performance (β = .12, p > .05) but did significantly predict test motivation (β = .39, p < .05). The control variable generalized self-efficacy significantly predicted test-taking self-efficacy (β = .58, p < .01) but not motivation (β = -.13, p > .05) or performance (β = -.11, p > .05). The model accounted for 63% of the variance in test-taking self-efficacy, 51% of the variance in test motivation, and 22% of the variance in test performance. Standardized parameter estimates for the hypothesized model can be found in Figure 2. Standardized parameter estimates for the control variables as regressed onto test-taking self-efficacy, motivation, and test performance can be found in Table 6.
Table 6. Standardized Path Estimates for Control Variables as Regressed onto Model Variables

                            Hypothesized Model    Full Mediation Model    Complex Model
                            Con       GSE         Con       GSE           Con       GSE
Test-Taking Self-Efficacy   -.08      .58*        -.08      .58*          -.08      .58*
Motivation                   .39     -.13          .38*    -.10            .39     -.13
Test Performance             .12     -.11          .13     -.11            .12     -.11

* p < .01
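The bootstrapped serial indirect effect reported above (job-relatedness to self-efficacy to motivation to performance) was estimated on latent variables in Mplus; a simplified observed-score analogue can still convey the logic. The sketch below, with simulated data and hypothetical variable names, estimates the product of the three OLS path coefficients and a 95% percentile-bootstrap interval:

```python
import numpy as np

def _ols_slopes(y, *preds):
    """OLS slopes (intercept fitted, then dropped), one per predictor."""
    X = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

def serial_indirect(x, m1, m2, y):
    """Serial indirect effect x -> m1 -> m2 -> y as the product a*b*c."""
    a = _ols_slopes(m1, x)[0]          # x -> m1
    b = _ols_slopes(m2, x, m1)[1]      # m1 -> m2, controlling for x
    c = _ols_slopes(y, x, m1, m2)[2]   # m2 -> y, controlling for x and m1
    return a * b * c

def boot_ci(x, m1, m2, y, n_boot=500, seed=0):
    """95% percentile-bootstrap CI for the serial indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)    # resample cases, keeping rows paired
        draws.append(serial_indirect(x[idx], m1[idx], m2[idx], y[idx]))
    return np.percentile(draws, [2.5, 97.5])

# Simulated chain: each path is ~0.6, so the true indirect effect is ~0.22
rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
m1 = 0.6 * x + rng.normal(size=n)
m2 = 0.6 * m1 + rng.normal(size=n)
y = 0.6 * m2 + rng.normal(size=n)
lo, hi = boot_ci(x, m1, m2, y)         # interval should exclude zero
```

Unlike the latent-variable analysis, this sketch ignores measurement error and control variables; it is only meant to show why the interval excluding zero supports the serial mediation claim.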
To determine how well the hypothesized model fit the data relative to plausible alternatives, two alternative models were tested. The first alternative was a full mediation model wherein test performance was predicted by motivation, motivation was predicted by test-taking self-efficacy, and test-taking self-efficacy was predicted by job-relatedness. This model, although more parsimonious, also evidenced good fit (χ²(199) = 595.186, p < .001; CFI = .928; SRMR = .075; RMSEA = .068, 90% CI [.062, .075]). The chi-square difference test comparing the hypothesized model to this alternative was not significant (∆χ²(1) = 0.23, p > .05), suggesting that the more parsimonious, full mediation model was more appropriate. In this model, job-relatedness significantly predicted test-taking self-efficacy (β = .60, p < .001), test-taking self-efficacy significantly predicted test motivation (β = .55, p < .001), and test motivation significantly predicted test performance (β = .45, p < .001). The indirect effect of job-relatedness on test performance through test-taking self-efficacy and test motivation was also significant (β = .15, 95% CI [.12, .38], p < .001). The model accounted for 63% of the variance in test-taking self-efficacy, 50% of the variance in test motivation, and 22% of the variance in test performance. Standardized parameter estimates for the full mediation model can be found in Figure 3.
In addition, a more complex model was tested wherein test performance was directly predicted by job-relatedness and test-taking motivation, test motivation was predicted by test-taking self-efficacy and job-relatedness, and test-taking self-efficacy was predicted by job-relatedness. The complex model also evidenced good fit (χ²(197) = 594.99, p < .001; CFI = .927; SRMR = .075; RMSEA = .069, 90% CI [.063, .075]). The chi-square difference test comparing the hypothesized model to the more complex model was not significant (∆χ²(1) = 0.03, p > .05), suggesting that the more parsimonious hypothesized model was preferable. The comparison with the full mediation model was likewise not significant (∆χ²(2) = 0.196, p > .05). Standardized parameter estimates for the more complex model can be found in Figure 4. Based on these model comparisons, the similarity in fit statistics, and its greater parsimony, the full mediation model was selected as more appropriate than either the hypothesized model or the more complex model.
DISCUSSION
As expected, results indicate that selection tests vary in their perceived job-relatedness. However, unlike previous research, which has found cognitive ability tests to be perceived as more job-related than personality tests (Hausknecht et al., 2004; Smither et al., 1993), the present findings were that situational judgment tests were perceived as most job-related, followed by personality tests and then cognitive ability tests. The difference could potentially be attributed to the specific job used (i.e., customer service worker). Customer service jobs are generally perceived as highly interpersonal, and of the Big Five personality traits, Agreeableness has been shown to be incrementally predictive for customer service jobs specifically (Frei & McDaniel, 1998). It is also possible that the diminished perception of job-relatedness for the cognitive ability test reflected the particular test used. Although this test has been validated for customer service jobs, its items involve math, spelling, reading tables/charts, and grammar, which may not be perceived as directly related to the tasks customer service workers perform. The personality test, on the other hand, comprised questions directly related to interpersonal competencies, many of them contextualized to a work setting, potentially improving perceived job-relatedness. However, results are mixed regarding the utility of contextualized measures for improving test reactions (Holtrop, Born, de Vries, & de Vries, 2014; Holtz, Ployhart, & Dominguez, 2005).
The situational judgment test was deemed significantly more job-related than the other test types. Because SJTs are a more novel testing strategy than the other two used here, few studies have compared perceptions of these tests to one another. These findings suggest that the use of SJTs may help foster positive test reactions. Because SJTs can be written to evaluate a variety of constructs (e.g., personality, cognitive ability; Christian, Edwards, & Bradley, 2010), it may be beneficial to use them when possible.
Again, although the tests did not relate to perceived job-relatedness exactly as expected, they did vary in applicants' perceptions. In addition, perceived job-relatedness was significantly related to overall fairness perceptions of the testing process. This correlation was not only significant but large (r = .60), suggesting that overall fairness perceptions may be heavily influenced by the degree to which a test seems related to the job of interest. These findings further indicate that organizations interested in applicants' perceptions of process fairness would be wise to utilize testing methods directly related to the job.
These findings also indicate that face validity perceptions impact test performance indirectly, through changes in self-efficacy and, more distally, test motivation, after controlling for the effects of Conscientiousness and generalized self-efficacy. This model supports the relationship previously posited by applicant reactions researchers (Hausknecht et al., 2004; Macan et al., 1994; Smither et al., 1993) and found in part by Chan and colleagues (1997). Although the hypothesized model expected test-taking self-efficacy to only partially mediate the relationship between reactions and test performance, the full mediation model was found to be more appropriate due to the minimal differences in fit and its greater parsimony. These findings suggest that not only is test performance affected by applicant reactions, but this relationship can be explained, at least in part, through changes in test-taking self-efficacy and test motivation.

As noted, self-efficacy is the degree to which a person feels capable of affecting the environment and producing desired outcomes (Bandura, 1977), which, in turn, informs how one feels about a task, how one chooses to self-motivate, the goals one sets for that task, and one's resilience when presented with difficulty (Bandura & Locke, 2003). According to Social Cognitive Theory, in the case of selection, environmental factors such as reactions to tests and testing methods may impact an applicant's testing self-efficacy (Bandura, 1997), thus informing intent to perform and test performance outcomes. Although motivation is complex and comprises variables other than self-efficacy (Schunk, 1991), in the case of applicant reactions it may be that self-efficacy is the primary driver of motivation, which in turn substantially contributes to test performance.
Implications
Overall, this study offers several contributions to the academic literature and to the proficiency of organizational selection. First, although the directional relationship between face validity and test performance through motivation has been examined before (Chan et al., 1997), this model had not previously been tested while considering individual differences, nor had self-efficacy been introduced as a mediator driving those changes in motivation. It has been argued that negative reactions could reduce the validity of assessments through decreased test performance (e.g., Arvey et al., 1990; McCarthy et al., 2009). However, the previous study evaluating this relationship (Chan et al., 1997) noted that its findings could reflect trait motivation, which could in turn predict motivation on the job, thereby bolstering, rather than hindering, test validity. By including individual differences related to resilience and effort toward achievement (Brown, Lent, Telander, & Tramayne, 2011; Judge & Ilies, 2002) as well as to test and job performance (e.g., Barrick & Mount, 1991), this study could better determine the extent to which the applicant reactions-performance relationship holds independent of individual differences. The hypothesized model, and the more appropriate fully mediated model, showed good fit regardless of these dispositional differences, supporting the notion that reactions matter even for those more likely to achieve and providing evidence that depleted performance might cause good applicants to be erroneously selected out of the process, diminishing the proficiency of the selection system.
In addition, Chan and colleagues (1997) noted that researchers have historically subsumed individual differences such as motivation and test perceptions into random error when measuring performance, limiting the field's ability to improve measurement and fully understand the performance construct. This study provides a more comprehensive understanding of the attitudinal mechanisms related to performance, which can enable more appropriate measurement and prediction of job success.
These findings also support the premise that organizations should use job-related tests. These results, along with previous research, find perceptions of job-relatedness to be significantly related to overall fairness perceptions (Hausknecht et al., 2004). Prior research in this area also shows job-relatedness perceptions to contribute to organizational attractiveness (Smither et al., 1993), intentions to seek legal action (Bauer et al., 1998; Smither et al., 1993), and intentions to accept a job offer (Chapman et al., 2005; Hausknecht et al., 2004), as well as to improved psychometric properties of tests (Chan & Schmitt, 1997). The findings confirm that perceptions of job-relatedness vary by test and potentially by job. Organizations should consider job-relatedness, specifically face validity and perceived predictive validity, along with other methodological test qualities in the development of effective selection assessments.
In sum, this research supports the notion that both organizations and researchers should consider perceived job-relatedness when creating selection assessments. As has been posited, the validity of assessments could be at risk when applicants deem a test unrelated to the job (Arvey et al., 1990; McCarthy et al., 2009; Ryan & Ployhart, 2000). However, the extent to which applicant rank ordering changes because of the reactions-performance relationship has not yet been assessed. It is possible that, if all applicants are equally affected by job-relatedness perceptions, the effect on performance will wash out, making this model unimportant for the proficiency of selection tests. Based on prior research and the present findings, future applicant reactions research should evaluate the degree to which reactions impact performance and how that alters the proficiency of selection systems.
Limitations
As noted, this sample was collected from MTurk and did not consist of real job applicants, which could limit the study's ecological validity. In an attempt to reduce this effect, participants were provided with a desirable job description, an approach that has shown good generalizability when real job applicants are not available (Hunthauser et al., 2003; Schmit et al., 1995). Participants were also incentivized to perform through a substantial bonus payment, which has been shown to increase testing motivation toward levels more similar to a real hiring scenario (Chan et al., 1998; Holtz et al., 1998). Nevertheless, future studies should attempt to replicate this model within a sample of actual job applicants, as applicant motivation may derive more from the desire to obtain the job than from the test content.
In addition, responses were aggregated across the experimental groups (i.e., personality, cognitive ability, and situational judgment test). Although theory currently suggests that face validity perceptions impact motivation and performance regardless of test type (Gilliland, 1993; Chan & Schmitt, 1997), it is possible that this model works differently across testing methods. Future research should assess test type as a potential moderator within this model.
Conclusion
Overall, this research indicates that applicant reactions, specifically the degree to which a test seems job-related, impact test performance through changes in self-efficacy and test motivation. This relationship held while controlling for achievement-related traits, further suggesting that reactions may have a significant impact on how one performs and, even more importantly, on who is selected for the job. Further research should continue to determine the degree to which test reactions impact performance, the potential moderators altering this relationship, and how the proficiency of a selection system is affected by applicant reactions.
REFERENCES
Adams, J. S. (1965) Inequity in social exchange. In L. Berkowitz (Ed.), Advances in
experimental social psychology (Vol. 2, pp. 267-299). New York: Academic Press.
Aiman-Smith, L., Bauer, T. N., & Cable, D. M. (2001). Are you attracted? Do you intend to
pursue? A recruiting policy-capturing study. Journal of Business and Psychology, 16(2),
219-237.
Alexander, S., & Ruderman, M. (1987). The role of procedural and distributive justice in
organizational behavior. Social Justice Research, 1(2), 177-198.
Anastasi, A. (1988). Psychological testing. New York: Macmillan.
Anderson, N., & Witvliet, C. (2008). Fairness reactions to personnel selection methods: An
international comparison between the Netherlands, the United States, France, Spain,
Portugal, and Singapore. International Journal of Selection and Assessment, 16(1), 1-13.
Anderson, N., Salgado, J. F., & Hülsheger, U. R. (2010). Applicant Reactions in Selection:
Comprehensive meta‐analysis into reaction generalization versus situational specificity.
International Journal of Selection and Assessment, 18(3), 291-304.
Aronson, E., Carlsmith, J. M., & Ellsworth, P. C. (1990). Methods of research in social
psychology (2nd ed.). New York: McGraw-Hill.
Arvey, R. D., Strickland, W., Drauden, G., & Martin, C. (1990). Motivational components of
test-taking. Personnel Psychology, 43(4), 695-716.
Bakhshi, A., Kumar, K., & Rani, E. (2009). Organizational justice perceptions as predictor of job
satisfaction and organization commitment. International Journal of Business and
Management, 4(9), 145.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological
Review, 84(2), 191.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37(2),
122.
Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior and
Human Decision Processes, 50(2), 248-287.
Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning.
Educational Psychologist, 28(2), 117-148.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.
Bandura, A. (2006). Guide for constructing self-efficacy scales. Self-efficacy beliefs of
adolescents, 5, 307-337.
Bandura, A., & Locke, E. A. (2003). Negative self-efficacy and goal effects revisited. Journal of
Applied Psychology, 88(1), 87.
Barrett-Howard, E., & Tyler, T. R. (1986). Procedural justice as a criterion in allocation
decisions. Journal of Personality and Social Psychology, 50(2), 296.
Barrick, M. R., & Mount, M. K. (1991). The big five personality dimensions and job
performance: A meta-analysis. Personnel Psychology, 44(1), 1-26.
Bauer, T. N., & Truxillo, D. M. (2006). Applicant reactions to situational judgment tests:
Research and related practical issues. Situational judgment tests: Theory, measurement,
and application, 233-249.
Bauer, T. N., Maertz Jr., C. P., Dolen, M. R., & Campion, M. A. (1998). Longitudinal assessment
of applicant reactions to employment testing and test outcome feedback. Journal of
Applied Psychology, 83(6), 892.
Bauer, T. N., Truxillo, D. M., Sanchez, R. J., Craig, J. M., Ferrara, P., & Campion, M. A. (2001).
Applicant reactions to selection: Development of the selection procedural justice scale
(SPJS). Personnel Psychology, 54(2), 387-419.
Bell, B. S., Wiechmann, D., & Ryan, A. M. (2006). Consequences of organizational justice
expectations in a selection system. Journal of Applied Psychology, 91(2), 455.
Bobocel, D. R., & Gosse, L. (2015). Procedural Justice: A Historical Review and Critical
Analysis. The Oxford Handbook of Justice in the Workplace, 51.
Bouffard-Bouchard, T. (1990). Influence of self-efficacy on performance in a cognitive task. The
Journal of Social Psychology, 130(3), 353-363.
Brown, S. D., Lent, R. W., Telander, K., & Tramayne, S. (2011). Social cognitive career theory,
conscientiousness, and work performance: A meta-analytic path analysis. Journal of
Vocational Behavior, 79(1), 81-90.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen
& J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Newbury Park,
CA: Sage.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source
of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3-5.
Cabrera, M. A. M., & Nguyen, N. T. (2001). Situational judgment tests: A review of practice and
constructs assessed. International Journal of Selection and Assessment, 9(1‐2), 103-113.
Cascio, W. F. (1987). Applied psychology in personnel management. Prentice-Hall.
Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in
situational judgment tests: Subgroup differences in test performance and face validity
perceptions. Journal of Applied Psychology, 82(1), 143.
Chan, D., & Schmitt, N. (2002). Situational judgment and job performance. Human
Performance, 15(3), 233-254.
Chan, D., & Schmitt, N. (2004). An Agenda for Future Research on Applicant Reactions to
Selection Procedures: A Construct‐Oriented Approach. International Journal of
Selection and Assessment, 12(1‐2), 9-23.
Chan, D., Schmitt, N., DeShon, R. P., Clause, C. S., & Delbridge, K. (1997). Reactions to
cognitive ability tests: The relationships between race, test performance, face validity
perceptions, and test-taking motivation. Journal of Applied Psychology, 82(2), 300.
Chan, D., Schmitt, N., Sacco, J. M., & DeShon, R. P. (1998). Understanding pretest and posttest
reactions to cognitive ability and personality tests. Journal of Applied Psychology, 83(3),
471.
Chapman, D. S., Uggerslev, K. L., & Webster, J. (2003). Applicant reactions to face-to-face and
technology-mediated interviews: A field investigation. Journal of Applied
Psychology, 88(5), 944.
Chemers, M. M., Hu, L. T., & Garcia, B. F. (2001). Academic self-efficacy and first year college
student performance and adjustment. Journal of Educational Psychology, 93(1), 55.
Chen, G., Gully, S. M., & Eden, D. (2001). Validation of a new general self-efficacy scale.
Organizational Research Methods, 4(1), 62-83.
Christian, M. S., Edwards, B. D., & Bradley, J. C. (2010). Situational judgment tests: Constructs
assessed and a meta-analysis of their criterion-related validities. Personnel Psychology,
63, 83–117.
Clause, C. S., Delbridge, K., Schmitt, N., Chan, D., & Jennings, D. (2001). Test preparation
activities and employment test performance. Human Performance, 14(2), 149-167.
Clevenger, J., Pereira, G. M., Wiechmann, D., Schmitt, N., & Harvey, V. S. (2001). Incremental
validity of situational judgment tests. Journal of Applied Psychology, 86(3), 410.
Cohen-Charash, Y., & Spector, P. E. (2001). The role of justice in organizations: A meta-
analysis. Organizational Behavior and Human Decision Processes, 86(2), 278-321.
Colquitt, J. A., Conlon, D. E., Wesson, M. J., Porter, C. O., & Ng, K. Y. (2001). Justice at the
millennium: A meta-analytic review of 25 years of organizational justice research.
Journal of Applied Psychology, 86(3), 425.
Crano, W. D., Brewer, M. B., & Lac, A. (2014). Principles and methods of social research.
Routledge.
Dailey, R. C., & Kirk, D. J. (1992). Distributive and procedural justice as antecedents of job
satisfaction and intent to turnover. Human Relations, 45, 305-317.
Deci, E. L. (1971). Effects of externally mediated rewards on intrinsic motivation. Journal of
Personality and Social Psychology, 18(1), 105.
Deci, E. L., & Ryan, R. M. (2010). Self‐determination. John Wiley & Sons, Inc.
Derous, E., & Born, M. P. (2005). Impact of face validity and information about the assessment
process on test motivation and performance. Le Travail Humain, 68(4), 317-336.
Derous, E., Born, M. P., & Witte, K. D. (2004). How applicants want and expect to be treated:
Applicants' selection treatment beliefs and the development of the social process
questionnaire on selection. International Journal of Selection and Assessment, 12(1‐2),
99-119.
DiStefano, C., & Motl, R. W. (2006). Further investigating method effects associated with
negatively worded items on self-report surveys. Structural Equation Modeling, 13(3),
440–464.
Dodd, W. E. (1977). Attitudes toward assessment center programs. In J. L. Moses & W. C.
Byham (Eds.), Applying the assessment center method (pp. 161-183). New York:
Pergamon Press.
Donnellan, M. B., Oswald, F. L., Baird, B. M., & Lucas, R. E. (2006). The mini-IPIP scales:
tiny-yet-effective measures of the Big Five factors of personality. Psychological
assessment, 18(2), 192.
Dunn, W. S., Mount, M. K., Barrick, M. R., & Ones, D. S. (1995). Relative importance of
personality and general mental ability in managers' judgments of applicant
qualifications. Journal of Applied Psychology, 80(4), 500.
Eden, D. (1988). Pygmalion, goal setting, and expectancy: Compatible ways to raise
productivity. Academy of Management Review, 13, 639-652.
Eden, D. (1996, August). From self-efficacy to means efficacy: Internal and external sources of
general and specific efficacy. Paper presented at the 56th Annual Meeting of the
Academy of Management, Cincinnati, OH.
Erickson, P., (2004). Employer Hiring Tests Grow Sophisticated in Quest for Insight about
Applicants. Knight Ridder Tribune Business News. 1.
Eysenck, M. W., Derakshan, N., Santos, R., & Calvo, M. G. (2007). Anxiety and cognitive
performance: attentional control theory. Emotion, 7(2), 336.
Folger, R., & Konovsky, M. A. (1989). Effects of procedural and distributive justice on reactions
to pay raise decisions. Academy of Management journal, 32(1), 115-130.
Gardner, D. G., & Pierce, J. L. (1998). Self-esteem and self-efficacy within the organizational
context: An empirical examination. Group & Organization Management, 23(1), 48-70.
Gatewood, R., Feild, H. S., & Barrick, M. (2015). Human resource selection. Nelson Education.
Geenen, B., Proost, K., Schreurs, B., van Dam, K., & von Grumbkow, J. (2013). What friends
tell you about justice: The influence of peer communication on applicant
reactions. Revista de Psicología del Trabajo y de las Organizaciones, 29(1), 37-44.
Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice
perspective. Academy of management review, 18(4), 694-734.
Gilliland, S. W. (1994). Effects of procedural and distributive justice on reactions to a selection
system. Journal of applied psychology, 79(5), 691.
Gilliland, S. W., & Beckstein, B. A. (1996). Procedural and distributive justice in the editorial
review process. Personnel Psychology, 49(3), 669-691.
Gist, M. E., & Mitchell, T. R. (1992). Self-efficacy: A theoretical analysis of its determinants
and malleability. Academy of Management review, 17(2), 183-211.
Goldberg, L. R. (1992). The development of markers for the Big-Five factor structure.
Psychological Assessment, 4, 26-42.
Greenberg, J. (1990). Organizational justice: Yesterday, today, and tomorrow. Journal of
management, 16(2), 399-432.
Harland, L. K., Rauzi, T., & Biasotto, M. M. (1995). Perceived fairness of personality tests and
the impact of explanations for their use. Employee Responsibilities and Rights Journal,
8(3), 183-192.
Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection
procedures: An updated model and meta‐analysis. Personnel Psychology, 57(3), 639-
683.
Holtrop, D., Born, M. P., de Vries, A., & de Vries, R. E. (2014). A matter of context: A
comparison of two types of contextualized personality measures. Personality and
Individual Differences, 68, 234-240.
Holtz, B. C., Ployhart, R. E., & Dominguez, A. (2005). Testing the Rules of Justice: The Effects
of Frame‐of‐Reference and Pre‐Test Validity Information on Personality Test Responses
and Test Perceptions. International Journal of Selection and Assessment, 13(1), 75-86.
Honkaniemi, L., Feldt, T., Metsäpelto, R. L., & Tolvanen, A. (2013). Personality types and
applicant reactions in real‐life selection. International Journal of Selection and
Assessment, 21(1), 32-45.
Hu, L. T., & Bentler, P. M. (1995). Evaluating model fit. In R. H. Hoyle (Ed.), Structural
equation modeling: Concepts, issues, and applications (pp. 76-99). Thousand Oaks,
CA: Sage.
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis:
Conventional criteria versus new alternatives. Structural equation modeling: a
multidisciplinary journal, 6(1), 1-55.
Huffcutt, A. I., Conway, J. M., Roth, P. L., & Stone, N. J. (2001). Identification and meta-
analytic assessment of psychological constructs measured in employment interviews.
Journal of Applied Psychology, 86(5), 897-913.
Humphreys, M. S., & Revelle, W. (1984). Personality, motivation, and performance: a theory of
the relationship between individual differences and information processing.
Psychological review, 91(2), 153.
Hunter, J. E. (1986). Cognitive ability, cognitive aptitudes, job knowledge, and job performance.
Journal of vocational behavior, 29(3), 340-362.
Hunthausen, J. M., Truxillo, D. M., Bauer, T. N., & Hammer, L. B. (2003). A field study of
frame-of-reference effects on personality test validity. Journal of Applied Psychology,
88(3), 545.
Judge, T. A., & Bono, J. E. (2001). Relationship of core self-evaluations traits—self-esteem,
generalized self-efficacy, locus of control, and emotional stability—with job satisfaction
and job performance: A meta-analysis. Journal of applied Psychology, 86(1), 80.
Judge, T. A., & Ilies, R. (2002). Relationship of personality to performance motivation: a meta-
analytic review. Journal of applied psychology, 87(4), 797.
Judge, T. A., Erez, A., & Bono, J. E. (1998). The power of being positive: The relation between
positive self-concept and job performance. Human performance, 11(2-3), 167-187.
Judge, T. A., Locke, E. A., & Durham, C. C. (1997). The dispositional causes of job satisfaction:
A core evaluations approach. Research in Organizational Behavior, 19, 151-188.
Kanning, U. P., Grewe, K., Hollenberg, S., & Hadouch, M. (2006). From the Subjects' Point of
View. European Journal of Psychological Assessment, 22(3), 168-176.
Kluger, A. N., & Rothstein, H. R. (1993). The influence of selection test type on applicant
reactions to employment testing. Journal of Business and Psychology, 8(1), 3-25.
Konradt, U., Garbers, Y., Böge, M., Erdogan, B., & Bauer, T. N. (2017). Antecedents and
consequences of fairness perceptions in personnel selection: a 3-year longitudinal study.
Group & Organization Management, 42(1), 113-146.
Leventhal, G. S. (1980). What should be done with equity theory?. In Social exchange (pp. 27-
55). Springer US.
Lievens, F., & Sackett, P. R. (2006). Video-based versus written situational judgment tests: A
comparison in terms of predictive validity. Journal of Applied Psychology, 91(5), 1181.
Lievens, F., Peeters, H., & Schollaert, E. (2008). Situational judgment tests: A review of recent
research. Personnel Review, 37(4), 426-441.
Lind, E. A., & Tyler, T. R. (1988). The social psychology of procedural justice. Springer Science
& Business Media.
Lind, E. A., Kanfer, R., & Earley, P. C. (1990). Voice, control, and procedural justice. Journal of
Personality and Social Psychology, 59(5), 952.
Locke, E. A., Frederick, E., Lee, C., & Bobko, P. (1984). Effect of self-efficacy, goals, and task
strategies on task performance. Journal of applied psychology, 69(2), 241.
Macan, T. H., Avedon, M. J., Paese, M., & Smith, D. E. (1994). The effects of applicants'
reactions to cognitive ability tests and an assessment center. Personnel Psychology,
47(4), 715-738.
Magner, N., Welker, R. B., & Johnson, G. G. (1996). The interactive effects of participation and
outcome favorability on turnover intentions and evaluations of supervisors. Journal of
Occupational and Organizational Psychology, 69(2), 135-143.
Marsh, H. W., & Shavelson, R. (1985). Self-concept: Its multifaceted, hierarchical structure.
Educational psychologist, 20(3), 107-123.
Martin, C. L., & Bennett, N. (1996). The role of justice judgments in explaining the relationship
between job satisfaction and organizational commitment. Group & Organization
Management, 21(1), 84-104.
Martocchio, J. J. (1994). Effects of conceptions of ability on anxiety, self-efficacy, and learning
in training. Journal of applied psychology, 79(6), 819.
McCarthy, J. M., Bauer, T. N., Truxillo, D. M., Anderson, N. R., Costa, A. C., & Ahmed, S. M.
(2017). Applicant Perspectives During Selection: A Review Addressing “So
What?,”“What’s New?,” and “Where to Next?”. Journal of Management, 43(6), 1693-
1725.
McCarthy, J. M., Van Iddekinge, C. H., Lievens, F., Kung, M. C., Sinar, E. F., & Campion, M.
A. (2013). Do candidate reactions relate to job performance or affect criterion-related
validity? A multistudy investigation of relations among reactions, selection test scores,
and job performance. Journal of Applied Psychology, 98(5), 701.
McCarthy, J., Hrabluik, C., & Jelley, R. B. (2009). Progression through the ranks: Assessing
employee reactions to high‐stakes employment testing. Personnel Psychology, 62(4),
793-832
McDaniel, M. A., Hartman, N. S., Whetzel, D. L., & Grubb, W. (2007). Situational judgment
tests, response instructions, and validity: a meta‐analysis. Personnel psychology, 60(1),
63-91.
McFarland, L. A. (2003). Warning against faking on a personality test: Effects on applicant
reactions and personality test scores. International Journal of Selection and
Assessment, 11(4), 265-276.
McFarlin, D. B., & Sweeney, P. D. (1992). Distributive and procedural justice as
predictors of satisfaction with personal and organizational outcomes. Academy of
Management Journal, 35(3), 626-637.
Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil
cognitive ability tests: A meta-analysis. Psychological Bulletin, 114(3), 449.
Mosier, C. I. (1947). A critical examination of the concepts of face validity. Educational and
Psychological Measurement.
Motowidlo, S. J., Dunnette, M. D., & Carter, G. W. (1990). An alternative selection procedure:
The low-fidelity simulation. Journal of Applied Psychology, 75(6), 640.
Mount, M. K., & Barrick, M. R. (1995). The Big Five personality dimensions: Implications for
research and practice in human resources management. Research in Personnel and
Human Resources Management, 13(3), 153-200.
Muthén, L. K., & Muthén, B. O. (1998). Mplus [Computer software]. Los Angeles, CA: Muthén
& Muthén.
Nevo, B. (1985). Face validity revisited. Journal of Educational Measurement, 22(4), 287-293.
Pajares, F., & Schunk, D. (2001). The development of academic self-efficacy. In Development
of achievement motivation.
Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on amazon mechanical
turk. Judgment and Decision making, 5(5), 411-419.
Ployhart, R. E. (2006). Staffing in the 21st century: New challenges and strategic opportunities.
Journal of management, 32(6), 868-897.
Ployhart, R. E., & Harold, C. M. (2004). The Applicant Attribution‐Reaction Theory (AART):
An Integrative Theory of Applicant Attributional Processing. International Journal of
Selection and Assessment, 12(1‐2), 84-98.
Ployhart, R. E., & Ryan, A. M. (1998). Applicants' reactions to the fairness of selection
procedures: the effects of positive rule violations and time of measurement. Journal of
Applied Psychology, 83(1), 3.
Ployhart, R. E., Ziegert, J. C., & McFarland, L. A. (2003). Understanding racial differences on
cognitive ability tests in selection contexts: An integration of stereotype threat and
applicant reactions research. Human Performance, 16(3), 231-259.
Raven, J. C. (1936). Mental tests used in genetic studies: The performance of related individuals
on tests mainly educative and mainly reproductive. Unpublished master’s thesis,
University of London.
Ree, M. J., & Earles, J. A. (1992). Intelligence is the best predictor of job performance. Current
Directions in Psychological Science, 1(3), 86-89.
Richman-Hirsch, W. L., Olson-Buchanan, J. B., & Drasgow, F. (2000). Examining the impact of
administration medium on examinee perceptions and attitudes. Journal of Applied
Psychology, 85(6), 880.
Robertson, I. T., & Kandola, R. S. (1982). Work sample tests: Validity, adverse impact and
applicant reaction. Journal of occupational Psychology, 55(3), 171-183.
Rosse, J. G., Miller, J. L., & Stecher, M. D. (1994). A field study of job applicants' reactions to
personality and cognitive ability testing. Journal of Applied Psychology, 79(6), 987.
Roth, P. L., Bobko, P., & McFarland, L. (2005). A meta‐analysis of work sample test validity:
Updating and integrating some classic literature. Personnel Psychology, 58(4), 1009-
1037.
Ryan, A. M., & Ployhart, R. E. (2000). Applicants’ perceptions of selection procedures and
decisions: A critical review and agenda for the future. Journal of Management, 26(3),
565-606.
Ryan, A. M., McFarland, L., Baron, H., & Page, R. (1999). An international look at selection
practices: Nation and culture as explanations for variability in practice. Personnel
Psychology, 52(2), 359-392.
Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: Classic definitions and
new directions. Contemporary educational psychology, 25(1), 54-67.
Rynes, S. L., & Connerley, M. L. (1993). Applicant reactions to alternative selection procedures.
Journal of Business and Psychology, 7(3), 261-277.
Sanchez, R. J., Truxillo, D. M., & Bauer, T. N. (2000). Development and examination of an
expectancy-based measure of test-taking motivation. Journal of Applied
Psychology, 85(5), 739.
Sarason, I. G., Sarason, B. R., & Pierce, G. R. (1990). Anxiety, cognitive interference, and
performance. Journal of Social Behavior and Personality, 5(2), 1.
Schleicher, D. J., Venkataramani, V., Morgeson, F. P., & Campion, M. A. (2006). So you didn't
get the job… now what do you think? examining opportunity‐to‐perform fairness
perceptions. Personnel Psychology, 59(3), 559-590.
Schmidt, F. L., & Hunter, J. E. (1981). Employment testing: Old theories and new research
findings. American psychologist, 36(10), 1128.
Schmidt, F. L., & Hunter, J. E. (1996). Measurement error in psychological research: Lessons
from 26 research scenarios. Psychological Methods, 1(2), 199.
Schmidt, F. L., & Hunter, J. E. (2000). Select on intelligence. Handbook of principles of
organizational behavior, 3-14.
Schmidt, F. L., Greenthal, A. L., Hunter, J. E., Berner, J. G., & Seaton, F. W. (1977). Job sample
vs. paper-and-pencil trades and technical tests: adverse impact and examinee attitudes.
Personnel Psychology, 30(2), 187-197.
Schmit, M. J., & Ryan, A. (1997). Applicant withdrawal: the role of test‐taking attitudes and
racial differences. Personnel Psychology, 50(4), 855-876.
Schmitt, N., Gilliland, S. W., Landis, R. S., & Devine, D. (1993). Computer-based testing
applied to selection of secretarial applicants. Personnel Psychology, 46(1), 149-165.
Schmitt, N., Oswald, F. L., Kim, B. H., Gillespie, M. A., & Ramsay, L. J. (2004). The Impact of
Justice and Self‐Serving Bias Explanations of the Perceived Fairness of Different Types
of Selection Tests. International Journal of Selection and Assessment, 12(1‐2), 160-171.
Schreurs, B., Derous, E., Proost, K., & De Witte, K. (2010). The relation between selection
expectations, perceptions and organizational attraction: A test of competing models.
International Journal of Selection and Assessment, 18(4), 447-452.
Schunk, D. H. (1991). Self-efficacy and academic motivation. Educational psychologist, 26(3-4),
207-231.
Schunk, D. H. (1995). Self-efficacy and education and instruction. In Self-efficacy, adaptation,
and adjustment (pp. 281-303). Springer US.
Sheppard, B. H., & Lewicki, R. J. (1987). Toward general principles of managerial fairness.
Social Justice Research, 1(2), 161-176.
Skarlicki, D. P., & Folger, R. (1997). Retaliation in the workplace: The roles of distributive,
procedural, and interactional justice. Journal of applied Psychology, 82(3), 434.
Smither, J. W., Reilly, R. R., Millsap, R. E., Pearlman, K., & Stoffey, R. W. (1993). Applicant
reactions to selection procedures. Personnel psychology, 46(1), 49.
Speer, A. B., King, B. S., & Grossenbacher, M. (2016). Applicant reactions as a function of
test length. Journal of Personnel Psychology.
Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-
analysis. Psychological bulletin, 124(2), 240.
Steiner, D. D., & Gilliland, S. W. (1996). Fairness reactions to personnel selection techniques in
France and the United States. Journal of Applied Psychology, 81(2), 134.
Strecher, V. J., McEvoy DeVellis, B., Becker, M. H., & Rosenstock, I. M. (1986). The role of
self-efficacy in achieving health behavior change. Health education quarterly, 13(1), 73-
92.
Thibaut, J. W., & Walker, L. (1975). Procedural justice: A psychological analysis. L. Erlbaum
Associates.
Thomas, K. M., & Wise, P. G. (1999). Organizational attractiveness and individual differences:
Are diverse applicants attracted by different factors?. Journal of Business and
Psychology, 13(3), 375-390.
Truxillo, D. M., Bauer, T. N., & Sanchez, R. J. (2001). Multiple Dimensions of Procedural
Justice: Longitudinal Effects on Selection System Fairness and Test‐Taking Self‐
Efficacy. International Journal of Selection and Assessment, 9(4), 336-349.
Truxillo, D. M., Bodner, T. E., Bertolino, M., Bauer, T. N., & Yonce, C. A. (2009). Effects of
Explanations on Applicant Reactions: A meta‐analytic review. International Journal of
Selection and Assessment, 17(4), 346-361.
Truxillo, D. M., Steiner, D. D., & Gilliland, S. W. (2004). The importance of organizational
justice in personnel selection: Defining when selection fairness really matters.
International Journal of Selection and Assessment, 12(1‐2), 39-53.
Tyler, T. R., & Bies, R. J. (1990). Beyond formal procedures: The interpersonal context of
procedural justice. Applied social psychology and organizational settings (pp. 77-98).
Tyler, T. R., & Lind, E. A. (1992). A relational model of authority in groups. Advances in
experimental social psychology, 25, 115-191.
Vroom, V. H. (1964). Work and motivation. New York: Wiley.
Weekley, J. A., & Jones, C. (1999). Further studies of situational tests. Personnel Psychology,
52(3), 679-700.
Weiner, B. (1985). An attributional theory of achievement motivation and
emotion. Psychological review, 92(4), 548.
Wood, R., & Bandura, A. (1989). Social cognitive theory of organizational
management. Academy of management Review, 14(3), 361-384.
Zibarras, L. D., & Patterson, F. (2015). The Role of Job Relatedness and Self‐efficacy in
Applicant Perceptions of Fairness in a High‐stakes Selection Setting. International
Journal of Selection and Assessment, 23(4), 332-344.
APPENDIX A. JOB DESCRIPTION
Imagine that you are applying for a new job in customer service within an organization that is of great interest to you. This position offers higher pay, good benefits, and more vacation time than your current position. You are being tasked with taking an assessment to determine fit for this specific job. Please read the directions and take each test seriously, as your scores determine acceptance into the hypothetical position of interest as well as the potential for a real pay bonus.
APPENDIX B. SITUATIONAL JUDGMENT TEST
*The full measure, without the answer key, can be viewed upon request.
The survey presents you with several situations that an employee might encounter at work. Read each situation and the possible actions that one might take in response to each situation. Some actions are effective responses to the situation. Other actions are less effective responses to the situation. Please rate each action on its effectiveness using the following rating scale:
1 - Extremely Ineffective 2 - Moderately Ineffective 3 - Slightly Ineffective 4 - Slightly Effective 5 - Moderately Effective 6 - Extremely Effective
1. A customer is disrespectful and rude to you.
• Ask the customer to leave until he is calm. (1 2 3 4 5 6)
• Treat the customer as the customer treats you. (1 2 3 4 5 6)
• Try to solve the problem as quickly as possible to get the customer out of the store. (1 2 3 4 5 6)
Copyright © 2012 Work Skills First, Inc.
APPENDIX C. COGNITIVE ABILITY TEST
The following assessment is timed. You will have 8 minutes to complete this section. Please read the items carefully and answer as many as you can. You are allowed to use scratch paper, but please do not use any technological device such as a calculator or cell phone. An example item is below.
Please select the best answer for the questions below.
1. A customer would like to purchase a videocassette tape for $3.99 and a stuffed animal for $8.93. What is the total for these two items without tax?
A. $14.18 B. $10.36 C. $12.92 D. $11.47
Please click next when you are ready to begin this timed assessment.
Example Items:
Answer the following questions by filling in the blank lines with the correct answers. Feel free to use scratch paper to figure out your answers. Please do not include any additional words, numbers, or symbols other than the exact answer.
1. 23 x 11 =
If Statements 1 and 2 are both true, is Statement 3 true or false?
1. Statement 1: Madison is older than Frances.
Statement 2: Penny is older than Madison.
Statement 3: Frances is older than Penny.
What should be the next number in the series? Please use the forward slash (/) if you need to create a fraction.
1. 3, 9, 15, ?
APPENDIX D. PERSONALITY TEST
Please respond to the items below regarding the extent you either agree or disagree as each statement pertains to you.
Example Items:
1. I hate working with ignorant people.
2. I tend to see difficulties everywhere.
3. I tend to focus on my work more intensely than other people.
4. I prefer not to take risks when choosing a job or an organization to work for.
5. I am not easily distracted.
6. I catch on to things quickly.
APPENDIX E. STUDY MEASURES
Face validity:
Considering the assessment(s) you just took, please respond to the statements below regarding the extent to which you agree or disagree.
• I do not understand what the examination has to do with the job.
• I cannot see any relationship between the examination and what is required on the job.
• It would be obvious to anyone that the examination is related to the job.
• The actual content of the examination is clearly related to the job.
• There was no real connection between the examination and the job.
Perceived predictive validity:
• Failing to pass the examination clearly indicates that you can't do the job.
• Performance on the examination would be a good indicator of my ability to do the job.
• The employer can tell a lot about the applicant's ability to do the job from the results of this test.
• I am confident that the test can predict how well an applicant will perform on the job.
• Applicants who perform well on this type of test are more likely to perform well on the job than applicants who perform poorly.
Test-Taking Self-efficacy:
• I believe I can do well on this test.
• I believe I will get the job based on the results of this test.
• I believe I could accurately represent myself as a good employee on this test.
• I have the ability to be successful on this test.
Test Motivation:
• Doing well on this test would be important to me.
• I would try to do the very best I could on this test.
• I would be extremely motivated to do well on this test.
• I want to be among the top scorers on this test.
• I would not put much effort into this test.
Overall Fairness:
• I think that the testing process is a fair way to select people for the job.
• I think that the test was fair.
• Overall, the method of testing used was fair.
Generalized self-efficacy:
• I will be able to achieve most of the goals that I have set for myself.
• When facing difficult tasks, I am certain that I will accomplish them.
• In general, I think that I can obtain outcomes that are important to me.
• I believe I can succeed at most any endeavor to which I set my mind.
• I will be able to successfully overcome many challenges.
• I am confident that I can perform effectively on many different tasks.
• Compared to other people, I can do most tasks very well.
• Even when things are tough, I can perform quite well.
Conscientiousness:
Please respond to the items below regarding the extent you agree or disagree with the statement as it pertains to you using the scale provided below.
• Am always prepared.
• Pay attention to details.
• Get chores done right away.
• Carry out my plans.
• Make plans and stick to them.
• Waste my time.
• Find it difficult to get down to work.
• Do just enough work to get by.
• Don't see things through.
• Shirk my duties.
APPENDIX F. HSRB APPROVAL
Subject Information and Consent Form
This research is being conducted at Bowling Green State University by the primary investigator, Shelby Wise, and the assistant investigator, Dr. Clare Barratt, in the psychology department.
This research aims to further understand how test reactions, in relation to hiring tests, impact test performance. We hope these findings will inform our understanding of current testing practices within organizations.
Participants should be MTurk workers who are between 18 and 65 years of age and work at least 24 hours a week on average in the United States. The survey contains three screening items to assess fit for this project. You will not be paid for the three screening items. If selected, the full survey should take approximately 15 minutes and participants will be compensated $1.50 for conscientious responding, meaning quality attempts to read directions and complete the survey items.
This study also offers a potential bonus of $0.75. The top 10% of performers (those scoring in the top 10% on the hiring assessment) in each condition will receive the bonus. These assessments have been developed and utilized to hire people within well-known organizations. Scoring is based on data linking assessment items to job performance.
Participation is completely voluntary, and you are free to withdraw at any time with no penalty. Discontinuing participation will in no way impact your relationship with Bowling Green State University. Completion of the survey indicates consent to participate.
We have no access to any of your personal information, and your responses will be completely anonymous. It is recommended that after the completion of the survey you clear your browser and page history.
There are no risks associated with this survey that would be experienced outside of everyday life.
If you have any questions regarding your participation in this study, please contact Dr. Clare Barratt at 419-372-2301 or [email protected] or Shelby Wise at [email protected] or 419-372-4201. You may also contact the Chair, Institutional Review Board (IRB) at 419-372-7716 or [email protected], if you have any questions about your rights as a participant in this research.
Thank you for your time. By clicking next, I indicate that:
I have been informed of the purposes, procedures, risks and benefits of this study. I have had the opportunity to have all my questions answered and I have been informed that my participation is completely voluntary. I agree to participate in this research.