
Journal of Occupational and Organizational Psychology (2001), 74, 441–472. Printed in Great Britain. © 2001 The British Psychological Society

Personnel selection

Ivan T. Robertson* and Mike Smith
Manchester School of Management, UMIST, UK

The main elements in the design and validation of personnel selection procedures have been in place for many years. The role of job analysis, contemporary models of work performance and criteria are reviewed critically. After identifying some important issues and reviewing research work on attracting applicants, including applicant perceptions of personnel selection processes, the research on major personnel selection methods is reviewed. Recent work on cognitive ability has confirmed its good criterion-related validity, but problems of adverse impact remain. Work on personality is progressing beyond studies designed simply to explore the criterion-related validity of personality. Interview and assessment centre research is reviewed, and recent studies indicating the key constructs measured by both are discussed. In both cases, one of the key constructs measured seems to be general cognitive ability. Biodata and the processes used to develop biodata instruments are also critically reviewed. The article concludes with a critical evaluation of the processes for obtaining validity evidence (primarily from meta-analyses) and the limitations of the current state of the art. Speculative future prospects are briefly reviewed.

This article focuses on personnel selection research. Much contemporary practice within personnel selection has been influenced by the research literature, but it is clearly not the case that there is a systematic linear flow from the research literature into the work of practitioners. The situation is much more complex. For example, assessment centres were designed originally to meet a clear practical need. Their original design was heavily influenced by psychologists. There was, however, relatively little research into some of the specific components of assessment centres when they were first used for practical personnel selection decisions, in the armed services and in commercial settings. Research into the overall validity of assessment centres, and into the validity, adverse impact and utility of many of their component parts, followed from these highly practical beginnings. In turn, this research has informed the practice of contemporary assessment centres. A similarly complex interplay takes place for all other selection methods. This article, then, as well as reflecting contemporary research interests, as far as personnel selection is concerned, will also inevitably reflect contemporary practice to some degree.

*Requests for reprints should be addressed to Prof. Ivan Robertson, Manchester School of Management, UMIST, PO Box 88, Manchester M60 1QD, UK (e-mail: [email protected]).

The traditional model for selection and assessment practice has not changed for many years. Smith and Robertson (1993) indicated the major sequence of events involved in the design and validation of any personnel selection system. The traditional system involves an initial detailed analysis of the job. This analysis is then used to indicate the psychological attributes required by an individual who may fill the job effectively. In turn, personnel selection methods are designed with the goal of enabling those responsible for selection to attract candidates and evaluate their capabilities on these attributes. A validation process is used to assess the extent to which the personnel selection methods provide valid predictors of job performance, or of other criterion variables such as absenteeism or turnover.

Probably the most significant change within the personnel selection research literature in the last decade or so has been the increased confidence that researchers have in the validity of most personnel selection methods. This increased confidence has arisen from the results obtained by investigators using meta-analysis (Hunter & Schmidt, 1990). Meta-analytic studies of a wide variety of methods have indicated that when the artefactual effects of sampling error, range restriction and measurement unreliability are removed, the 'true' validity of personnel selection methods is much higher than originally believed. Many selection methods have been subjected to a detailed meta-analytical review. One of the best lists of meta-analyses of selection methods is contained in Schmidt and Hunter's (1998) article, where they identify meta-analyses of 17 methods of selection. Figure 1 is based on Schmidt and Hunter's review and shows the validity, estimated by meta-analyses, of many selection methods. The numbers on the right show the validities when overall job performance ratings—usually by superiors—are used as criteria. The figures on the left of the diagram show the validities obtained when progress during training is used as the criterion. The two sets of results are very consistent, even though there are fewer meta-analyses available for training criteria.

Figure 1. Accuracy of selection methods.

The figure is also interesting because it casts light upon the development of thought concerning criteria. In the mists of time (before 1947!), psychologists sought a single criterion against which they could calibrate selection instruments. Following Dunnette's advice in the 1960s to junk THE criterion, they sought to obtain data for a bank of diverse criteria. The use of multiple criteria had disadvantages: they were often impractical or costly to collect and often led to confusion because they produced significantly different validities. Some order was restored in the 1980s, when criteria were organized into three groups: production criteria, personnel data and judgements. Schmidt and Hunter's tables imply that, in practice, psychologists have combined production criteria and judgemental criteria (usually supervisory ratings) to produce two de facto categories. The hegemony of supervisory ratings as a criterion has, if anything, been strengthened by the current emphasis on contextual and citizenship behaviours as an element of job performance (see later in this paper): supervisory ratings are one of the few ways that such occupational citizenship can be gauged.
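The artefact corrections referred to above follow standard psychometric formulas, although the exact corrections and the order in which they are applied vary from study to study. The sketch below is purely illustrative: the observed validity, criterion reliability and range-restriction ratio are invented values, not figures from any study cited here.

```python
import math

def correct_validity(r_observed, criterion_reliability, u):
    """Apply illustrative Hunter & Schmidt-style artefact corrections
    to an observed validity coefficient.

    r_observed            -- validity observed in the restricted, unreliable setting
    criterion_reliability -- reliability of the criterion (e.g. supervisory ratings)
    u                     -- ratio of restricted to unrestricted predictor SD (u < 1
                             indicates range restriction in the selected sample)
    """
    # Correction for attenuation due to criterion unreliability.
    r = r_observed / math.sqrt(criterion_reliability)
    # Thorndike Case II correction for direct range restriction on the predictor.
    U = 1.0 / u
    return (U * r) / math.sqrt(1 + (U ** 2 - 1) * r ** 2)

# Hypothetical values: observed validity .25, criterion reliability .52,
# restricted/unrestricted SD ratio .67.
print(round(correct_validity(0.25, 0.52, 0.67), 2))  # -> 0.48
```

As the hypothetical numbers show, the corrected ('true') validity can be substantially larger than the observed coefficient, which is the point the meta-analytic literature makes.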
As far as criteria are concerned, the most significant changes within personnel selection research concern the broadening of the construct of job performance, such that job performance includes not only effective performance of the relevant tasks but also contextual performance or organizational citizenship behaviour (Borman & Motowidlo, 1997; Coleman & Borman, 1999).

The area of personnel selection that has developed least and seems increasingly problematic is job analysis. The traditional role of job analysis within the personnel selection paradigm is to provide a fixed starting point for all subsequent steps in the process. Anyone who has given the remotest thought to contemporary organizational life will recognize that jobs are no longer anywhere near as stable as they were, even 10 or 15 years ago. At one time, the lifespan of a work-related technology and the working span of individual employees were reasonably well matched. Nowadays, technologies, work practices and even organizational forms come and go within the lifetime of an individual or even within a specific decade. This means that in many selection situations, the requirement to understand the job is made particularly complex and difficult, because the job in question is likely to be radically different in ways that are very difficult to predict within as little as 5 or maybe 10 years.

In their review of personnel selection, Hough and Oswald (2000) noted the importance of the changing nature of work and the difficulties that this presents for traditional job analysis. They indicate that, in recognition of the increasingly rapid changes that are taking place in the workplace, many researchers and practitioners now conduct analyses that focus on tasks and the cross-functional skills of workers, rather than traditional job analysis with its focus on the more static aspects of jobs. In particular, they noted the use of O*NET as a flexible database that contains information about both work behaviours and worker attributes, including information on personality variables, cognitive variables, and behavioural and situational variables (Petersen, Mumford, Borman, Jeanneret, & Fleishman, 1999). This modern approach to job analysis has many useful attributes but clearly cannot find a way of predicting the future requirements of jobs with any degree of certainty.

Major issues

This article is not intended to provide a comprehensive review of the research literature concerning personnel selection in the last decade or so. The brief from the editors of this special issue to the authors included a requirement to '... impose themes over what is powerful in the areas, synthesize some of the existing literature, make clear what we know and what we do not yet know'. This article has been written with these requirements in mind. Recent reviews of the personnel selection research literature provide a detailed account of the current state of the art. Hough and Oswald (2000) and Salgado (1999) have both provided detailed and comprehensive reviews of the personnel selection research literature. The review of Hough and Oswald (2000) covers the whole area of personnel selection, from job and work analysis through to professional, legal and ethical standards. Salgado's (1999) review concentrates on personnel selection methods.

Both Hough and Oswald (2000) and Salgado (1999) provide convincing evidence for the earlier statement that the results of meta-analysis have provided strong evidence of good validity for many personnel selection methods. Several methods, including cognitive ability tests, personality questionnaires, interviews, assessment centres and biodata, have all been shown to have reasonably good validity. One major area that causes difficulties for both researchers and practitioners relates to the fairness and adverse impact of personnel selection methods. Adverse impact occurs when members of one sub-group are selected disproportionately more or less often than the members of another sub-group. In the United States, this has caused problems for a number of years in relation to people from different ethnic minority groups. Similar problems have arisen in the United Kingdom and other countries. In general terms, cognitive ability creates most problems when it comes to adverse impact. Even when combined with methods that have a lower adverse impact, cognitive ability frequently creates adverse impact problems for selection systems (Bobko, Roth, & Potosky, 1999; Schmitt, Rogers, Chan, Sheppard, & Jennings, 1997). Some personnel selection methods that do not show an adverse impact, e.g. personality questionnaires (Ones & Viswesvaran, 1998), are being more widely used (Shackleton & Newell, 1997). Other methods, such as biodata, which show minimal adverse impact and reasonably good levels of validity, continue to be used relatively little (Bliesener, 1996; Shackleton & Newell, 1997).

For a number of years, the personnel selection research literature has been dominated by studies that have explored the validity of specific personnel selection methods. The development of meta-analysis and the subsequent use of the technique to provide better estimates of the validity of a whole range of methods provided a significant step forward. Validity evidence concerning a wide range of methods is now reasonably stable, and a number of topics such as those mentioned above, i.e. job and work analysis, criterion measurement, adverse impact and fairness, are becoming increasingly visible within the personnel selection research literature. They are also important within the practitioner domain. Some other issues, which are of growing importance in personnel selection research and practice, are: selection procedures that take account of the group within which the candidates will work (i.e. team member selection); selection for multi-national organizations, where recruits are required to work across different cultures; the reactions of applicants to personnel selection experiences; and the criterion-related validity to be obtained from combining different selection methods. All of these issues are considered later in this article.
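Adverse impact of the kind described above is conventionally quantified by comparing sub-group selection rates; in the United States, the enforcement agencies' 'four-fifths' rule treats a ratio of selection rates below about 0.8 as prima facie evidence of adverse impact. The sketch below illustrates that calculation with invented numbers; it is not drawn from any of the studies cited here.

```python
def adverse_impact_ratio(selected_a, applicants_a, selected_b, applicants_b):
    """Ratio of the lower selection rate to the higher one.

    Values below roughly 0.8 (the 'four-fifths' rule of thumb) are
    conventionally taken to signal adverse impact against the group
    with the lower selection rate.
    """
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical example: 30 of 100 majority-group applicants selected,
# 15 of 80 minority-group applicants selected.
print(round(adverse_impact_ratio(30, 100, 15, 80), 2))  # 0.62 -> adverse impact
```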

Describing jobs and worker characteristics

Traditionally, job analyses are divided into two main kinds: task-orientated job analysis and worker-orientated job analysis (see Sandberg, 2000).

Task analysis

During the period of this review, relatively little work has been conducted on job analysis in its strict sense. Hough and Oswald (2000) do not include a single reference to task analysis in their lengthy review. Salgado's (1999) review does include a section on job analysis, but it deals mainly with the second stage—worker characteristics. Sandberg (2000) maintains that task analysis has the advantage of identifying essential activities and giving concrete, detailed descriptions that can be readily applied. Contemporary research on task analysis is sparse. One empirical study is Landis, Fogli, and Goldberg's (1998) use of future-orientated job analysis. They give a description of the steps needed to obtain an inventory of future tasks for three new entry-level positions in an insurance organization. Their method seems particularly relevant in an environment that is fast paced and electronic. They recommend that a panel of SMEs (subject-matter experts) should include both incumbents and non-incumbents, because non-incumbents are better able to take a strategic view of future developments. Sanchez (2000) gives an example of 'strategic job analysis' with air-traffic controllers and mentions applications by Porter (1985) and Wright and McMahan (1992).

The paucity of empirical research on task analysis would imply either that the topic is unimportant or that we have reached a satisfactory state of knowledge in this area. For details of the psychometric properties of task analysis, we need to rely on older work such as that of Sanchez and Fraser (1992). Work on theories of job performance, such as Borman and Motowidlo (1993) and Campbell (1994), that distinguishes task performance and contextual performance is largely ignored in contemporary task analysis. In practice, there is still the tendency to focus upon specific, discrete tasks and to ignore contextual aspects such as maintaining morale, courtesy and the other citizenship behaviours listed by Viswesvaran and Ones (2000).

Person specification (worker-orientated analysis)

There has been more work in the area of person specification, although much of this has been conducted under the headings of competency determination or worker-orientated job analysis. The knowledge, skills and abilities (KSAs) that appear in a person specification are usually determined in the light of results from a task analysis. However, many practitioners go directly to KSAs by asking subject-matter experts to identify the competencies required for the job. Very little is known about the validity, reliability or other psychometric properties of this process.

A number of instruments has been developed to establish the personality requirements of a job. Rolland and Mogenet (1994) developed an ipsative system that identifies the most salient 'big five' personality factors for performance in a given job. Raymark, Schmidt, and Guion (1997) developed a Personality-related Position Requirements Form that also aids the identification of relevant personality traits. Hogan and Rybicki (1998) developed an instrument that creates a profile of job requirements that can be used in conjunction with the Hogan Personality Questionnaire. Westoby and Smith (2000) developed a 60-item questionnaire, completed by subject-matter experts, that indicates which 16PF scales are likely to be important determinants of performance in a specific job. The extent to which these instruments identify personality traits that provide good predictions of subsequent performance has not been evaluated: they are too new for an authoritative assessment of their psychometric properties. Their structured, systematic approaches using multiple informants should increase the reliability and validity of 'person specifications', and preliminary evidence suggests that this is so. A more accurate specification of the personality requirements of a job would mean that irrelevant dimensions can be eliminated, and the 'average validity' of the remaining dimensions should be higher.

Models of work performance and criteria

The success of a selection system is gauged against criteria. Often, the choice of these criteria is decided by convenience. The attenuation or contamination arising from the use of poor criteria results in a systematic underestimation of the true validity of the selection methods. Murphy (2000) wrote that 'validity coefficients are usually larger than, and more consistent than, a casual review of the literature ... would suggest'. Problems with criteria can be mitigated in two main ways. First, they can be chosen more carefully, on the basis of a task analysis, as indicated above. Second, they can be chosen on the basis of models of work performance.

Viswesvaran and Ones (2000) gave an excellent overview of models of work performance, and their paper is worth considering in some detail. They give useful examples of Borman and Motowidlo's (1993) distinction between task and contextual performance. Task performance is defined as 'the proficiency with which incumbents perform activities that are formally recognised as part of the job; activities that contribute to the organization's technical core either directly by implementing a part of its technological process, or indirectly by providing it with needed materials or services'. Task performance is likely to be contained in most job descriptions and has, perhaps, been over-emphasized and over-used in developing criteria. Contextual performance is very similar to the concepts of organizational citizenship, organizational spontaneity, extra-role behaviour and pro-social organizational behaviour. Contextual performance consists of behaviour that promotes the welfare of individuals or groups, and it includes components such as altruism, courtesy, civic virtue, conscientiousness, making constructive suggestions, protecting the organization, developing oneself and spreading goodwill.

Viswesvaran and Ones (2000) provided a very useful collation of the dimensions of job performance which have been identified by other researchers. The dimensions are organized into two groups: dimensions for single occupations and dimensions that are applicable to all occupations. These findings are summarized in Table 1. The lists of dimensions of work performance are remarkable in several ways. First, they contain a substantial number of tautological or vague phrases such as 'overall job performance'. Second, they show remarkable variety and little common ground. It would seem that we have a considerable way to go before research provides us with a common set of variables underlying work performance.

Some of that difficulty may lie in deficiencies in the scales used to measure work performance (see Arvey & Murphy, 1998). Measurement of work performance invariably takes one of two forms: counts of output or other behaviours enshrined in organizational records, or ratings by other people. Unfortunately, organizational records are often incomplete, irrelevant or seriously contaminated by artefacts. Ratings by other people are often unreliable and subjective. To make matters worse, job performance is usually measured as a static phenomenon, whereas work performance is dynamic in many ways. Error variance produced by these deficiencies in measurement can obscure underlying dimensions of work performance. Even worse, error variance can be mistaken for true variance and wrongly interpreted.

Table 1. Dimensions of job performance from various studies

Job performance dimensions for specific jobs

Entry-level service jobs: Hunt (1996)
(1) Adherence to confrontational rules
(2) Industriousness
(3) Thoroughness
(4) Flexibility
(5) Attendance
(6) Off-task behaviour
(7) Unruliness
(8) Theft
(9) Drug misuse

Entry-level military jobs: Campbell, McHenry, and Wise (1990)
(1) Core technical proficiency
(2) Soldiering proficiency
(3) Effort and leadership
(4) Personal discipline
(5) Physical fitness and military bearing
Borman and Motowidlo (1985)
(6) Allegiance (commitment and socialization)
(7) Team work (socialization and morale)
(8) Determination (morale and commitment)

Managers: Conway (1999); Borman and Brush (1993)
(1) Leadership and supervision
(2) Interpersonal relations and communications
(3) Technical behaviours, e.g. administration
(4) Useful behaviours, e.g. handling crisis

Job performance dimensions for jobs in general

Campbell (1990)
(1) Job-specific proficiency
(2) Non-job-specific proficiency
(3) Written and oral communication
(4) Demonstrating effort
(5) Maintaining personal discipline
(6) Facilitating help and team performance
(7) Supervision
(8) Management or administration

Viswesvaran (1993)
(1) Overall job performance
(2) Productivity
(3) Effort
(4) Job knowledge
(5) Interpersonal competence
(6) Administrative competence
(7) Quality
(8) Communication competence
(9) Leadership
(10) Compliance with rules

Bernardin and Beatty (1984)
(1) Quality
(2) Quantity
(3) Timeliness
(4) Cost-effectiveness
(5) Need for supervision
(6) Interpersonal impact

It would appear that, despite these measurement problems, about 50% of the variance in performance ratings is common, and there is a 'g' factor of work performance that is analogous to the 'g' factor of cognitive ability. If a 'g' factor of job performance does exist, research effort would be needed to develop ways in which it can be measured and to establish how and why it varies from person to person. Hunter, Schmidt, Rauchenberger, and Jayne (2000) suggest that a 'g' factor in work performance is determined by two characteristics: general mental ability and conscientiousness. The 'g' factor in work performance is very similar to the universal domain suggested by Smith's (1994) theory of the validity of predictors.

Table 2. Ratings of the importance of attributes in selection decisions according to one Scottish consultancy organization

Attribute                         Rating (0–3)   Percentage of maximum
Honesty                           2.89           96
Conscientiousness                 2.77           92
General ability                   2.72           91
Potential                         2.65           88
Experience                        2.56           85
Adaptability                      2.53           84
Drive                             2.45           82
Experience                        2.34           78
Fit of values                     2.39           80
Job knowledge                     2.25           75
Social ability                    2.19           73
Health                            2.10           70
Professional qualification        2.03           68
Accident/appearance               1.99           66
Academic qualifications           1.78           59
Years with other firms            1.61           54
Similarity to future colleagues   1.50           50
Age                               1.50           50
Outside interests                 1.20           40

It is interesting to compare explicit models of job performance with the implicit models used by selectors. Table 2 shows data derived from a survey by Scholarios and Lockyer (1999) of the way in which small consultancy organizations in Scotland recruit professionals such as accountants, architects, lawyers and surveyors. Typically, each consultancy recruited two professionals each year. Scholarios and Lockyer presented interviewees with a list of candidate qualities. It can be seen that there is considerable correspondence between the explicit model of Hunter et al. and the implicit model. Both models stress the importance of conscientiousness/integrity and general ability. However, the explicit model places general ability higher than conscientiousness. Huang (2000) surveyed HRM practices in four multinational organizations in Taiwan and found that multinationals tend to select employees on the basis of job skills rather than their fit with the organization.

Newell (2000) speculated that the qualities underlying job performance will change in the light of changes in the economy and business organization. She argued that Knowledge Management—the way in which an organization creates, utilizes and stores the expertise that underlies its products—is the current management fashion. Proponents claim that Knowledge Management will be the key to competitive advantage in an 'information age', in the same way that the management of capital and physical resources was the key advantage for the old 'smokestack' industries. In the 'Knowledge Era', jobs will change rapidly, and a key attribute will be the ability to form social networks that facilitate access to a huge pool of information held by other people. Some of these changes are also identified by Schmitt and Chan's (1998) excellent text 'Personnel Selection: A Theoretical Approach'. They suggest that in the future there will be an increase in the speed of technical change, the use of teams, communication, globalization and service orientation. Such changes would increase the importance of team selection and expatriate selection. This might suggest that traditional job analysis will be inappropriate and that the ability to share information will be a key determinant of job performance. However, job analyses are not necessarily static, and many incorporate a responsibility to develop, change and be flexible in response to environmental changes. It could also be that cognitive ability will be at a premium in a business world where huge quantities of information need to be assimilated and processed.

Attracting applicants

Once a job has been defined and the qualities of the ideal applicant specified, it is necessary to attract applicants. Recruitment remains an area that escapes the attention of many researchers. Indeed, Mathews and Redman (1998) claimed that the area of recruitment has been neglected.

Purcell and Purcell (1998) identify changes in the labour market such as outsourcing (sub-contracting specialist services from outside providers), in-sourcing (the use of agency staff and temporary employees) and the establishment of a cadre of core workers. These trends imply the development of separate labour markets that need different recruitment strategies. Mathews and Redman (1998) surveyed the views of 191 managers and executives. They discovered that 54% look at job adverts weekly, while 83% look at job adverts at least once a month. For 80% of managers, this browsing is not connected with job-seeking but is concerned with comparing salaries and keeping in touch with the marketplace. Mathews and Redman asked the sample to rate the importance of a menu of 21 items that might appear in an advert. The items achieving the highest rank were description of the job, salary, key responsibilities, career prospects, closing date, company details, location and experience needed. When applicant views were compared with the items that appear in actual adverts, only a moderate correlation was found. Actual adverts seem to under-play the importance of promotion prospects and closing date whilst over-emphasizing personal characteristics. The 'applicants' were also shown a list of personal characteristics that frequently appear in adverts, and they were asked to rate whether their inclusion would encourage or discourage an application. Adjectives that are most likely to discourage an application are: analytical, creative, innovative, energetic and interpersonal. It would be fascinating to know the connotations that give these words a less favourable impact. The survey also showed that managers prefer to apply using a CV (55%) or an application form (41%). The methods chosen by firms tend to under-represent these media (40% and 12%, respectively). Recommendations to respond by making a telephone call are not popular with managerial applicants (3%).

Applicant perceptions

Applicant perceptions play a key role in recruitment, since negative views will inhibit some people from putting themselves forward. Furthermore, negative attitudes might affect the motivation, and thus subsequent performance, of applicants when they take tests or attend interviews. Applicant perceptions might also influence the degree to which they are prepared to give answers that are truthful. Research in this area has followed two main approaches: determining aspects of the selection procedure that tend to be disliked, and attempting to explain why applicants develop negative views.

Considerable research, prior to the period of this review, attempted to determine applicants' views on selection methods (see Hough & Oswald, 2000, p. 647; Salgado, 1999, p. 29). Many authors, such as Rynes and Connelly (1993), approached the topic at a macro-level by asking people (often students seeking jobs in their final year) to rate various selection devices on the degree to which they liked each selection method. As a gross over-simplification designed to incite further reading, the results of these studies show that candidates tend to like work samples and unstructured interviews but tend to dislike tests. Conway and Peneno (1999) compared applicant reactions to three types of interview question. Moscoso (2000) suggested that job knowledge or job experience may be a possible moderating variable: candidates with extensive knowledge or experience within a specific job may react more favourably to structured interviews because they are better equipped to answer these types of questions. Kroeck and Magnusen (1997) examined candidate and interviewer reactions to video-conference job interviews. In general, candidates preferred traditional face-to-face interviews. Tonidandel and Quinones (2000) explored reactions to adaptive testing and found that candidates like the capability provided by adaptive testing to 'skip' questions, but thought that the way in which adaptive testing presents different candidates with different sets of questions was unfair.

Other studies approached the issue of candidate reactions at a micro-level. They tried to identify specific facets of selection methods that caused dislike. For example, some very early work focused on the impact of the non-verbal behaviour of interviewers. Again, as an over-simplification, interviewees liked interviewers who emit a high level of positive non-verbal behaviour such as nods and smiles. Bretz and Judge (1998) examined the way in which information contained in a written realistic job preview influenced candidate perceptions. Not surprisingly, they found that negative information reduced the organization's attractiveness, whereas a high salary and the prospect of being invited to visit the organization for a second interview increased attractiveness.

Very influential papers by Gilliland (1993) and Schmitt and Gilliland (1992) focused attention upon the role that perceptions of fairness play in determining applicant attitudes. Subsequent investigators differentiated procedural justice from distributive justice and investigated the effects of both types in fostering positive attitudes. Generally, the evidence suggests that feelings of injustice engender negative attitudes towards selection systems. Thorsteinson and Ryan (1997) investigated the effect of selection ratios and outcome (accepted or rejected) on applicant perceptions. They found that selection ratio had very little effect, while the outcome in terms of receiving a job offer was a significant variable. Elkins and Phillips (2000) investigated perceptions of fairness (both procedural and distributive) and job relatedness of a biodata instrument. The most important determinant of both kinds of fairness was the selection decision. If the selection decision is in the candidate's favour, the procedure is likely to be viewed as fair. If the selection decision goes against a candidate, it is likely to be viewed as unfair.

Ployhart and Ryan (1998) also used a laboratory simulation to investigate candidates' perceptions of fairness. The results again showed that the outcome, receiving a job offer, was the strongest determinant of perceptions of fairness. The influence of procedural violations, such as too much or too little time to complete the test, had an asymmetric effect. A rule violation that produced favourable treatment was not perceived as unfair, whilst a rule violation that produced unfavourable treatment was regarded as unfair. As a simplification, it would seem that an outcome or treatment that is favourable to the individual is seen as fair, while an unfavourable outcome or treatment is seen as unfair. Notions of fairness or unfairness very much depend on the individual's view of the impact. It seems that the concepts of fairness and self-interest are closely intertwined! Chan, Schmitt, Jennings, Clause, and Delbridge (1998) investigated perceptions of fairness, job relevance and a self-serving bias. They suggested that when self-esteem is threatened by rejection, candidates reduce the threat by perceiving the system as unfair. Chan et al.'s study is notable because it used genuine applicants for jobs.

Legal aspects of candidate perceptions

In an extreme case, an adverse reaction from applicants can lead to a legal challenge. Gilliland (1993) suggested that such challenges are less likely if candidates feel that the selection method has four characteristics: (1) job relatedness, (2) an opportunity for the candidate to demonstrate ability, (3) sympathetic interpersonal treatment, and (4) questions that are not considered improper. Generally, about 64% of legal challenges are made by 'new hires' rather than someone who is already working for the organization.

Terpstra, Mohammed, and Kethley (1999) analysed 158 US Federal Court cases. They compared the proportion of legal challenges with the proportion that would be expected on the basis of the frequency with which a selection method is used. The methods most likely to be challenged in proportion to their use are tests of physical ability (350%), ability tests (230%) and unstructured interviews (200%) (figures in brackets represent the percentage of over-representation). These figures need to be interpreted with care. Unstructured interviews are used very frequently and, in absolute terms, are the method most likely to be challenged. However, work samples (20%), assessment centres (20%) and structured interviews (50%) were least likely to be challenged. The number of challenges is not necessarily the most important factor, since many challenges are defeated. Terpstra et al. (1999) found that all the challenges against assessment centres and structured interviews were dismissed. The large majority of challenges against work samples (86%) and ability tests (67%) were also unsuccessful. About half the cases against unstructured interviews (59%) and physical ability tests (58%) were resisted. These results show that employers who are solely motivated by avoiding trouble should not use tests of physical ability, tests of mental ability or unstructured interviews. Employers who are prepared to argue their case in court, however, should be most wary of physical ability tests and unstructured interviews.

Research results on candidates' perceptions need to be viewed in the light of the methodologies used. Most studies, with the notable exception of that of Chan et al. (1998), have used simulations and student participants. Furthermore, the measurement of key variables is less than ideal. Fairness, for example, is frequently operationalized by responses to questions such as 'would you recommend this to a friend?' or 'would you accept a job offer?' The answers to these questions are heavily contaminated by factors such as the predispositions of a friend or alternative prospects of employment. Often, the scales used to measure fairness have few items, and the use of a scale with a small number of items can cause problems with reliability. Consequently, only broad findings should be interpreted. The one finding that emerges from most studies is that candidates' perceptions are determined by the offer, or not, of a job. An unkind critic might observe that the only reliable application from this genre of research is that organizations should improve candidate perceptions by offering more candidates a job.

Personnel selection methods

Since the very earliest research on personnel selection, cognitive ability has been one of the major methods used to attempt to discriminate between candidates and to predict subsequent job performance.
During the 1980s, several meta-analytic studies of the criterion-related validity of cognitive ability tests produced conclusive results (see Schmidt & Hunter, 1998). These studies have produced clear findings concerning both the validity of cognitive ability and the extent to which cognitive ability is fair when used in testing people from different ethnic groups. The findings have shown that cognitive ability provides criterion-related validity that generalizes across more or less all occupational areas. The results concerning differential validity have also been reasonably conclusive, indicating that cognitive ability provides predictions of subsequent work performance that are more or less equally accurate across different ethnic groups. In other words, cognitive ability testing does not provide differentially 'unfair' (Cleary, 1968) predictions for members of different ethnic minority groups. Of course, these scientific findings do not imply that it is wise to use cognitive tests for all selection purposes. As already noted, the problems of adverse impact are difficult to cope with, given that members of some minority groups obtain lower scores on such tests. This state of affairs, i.e. no differential validity but poorer scores for some groups, is challenging for people involved in the design and validation of selection procedures. There is no simple solution.

A further conclusive finding has shown that the core dimension of cognitive ability (general mental ability, or 'g') is the key component in providing predictions of subsequent job performance. The use of specific abilities (i.e. sub-components of general mental ability) does not enhance the predictions provided by the use of 'g' alone (Olea & Ree, 1994; Ree, Earles, & Teachout, 1994). Traditional ability tests have focused on assessing specific competencies that have been considered since the early 1900s to underlie intelligence (see, for example, Carroll, 1993). These factors (now conceptualized as fluid intelligence, crystallized intelligence, visualization, retrieval and cognitive speed) still underlie the majority of cognitive ability tests used today.

One area of interest related to cognitive ability concerns the development of 'practical intelligence' (Sternberg & Wagner, 1986, 1995). For these authors, practical intelligence can be distinguished from the kind of intelligence that lies behind success in academic pursuits. Practical intelligence is unrelated to formal academic success but related quite directly to the abilities that people develop in seeking to attain their goals in everyday life. Although the ideas put forward by Sternberg and Wagner are interesting, there is, so far, little conclusive evidence that practical intelligence is any more effective at predicting subsequent job performance, or indeed provides anything that is significantly different from general mental ability. There are few published studies with reasonably sized samples that have investigated the criterion-related validity of tacit knowledge, and where this has been done (e.g. Sue-Chan, Latham, Evans, & Rotman, 1997, cited in Salgado, 1999), the results have shown that validity is modest and provides little gain beyond what is already obtainable from tests of general mental ability.

Another related, but different, concept is emotional intelligence. Emotional intelligence (Goleman, 1996) relates to the ways in which people perceive, understand and manage emotion.
Amongst practitioners, the use of the term emotional intelligence is widespread, but a thorough search of the scientific literature failed to provide any studies that demonstrated the criterion-related validity of emotional intelligence for any specific occupational area.
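The Cleary (1968) definition of fairness mentioned above is essentially a regression criterion: a test is differentially unfair only if a common regression line systematically over- or under-predicts one group's criterion scores. The sketch below illustrates that check on simulated data; the group means, slopes and noise levels are invented, and this is not a re-analysis of any study cited here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: test scores and job performance for two groups.
# Group B has a lower mean test score but the same test-performance
# relationship, mirroring the 'no differential validity' pattern above.
n = 200
x_a = rng.normal(0.0, 1.0, n)
x_b = rng.normal(-0.5, 1.0, n)
y_a = 0.5 * x_a + rng.normal(0.0, 1.0, n)
y_b = 0.5 * x_b + rng.normal(0.0, 1.0, n)

# Fit a single (common) regression line on the pooled sample.
slope, intercept = np.polyfit(np.concatenate([x_a, x_b]),
                              np.concatenate([y_a, y_b]), 1)

# Mean prediction error per group: values near zero indicate that the common
# line neither over- nor under-predicts either group (fair in Cleary's sense).
for label, x, y in [("A", x_a, y_a), ("B", x_b, y_b)]:
    residual = y - (slope * x + intercept)
    print(label, round(float(residual.mean()), 3))
```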

Personality

Until quite recently, personality was not a popular personnel selection method. Indeed, as recently as 1965, Guion and Gottier concluded that it was impossible to conduct a review of the criterion-related validity of personality because too few studies were available in the literature. The 1990s have seen a huge growth in the use of personality assessment within personnel selection practice and in research studies designed to evaluate and explore the role of personality within personnel selection (e.g. Barrick & Mount, 1991; Frei & McDaniel, 1997; Ones, Viswesvaran, & Schmidt, 1993; Salgado, 1998; Tett, Jackson, & Rothstein, 1991). All of these studies adopted a meta-analytic procedure and provided positive evidence for the criterion-related validity of personality. From an initial position of scepticism concerning the contribution that personality could make to effective personnel selection, researchers and practitioners have moved to a position where there is confidence that personality can play a role. From this base, more refined research questions have begun to be investigated. These include several interesting questions, such as: the level of analysis that should be used when utilizing personality for personnel selection and assessment purposes (e.g. the big-five level or more specific factors); the extent to which conscientiousness, or a broad factor relating to integrity, acts as the single best predictor for personality, in much the same way that general mental ability works in the cognitive ability domain; the role of intentional or other forms of distortion in influencing candidate responses; and the incremental validity provided by personality assessment over and above that which is provided by other more established methods of personnel selection, such as general mental ability.

The research focusing on the level of analysis best used for personality assessment is, in many ways, directly related to the extent to which broad factors such as conscientiousness provide most of the essential predictive power of personality. Several researchers have attempted to address the issue of the appropriate level of analysis when using personality assessment. One view, best reflected by Ones and Viswesvaran (1996), maintains that broad measures using the big five or similar frameworks provide the best level of analysis for personality assessment. Others (e.g. Schneider, Hough, & Dunnette, 1996) favour the narrower approach using more specific personality factors (see also Robertson & Callinan, 1998). There is no simple solution to this debate. The key question seems to focus on the specific areas of job performance that the personality assessment is designed to predict. Deniz Ones and her collaborators have shown, quite convincingly, that when it comes to the prediction of overall job performance, particularly when data are aggregated over large samples, broad measures such as conscientiousness or integrity produce good validity coefficients. However, other investigators (e.g. Robertson, Baron, Gibbons, MacIver, & Nyfield, 2000) have shown that for particular occupational areas and particular job performance factors, broad measures, such as conscientiousness, do not provide convincing levels of validity.

Most of the research concerning the effects of impression management or intentional or unintentional distortion on the validity of personality assessment has provided results that indicate that, in practical terms, there are relatively few problems. There is evidence that applicants do distort their responses when personality assessment is used in a selection procedure (see Hough, 1998).
Despite this evidence, the research concerning the impact of matters such as motivational distortion, self-deception and impression management usually shows that there is no detrimental influence on validity (e.g. Barrick & Mount, 1996; Christiansen, Goffin, Johnson, & Rothstein, 1994; Hough, 1998). Some studies have found small effects, but it has also been shown that intentional distortion can be minimized if applicants are warned of the consequences of such distortion. Although distortion by candidates does not appear to create major problems for criterion-related validity, it may still be valuable to include 'social desirability' scales in personality instruments. This is currently common practice and does provide a safeguard against some forms of impression management by candidates.

Interviews

As always, there has been considerable research into interviews as a selection method. Salgado's (1999) review gives an excellent account of work to date. Probably the most comprehensive review of interviews was conducted by McDaniel, Whetzel, Schmitt, and Maurer (1994). A more recent review of interviews has been provided by Moscoso (2000).

Predictive validity and structure of interviews. Probably the most consistent finding is that interviews are improved by using a structure. Typical corrected validity coefficients, quoted by Salgado, are 0.56 for highly structured interviews and 0.20 for interviews with very little predetermined structure. The two main ways of structuring interviews are situational interviewing and behaviour description interviewing. It would seem that situational interviews obtain higher validities than behaviour description interviews (0.50 vs. 0.39). Other notable findings from Salgado's review are that past-orientated questions have a higher validity than future-orientated questions (0.51 vs. 0.39) and that the concurrent validity of interviews is rather higher than the predictive validity.

Construct validity of interviews. Unlike cognitive ability or personality tests, interviews do not focus on specific constructs—they are designed to assess many different candidate attributes. Recent work has focused upon the construct validity of interviews and determining what interviews actually measure. Highly structured and job-related interviews could be measuring cognitive factors such as cognitive ability (Huffcutt, Roth, & McDaniel, 1996; Hunter & Hirsch, 1987), tacit knowledge (Harris, 1998) or job knowledge, while unstructured interviews may be measuring social skills and aspects of personality. Schmidt and Hunter (1998) and Schmidt and Rader (1999) consider that interviews measure a mélange of experience, cognitive ability, specific abilities and aspects of personality such as conscientiousness.

Table 3 presents a range of correlations gathered from a number of sources. They are arranged primarily by the characteristic measured and show the correlations, where available, with interview performance. Table 3 cannot lead to incontrovertible conclusions because it does not comprehensively cover all relevant studies. Furthermore, the correlations presented may not be totally comparable; some may be based on small samples and others on larger ones; some correlations may be subject to restriction of range, whilst others may have been corrected. Nevertheless, some tentative conclusions may be drawn. Such conclusions fly in the face of the notion that interviews measure cognitive ability plus conscientiousness. The data in the table suggest that interviews are primarily measuring social skills, experience and job knowledge.
General mental ability has only a moderate correlation with interview performance, and the contribution of conscientiousness seems to be quite small. Extroversion and emotional stability would seem to make small, but notable, contributions. Agreeableness and openness to experience also seem to make only a small contribution.

New interview methods. Most research on interviews has focused upon the traditional, unstructured interview and the various forms of structured interview such as situational interviews and behaviour patterned descriptive interviews. Other forms of interview are possible, especially with technological advances. Schuler (1989) developed a multimodal interview that is divided into four parts: self-presentations, vocational questions, biographical questions and situational questions. A study with 306 subjects suggests that self-presentation and situational questions were highly correlated with social skills. Silvester, Anderson, Haddleton, Cunningham-Snell, and Gibb (2000) compared face-to-face interviews with the telephone interviews of 70 applicants to a multinational oil corporation. Applicants received lower ratings for telephone interviews. However, those who were interviewed by telephone and then face to face improved their ratings more than candidates who were interviewed face to face and then by telephone. This result probably arose because telephone interviewers focused only upon the verbal content of replies, but face-to-face interviewers added credit for other aspects such as non-verbal behaviour.

Assessment centres

The criterion-related validity of assessment centres has been established for some time. In their reviews, both Hough and Oswald (2000) and Salgado (1999) noted the generally good evidence for criterion-related validity and also the indications that assessment centres create a low adverse impact (Baron & Janman, 1996). Although the criterion-related validity for assessment centres is well established, there has been significant concern about the constructs that are measured by assessment centres. Repeated factor analytic studies have indicated that the key factors that emerge from an analysis of assessment centre data are related to exercises rather than to the dimensions or psychological constructs that are being assessed. Hough and Oswald (2000) noted several features that might be used to improve the psychometric quality of assessment centre ratings. These are: (a) having only a few conceptually distinct constructs; (b) using concrete, job-related construct definitions; (c) using frame-of-reference assessor training with evaluative standards; (d) using cross-exercise assessment; and (e) using several psychology-trained assessors.

Scholz and Schuler (1993, cited in Salgado, 1999) conducted a meta-analysis of assessment centre data attempting to explore the key constructs that are measured in the overall assessment rating. They found that the overall assessment rating was highly correlated with general intelligence (0.43), achievement motivation (0.40), social competence (0.41), self-confidence (0.32) and dominance (0.30). These results suggest that the primary construct measured within assessment centres relates to general mental ability (see also Goldstein, Yusko, Braverman, Smith, & Chung, 1998). Taken together, these findings raise two key questions about the role of assessment centres within personnel selection and assessment. The first question concerns the extent to which assessment centres provide utility in the personnel selection process. They are clearly an expensive resource and require large numbers of assessors and extensive updating and monitoring of exercises. If the predictive value obtained from assessment centres could be equally well obtained from cheaper methods, such as psychometric testing, the cost-effectiveness of assessment centres in the personnel selection and assessment process should be questioned. The second concern relates to the extent to which the information provided from assessment centres can be used to indicate strengths and weaknesses in candidates and can provide a basis for further development. Concern over the construct validity of the dimensions assessed in assessment centres raises questions over the validity and reliability of the assessment of specific competencies derived from assessment-centre scores.

Table 3. Correlates of interviews

Study                                          Type of interview   Characteristic              r
Sue-Chan, Latham, Evans, and Rotman (1997)     Situational         Self-efficacy               .56
Sue-Chan, Latham, Evans, and Rotman (1997)     BDI                 Self-efficacy               .55
Cook, Vance, and Spector (1998)                                    Locus of control
Schuler and Funke (1989)                       Multi-modal         Social skills               .60
Salgado and Moscoso (2000)                     Behaviour SI        Social skills               .54
Salgado and Moscoso (2000)                     Conventional SI     Social skills               .38
Hunter and Hirsch (1987)                       Unstructured        Social skills
Huffcutt, Roth, and McDaniel (1996)                                Social skills
Salgado and Moscoso (2000)                     Behaviour SI        Experience                  .54
Conway and Peneno (1999)                       BDI                 Experience                  .43
Conway and Peneno (1999)                       Situational         Experience                  .29
Salgado and Moscoso (2000)                     Conventional SI     Experience                  .26
Schmidt and Rader (1999)                                           Experience
Schmidt and Hunter (1998)                                          Experience
Salgado and Moscoso (2000)                     Behaviour SI        Job knowledge               .50
Burroughs and White (1996)                     BDI                 Job knowledge               .39
Maurer, Solamon, and Troxtel (1998)            Situational         Job knowledge               .34
US Office of Personnel Management (1987)       BDI                 Job knowledge               .23
Harris (1998)                                  Structured          Job knowledge
Harris (1998)                                  Structured          Tacit knowledge
Huffcutt, Roth, and McDaniel (1999)            Unstructured        Fundamental ability         .50
Salgado and Moscoso (2000)                     Conventional SI     General mental ability      .43
Huffcutt, Roth, and McDaniel (1999)            Structured          General mental ability      .35
Salgado and Moscoso (2000)                     Behaviour SI        General mental ability      .26
Schuler, Moser, Diemand, and Funke (1995)      Multi-modal         General mental ability      .21
Harris (1998)                                  Structured          Abilities and skills
Hunter and Hirsch (1987)                       Structured          General mental ability
Schmidt and Rader (1999)                                           Ability (specific to job)
Schmidt and Rader (1999)                                           General mental ability
Huffcutt, Roth, and McDaniel (1999)                                General mental ability
Huffcutt, Roth, and McDaniel (1999)                                General mental ability
Sue-Chan, Latham, Evans, and Rotman (1997)                         General mental ability
Schmidt and Hunter (1998)                                          General mental ability
Salgado and Moscoso (2000)                     Conventional SI     Extraversion                .56
Salgado and Moscoso (2000)                     Behaviour SI        Extraversion                .15
Schuler (1989)                                 Multi-modal         Extraversion
Salgado and Moscoso (2000)                     Conventional SI     Emotional stability         .54
Salgado and Moscoso (2000)                     Behaviour SI        Emotional stability         .09
Conway and Peneno (1999)                       General question    Emotional stability
Schuler (1989)                                 Multi-modal         Emotional stability
Cook, Vance, and Spector (1998)                                    Emotional stability
Salgado and Moscoso (2000)                     Conventional SI     Agreeableness               .21
Salgado and Moscoso (2000)                     Behaviour SI        Agreeableness               .20
Conway and Peneno (1999)                       General question    Agreeableness               .17
Salgado and Moscoso (2000)                     Conventional SI     Conscientiousness           .25
Salgado and Moscoso (2000)                     Behaviour SI        Conscientiousness           .13
Schmidt and Rader (1999)                                           Conscientiousness
Schmidt and Hunter (1998)                                          Conscientiousness
Salgado and Moscoso (2000)                     Conventional SI     Openness to experience      .26
Salgado and Moscoso (2000)                     Behaviour SI        Openness to experience      .04
Salgado and Moscoso (2000)                     Conventional SI     Grade point average         .15
Salgado and Moscoso (2000)                     Behaviour SI        Grade point average         .14
Caldwell and Burger (1998)                                         Personality in general
Huffcutt, Roth, and McDaniel (1999)                                Personality in general
Schuler (1989)                                 Multi-modal         Achievement motivation
Cook, Vance, and Spector (1998)                                    Achievement motivation
Harris (1998)                                  Structured          Achievement motivation

BDI = behaviour description interviews. Behaviour SI = situational interviews similar to those developed by Janz. Conventional SI = situational interviews similar to those developed by Latham and Saari. Multi-modal = interviews developed by Schuler (see section 'New interview methods').

Biodata

Although biodata are used far less frequently than other selection methods such as the interview and psychometric tests, they have attracted considerable research attention. The attention may have been prompted by Salgado's (1999) conclusions that biodata have substantial and generalizable criterion validity and that construct validity is well established. Bliesener's (1996) authoritative meta-analysis suggested that the validity of biodata scales was 0.30. However, several factors appeared to moderate this finding. Concurrent validity studies yielded a higher figure of 0.35. The type of criterion used in a study appeared to have a significant effect: studies using training criteria obtained validities of 0.36, and studies using objective criteria obtained validities of 0.53. Interestingly, the validity of biodata for females is higher than the validity for males (0.51 and 0.27, respectively). More recently, Mount, Witt, and Barrick (2000) demonstrated that empirically keyed biodata scales had incremental validity over a combination of tests of general mental ability and personality.

Biodata have been applied to a wide range of occupations such as clerical jobs in the private sector (Mount et al., 2000), accountants (Harvey-Cook & Taffler, 2000), mechanical equipment distributors (Stokes & Searcy, 1999), hotel staff (Allworth & Hesketh, 1999), civil servants (West & Karas, 1999), managers (Wilkinson, 1997) and naval ratings (Strickler & Rock, 1998). An attempt has even been made to use biodata to predict people's vocational interests (Wilkinson, 1997) and ability to deal with other people from a wide range of backgrounds and ethnic groups (Douthitt, Eby, & Simon, 1999). Research has not placed any emphasis on whether items should be 'hard' biodata items, where a candidate's response can, in principle, be subject to external verification, or whether items should be 'soft' and rely upon a subjective response. Many studies used 'soft' items that resemble questions found in tests of personality. Indeed, it is suspected that if a 'blind' trial were to be conducted, most people would be unable to differentiate many biodata questionnaires from personality questionnaires.

Typically, biodata questionnaires were designed to measure success in a job and yielded one score of overall suitability. Examples of this still exist and can be seen in a paper by Harvey-Cook and Taffler (2000), who employed biodata to predict the success of trainee accountants in professional exams. However, the majority of studies now use biodata to produce scores on dimensions that can then be combined to make predictions. Scales that have been constructed range from emotional stability (Strickler & Rock, 1998) and family and social orientation—work orientation (Carlson, Scullen, Schmidt, Rothstein, & Erwin, 1999), to money management and interest in home repairs (Stokes & Searcy, 1999). These examples serve to show the breadth of the dimensions measured by biodata. It is tempting to believe that research could distil a smaller and more universal set of dimensions.

Publications concerning biodata raise two main issues: the method of 'keying' used in the construction of biodata forms, and the accuracy and generalizability of biodata.
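Incremental validity of the kind reported by Mount et al. (2000) is usually gauged hierarchically: the criterion is regressed on the established predictors first, and the gain in R-squared when the new predictor is added is examined. The sketch below uses simulated data and invented effect sizes; it is a generic illustration, not a reconstruction of their analysis.

```python
import numpy as np

def r_squared(X, y):
    """R-squared from an ordinary least-squares fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

rng = np.random.default_rng(1)
n = 500
gma = rng.normal(size=n)                      # general mental ability
consc = rng.normal(size=n)                    # conscientiousness
biodata = 0.4 * gma + rng.normal(size=n)      # biodata overlaps partly with GMA
performance = 0.5 * gma + 0.2 * consc + 0.2 * biodata + rng.normal(size=n)

r2_base = r_squared(np.column_stack([gma, consc]), performance)
r2_full = r_squared(np.column_stack([gma, consc, biodata]), performance)
print("Increment in R-squared from adding biodata:", round(r2_full - r2_base, 3))
```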

Construction of biodata scales (keying). Archetypal biodata scales were constructed on the basis of empirical weights. The main alternative to the empirical method was the rational method, where a group of experts would assemble a number of items that they believed would be relevant to success in the job. The main methods used by the papers included in this review lie between the rational method and the empirical method. Typically, experts assemble a group of items for each trait to be measured. A draft questionnaire is completed by a large sample. The items for an individual trait are then factor-analysed, and any items that do not have a significant loading on the main factor are eliminated. The procedure is repeated for each of the other traits (see Allworth & Hesketh, 1999; Karas & West, 1999). This procedure ensures that scales are unidimensional, but it is less than ideal because, for example, the items eliminated from each analysis might, in sum, constitute an additional, unsuspected factor. An ideal method would be to enter all questions on a biodata form into one large factor analysis. However, this is rarely possible: a biodata form might contain 150 items, and reliable analysis would require a sample of at least 450. Stokes and Searcy (1999) constructed scales using all three methods and compared their accuracy in predicting overall performance and sales performance in two samples of mechanical equipment distributors. They found that the rational scales were as predictive as the empirical scales.

The generalizability of biodata is an important issue. If they are not generalizable, a form needs to be re-standardized in each situation where it is used. The continual re-standardization involves considerable extra effort and resources that would deter many organizations. Initial concerns about the lack of generalizability (e.g. Dreher & Sackett, 1983) were allayed to an extent by Rothstein, Schmidt, Erwin, Owens, and Sparks (1990), who found that a biodata form for supervisors had a validity of about 0.32 across organizations and groups. Carlson et al. (1999) provided further reassurance. They developed a biodata form for managers which focused upon five factors. The form was developed in a single organization, and performance ratings were used as criteria. A validity study with a large sample of managers in one organization (Carlson et al., 1999) achieved an observed validity of 0.52. Subsequently, data were accumulated for 7334 managers from 24 varied organizations. The level of progression within the organization was used as the criterion. Meta-analysis produced a mean observed validity across organizations of 0.48. This clearly demonstrated that a biodata key created in a single organization generalized across organizations and industries (see also Rothstein et al., 1990).
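The item-retention step described above can be sketched as follows: responses to a draft scale are factor-analysed with a single factor, and items whose loadings fall below a cut-off are dropped. This is a generic illustration using simulated responses and an arbitrary 0.30 cut-off, not the procedure of any particular study cited here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Simulated responses of 450 people to a 10-item draft scale: the first
# 8 items load on a common trait, the last 2 are essentially noise.
n = 450
trait = rng.normal(size=n)
true_loadings = np.array([0.7] * 8 + [0.05] * 2)
responses = np.outer(trait, true_loadings) + rng.normal(scale=0.7, size=(n, 10))

# Single-factor analysis of the draft scale; components_ holds the item loadings.
fa = FactorAnalysis(n_components=1).fit(responses)
item_loadings = fa.components_[0]

# Retain items with a non-trivial loading on the main factor.
cutoff = 0.30
keep = [i for i, loading in enumerate(item_loadings) if abs(loading) >= cutoff]
print("Items retained for the scale:", keep)
```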

Résumés, CVs and application forms

After interviews, résumés, CVs and application forms collectively constitute the second most frequently used method of selection. Usually, they are the first contact that a candidate makes with a potential employer, and errors at this stage will have a disproportionate effect. It is therefore amazing that they have been so neglected by researchers. The neglect is still more amazing because it has been noted by a string of reviewers from Stephens, Watt, and Hobbs (1979) to Brown and Campion (1994). Bright and Hutton (2000) provided an excellent overview of recent research, which seemed to focus upon two themes: the use of competency statements and physical attractiveness.

Competency statements are self-evaluations made by candidates. They cannot be easily verified by a selector in the same way that qualifications or job history can be verified. A typical competency statement might read, 'I am highly motivated with a proven track record in achieving goals and targets'. An investigation by Earl, Bright, and Adams (1998) indicated that the inclusion of competency statements in CVs increased the probability of producing an invitation to an interview. Furthermore, competency statements did most to improve the chances of candidates who were thought to be poor in other respects. Bright and Hutton (2000) looked at this effect in more detail. They chose four good CVs and inserted competency statements into them on a systematic basis. Sometimes these were related to the job, and sometimes the competency statements were placed in a letter of application that preceded the CV. It is important to note that the materials in this investigation were drawn from actual CVs submitted in response to an actual vacancy. The CVs were judged by a panel of recruitment consultants, human-resource managers and line managers with expertise in evaluating CVs. Some of the panel were given job descriptions, whilst others were only given the advertisement. The results were clear-cut. The higher the number of competency statements, the higher the evaluation of the CV. It did not matter whether the competency statements were related to the job or whether they were placed in the introductory letter.

A similar methodology was adopted by Watkins and Johnston (2000), but they used students to judge CVs. Watkins and Johnston simulated a vacancy for a graduate trainee manager and the CVs of two female applicants, who differed in course grades, work experience, positions of responsibility etc., so that one CV was clearly superior to the other. The CVs included, on a systematic basis, either no photograph, a photograph of an attractive candidate or a photograph of an 'average' candidate. The students were asked to imagine they were the recruiting officer for a company, and they were required to evaluate the CV in terms of suitability, probability of an invitation to an interview and the likely starting salary. The results revealed a complex but consistent pattern. The inclusion of a photograph, whether attractive or average, made little difference to good CVs, which consistently evoked higher evaluations, more invitations to interviews and higher indications of starting salary. However, the inclusion of an attractive photograph did help the evaluation of an average CV and did increase the reported probability of an interview offer. But the inclusion of an attractive photograph with an average CV did not significantly alter the estimates of likely starting salary.

Validation of selection procedures

The research literature on personnel selection methods generally focuses on one specific indicator of validity, the criterion-related validity coefficient. This is given prominence above all other indicators of validity. Clearly, in many ways, this emphasis on the extent to which personnel selection procedures can adequately predict work criteria is appropriate. The whole purpose of a personnel selection process is to identify candidates who are most or least suited to the occupational area in question. Although the current and historical focus on criterion-related validity as the major quality standard for personnel selection methods seems appropriate, difficulties arise when considering the interpretation of the evidence concerning the criterion-related validity of personnel selection methods. Meta-analysis has provided a statistical mechanism for giving a clear indication of the criterion-related validity of personnel selection methods. The positive contribution of meta-analysis to the research literature should not be underestimated. It is clear, however, that some misapplications and misinterpretations of meta-analytic results have been unhelpful. Hermelin and Robertson (in press) have provided a comparative analysis of the data concerning meta-analytic results for many of the major personnel selection methods.

In meta-analytic studies, a specific indicator for the validity of a personnel selection method is usually produced. In common with any other correlation coefficient, this point estimate for validity would have a confidence interval around it. The confidence interval indicates the likely lower and upper boundary for this estimate. A simple comparison of the mean validities hides the fact that the lower boundary for the validity of one method may be lower than the lower boundary for the validity of the other, despite the fact that the mean estimates of validity are the other way around. In other words, the key message here is that comparing the point estimates for validity derived from meta-analytic studies without looking at the relevant confidence intervals is inappropriate and may lead to misleading conclusions.

The second point concerns the way in which individual investigators have conducted meta-analytic studies. As Hermelin and Robertson (in press) show, when investigators focus on the same personnel selection method, they do not use a consistent set of correction factors in order to derive a meta-analytic mean validity coefficient. Thus, some studies correct for factors such as range restriction in the predictor, and others do not. Furthermore, for example, when correcting for unreliability, studies differ in the correction factors that are applied, even for the same personnel selection methods. These different practices lead to conclusions that are not comparable.

Finally, when considering the validity evidence for different personnel selection methods, the issue of construct validity needs to be considered. Only two personnel selection methods (mental-ability testing and personality testing) are directly associated with specific constructs. These two selection methods are defined by the constructs that they measure. Other selection methods are not defined in this way. They are defined by the procedures adopted and not the specific constructs measured. For example, the International Personnel Management Association (Task Force on Assessment Center Guidelines, 1989) specifies 10 essential elements that define an assessment centre.
These include a variety of criteria such as the fact that the assessment centre must be based on job analysis, that multiple assessment techniques must be used and that multiple assessors must observe the performance of each assessee. In fact, all of the 10 criteria relate to the procedures and structure of the assessment process. None is related to the construct or constructs that are measured by the process. Thus, when comparing validities for assessment centres, structured interviews, cognitive-ability tests and personality tests, we are not comparing similar approaches. Much more needs to be known about the constructs that are measured within specific assessment methods. Without such information, comparative evaluation of validity is almost meaningless.

The meta-analytically derived information on the validity of personnel selection methods is, nevertheless, useful and has provided researchers and practitioners with a clear indication of the validities associated with different methods. Unfortunately, the current technology of meta-analysis and the database on which investigators may draw does not allow for a thorough evaluation of the extent to which selection methods may be combined to provide incremental validity. In general, meta-analytic studies have focused on the validity of one particular method. In practice, of course, personnel selection procedures frequently use many methods. In this context, the key question for people who are designing personnel selection procedures concerns the extent to which different methods provide unique and non-overlapping information concerning candidates' likely performance. The issue of the incremental validity provided by different methods is something that is being actively explored by personnel selection researchers. Within the last 5 years or so, a number of articles has appeared, attempting to assess the extent to which combinations of methods are useful or, by contrast, provide overlapping and hence redundant information. Studies concerning the incremental validity of interviews, cognitive ability and personality (e.g. Cortina, Goldstein, Payne, Davison, & Gilliland, 2000), and of biodata and personality (e.g. Mount et al., 2000), provide an indication of the kind of research that is currently being conducted. The article by Cortina et al. (2000), for example, used a meta-analytic approach to assess the relationships between cognitive ability, conscientiousness and interviews. These results were then combined with results concerning criterion-related validity from previous meta-analysis to provide estimates of the extent to which each of the individual methods provided unique validity. The results suggested that interview scores helped to predict job performance beyond the information provided by cognitive ability and personality (conscientiousness). In particular, highly structured interviews were shown to contribute substantially to the prediction of job performance. As more research of this kind is conducted and published, investigators should develop a clearer view of the best ways in which personnel selection methods may be combined to provide optimal selection procedures.
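The two methodological points above, the effect of different artefact corrections and the need to compare confidence intervals rather than bare point estimates, can be illustrated with a brief sketch. The correction formulas below are the familiar adjustments for criterion unreliability and direct range restriction, the interval is a simplified standard-error-of-the-mean approximation, and all of the numbers are invented for illustration rather than taken from any of the meta-analyses discussed.

```python
import math

def correct_validity(r_obs, criterion_reliability=None, u=None):
    """Apply two common artefact corrections to an observed validity.

    criterion_reliability : reliability of the criterion measure (r_yy);
                            the correction divides r by sqrt(r_yy).
    u : ratio of restricted to unrestricted predictor SD (u < 1 indicates
        range restriction); the usual direct range-restriction correction.
    Both corrections are optional, mirroring the point that different
    meta-analyses apply different sets of corrections.
    """
    r = r_obs
    if criterion_reliability is not None:
        r = r / math.sqrt(criterion_reliability)
    if u is not None:
        r = r / math.sqrt(u ** 2 + r ** 2 * (1 - u ** 2))
    return r

def confidence_interval(mean_r, sd_r, k, z=1.96):
    """Approximate 95% interval for a mean validity estimated from k studies."""
    se = sd_r / math.sqrt(k)
    return mean_r - z * se, mean_r + z * se

print(correct_validity(0.25, criterion_reliability=0.52, u=0.7))  # observed 0.25 rises markedly

# Hypothetical comparison of two selection methods.
method_a = confidence_interval(mean_r=0.35, sd_r=0.15, k=12)   # few studies, wide interval
method_b = confidence_interval(mean_r=0.30, sd_r=0.05, k=120)  # many studies, narrow interval
print("Method A 95% CI:", method_a)   # roughly (0.27, 0.43)
print("Method B 95% CI:", method_b)   # roughly (0.29, 0.31)
```

Method A's mean validity is higher, but its interval is wide enough that its lower boundary falls below Method B's; this is precisely the situation in which ranking methods by their point estimates alone would mislead.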

Future prospects

The advances in the theory and practice of selection and assessment in the last 50 years have been enormous. We now know, with some certainty, the accuracy and validity of most methods of selection. We have a much clearer conceptual grasp of fairness and the nature of job criteria. There has also been a significant, but not very fruitful, investigation of selection from the candidates' perspective. Developments in several areas mentioned in this article are likely to be important. It is also possible that future advances will be made in new areas, not so far mentioned in this article. Two new areas of particular interest are the use of physiological measures and the benchmarking of selection systems.

Physiological measures

Current research in selection and assessment appears to have overlooked advances in the wider realm of psychology which suggest that physiological measures may be useful as assessment tools. For example, Shafer (1982, 1985) and Shafer and Marcus (1973) investigated several indices derived from EEG records. One index, the 'neural adaptability index', measures the degree to which the amplitude of brain waves decreases when a stimulus is repeated. It was hypothesized that high-IQ individuals would habituate more quickly and thus conserve neural resources. It was found that neural adaptability had a corrected correlation of 0.82 with scores from the Wechsler intelligence scale, a correlation comparable to that found between two established tests of intelligence. Similarly, Eysenck and Barrett (1985) investigated the complexity of brain waves (average evoked potentials) engendered by a standard stimulus. They obtained a correlation of 0.83 between the complexity of a person's average evoked potential and full-scale scores on the Wechsler test. There is some suggestion that certain aspects of personality such as emotional stability and extraversion also have neurophysiological correlates. It is too early to say whether these developments will have any advantage over the measures customarily used in selection. However, it is surprising that their potential has not been investigated in applied settings.

Benchmarking selection systems

Practitioners in the field of selection and assessment often need to compare (benchmark) their systems against the systems used by leading organizations. Practitioners in other fields, such as production managers, can use a number of methods to benchmark their operations against the production operations of leading companies. A production manager can have his or her methods externally audited and obtain a global score and an indication of those facets of his or her operation that fall below the standards of best practice. In many organizations, the selection function is subject to similar pressures, and it is likely and desirable that methods of auditing selection systems are developed in the near future.

Explanatory models

It seems likely that, now the validity of selection measures has been established to a reasonable degree of satisfaction, attention will turn to explaining why measures are valid (or not valid). A beginning has been made by Hunter et al. (2000) and Schmidt and Hunter (1998), who suggest that measures of 'g' are predictive because general intelligence allows people to acquire job knowledge, which in turn has a direct effect upon work performance. Earlier in this article, research on the construct validity of interviews and assessment methods was reviewed. A grasp of the constructs being assessed by specific methods is a prerequisite for understanding the reasons behind criterion-related validities. However, a deeper level of analysis than that delivered by current methods of research and theory is needed.

Meta-analyses are likely to yield high validities when a characteristic is related to work performance in a substantial majority of occupations. Smith (1994) called such characteristics 'universals' and suggested the existence of three such characteristics: 'intelligence', 'vitality' and the proportion of their 'life space' an individual is prepared to devote to his or her work. Measures of a candidate's 'vitality' and the proportion of their life space devoted to work are rarely included in current selection systems, although they may be measured indirectly by interviews and biodata. Of course, the fact that a characteristic has broad, generalizable validity across a wide range of occupations does not mean that it is the most important factor in any given occupational area, or specific job. This point is well illustrated by the research concerning the big-five personality factor, conscientiousness. In meta-analytic studies, conscientiousness has been shown to be important, with a similar level of validity across many areas (Barrick & Mount, 1991; Salgado, 1998). It is difficult to imagine many jobs in which it is not advantageous for incumbents to be dependable, self-disciplined and likely to meet deadlines. Although such attributes are an asset in many jobs, they may not be the main factors in determining high levels of performance. Depending on the job, other factors may have a much more influential role in determining the extremes (high or low) of performance.

Smith (1994) also identified a second domain of characteristics relevant to specific occupations. Perhaps the most important of these 'occupational characteristics' is job knowledge, which can emerge as a valid predictor in generic meta-analysis because, although job knowledge varies, it is easily recognized and classified. Other variables, e.g. specific personality factors, may be relevant only to certain occupations. For example, extraversion may be relevant to managerial and social occupations, while introversion may be relevant to precision work, such as electronics, or to 'back-room' research work. A generic meta-analysis of these characteristics would probably yield a mean validity close to zero (with a wide variance), but a moderator analysis that 'coded' for the type of occupation could yield useful validities.

Smith (1994) identified a third domain concerning the characteristics that help a person relate to a specific work setting or a specific employer. For example, an intelligent (universal) lawyer with a great deal of legal experience (occupational) might succeed in a neighbourhood law centre but might fail in a slick city law firm.
The difference in outcome might be in the degree to which the incumbent's values coincide with the values of colleagues and the host organization. Characteristics of this kind might be termed 'relationals' because they determine an individual's fit to specific employers. A model of this kind would help identify the need for new types of measures such as vitality or life space devoted to work. Measures of 'relational' characteristics will be the most difficult to employ because they would require assessment of both the candidate and their organization. Such a model would also call for a more sophisticated meta-analysis. In effect, a model of this kind would reflect a growing concern in individual differences research, which involves attempting to combine the influences of both person and situation factors in models of behaviour causation.
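The moderator analysis mentioned above, in which a characteristic with near-zero generic validity becomes informative once studies are coded by occupation type, can be illustrated in a few lines. The validity coefficients and occupational categories below are invented, and the summary deliberately ignores the sample-size weighting and artefact corrections that a real meta-analysis would apply.

```python
import statistics
from collections import defaultdict

# Hypothetical validities of extraversion against job performance,
# coded by type of occupation (all values invented for illustration).
studies = [
    ("managerial", 0.25), ("managerial", 0.30), ("social", 0.28), ("social", 0.22),
    ("precision", -0.20), ("precision", -0.25), ("precision", -0.22),
    ("clerical", 0.02), ("clerical", -0.05), ("clerical", 0.00),
]

validities = [r for _, r in studies]
print("Generic meta-analysis: mean r =",
      round(statistics.mean(validities), 2),
      "SD =", round(statistics.pstdev(validities), 2))

by_occupation = defaultdict(list)
for occupation, r in studies:
    by_occupation[occupation].append(r)

for occupation, rs in by_occupation.items():
    print(f"{occupation:>10}: mean r = {statistics.mean(rs):+.2f} (k = {len(rs)})")

# The generic mean sits close to zero with a wide spread, yet within each
# occupational category the validities are consistent and interpretable.
```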

Predictive models

As earlier sections of this article have shown, the construct validity of some selection methods (e.g. interviews, assessment centres and biodata) is not well understood. There are compelling reasons for researchers to explore construct validity issues more extensively. On the scientific front, it is important to understand the reasons for relationships between predictor variables and criteria related to performance (e.g. supervisory ratings, promotions, organizational citizenship), attachment (e.g. turnover, absenteeism, commitment) and well-being (e.g. job satisfaction). The identification of key predictor constructs assessed by different selection methods is important in understanding the key attributes linked with criteria. In the same vein, criterion variable constructs need more conceptual and empirical clarification. The three major categories suggested above, performance, attachment and well-being, provide a broader set of criterion areas than those used in most contemporary personnel-selection research. These broader areas are important issues for organizations, individuals and psychological researchers. It would be helpful to see more studies emerging which use a number of predictors and broad criteria. Such studies would provide a basis for better understanding of predictor and criterion constructs and the relationships between them. Such studies would also provide good primary data for subsequent meta-analyses. These subsequent meta-analyses could then provide clarification of predictor–criterion relationships when more than one predictor is used. Although we now have a reasonable grasp of the validity of many selection methods when used as single predictors, we need to know much more about how to use predictors in combination. A beginning has been made here by Schmidt and Hunter (1998), but much more needs to be done. The meta-analytic work is limited by the availability of primary data from studies of the kind mentioned above.
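The need to understand predictors in combination can be made concrete with a sketch of the kind of calculation that studies such as Cortina et al. (2000) perform on a meta-analytic correlation matrix. The correlations below are invented for illustration and are not the estimates reported in that study; for each predictor set, the multiple correlation with the criterion is computed from the predictor intercorrelations and their validities.

```python
import numpy as np

# Hypothetical meta-analytic correlations (NOT the values reported by
# Cortina et al., 2000); order = GMA, conscientiousness, structured interview.
predictors = ["GMA", "conscientiousness", "structured interview"]
R_xx = np.array([[1.00, 0.00, 0.30],
                 [0.00, 1.00, 0.15],
                 [0.30, 0.15, 1.00]])   # predictor intercorrelations
r_xy = np.array([0.51, 0.22, 0.45])     # validities against job performance

def multiple_R(idx):
    """Multiple correlation of the chosen predictors with the criterion."""
    sub_R = R_xx[np.ix_(idx, idx)]
    sub_r = r_xy[idx]
    r_squared = sub_r @ np.linalg.solve(sub_R, sub_r)
    return float(np.sqrt(r_squared))

previous = 0.0
for k in range(1, len(predictors) + 1):
    R = multiple_R(list(range(k)))
    label = " + ".join(predictors[:k])
    print(f"{label:<45} R = {R:.3f} (increment = {R - previous:+.3f})")
    previous = R
```

With these assumed values, adding a structured interview to cognitive ability and conscientiousness raises the multiple correlation from about 0.56 to about 0.62, which mirrors the incremental-validity pattern described earlier.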

References

Allworth, E., & Hesketh, B. (1999). Construct-oriented biodata: capturing change-related and contextually relevant future performance. International Journal of Selection and Assessment, 7, 97–111.
Arvey, R. D., & Murphy, K. R. (1998). Performance evaluation in work settings. Annual Review of Psychology, 49, 141–168.
Baron, H., & Janman, K. (1996). Fairness in the assessment centre. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 11, pp. 61–113). London: Wiley.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: a meta-analysis. Personnel Psychology, 44, 1–26.
Barrick, M. R., & Mount, M. K. (1996). Effects of impression management and self-deception on the predictive validity of personality constructs. Journal of Applied Psychology, 81, 261–272.
Bernardin, H. J., & Beatty, R. W. (1984). Performance appraisal: assessing human behaviour at work. Boston: Kent.
Bliesener, T. (1996). Methodological moderators in validating biographical data in personnel selection. Journal of Occupational and Organizational Psychology, 69, 107–120.
Bobko, P., Roth, P. L., & Potosky, D. (1999). Derivation and implications of a meta-analytic matrix incorporating cognitive ability, alternative predictors, and job performance. Personnel Psychology, 52, 1–31.
Borman, W. C., & Brush, D. H. (1993). More progress towards a taxonomy of managerial performance requirements. Human Performance, 6, 1–21.
Borman, W. C., & Motowidlo, S. J. (1993). Expanding the criterion domain to include elements of contextual performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations. San Francisco, CA: Jossey-Bass.
Borman, W. C., & Motowidlo, S. J. (Eds.) (1997). Organizational citizenship behavior and contextual performance. Human Performance, 10, 69–192.
Bretz, R. T., & Judge, T. A. (1998). Realistic job previews: a test of the adverse self-selection hypothesis. Journal of Applied Psychology, 83, 230–337.
Bright, J. E. H., & Hutton, S. (2000). The impact of competency statements on résumés for short-listing decisions. International Journal of Selection and Assessment, 8, 41–53.
Brown, B. K., & Campion, M. A. (1994). Biodata phenomenology: recruiters' perceptions and use of biographical information in resume screening. Journal of Applied Psychology, 79, 897–908.
Burrows, W. A., & White, L. L. (1996). Predicting sales performance. Journal of Business and Psychology, 11, 73–83.
Caldwell, D. F., & Burger, J. M. (1998). Personality characteristics of job applicants and success in screening interviews. Personnel Psychology, 51, 119–136.
Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organisational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organisational psychology (Vol. 1, pp. 687–732). Palo Alto, CA: Consulting Psychologists Press.
Campbell, J. P. (1994). Alternative models of job performance and the implications for selection and classification. In M. G. Rumsey, C. B. Walker, & J. H. Harris (Eds.), Personnel selection and classification. Hillsdale, NJ: Erlbaum.
Campbell, J. P., McHenry, J. J., & Wise, L. L. (1990). Modelling job performance in a population of jobs. Personnel Psychology, 43, 313–333.
Carlson, K. D., Scullen, S. E., Schmidt, F. L., Rothstein, H., & Erwin, F. (1999). Generalisable biographical data validity can be achieved without multi-organizational development and keying. Personnel Psychology, 52, 731–755.
Carroll, J. B. (1993). Human cognitive abilities: a survey of factor-analytic studies. Cambridge: Cambridge University Press.
Chan, D., Schmitt, N., Jennings, D., Clause, C. S., & Delbridge, K. (1998). Applicant perceptions of test fairness: integrating justice and self-serving bias perspectives. International Journal of Selection and Assessment, 6, 232–239.
Christiansen, N. D., Goffin, R. D., Johnson, N. G., & Rothstein, M. G. (1994). Correcting the 16PF for faking: effects on criterion-related validity and individual hiring decisions. Personnel Psychology, 47, 847–860.
Cleary, T. A. (1968). Test bias: prediction of grades of negro and white students in integrated colleges. Journal of Educational Measurement, 5, 115–124.
Coleman, V., & Borman, W. (in press). Investigating the underlying structure of the citizenship performance domain. Human Resource Management Review.
Conway, J. M. (1999). Distinguishing contextual performance from task performance for managerial jobs. Journal of Applied Psychology, 84, 3–13.
Conway, J. M., & Peneno, G. M. (1999). Comparing structured interview question types: construct validity and applicant reactions. Journal of Business and Psychology, 13, 485–505.
Cook, K. W., Vance, C. A., & Spector, P. E. (2000). The relation of candidate personality with selection-interview outcomes. Journal of Applied Social Psychology, 30, 867–885.
Cortina, J. M., Goldstein, N. B., Payne, S. C., Davison, H. K., & Gilliland, S. W. (2000). The incremental validity of interview scores over and above cognitive ability and conscientiousness scores. Personnel Psychology, 53, 325–351.
Douthitt, S. S., Eby, L. T., & Simon, S. A. (1999). Diversity of life experiences: the development and validation of biographical measures of receptiveness to dissimilar others. International Journal of Selection and Assessment, 7, 112–125.
Dreher, G. F., & Sackett, P. R. (1983). Perspectives on staffing and selection. Homewood, IL: Irwin.
Earl, J., Bright, J. E. H., & Adams, A. (1998). 'In my opinion': what gets graduates' resumes short-listed? Australian Journal of Career Development, 7, 15–19.
Elkins, T. J., & Phillips, J. S. (2000). Job context, selection decision outcome, and perceived fairness of selection tests: biodata as an illustrative case. Journal of Applied Psychology, 85, 479–484.
Eysenck, H. J., & Barrett, P. (1985). Psychophysiology and the measurement of intelligence. In C. R. Reynolds & P. C. Wilson (Eds.), Methodological and statistical advances in the study of individual differences (pp. 1–49). New York: Plenum Press.
Frei, R. L., & McDaniel, M. A. (1997). Validity of customer service measures in personnel selection: a review of criterion and construct evidence. Human Performance, 11, 1–27.
Gilliland, S. W. (1993). The perceived fairness of selection systems: an organizational justice perspective. Academy of Management Review, 18, 694–734.
Goldstein, H. W., Yusko, K. P., Braverman, E. P., Smith, D. B., & Chung, B. (1998). The role of cognitive ability in the subgroup differences and incremental validity of assessment centre exercises. Personnel Psychology, 51, 357–374.
Goleman, D. (1996). Emotional intelligence. London: Bloomsbury.
Guion, R. M., & Gottier, R. F. (1965). Validity of personality measures in personnel selection. Personnel Psychology, 18, 135–164.
Harris, M. M. (1998). The structured interview: what constructs are being measured? In R. Eder & M. Harris (Eds.), The employment interview: theory, research and practice. Thousand Oaks, CA: Sage.
Harvey-Cook, J. E., & Taffler, R. J. (2000). Biodata in professional entry-level selection: statistical scoring of common format applications. Journal of Occupational and Organizational Psychology, 73, 103–118.
Hermelin, E., & Robertson, I. T. (in press). A critique and standardization of meta-analytic validity coefficients in personnel selection. Journal of Occupational and Organizational Psychology.
Hogan, J., & Rybicki, S. L. (1998). Performance improvement characteristics job analysis. Tulsa, OK: Hogan Assessment Systems.
Hough, L. M. (1998). Effects of intentional distortion in personality measurement and evaluation of suggested palliatives. Human Performance, 11, 209–244.
Hough, L. M., & Oswald, F. L. (2000). Personnel selection: looking toward the future, remembering the past. Annual Review of Psychology, 51, 631–664.
Huang, T. (2000). Human resource management practices at subsidiaries of multinational corporations and local firms in Taiwan. International Journal of Selection and Assessment, 8, 22–33.
Huffcutt, A. I., Roth, P. L., & McDaniel, M. A. (1996). A meta-analytic investigation of cognitive ability in employment interview evaluations: moderating characteristics and implications for incremental validity. Journal of Applied Psychology, 81, 459–473.
Hunt, S. T. (1996). Generic work behaviour: an investigation into the dimensions of entry-level hourly job performance. Personnel Psychology, 49, 51–83.
Hunter, J. E., & Hirsch, H. R. (1987). Applications of meta-analysis. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology, 2, 321–357.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: correcting error and bias in research findings. Newbury Park, CA: Sage.
Hunter, J. E., Schmidt, F. L., Rauchenberger, J. M., & Jayne, M. E. A. (2000). Intelligence, motivation and job performance. In C. L. Cooper & E. A. Locke (Eds.), Industrial and organizational psychology: linking theory with practice. Oxford: Blackwell.
Karas, M., & West, J. (1999). Construct-oriented biodata development for selection to a differentiated performance domain. International Journal of Selection and Assessment, 7, 86–96.
Kroeck, K. G., & Magnusen, K. O. (1997). Employer and job candidate reactions to videoconference job interviewing. International Journal of Selection and Assessment, 5, 137–142.
Landis, R. S., Fogli, L., & Goldberg, E. (1998). Future-oriented job analysis: a description of the process and its organizational implications. International Journal of Selection and Assessment, 6, 192–197.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. (1994). The validity of employment interviews: a comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599–617.
Mathews, B. P., & Redman, T. (1998). Managerial recruitment advertisements: just how market oriented are they? International Journal of Selection and Assessment, 6, 240–248.
Maurer, T., Solaman, J., & Troxtel, D. (1998). Relationship of coaching with performance in situational employment interviews. Journal of Applied Psychology, 83, 128–136.
Moscoso, S. (2000). Selection interview: a review of validity evidence, adverse impact and applicant reactions. International Journal of Selection and Assessment, 8, 237–247.
Mount, M. K., Witt, L. A., & Barrick, M. R. (2000). Incremental validity of empirically keyed biodata scales over GMA and the five factor personality constructs. Personnel Psychology, 53, 299–323.
Murphy, K. R. (2000). Impact of assessments of validity generalisation and situational specificity on the science and practice of personnel selection. International Journal of Selection and Assessment, 8, 194–215.
Newell, S. (2000). Selection and assessment in the knowledge era. International Journal of Selection and Assessment, 8, 1–6.
Olea, M. M., & Ree, M. J. (1994). Predicting pilot and navigator criteria: not much more than g. Journal of Applied Psychology, 79, 845–851.
Ones, D. S., & Viswesvaran, C. (1996). What do pre-employment customer service scales measure? Explorations in construct validity and implications for personnel selection. Paper presented at the Annual Meeting of the Society for Industrial and Organizational Psychology, San Diego, CA.
Ones, D. S., & Viswesvaran, C. (1998). Gender, age, and race differences on overt integrity tests: results across four large-scale job applicant data sets. Journal of Applied Psychology, 83, 35–42.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679–703.
Peterson, N. G., Mumford, M. D., Borman, W. C., Jeanneret, P. R., & Fleishman, E. A. (1999). An occupational information system for the 21st century: the development of O*NET. Washington, DC: American Psychological Association.
Ployhart, R. E., & Ryan, A. M. (1998). Applicants' reactions to the fairness of selection procedures: the effects of positive rule violation and time of measurement. Journal of Applied Psychology, 83, 3–16.
Porter, M. E. (1985). Competitive advantage. New York: Free Press.
Purcell, K., & Purcell, J. (1998). In-sourcing, out-sourcing and the growth of contingent labour as evidence of flexible employment strategies. European Journal of Work and Organizational Psychology, 7, 39–59.
Raymark, P. H., Schmit, M. J., & Guion, R. M. (1997). Identifying potentially useful personality constructs for employee selection. Personnel Psychology, 50, 723–736.
Ree, M. J., Earles, J. A., & Teachout, M. S. (1994). Predicting job performance: not much more than g. Journal of Applied Psychology, 79, 518–524.
Robertson, I. T., Baron, H., Gibbons, P., MacIver, R., & Nyfield, G. (2000). Conscientiousness and managerial performance. Journal of Occupational and Organizational Psychology, 73, 171–180.
Robertson, I. T., & Callinan, M. (1998). Personality and work behaviour. European Journal of Work and Organizational Psychology, 7, 321–340.
Robertson, I. T., & Kinder, A. (1993). Personality and job competences: the criterion-related validity of some personality variables. Journal of Occupational and Organizational Psychology, 66, 225–244.
Rolland, J. P., & Mogenet, J. L. (1994). Manuel d'application. Système D5D d'aide à l'évaluation des personnes [Application manual. The D5D system for aiding the assessment of persons]. Paris: Les Éditions du Centre de Psychologie Appliquée.
Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. S., & Sparks, P. P. (1990). Biographical data in employment selection: can validities be made generalisable? Journal of Applied Psychology, 75, 175–184.
Rynes, S. L., & Connelly, M. L. (1993). Applicant reactions to alternative selection procedures. Journal of Business and Psychology, 7, 261–277.
Salgado, J. F. (1998). Big Five personality dimensions and job performance in army and civil occupations: a European perspective. Human Performance, 11, 271–288.
Salgado, J. F. (1999). Personnel selection methods. In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology. New York: Wiley.
Salgado, J. F., & Moscoso, S. (2000). Construct validity of the employment interview. Under review; quoted by Moscoso (2000).
Sanchez, J. L. (2000). Adapting work analysis to a fast-paced and electronic business world. International Journal of Selection and Assessment, 8, 207–215.
Sanchez, J. L., & Fraser, S. L. (1992). On the choice of scales for task analysis. Journal of Applied Psychology, 77, 545–553.
Sandberg, J. (2000). Understanding human competence at work: an interpretative approach. Academy of Management Journal, 43, 9–25.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.
Schmidt, F. L., & Rader, M. (1999). Exploring the boundary conditions for interview validity: meta-analytic validity findings for a new interview type. Personnel Psychology, 52, 445–464.
Schmitt, N., & Chan, D. (1998). Personnel selection: a theoretical approach. Thousand Oaks, CA: Sage.
Schmitt, N., & Gilliland, S. W. (1992). Beyond differential prediction: fairness in selection. In D. Saunders (Ed.), New approaches to employee management. Greenwich, CT: JAI Press.
Schmitt, N., Rogers, W., Chan, D., Sheppard, L., & Jennings, D. (1997). Adverse impact and predictive efficiency of various predictor combinations. Journal of Applied Psychology, 82, 719–730.
Schneider, R. J., Hough, L. M., & Dunnette, M. D. (1996). Broadsided by broad traits, or how to sink science in five dimensions or less. Journal of Organizational Behavior, 17, 639–655.
Scholarios, D., & Lockyer, C. (1999). Recruiting and selecting professionals: contexts, qualities and methods. International Journal of Selection and Assessment, 7, 142–169.
Scholz, G., & Schuler, H. (1993). Das nomologische Netzwerk des Assessment Centers: eine Metaanalyse [The nomological network of the assessment centre: a meta-analysis]. Zeitschrift für Arbeits- und Organisationspsychologie, 37, 73–85.
Schuler, H. (1989). Construct validity of a multimodal employment interview. In B. J. Fallon, H. P. Pfister, & J. Brebner (Eds.), Advances in industrial and organizational psychology. New York: North Holland.
Schuler, H., & Funke, U. (1989). The interview as a multimodal procedure. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: theory, research and practice. Newbury Park, CA: Sage.
Schuler, H., Moser, K., Diemand, A., & Funke, U. (1995). Validität eines Einstellungsinterviews zur Prognose des Ausbildungserfolgs [Validity of an employment interview for predicting training success]. Zeitschrift für Pädagogische Psychologie, 9, 45–54.
Shackleton, V., & Newell, S. (1997). International assessment and selection. In N. Anderson & P. Herriot (Eds.), International handbook of selection and assessment. Chichester, UK: Wiley.
Shafer, E. W. P. (1982). Neural adaptability: a biological determinant of behavioural intelligence. International Journal of Neuroscience, 17, 183–191.
Shafer, E. W. P. (1985). Neural adaptability: a biological determinant of g factor intelligence. Behavioural and Brain Sciences, 8, 240–241.
Shafer, E. W. P., & Marcus, M. M. (1973). Self-stimulation alters human sensory brain responses. Science, 181, 175–177.
Silvester, J., Anderson, N., Haddleton, E., Cunningham-Snell, N., & Gibb, A. (2000). A cross-modal comparison of telephone and face-to-face selection interviews in graduate recruitment. International Journal of Selection and Assessment, 8, 16–21.
Smith, M. (1994). A theory of the validity of predictors in selection. Journal of Occupational and Organizational Psychology, 67, 13–31.
Smith, M., & Robertson, I. T. (1993). Systematic personnel selection. London: Macmillan.
Stephens, D. B., Watt, J. T., & Hobbs, W. S. (1979). Getting through the resume preparation maze: some empirically based guidelines for resume format. The Vocational Guidance Quarterly, 27, 25–34.
Sternberg, R. J., & Wagner, R. K. (1986). Practical intelligence. Cambridge: Cambridge University Press.
Sternberg, R. J., & Wagner, R. K. (1995). Testing common sense. American Psychologist, 50, 912–927.
Stokes, G. S., & Searcy, C. A. (1999). Specification of scales in biodata form development: rational versus empirical and global versus specific. International Journal of Selection and Assessment, 7, 72–96.
Strickler, L. J., & Rock, D. (1998). Assessing leadership potential with a biographical measure of personality traits. International Journal of Selection and Assessment, 6, 164–184.
Sue-Chan, C., Latham, M. G., Evans, M. G., & Rotman, J. L. (1997). The construct validity of the situational and patterned behaviour description interviews: cognitive ability, tacit knowledge and self-efficacy as correlates. Unpublished manuscript, Faculty of Management, University of Toronto, Canada.
Task Force on Assessment Center Guidelines (1989). Guidelines and ethical considerations for assessment centre operations. Public Personnel Management, 18, 457–470.
Terpstra, D. E., Mohammed, A. A., & Kethley, R. B. (1999). An analysis of Federal Court cases involving nine selection devices. International Journal of Selection and Assessment, 7, 26–33.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: a meta-analytic review. Personnel Psychology, 44, 703–742.
Thorsteinson, T. J., & Ryan, A. M. (1997). The effect of selection ratio on perceptions of the fairness of a selection test battery. International Journal of Selection and Assessment, 5, 159–168.
Tonidandel, S., & Quinones, M. A. (2000). Psychological reactions to adaptive testing. International Journal of Selection and Assessment, 8, 7–15.
Terpstra, D. E., & Rozell, E. J. (1993). The relationship of staffing practices to organizational level measures of performance. Personnel Psychology, 46, 27–48.
US Office of Personnel Management (1987). The structured interview. Washington, DC: Office of Examination Development, Division of Alternative Examining Procedures.
Viswesvaran, C. (1993). Modeling job performance: is there a general factor? Unpublished doctoral dissertation, University of Iowa, Iowa City.
Viswesvaran, C., & Ones, D. S. (2000). Perspectives on models of job performance. International Journal of Selection and Assessment, 8, 216–225.
Watkins, L. M., & Johnston, L. (2000). Screening of job applicants: the impact of physical attractiveness and application quality. International Journal of Selection and Assessment, 8, 77–84.
West, J., & Karas, M. (1999). Biodata: meeting clients' needs for a better way of recruiting entry-level staff. International Journal of Selection and Assessment, 7, 126–131.
Westoby, J. B., & Smith, J. M. (2000). The 16PF5 job spec. Windsor, UK: Assessment and Selection in Employment (ASE).
Wilkinson, L. J. (1997). Generalisable biodata? An application to the vocational interests of managers. Journal of Occupational and Organizational Psychology, 70, 49–60.
Wright, P. M., & McMahan, G. C. (1992). Theoretical perspectives for strategic human resource management. Journal of Management, 18, 295–320.