

THE FLORIDA STATE UNIVERSITY

COLLEGE OF BUSINESS

TRAINING, WARNING, AND MEDIA RICHNESS EFFECTS ON

COMPUTER-MEDIATED DECEPTION AND ITS DETECTION

By

PATRICIA ANN TILLEY

A Dissertation Submitted to the Department of Management Information Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Degree Awarded: Summer Semester, 2005

The members of the Committee approve the dissertation of Patricia Ann Tilley defended on June 24, 2005.

______________________________
Joey F. George
Professor Directing Dissertation

______________________________
Gerald R. Ferris
Outside Committee Member

______________________________
David B. Paradice
Committee Member

______________________________
Michael H. Dickey
Committee Member

______________________________
Pamela L. Perrewe
Committee Member

______________________________
E. Joe Nosari, Interim Dean, College of Business

The Office of Graduate Studies has verified and approved the above named committee members.

Dedicated to Rick for all his loving support. His help and understanding are gratefully appreciated.


ACKNOWLEDGEMENTS

There are several people I would like to thank for their time and assistance. First, I would like to thank all the members of my dissertation committee for their invaluable comments and guidance. I would especially like to thank my dissertation chair, Professor Joey George for his superb guidance and wisdom. I would also like to thank Gabe Giordano for his assistance in collecting data for my dissertation. He unselfishly contributed many hours to help with conducting the experiment at the same time that he was working on his own dissertation and teaching. His help was greatly appreciated. I would also like to thank Brian Keane for helping me with my data collection. Another person I would like to thank is Cate Serino for her support as a cohort in going through the doctoral program with me. I appreciate all the experiences and discussions we shared together. Finally, I would like to thank my son, Rick, who brightened my life and gave me encouragement. His love and friendship were instrumental in helping me through this process. I appreciate his contributions to making these years a rich experience. I would also like to thank my parents for their love and support, which helped me immensely in completing my research.


TABLE OF CONTENTS

List of Tables  vii
List of Figures  viii
Abstract  ix

1. INTRODUCTION 1

2. LITERATURE REVIEW 5

Media Richness Theory  5
Social Presence Theory  8
Deception Literature and Theory  9
Interpersonal Deception Theory and Computer-Mediated Communication  18
Individual Differences in Social Skill and Political Skill  23
Media Richness Theory, Social Presence Theory, and Deception  25
Effects of Warning on Deception Detection  29
Effects of Training on Deception Detection  30

3. RESEARCH MODEL 34

Research Model of Training, Warning, and Media Richness on Deception Detection When Using Computer-Mediated Communication  34
Variable Descriptions, Hypothesis Development and Model Operationalizations  35

4. METHODOLOGY 40

Method Selection  40
Study Design  41


The Matrix Design 43

5. RESULTS 53

Analysis of Control Variables  53
Session Duration, Lies, and Detection Accuracy  56
Training Manipulation Check  58
Tests of Hypotheses  59

6. DISCUSSION 62

Media Richness and Deception Detection Accuracy  63
Warning and Deception Detection Accuracy  64
Training and Deception Detection Accuracy  65
Interaction of Training with Warning and Deception Detection Accuracy  66
Summary  67

7. CONCLUSION 68

Summary of Findings  68
Strengths  71
Limitations  72
Implications for Future Academic Research  73
Management Implications  75
Summary  76

APPENDICES 78

REFERENCES 109

BIOGRAPHICAL SKETCH 119


LIST OF TABLES

2.1. Reliable deception cues from Zuckerman and Driver, 1985  13
2.2. Significant deception cues from DePaulo et al. (2003)  15
2.3. Reliable deceptive indicators across media from Rao & Lim (2000)  26
2.4. Significant deceptive indicators from DePaulo et al. (2003) across media  27
2.5. Training for deception cue recognition  32
3.1. Lean (e-mail) and rich (audio over Internet chat relay) media differences based on media richness theory  37
5.1. Descriptive Statistics for Control Variables  54
5.2. Reliability Statistics for Control Variables  54
5.3. Correlations  55
5.4. ANOVA for Medium, Warning, Training, and Lies  57
5.5. Descriptive Statistics of Lies for Medium, Warning, and Training  57
5.6. Detection Accuracy Rates  58
5.7. Descriptive Statistics of the Training Pre-test and Post-test  59
5.8. ANCOVA results for Medium, Warning, Training, and Receiver on Deception Detection Accuracy  60
5.9. Descriptive Statistics for Medium, Warning, and Training on Deception Detection Accuracy  61
6.1. Summary of Findings  63


LIST OF FIGURES

2.1. Interpersonal Deception Theory  17
2.2. A Model of Deceptive Communication and Its Detection from Carlson, George, Burgoon, Adkins & White (2004)  19
2.3. The Deceptive Communication Event from Carlson, George, Burgoon, Adkins & White (2004)  22
3.1. Training, Warning, and Media Richness on Deception Detection Accuracy  35
4.1. Laboratory Experiment Model  44


ABSTRACT

Although deception research in the communication field has a long history, it is a relatively new topic of research in management information systems. Deception detection research has expanded to include lies transmitted via computer-mediated communication. Recent studies have only begun to look at the influence of media richness, training, and warning on deception detection accuracy. Studies on the effect of training on deception cue recognition with cross-media comparison are scarce. In addition, few studies have been conducted on the effects of training with warning on deception detection. This study examines the effects of media richness, training, warning, and the combination of training and warning on deception detection accuracy. To test the hypotheses, a laboratory experiment, in which deceivers were interviewed based upon deceptive information in their enhanced resumes, was conducted. Results of the study indicate that training in deceptive cue recognition improves deception detection success.


CHAPTER 1

INTRODUCTION

Deception has a long history in the human experience, from the earliest mythical religious stories of the fallen Archangel lying to Adam and Eve about what would happen to them if they took a bite out of an apple, to today's deceptions surrounding terrorism and the buildup of weapons of mass destruction. Various communication media transmit stories of corruption and lies in business and everyday life. It is hard to watch television or read a newspaper without learning of someone who created harm through distorting the truth.

People have been interested in learning about deception and its detection for many years (Trovillo 1939). Various topics related to deception have been studied, including deception in business practices. Employees’ personal beliefs and attitudes toward lying and the organizational ethical climate are some factors that can influence the prevalence of lies in business (Leonard et al. 2001). Also, people tell more lies when they want to appear likeable or competent, both important aspects for success in business (Feldman et al. 2002). Since deception in business tends to hurt productivity and profitability (Prater et al. 2002), any insights that researchers obtain from the study of deception may have beneficial consequences. For example, auditors can develop a set of heuristics to help them detect financial fraud (Johnson et al. 2001).

One area essential to business success is the ability of businesses to hire the most qualified applicants. But research has shown that from 25% to 67% of applicants lie on their resumes and defend these lies in job interviews (Prater et al. 2002). This can be an enormous problem when firms are looking to hire the most qualified applicants (Snell et al. 1999). Detection of deceptive information on resumes is important to improve the interviewing and hiring processes. False information on resumes tends to pertain to the most job-relevant items (Becker et al. 1992). For this to be true, applicants must be aware that many companies do not conduct extensive background checks. It is common knowledge that to compete for jobs, applicants need to present themselves in a positive light. Often, as seen from the above statistics about lies on resumes, applicants cross the line between presenting a positive image and presenting a deceptive image.

One example of the perils of real-life resume deception is that of Notre Dame football coach George O'Leary, who fabricated information on his resume. He claimed that he had been a letterman with a football career at the University of New Hampshire and that he held a master's degree from New York University. After Notre Dame found out about these fabrications, he resigned after only five days on the job (Hughes et al. 2003).

Although the above example was a high-profile case, other cases of deception occur that can affect many people and organizations. High-technology applicants commit more resume fraud than applicants in other industries (Prater et al. 2002). Since technology is prevalent in society and organizations, this deception can have far-reaching implications. High-technology workers who do not have the skills necessary for the job, because they obtained their positions through deception, may detract from project completion and implementation. This non-optimal workforce has the potential to negatively affect profits. Organizations will benefit and have the potential to hire the most qualified employees when interviewers are better able to detect the false information given by job applicants.

Various forms of electronic interviewing have expanded the traditional face-to-face interview method. Some electronic formats that could be used for interviewing include e-mail and Internet options such as Internet chat relay with or without audio and video. Interviewing with these electronic media brings different problems in detecting lies than face-to-face interviewing does, because of the nature of the various electronic interview media (Cober et al. 2000; Pearce et al. 2001). E-mail can structure information content and limit the overload that interviewers may experience if presented with all of the job applicants face-to-face, but potential benefits can be lost in this lean medium (Hiltz et al. 1985). It has been shown that media choice when interviewing can influence the potential for lie detection, due to variations in media richness that influence the transmission of verbal and non-verbal deception cues (Miller et al. 1993).

Detecting lies is often difficult. Many people overestimate their natural ability to catch lies (Elaad 2003; Vrij et al. 2004). In reality, the chances of detecting lies are either at chance (around 50%) or lower (Feeley et al. 1995; Zuckerman et al. 1985). Often people expect others to tell the truth more than they actually do tell the truth (McCornack et al. 1990; McCornack et al. 1986). This bias towards expecting the truth to be told can result in missing cues that liars may send out. Another factor that contributes to the generally low level of lie detection is the inability of people to detect reliable cues of deception. But even when people are looking for these cues, their heightened suspicion may backfire and lead them to label truth tellers as liars (Burgoon et al. 2003b).

Because it is so difficult to detect the majority of lies people tell, efforts have been made to improve deception detection. Research has discovered some reliable indicators of deception (Zuckerman et al. 1985). Training to detect these cues can be given to people to improve detection accuracy. However, past research has produced mixed results on the effects of training on deception detection (Fiedler et al. 1993; Kassin et al. 1999; Meissner et al. 2002; Porter et al. 2000; Vrij 1994; Zuckerman et al. 1984). Incorporating good training methods that include giving explicit instructions, practice, and feedback has been shown to be important (Vrij 1994). A recent literature review of training effects for deception detection shows that, overall, training improves accuracy (Frank et al. 2003). In addition, people can be warned that lies may be occurring, potentially making them more alert to deceptive cues. When people are warned that deception may happen, they may be more attentive to the leakage of cues (Biros et al. 2002). Although not all studies have found that warnings increase detection accuracy (DePaulo et al. 1989; Toris et al. 1984), one reason for the lack of results may have been inadequate methodology (McCornack et al. 1990). Many other experiments have shown a reduction in the effect of truth bias when receivers are warned that lies may be occurring (Biros et al. 2002; George et al. 2004a; Stiff et al. 1992). Since past research has produced mixed results for both training and warning on deception detection accuracy, some questions remain. Can training and warning improve people's naturally low ability for deception detection? And does the medium in which lies are transmitted affect detection rates?

This study is concerned with those questions. Based upon the existing literature in deception research and computer-mediated communication research, a conceptual framework is developed for understanding how training, warning, and the richness of computer-mediated communication media affect deception detection accuracy. From the conceptual framework, a research model and a series of hypotheses are presented. The model and hypotheses were tested and used to extend existing theory and examine the following research questions:

1. Does the richness of the computer-mediated communication medium affect how well people detect deception?

2. Does training make people better at detecting deception?

3. Does warning people make them better at detecting deception?

4. Does a combination of training and warning make people better at detecting deception?

This study tested hypotheses concerning the questions above. An experiment was conducted in which deceivers were interviewed based upon the information in their enhanced resumes, which included deceptive information. Resume interview scenarios have been used in the past for deception research (George et al. 2004a; Snell et al. 1999) and have proven to work well for the purpose of testing interactive deceptive communication that models real-life deception.

The following chapters provide the background and conceptual framework for understanding deception, media richness, and the social presence of media. Chapter Two reviews and synthesizes the literature on media richness, social presence, deception (including truth bias, leakage, and cues), and interpersonal deception theory. Also in Chapter Two, the relationships between media richness, social presence, and deception are elaborated upon. Chapter Two ends with a review and synthesis of the effects of warning on deception detection and of training on deception detection. In Chapter Three, a research model is developed depicting the relationships of media richness, warning, training, and the combination of training and warning on deception detection accuracy. Chapter Four outlines a laboratory experiment designed to test the research model and hypotheses.


CHAPTER 2

LITERATURE REVIEW

The theoretical foundation for this research is drawn from a combination of media richness theory, social presence theory, and theories of deception, including interpersonal deception theory. First, media richness theory will be discussed. An understanding of how the comparative richness of computer-mediated communication affects the communication process and the potential for deception detection can be gained from media richness theory and other related theories that have expanded upon it. Then, social presence theory and how it relates to media richness theory will be discussed. Since this study is about deception detection, it is important to present the previous literature on deception. After the deception literature discussion, a well-established deception theory, interpersonal deception theory, combined with computer-mediated communication, will be elaborated upon. Then, to develop an understanding of how the above theories are related, media richness theory and social presence theory as applied to deception research will be discussed. The last sections of Chapter 2 discuss the research literature on the effects of warning and training on deception detection.

Media Richness Theory

Media richness theory was developed in an attempt to explain the relationship between types of communication and appropriate electronic communication medium choices (Daft et al. 1986; Daft et al. 1987). Specifically, the amount of uncertainty and equivocality in the communication influences the appropriateness of electronic communication media to convey those messages. According to media richness theory, electronic communication media can be characterized in terms of varying degrees of richness. Media richness is based upon four criteria: feedback, multiple cues, language variety, and personal focus (Daft et al. 1986; Daft et al. 1987). Face-to-face is the richest medium and unaddressed written communication is the least rich medium. An example of a media-rich electronic medium is Internet chat relay with audio and video. E-mail has a lower degree of media richness than Internet chat relay with audio and video. Media high in richness are more appropriate for communication high in equivocality and uncertainty (Daft et al. 1986; Daft et al. 1987). Research on the selection of media based upon richness has focused on relating message requirements to media capabilities, in terms of both richness and social presence. The assumption is that the most successful communication will occur when there is "fit" between the degree of uncertainty and equivocality and the richness of the communication medium (Daft et al. 1987).
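As an illustration only (it is not part of the dissertation or of Daft and Lengel's work), the short Python sketch below expresses the fit assumption: each medium receives hypothetical ratings on the four richness criteria named above, and a task is matched to the medium whose overall richness is closest to the task's equivocality. The media names, ratings, and functions are assumptions made for this sketch.

```python
# Toy illustration of the media richness "fit" idea described above.
# Not from the dissertation: media names, 1-5 ratings, and functions are assumptions.

CRITERIA = ("feedback", "multiple_cues", "language_variety", "personal_focus")

# Hypothetical 1-5 ratings on each of the four richness criteria.
MEDIA = {
    "face_to_face":          {"feedback": 5, "multiple_cues": 5, "language_variety": 5, "personal_focus": 5},
    "chat_with_audio_video": {"feedback": 4, "multiple_cues": 4, "language_variety": 4, "personal_focus": 4},
    "e_mail":                {"feedback": 2, "multiple_cues": 1, "language_variety": 3, "personal_focus": 2},
    "unaddressed_document":  {"feedback": 1, "multiple_cues": 1, "language_variety": 2, "personal_focus": 1},
}

def richness(medium):
    """Average rating across the four media richness criteria."""
    ratings = MEDIA[medium]
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

def best_fit(task_equivocality):
    """Medium whose richness (1-5) is closest to the task's equivocality (1-5)."""
    return min(MEDIA, key=lambda m: abs(richness(m) - task_equivocality))

if __name__ == "__main__":
    for medium in MEDIA:
        print(f"{medium}: richness = {richness(medium):.2f}")
    print("Highly equivocal task ->", best_fit(4.5))       # expect a rich medium
    print("Routine, unequivocal task ->", best_fit(1.5))   # expect a lean medium
```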

Media richness theory, although adding much knowledge and understanding to the study of computer-mediated communication, has been criticized and expanded upon by other researchers (Rice 1992). Some empirical research on media richness produced results that did not support the ideas behind media richness theory (Dennis et al. 1998; El-Shinnawy et al. 1997; Rice 1993). In one experiment, matching media richness to task equivocality did not improve performance (Dennis et al. 1998). Also, choice of media is not always consistent with the ideas of media richness and task fit (El-Shinnawy et al. 1997). In an extension of media richness theory, some empirical research suggests that variations in task processes and communication media act to mediate task performance (Mennecke et al. 2000). Other empirical results show that task and media richness fit increases effectiveness and productivity, supporting media richness theory (Andres 2002; Daft et al. 1987). Perhaps part of the reason for the mixed results is a lack of longitudinal research.

Some researchers have suggested that media richness perceptions can change over time. Experience over time can change the perceived benefits of task technology fit (Burke et al. 2001). Both experience with the communication medium (Carlson et al. 1999; King et al. 1997) and the communication style of the recipient (Markus 1994) are factors that have been shown to influence media richness perceptions and media choice for communication. One theory that has been useful, along with media richness theory, for research in deception and its detection is channel expansion theory.


Channel Expansion Theory

The perceived richness of communication media can be enhanced by experience with the media over time (Carlson et al. 1999). Channel expansion theory (Carlson et al. 1999) considers the factors of increased experience with the medium, the topic of communication, the context surrounding the communication, and the communicative partner. Thus, media richness perception is not solely based upon the objective characteristics of the medium. Instead, as participants develop experience with each other, the channel, the message topic, and the communication context, they will perceive the channel as being better able to handle rich, equivocal, socio-emotional messages and will potentially be able to achieve a richer dialogue than inexperienced users. Experiments have supported channel expansion theory and have shown that repeated interaction between communicative partners can reduce moderate amounts of ambiguity and equivocality (Burke et al. 1999; Dennis et al. 1998). Experience over time also diminishes the effect of the media richness of the chosen communication channel on perceived group cohesion and process satisfaction (Burke et al. 2001).

Deception detection research has incorporated ideas related to experience from channel expansion theory. For example, the degree of shared and overlapping experiences of a deceiver and a receiver can create a common bond between them that can influence the communication event (Buller et al. 1996a). A receiver who has less shared and overlapping experience with a deceiver may perceive that deceiver as more deceptive than another deceiver with more shared and overlapping experiences. The deceiver's experience with the co-participants, the topic, the communication contexts, and the electronic media may also affect the likelihood of deceptive success. It has been proposed that if the deceiver has more experience with the task, topic, communication contexts, or the electronic media than the receiver, the chance for deception success improves (Carlson et al. 2004b).

This expansion of media richness theory is one way of understanding the complexities of computer-mediated communication in deception research. Another related influence on computer-mediated communication is the perceived social presence of the medium. The next section discusses social presence theory.


Social Presence Theory

Another factor to consider, somewhat similar to the media richness of the computer-mediated communication medium, is the social presence of the medium. Social presence is the degree of salience of the other person in an interaction (Miller et al. 1993). The perception of social presence depends on the visual nonverbal cues transmitted, the perceived distance of the communication partner, and the perception of the social distance between the communication partners (Short et al. 1976).

Social presence is also a quality of a communication medium (Short et al. 1976). It reflects the social immediacy or intimacy of a communication medium. For example, the ability to transmit facial expressions, posture, and other non-verbal cues contributes to the social presence of the communication medium (Short et al. 1976). In general, electronic media that are high in media richness are also high in social presence. Electronic communication media vary in their degree of social presence, with e-mail having less social presence than video. It has been shown that social presence theory is a good predictor of media choice (Straub et al. 1998). For example, people tend to choose media high in social presence for tasks that are interpersonally involving (Straub et al. 1998).

For those choosing socially rich media, such as videoconferencing, that mimic face-to-face communication, one potential advantage is that cues and indicators of status are kept in the communication process (Sproull et al. 1986). Those using socially leaner media, such as e-mail, may perceive more feelings of isolation due to the medium's screening out of the more personal cues such as facial expressions and tone of voice (Lea et al. 1991). E-mail can be perceived as being socially leaner than business memoranda, and telephony as socially richer than videoconferencing (Rice 1993).

To measure the perception of social presence, researchers consider established factors such as the perceived warmth, sensitivity, personalness, and sociability of the communicative process via the electronic media (Short et al. 1976). It would seem that ranking media in terms of social presence would naturally match the same rankings with respect to media richness, but they do not necessarily correspond. Perceived social presence when using various electronic media can be objectively measured for these qualities with a standardized and well-tested instrument (Miller et al. 1993).
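As a hypothetical illustration of how such an instrument can be scored (the item names and ratings below are assumptions, not the instrument administered in this study), social presence is often summarized as the mean of bipolar ratings on qualities like those named above:

```python
# Hypothetical scoring sketch: social presence as the mean of bipolar ratings of a
# medium on warmth, sensitivity, personalness, and sociability. Item names and
# ratings are assumptions for illustration.

from statistics import mean

ITEMS = ("warm", "sensitive", "personal", "sociable")  # higher rating = more social presence

def social_presence(ratings):
    """Mean of the bipolar-scale items for one medium (e.g., rated on a 1-7 scale)."""
    return mean(ratings[item] for item in ITEMS)

# Illustrative ratings for two media.
video_chat = {"warm": 6, "sensitive": 5, "personal": 6, "sociable": 6}
e_mail = {"warm": 3, "sensitive": 2, "personal": 3, "sociable": 2}

print("Video chat social presence:", social_presence(video_chat))  # 5.75
print("E-mail social presence:", social_presence(e_mail))          # 2.5
```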

Both media richness and social presence can influence the deception and detection process. As discussed above, the richness of the computer-mediated communication medium can influence the feedback, multiple cues, language variety, and personal focus the communication partners perceive to be present (Daft et al. 1986). In addition, the perceived social presence of the communication medium can influence the degree of involvement in the communication interaction. The influences of these theories on deception and detection will be discussed in a later section. But first, the ideas and theories of deception research will be presented.

Deception Literature and Theory

People do not tell the complete truth in many instances every day. An example is when people are asked, "How are you?" and they reply, "Good." In many cases these people do not really feel good but give this answer because it is the socially accepted and expected answer. Another example of a common lie is the "white lie." Someone may say that they like a co-worker's new haircut when they really think it looks less than flattering. Another common type of lie occurs when someone is asked what he or she did today and responds with only part of what happened during the day. This is an example of a lie of omission. It would often be tedious and boring to describe and listen to every detail of a person's day. The above examples of socially accepted and relatively harmless lies serve a function in people's lives and are not included in most of the academic research on deception and its detection.

For a lie to be considered an act of deception, the communicative exchanges between people must involve perceptions by one or more of the people involved that there is an intent to deceive (Miller et al. 1993). A widely-used definition of deception, and the one that will be used for this study, is "a message knowingly transmitted by a sender to foster a false belief or conclusion by the receiver" (Buller et al. 1996a, p. 205). Thus, deceptive communication consists of messages and information knowingly transmitted to create a false conclusion (Miller et al. 1993). Deception does not only include outright lies. Evasions of the truth, equivocations, exaggerations, misdirections, deflections, and concealments are also considered deception. These forms of deceit are more common than outright lies (DePaulo et al. 1996; Turner et al. 1975). Thus, deception can be conducted in many ways, with personal gain as its purpose and motivation.

Although the motivation to deceive and get away with the deception may be strong, not all deceivers are successful. However, the communication literature on deception shows that the average person is not very good at detecting deception. Many experiments on deception detection have reported either chance or lower-than-chance rates of deception detection accuracy (Feeley et al. 1995; Zuckerman et al. 1985). In empirical studies, detection accuracy rates peak between 40 and 60 percent (Miller et al. 1993). To counteract these problems with low deception detection accuracy, researchers have investigated methods of detecting more lies. Some factors that may inhibit deception detection accuracy are receivers' information-processing biases that lead to assuming people will tell the truth, receivers' reliance on indicators that may not be reliable cues, and heightened suspicion that can lead to "false positives," or judging true statements as lies (Anolli et al. 1997; Biros et al. 2002; Buller et al. 1996b; Feeley et al. 1995; Fiedler et al. 1993; Levine et al. 2000).
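To make these outcome measures concrete, the sketch below shows one common way detection accuracy, truth bias, and the rate of false positives can be computed from a receiver's judgments. The data, class, and function names are hypothetical; this is an illustration, not the scoring procedure of this study's experiment.

```python
# Hypothetical illustration of how the outcome measures discussed above are often
# computed from a receiver's judgments; not the scoring procedure of this study.

from dataclasses import dataclass

@dataclass
class Trial:
    actually_deceptive: bool  # ground truth for the statement
    judged_deceptive: bool    # the receiver's verdict

def detection_accuracy(trials):
    """Proportion of statements (truthful or deceptive) classified correctly."""
    return sum(t.actually_deceptive == t.judged_deceptive for t in trials) / len(trials)

def truth_bias(trials):
    """Proportion of all statements judged truthful, regardless of ground truth."""
    return sum(not t.judged_deceptive for t in trials) / len(trials)

def false_positive_rate(trials):
    """Share of truthful statements wrongly labeled as lies ('false positives')."""
    truthful = [t for t in trials if not t.actually_deceptive]
    return sum(t.judged_deceptive for t in truthful) / len(truthful)

# Made-up data: 6 truthful and 4 deceptive statements judged by one receiver.
trials = ([Trial(False, False)] * 5 + [Trial(False, True)] +
          [Trial(True, False)] * 3 + [Trial(True, True)])

print(detection_accuracy(trials))   # 0.6  -> within the 40-60% range reported above
print(truth_bias(trials))           # 0.8  -> most statements accepted as true
print(false_positive_rate(trials))  # ~0.17
```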

Truth bias, or the tendency to accept others' statements as truthful, has been examined as one of the hindrances to deception detection success. In addition, a major focus of deception research aimed at improving detection accuracy is understanding the cues that the deceiver leaks in the deceptive communication process. The next sections discuss truth bias and deception cues.

Truth Bias

Truth bias is a person's natural disposition to believe communication is truthful until that person has reason to believe otherwise (McCornack et al. 1990; McCornack et al. 1986; Miller et al. 1993). Empirical research has shown that truth bias is strongly correlated with perceptions of truthfulness (Miller et al. 1993). This overestimation of statements judged as truthful when they are not is a key problem in deception detection accuracy (Kalbfleisch 1992; Miller et al. 1993; Vrij 2000). Part of the reason that truth bias exists is the human tendency to want to reduce uncertainty and to maintain social cooperativeness (McCornack et al. 1990; McCornack et al. 1986; Miller et al. 1993).

Truth bias exists between people who have varying degrees of personal familiarity with each other. Romantically-involved couples are particularly susceptible to truth bias (McCornack et al. 1990). Although truth bias has been found to be stronger for people who are more familiar with each other (Buller et al. 1994; Stiff et al. 1992), it also exists between communicative partners who are complete strangers (McCornack et al. 1990). Additionally, truth bias is stronger in face-to-face interaction than in situations where communication is mediated by an electronic format (Burgoon et al. 2001). In computer-mediated communication that deviates from face-to-face communication, truth bias may therefore be reduced, making the conditions for deception detection more favorable.

Truth bias can prevent receivers from asking the scrutinizing questions that are important for the detection of false information. One of the ways truth bias can be somewhat mitigated is through giving receivers warnings that deceptive statements may occur. The effect of warnings on deception detection will be investigated in a later section. But first, a discussion of leakage and cues is presented.

Leakage and Cues

Leakage theory postulates that during deception, cues are leaked that, if detected, may alert people to the fact that deception is occurring (Ekman et al. 1969). This leakage may occur often during deceptive communication (Colwell et al. 2002; Ekman 1992). Some researchers believe that leakage occurs only under certain circumstances. Buller and Burgoon (1996) emphasize the importance of the deceiver's motivation. Motivated liars may be successful with verbal presentation but may over-control their nonverbal behaviors (DePaulo et al. 1988). Also, liars who are motivated by self-interest may experience more detection apprehension than liars who are motivated by other factors, such as the avoidance of relationship problems (Buller et al. 1996a). Those liars may engage in more strategic behaviors but may also leak more cues (Buller et al. 1996a). Other conditions that may influence the leakage of cues include communicator competence and skill in conveying believability and masking discomfort related to dishonesty (Buller et al. 1996a).

Deception detection research is based upon identifying and understanding this process of cue leakage (deTurck et al. 1985; Ekman et al. 1969; Feeley et al. 1995; Stiff et al. 1990). Part of the reason why cues of deception are leaked is that deception is a more cognitively complex task than telling the truth, and thus it is hard for deceivers to totally control their verbal and nonverbal behavior (Buller et al. 1996a; Miller et al. 1993). Cues such as increased voice pitch, excessive blinking, or nervous body gestures are leaked and may or may not be noticed by the receiver (DePaulo et al. 2003; Frank et al. 2004; Zuckerman et al. 1985). It has also been shown that receivers rate aberrant behavior as less honest than normative behavior (Levine et al. 2000). A meta-analysis of 45 studies that looked at 24 different visual, paralanguage, verbal, and general behaviors commonly associated with deception found that 14 of those 24 were reliable cues for deception detection (Zuckerman et al. 1985). A list of those reliable cues is presented in Table 2-1.

Visual cues can be available for the receiver's detection if the deceptive communication process occurs face-to-face or via computer-mediated communication, such as videoconferencing, that allows the receiver to see the sender. Pupil dilation and blinking can be measured to assess the pupil's diameter and the frequency of blinks. An increase in pupil dilation and a high frequency of blinks may cue the receiver that the message is deceptive. Facial segmentation is the number of units or segments in the stream of behavior as measured by naïve judges (Zuckerman et al. 1985). Adaptors consist of self-grooming activities such as scratching. Body segmentation is similar to facial segmentation except that the whole body is considered.

Paralanguage cues are optional vocal effects, such as tone of voice, that may communicate meaning beyond the words themselves. A shorter response length between messages than would normally be expected is an indicator of the possibility that a deceptive message is being sent. Speech errors can occur in deceptive communication due to the higher cognitive requirements that accompany deception. Speech hesitations and the uttering of "ahs," "ers," and "uhms" may also occur when the deceiver is trying to invent deceptive statements. A higher pitch of voice may also be a cue that deception is occurring.

Verbal cues include negative statements, irrelevant information, immediacy, and leveling. Immediacy indicators include messages that seem more direct. An example of an immediate statement is "he bores me," whereas "his manner bores me" is non-immediate (Zuckerman et al. 1985). Leveling refers to over-generalized statements that include words such as every, all, and nobody.

Table 2.1. Reliable deception cues from Zuckerman and Driver, 1985

Category of Behavior    Behavior
Visual                  Pupil dilation; Blinking; Facial segmentation; Adaptors; Body segmentation
Paralanguage            Response length; Speech errors; Speech hesitations; Pitch
Verbal                  Negative statements; Irrelevant information; Immediacy; Leveling
General                 Discrepancy


A more recent meta-analysis of cues was completed by DePaulo and associates (DePaulo et al. 2003). They analyzed more than twice as many studies as were in the 1985 Zuckerman and Driver meta-analysis. The DePaulo et al. meta-analysis looked at 120 studies on deceptive cues. Significant cues from this updated study are listed in Table 2-2.




Table 2.2. Significant deception cues from DePaulo et al. (2003)

Are Liars Less Forthcoming Than Truth Tellers?
  Less talking time; Fewer details; More pressed lips

Do Liars Tell Less Compelling Tales Than Truth Tellers?
  Less plausibility; Less logical structure; More discrepancies and ambivalence; Less verbal and vocal involvement; Fewer illustrators; Less verbal immediacy (all categories); Less verbal and vocal immediacy (impressions); More verbal and vocal uncertainty (impressions); More chin raises; More word and phrase repetitions

Are Liars Less Positive and Pleasant Than Truth Tellers?
  Less cooperative; More negative statements and complaints; Less facial pleasantness

Are Liars More Tense Than Truth Tellers?
  More nervous and tense (overall); More vocal tension; Higher pitch frequency; More pupil dilation; More fidgeting

Do Lies Include Fewer Ordinary Imperfections and Unusual Contents Than Truths?
  Fewer spontaneous corrections; Less admitted lack of memory; More related external associations


Some of the cues from Table 2-1, such as pupil dilation, facial cues (pressed lips, etc.), adaptors (fidgeting), body cues (chin raises, etc.), response length, pitch, negative statements, and immediacy, are included in Table 2-2. The cues from Table 2-1 that were tested in the meta-analysis by DePaulo et al. (2003) but not found to be significant, and therefore not included in Table 2-2, are blinking, speech hesitations (silent, filled & mixed pauses), and leveling (generalizing terms). Speech errors such as grammatical errors and sentence incompletion are listed in Table 2-1 but not among the meta-analysis cues. Speech error cues from Zuckerman and Driver (1985) also include non-fluencies, word and/or sentence repetition, and slips of the tongue. These latter aspects of speech errors may have been included as aspects of some of the cues in the meta-analysis, such as pauses for non-fluencies. However, they are not specifically listed in the meta-analysis list of cues.

Most of the earlier research on deceptive cues and deception detection was conducted in a relatively static form by videotaping deceivers and later having subjects view the videotapes or read transcripts of deceptive conversations to try to detect the deception. Although this research has produced useful information in the study of deception, it has been criticized as not truly reflecting how deception actually takes place in real life. Research has shown that the most reliable means of detecting deception is questioning the truth of the deceptive message (Levine et al. 1999). Asking the deceiver direct questions has been effective in reducing some of the lies deceivers may tell (Schweitzer et al. 1999). It makes sense that this questioning would be more effective if the deceiver were available for the receiver to question. One branch of deception research that addresses the interactivity of deceptive communication is interpersonal deception theory (Buller et al. 1996a). The following section describes interpersonal deception theory and its ideas concerning the more realistic interactivity of the deceptive process (Burgoon et al. 2001).

Interpersonal Deception Theory

Interpersonal deception theory looks at deceptive communication as an ongoing, interactive process between the deceiver and receiver. In this interactive process, the theory states that deceivers will make strategic changes to both message content and their behavior depending upon the reactions of the receiver (Buller et al. 1996a). Interpersonal deception theory predicts that deceivers conceive of and deliver their future messages based upon the perceived social cues, immediacy, engagement, conversational demand, and spontaneity of the receiver. These perceptions and changes are ongoing for the whole duration of the deceptive interactive process. See Figure 2-1 below.

[Figure: an iterative sender-receiver exchange unfolding over time. Sender: initial message, perceived success, behavioral adaptation. Receiver: interpretation and adaptation, behavioral adaptation, and discerning truth / deception judgment.]

Figure 2.1. Interpersonal Deception Theory

People tend to monitor feedback from others in normal communication, but as can be seen from interpersonal deception theory, this feedback monitoring is even more critical in the deceptive communication process. When a deceiver judges the reactions of the receiver to the deceptive message, information can be assessed concerning the perceived success or lack of success of the deception. If the deceiver perceives that the deception is not as successful as intended, the deceiver can modify the delivery style and message. The deceiver can try to approximate what appears to be normal and appropriate conversation to improve the chances of success while at the same time trying for limited information release (Burgoon et al. 1999).

Interpersonal deception theory is a large part of the theoretical foundation for this study. In researching deception and its detection, it is important to try to reflect what happens in real-life deceptive communication processes as much as possible. While much of the earlier research in deceptive communication has ignored the importance of understanding the interactivity between the deceiver and receiver, this study does not eliminate that important aspect of deception. As will be seen in the following sections, this study tests hypotheses concerning the deceptive process based upon the ideas espoused in interpersonal deception theory. A more detailed description of interpersonal deception theory as applied to computer-mediated communication is presented in the following section.

Interpersonal Deception Theory and Computer-Mediated Communication

According to interpersonal deception theory, deception and detection follow iterative interaction patterns where the receiver’s response influences the deceiver’s next communication. Figure 2-2 (Carlson et al. 2004b) illustrates the complexity of an expanded model of deception and detection based upon interpersonal deception theory with the addition of computer-mediated communication.


Figure 2.2. A Model of Deceptive Communication and Its Detection from Carlson, George, Burgoon, Adkins & White (2004)

As can be seen in the above model, the receiver’s confidence in his or her deception detection ability is influenced by a number of factors.

Deceiver and Receiver. A deceiver sends a deceptive message during a communicative event to a receiver of the message via a selected communication medium. In this communication, the deceiver and receiver have a shared context in that they are communicating about the same subjects. The relationship of the deceiver and the receiver is a factor in the communication event. The degree of shared and overlapping experiences of the deceiver and receiver can create a common bond between them that can influence the communication event. A receiver who has less shared and overlapping experience with a deceiver may perceive the deceiver as more deceptive than another deceiver with more shared and overlapping experiences.

Motivation to deceive is a key factor of the deceptive process. Producing and defending deceptive messages is a cognitively intense mental process and requires more effort than producing truthful messages (Miller et al. 1993). This extra effort needed to produce deceptive messages will probably not be supplied if the deceiver does not have sufficient motivation to deceive. Similarly, the receiver needs some motivation to detect deceptive messages. In some cases, the receiver may not want to interpret deceptive messages as being deceptive. In situations where the receiver perceives that detection of deception will produce more loss than gain, the receiver may lack motivation to detect deceptive messages. One example of such a situation could be in friendships where the receiver does not want to confront a friend and potentially lose a friendship. Another example could be in an employee and supervisor relationship where the employee has the potential to lose his or her job. In some job interviewing situations, interviewers may have a low number of candidates to choose from and may not want to offend applicants at the risk of not being able to fill the position. If the receiver is strongly inclined to be deceived as in the examples above, even the most flagrant lies of the deceiver will often be persuasive (Miller et al. 1993).

Other factors that influence deceivers and receivers in deception are intrinsic ability to deceive or detect deception, the deceptive task characteristics, experience in deception and detection, and cognitive and affective factors (Carlson et al. 2004b). Some people pick up on deceptive cues, or hide deceptive cues, better than others. For example, sociopaths and con artists can use their abilities to tell and maintain lies to get things they want from others. Varying task characteristics also affect deceptive communication. It has been suggested that the more complicated and varied the task characteristics, the less likely the deceiver will be able to produce successful deceptive statements (Carlson et al. 2004b). The deceiver's experience with the co-participants, the topic, the communication contexts, and the electronic media also affects the likelihood of deceptive success. It has been proposed that if the deceiver has more experience with the task, topic, communication contexts, or the electronic media than the receiver has, the chance for deception success improves (Carlson et al. 2004b). Finally, some people are apprehensive when deceiving while others may feel the thrill of getting away with deception. Those deceivers who feel apprehensive are generally dealing with the cognitive dissonance created when going against their moral values. Deceivers may experience negative affect from the fear of getting caught (Buller et al. 1996a). It has been proposed that deceivers who experience more cognitive and/or affective dissonance associated with deceit will be less successful (Carlson et al. 2004b).

Relationship between the deceiver and the receiver. The amount of shared and overlapping experiences between the deceiver and the receiver will most likely affect the communication event and outcome. Even if the deceiver does not personally know the receiver, they could have shared and overlapping experiences from having the same cultural background, similar types of education, employment, or family situations. It has been proposed that the more experiences the deceiver and receiver share, the more likely the deceiver will be able to successfully deceive the receiver (Carlson et al. 2004b).

Relational closeness also can affect the deceptive communication process. When people form relationships, they often expect the other person to tell the truth most of the time. Generally when close relationships are formed, the people in the relationship have had the experience of receiving many truthful statements from the other person in the relationship. This experience of truthful statements forms the basis for a truth bias (Miller et al. 1993). Receivers with truth bias will not be as vigilant in deception detection. Another influence of relational closeness in deception and its detection occurs when the receiver does not want to confront the sender because it may negatively affect the relationship. In those cases, the receiver will have less motivation to detect deception. It has been proposed that a deceiver’s perception of relational closeness to the receiver is associated with lower levels of deception success (Carlson et al. 2004b).

Communication Event. The deceptive communication event can be viewed in more detail in Figure 2-3.


Figure 2.3. The Deceptive Communication Event from Carlson, George, Burgoon, Adkins & White (2004)

When a deceiver sends a deceptive message, the receiver may either consciously or unconsciously search for deception cues to try to determine whether the message is truthful or not. These cues can be verbal and nonverbal. Some examples of verbal cues of deception include the number of statements of personal responsibility, other responsibility, and mutual responsibility. The number of factual, hypothetical, and opinion statements also varies between deceptive and non-deceptive communication. Examples of non-verbal cues of deception include the number of eye blinks, smile duration, posture shifts, pauses, and response length (Miller et al. 1993). The receiver assesses and cognitively manages the incoming deceptive cues and may exhibit some display of suspicion to the deceiver. This alerts the deceiver, who either consciously or unconsciously searches for cues from the receiver to estimate the success of the deception. When the deceiver receives the suspicion cues from the receiver, the deceiver adapts communication efforts and attempts to send fewer deceptive cues. Thus, the deceptive communication event is iterative (Buller et al. 1996a).

This iterative communication process can be intensified if the receiver is alerted before the communication process begins that there is a probability the communication partner will be deceptive. In this case, the receiver is aroused to the potential for deception. Heightened arousal is thought to hinder the success of most deceivers (Miller et al. 1993).

Communication Medium. The type of communication medium can influence the outcome of the deceptive communication process. Different communication media can lead to varying degrees of cue leakage (Miller et al. 1993). For example, communication via e-mail, a relatively lean medium, eliminates some communicative cues such as tone of voice, facial expression, and posturing. Other electronic media that have more richness include Internet chat relay with audio and video, and videoconferencing. These electronic communication media include many cues that appear in face-to-face communication. The qualities of various electronic communication media are elaborated upon by media richness theory and social presence theory.

In addition to the above communication influences, individual differences of both the deceiver and the receiver may need to be understood.

Individual Differences in Social Skill and Political Skill

Not everyone is exactly the same. Because of this, researchers have tried to determine how people differ. Two skills that most people need to use in everyday life and that can influence their communication with others are social skill and political skill. How people differ in these two skills is discussed in this section.

Socially skilled individuals may have an advantage in making more favorable impressions on others (Riggio 1986). Social skill has both emotional and social dimensions (Riggio 1986). The emotional dimensions include emotional expressivity, emotional sensitivity, and emotional control. The social dimensions include social expressivity, social sensitivity, and social control.

Emotional expressivity is the skill with which individuals communicate emotional messages nonverbally, and includes the nonverbal expression of attitudes, dominance, and interpersonal orientation. People who are highly expressive emotionally are able to arouse or inspire others by their ability to transmit feelings. Emotional sensitivity is skill in receiving and interpreting the nonverbal communications of others. People with high emotional sensitivity may attend to and accurately interpret the subtle emotional cues of others. Emotional control is the ability to control and regulate emotional and nonverbal displays. With this skill, people can show particular emotions on cue and hide behind an assumed mask. For example, they may laugh appropriately at a joke or put on a cheerful face when they are sad.

Social expressivity is skill in verbal expression and the ability to initiate and guide conversations. Sometimes socially expressive persons may not pay much attention to monitoring the content of their speech. Social sensitivity is the ability to interpret the verbal communication of others and to understand the norms governing appropriate social behavior. Socially sensitive people are generally concerned with appropriateness in communication. Social control is skill in role-playing and social self-presentation. People with high social control are generally adept, tactful, and self-confident in social situations, and can be adept at guiding the direction and content of communication.

Another skill that varies across individuals and can influence the communication process is political skill. In one study, political skill was related significantly to recruitment interviewer ratings and evaluations of job applicants (Higgins 2000). Political skill consists of four dimensions: social astuteness, interpersonal influence, networking ability, and apparent sincerity (Ferris et al. 2004).

Social astuteness refers to the capacity to identify with others and to obtain desired outcomes by presenting one's behavior in the best possible light. In interpersonal deceptive situations, socially astute people may have the ability to read situations and people, and to use that information to attempt to influence others. The second dimension of political skill, interpersonal influence, is the ability to adapt and calibrate one's behavior to situations in order to elicit particular responses from others and achieve one's goals. The third dimension, networking ability, refers to being adept at developing and using diverse networks of people. Individuals who score high in networking ability are often highly skilled negotiators and dealmakers. The final dimension, apparent sincerity, is the appearance of possessing high levels of integrity, authenticity, sincerity, and genuineness. Even in deceptive situations, these individuals may appear to be honest. This dimension of political skill may be especially important to control for in deception detection, since influence attempts are successful only when the individual communicating the deceptive messages is perceived as possessing no ulterior motives (Jones 1990).

Media Richness Theory, Social Presence Theory, and Deception

A synthesis of media richness theory, social presence theory, and deception theory is discussed in this section. Early deception detection research was generally not concerned with deception over computer-mediated communication. However, since deceptive information is conveyed more and more over electronic media, additional cognitive biases may appear from people placing undue trust in information delivered via computers or mass media (George et al. 1999b; Nass et al. 2000). It is now important to understand how deception takes place not just in face-to-face communication but also in various other communication media.

Media richness theory and social presence theory have both been applied to deception detection research. The transmission of deceptive cues, as has been discussed, is an important concept in deception research. The richness of the communication medium influences the types of cues transmitted. The social presence perceived by receivers may influence the degree of realistic dialogue they choose to engage in. If it is perceived that the communicative partner is not fully present, there may be less motivation to engage in normal dialogue.

The variety of cues transmitted varies with the richness of the communication medium, from the highest in face-to-face communication to a much lower degree in pure text. Nonverbal cues are filtered out if the medium does not allow communicative partners to see each other. Also, if social presence is low in a communication medium, the receiver may pay less attention to the deceiver and not pick up on many of the remaining cues that do exist in lower-richness, or leaner, media (Short et al. 1976).


To understand the effects that media richness has on deception detection, researchers have identified reliable deceptive indicators by their detectability across video, audio, and text-based media (Rao et al. 2000). Table 2-3 shows these indicators that are based upon the reliable cues from Zuckerman and Driver (1985).

Table 2.3. Reliable deceptive indicators across media from Rao & Lim (2000)

Behavior | Video | Audio | Written Modes
Visual
  Pupil Dilation | Detectable | |
  Blinking | Detectable | |
  Facial Segmentation | Detectable | |
  Adaptors | Detectable | |
  Body Segmentation | Detectable | |
Paralanguage
  Response Length | Detectable | Detectable | Detectable
  Speech Errors | Detectable | Detectable | Detectable
  Speech Hesitations | Detectable | Detectable | Interactive may be detectable
  Pitch | Detectable | Detectable |
Verbal
  Negative Statements | Detectable | Detectable | Detectable
  Irrelevant information | Detectable | Detectable | Detectable
  Immediacy | Detectable | Detectable | Detectable
  Leveling | Detectable | Detectable | Detectable
  General Discrepancy | Detectable | Partially Detectable | Partially Detectable

As can be seen from the above table, video has the highest number of reliable deception indicators. Audio contains fewer reliable indicators than video, but more than written modes of communication. Since deception detection relies on the transmission of cues, detection success should vary with the communication medium used. For example, when comparing videoconferencing with e-mail based communication, more cues, such as blinking, facial segmentation, and voice pitch, will be available to the receiver using videoconferencing than to the receiver using e-mail. Also, the transmission medium can affect the perception of cues, to the extent that the same deceptive statements are perceived as richer in detail, more complete, more logically structured, and more plausible when reviewed on videotape than when read from written transcripts (Stromwall et al. 2003).

The same comparison of behavioral cues to detectability across media, for the more current DePaulo et al. (2003) meta-analysis, is presented below in Table 2-4. Again, video transmits the most cues, audio transmits fewer than video, and written modes transmit the fewest deceptive cues.

Table 2.4. Significant deceptive indicators from DePaulo et al. (2003) across media

Behavior | Video | Audio | Written
Less talking time | Detectable | Detectable |
Less details | Detectable | Detectable | Detectable
More pressed lips | Detectable | |
Less plausibility | Detectable | Detectable | Detectable
Less logical structure | Detectable | Detectable | Detectable
More discrepancies and ambivalence | Detectable | Detectable | Detectable
Less verbal and vocal involvement | Detectable | Detectable |
Less illustrators | Detectable | Detectable | Detectable
Less verbal immediacy (all categories) | Detectable | Detectable | Detectable
Less verbal and vocal immediacy (impressions) | Detectable | Detectable |
More verbal and vocal uncertainty (impressions) | Detectable | Detectable |
More chin raises | Detectable | |
More word and phrase repetitions | Detectable | Detectable |
Less cooperative | Detectable | Detectable | Detectable
More negative statements and complaints | Detectable | Detectable | Detectable
Less facial pleasantness | Detectable | |
More nervous and tense (overall) | Detectable | Detectable |
More vocal tension | Detectable | Detectable |
Higher pitch frequency | Detectable | Detectable |
More pupil dilation | Detectable | |
More fidgeting | Detectable | |
Less spontaneous corrections | Detectable | Detectable |
Less admitted lack of memory | Detectable | Detectable | Detectable
More related external associations | Detectable | Detectable | Detectable


Another difference among electronic communication media is synchronicity, or the amount of time delay between when information is sent and when it is received. For example, real-time video and audio have immediate information exchange (synchronous), while e-mail transmission creates a time lapse between message creation and its availability to the receiver (asynchronous). With asynchronous media, both deceivers and receivers have the chance to spend time planning and editing their next reply (Carlson et al. 2004a; Dennis et al. 1999). Longer response time is one cue to look for in deception (Walczyk et al. 2003). The deceiver has an advantage when using an asynchronous computer-mediated communication medium, since the additional time taken to prepare deceptive messages may not be noticed due to the medium’s inherent transport delay (Carlson et al. 2004a). However, this advantage may be cancelled out by the fact that the receiver can re-read and re-analyze the written messages of the deceiver. In fact, deceivers tend not to prefer written communication for this reason (Carlson et al. 2004a).

In one survey, participants indicated that they would prefer to use the telephone rather than e-mail for both deception and detection (George et al. 1999a). In another study of media richness and deception detection, subjects who had enhanced their resumes with deceptive information were interviewed across e-mail, Internet chat relay, Internet chat relay with audio, and audio only over Internet chat relay. From the results of that experiment, it was concluded that future studies would do better to compare e-mail, the leanest medium used, with audio over Internet chat relay, the richest medium used, rather than looking again at incremental increases in electronic media richness (George et al. 2004a).

As can be seen from the above discussion, deception detection research becomes more complex when computer-mediated communication is used. Researchers have also tried to find a simpler way to improve deception detection accuracy. Giving receivers advance warning that deception may be likely to occur has had some success in improving detection. The following section discusses this line of research.


Effects of Warning on Deception Detection

In normal communication, people are prone to truth bias, in that they are more likely to believe other people are telling the truth than they are to look for lies (McCornack et al. 1990; McCornack et al. 1986). However, if people are aroused to the possibility of deception, they may be more likely to detect lies. Arousal theory focuses on the heightened arousal that may accompany deception and on the nonverbal cues the deceiver leaks as a result (Knapp et al. 1974). Liars may try to monitor their communicative partners’ reactions to determine if arousal, or suspicion, exists (Buller et al. 1996a). According to interpersonal deception theory, if deceivers detect suspicion, they may adjust their next message to try to appear more believable (Buller et al. 1996a). If deceivers have the chance to plan and rehearse their deceptive message, the level of arousal typically associated with deception may decrease (Miller et al. 1993). When people are warned that deception may happen, they may be more attentive to the leakage of cues (Biros et al. 2002). Although truth bias may not be totally eliminated, its effects can be reduced by alerting the receiver to the possibility of deception. Simple warnings alone may be enough to counter the effects of truth bias (Biros et al. 2002).

Empirical research has been conducted using warnings to alert receivers to the possibility of deception. Although not all studies have found that warnings increase detection accuracy (DePaulo et al. 1989; Toris et al. 1984), one reason for the lack of results may have been a lack of rigorous methodology (McCornack et al. 1990). Timing also may be important in warning research, since if the task is too long, vigilance diminishes over time, reducing the effect of the warning (Parasuraman 1984). Many other experiments have shown a reduction in the effect of truth bias when receivers are warned that lies may be occurring (Biros et al. 2002; George et al. 2004a; Stiff et al. 1992). Manipulations for warning can be very simple, such as telling the receiver to be aware that deception may occur or relating statistics concerning the probability of deceptive messages. Improvements in deception detection success have occurred by introducing simple warnings that deception may be present (Biros et al. 2002). One example of the positive effects of a simple warning on deception detection is an experiment concerning falsification of resumes and the detection of those falsifications (George et al. 2004a). Subjects in that study were told that around 40% of people lie on their resumes. Warned receivers identified 15% of deceptive items compared with 2% identified by those not warned.

Thus, warning for the possibility of deception can be an effective and easy way to improve deception detection accuracy. Another possible way to improve deception detection accuracy is to give people training in deception cue recognition so they can potentially benefit from the knowledge researchers have amassed about deceptive cues. The final section in this chapter discusses deception and training.

Effects of Training on Deception Detection

Deception detection accuracy rates have not been impressive. Many people, if asked, will estimate their ability to catch lies as far above their actual ability (Elaad 2003; Vrij et al. 2004). However, most studies have shown that successful detection happens only about half the time or less (Feeley et al. 1995; Miller et al. 1993; Navarro et al. 2001). Even people who have experience in deception detection may not do better than chance at picking out lies (Burgoon et al. 1994; Ekman et al. 1991). One way to try to improve deception detection accuracy is to provide training in deception cue recognition. If people are trained to be aware of reliable cues for deception, it may be possible that they could more readily pick up on those cues and improve their chances of catching lies.

As discussed in previous sections, researchers have identified various cues that can help receivers detect deception. Reliable cues such as higher voice pitch, higher rates of blinking, more speech hesitations, and more pauses can be taught to receivers before they try to decide whether senders are lying (DePaulo et al. 2003; deTurck et al. 1985; Ekman et al. 1991; Kalbfleisch 1992; Vrij 2000; Zuckerman et al. 1985). If people are not aware of these reliable cues, they may rely instead on faulty prior beliefs about what they thought were good cues, or on individual heuristics that may not be valid (Fiedler et al. 1993). Even some police manuals contain non-valid cues, such as gaze aversion and fidgeting, that when used have been shown to decrease deception detection accuracy (Mann et al. 2004).

It seems natural that giving people training in reliable cues would improve deception detection accuracy. However, past research has produced mixed results on the effects of training on deception detection (Fiedler et al. 1993; Kassin et al. 1999; Meissner et al. 2002; Porter et al. 2000; Vrij 1994; Zuckerman et al. 1984). In some studies, training resulted in only chance-level deception detection accuracy, triggered more false alarms, and actually impaired performance (Kassin et al. 1999; Meissner et al. 2002). In other studies, training in nonverbal cues improved deception detection accuracy (Fiedler et al. 1993; Porter et al. 2000; Zuckerman et al. 1984). Incorporating good training methods that include explicit instructions, practice, and feedback has been suggested and has been effective for some situations, including planned interview conditions and viewing hand cues, but not for all conditions (Vrij 1994). Although there have been mixed results in the past, a recent literature review of training effects for deception detection shows that, overall, training improves accuracy (Frank et al. 2003). In addition, training has been shown to lower the faulty cognitive heuristic of truth bias and thereby improve detection rates (Stiff et al. 1992). Also, an experiment asking participants to first assess the frequency of verbal and nonverbal cues before making judgments about lies produced higher than chance deception detection accuracy (Vrij et al. 2004).

Some recent research has focused on the timing of training (Biros et al. 2002; George et al. 2004b). It has been suggested that giving training right before a task, or just-in-time training, may be more effective than traditional training that contains a time lapse between the training and the task, because memory can decrease over time (Biros et al. 2002; Globerson et al. 2001). However, others have suggested that training be given ahead of time so that subjects can digest the training over a period of time (Frank et al. 2003). Most of the research in training for deception cue recognition has used just-in-time training rather than earlier training (see Table 2-5).


Table 2.5. Training for deception cue recognition

Article | When trained | Type of training | Who administered | How long | Pre-tests & post-tests | Feedback
deTurck (1991) | Just before | Behavioral cues in writing and demonstrated by researcher | Research assistant | 30 minutes | No | Yes
deTurck et al. (1997) | Just before | Written cues, either visual only, vocal only, or visual and vocal, then practiced by watching 5 videotaped examples of deception | Experimenter | 30 minutes | No | Yes
deTurck et al. (1990)* | | | | | No | Yes
deTurck & Miller (1991) | Same experiment as deTurck (1991) | | | | |
Fiedler & Walka (1993) | Just before | Nonverbal cues in writing | Researcher (Walka) | Short time | No | Yes
Kassin & Fong (1999) | Just before | Viewed videotapes on verbal and nonverbal cues, then written summaries of key points | Female experimenter (Fong?) | 1 hr | Post-test, same test given to untrained | No
Kohnken (1987)* | | | | 1 hr or more | No | No
Vrij (1994) | Just before | Hand and finger movement cues in writing, then examples shown | Uniformed police officer and male experimenter | Not given: reading time + 6 videos | No | Yes
Vrij & Graham (1997) | Just before | High and low public self-consciousness, good and bad actor, and related hand movements in writing | Not stated | Short – reading time | No | No
Zuckerman et al. (1984) | Just before experiment | Feedback on their detection accuracy on 4 messages before more messages | Not stated | Short – feedback time for 4 messages | No | Yes
Zuckerman et al. (1985) | Just before detection | Feedback on their detection accuracy after each message | Not stated | Short – feedback | No | Yes


Since just-in-time training has been studied extensively, this study will not take that approach but will instead look at the less studied early training. As will be shown in the following chapter, a research model will be proposed for studying the influence of training, warning, and media richness on deception detection when using computer-mediated communication. In addition, the dependent and independent variables will be defined, and hypotheses related to the model will be presented.


CHAPTER 3

RESEARCH MODEL

Media richness theory, social presence theory, and interpersonal deception theory form the basis for the following proposed research model and hypotheses. The purpose of the model and hypotheses is to advance research on improving deception detection in computer-mediated communication environments. This chapter describes the model and hypotheses.

Research Model of Training, Warning, and Media Richness on Deception Detection When Using Computer-Mediated Communication

This section will introduce a research model, define the necessary variables, and establish hypotheses for studying the influence of training, warning, and media richness on deception detection when using computer-mediated communication. The influence of media richness theory on deception detection using computer-mediated communication has been previously documented and supported with empirical testing. Similarly, factors such as training and warning in deception detection have undergone empirical testing and have been introduced in the supporting literature. The following research model, illustrated in Figure 3-1, depicts the influence of these constructs on deception detection accuracy.

The model illustrates the influence of the independent variable constructs of media richness, training, and warning on deception detection accuracy. The following sub-sections will define the model’s variables, introduce the hypotheses, and operationalize the relationships between the variables.


[Figure 3-1 depicts the research model: media richness (lean vs. rich, H1), warning (present vs. absent, H2), and training (present vs. absent, H3) each influence deception detection accuracy, with the interaction of training and warning shown as H4.]

Figure 3.1. Training, Warning, and Media Richness on Deception Detection Accuracy

Variable Descriptions, Hypothesis Development and Model Operationalizations

The one dependent variable is deception detection accuracy. The objective of this research was to see if training, warning, and communication via richer electronic media would improve deception detection accuracy, so deception detection accuracy is appropriate as the dependent variable. Deception detection accuracy is defined as correctly detecting deceptive messages when using computer-mediated communication.

The independent variables are media richness (lean vs. rich), warning (warned vs. not warned), and training (trained vs. not trained). In addition, the interaction of training with warning will be included in this study. These variables are all potential factors that could influence the receivers’ deception detection accuracy. These variables have been empirically tested in previous research, some with mixed results as to their influence on deception detection accuracy. Media richness will be presented as the first independent variable.

The first hypothesis is concerned with the effects of media richness of the computer-mediated communication medium on deception detection accuracy. Successful deception detection is related to the evaluation of deceptive cues by the receiver. The amount of cues available is related to the richness of the communication media.

Visual cues of deception may limit the processing of verbal and vocal information and restrict detection accuracy (Miller et al. 1993). Therefore, video media may transfer too many cues for the most accurate deception detection. At the other extreme of media richness, e-mail users may not be able to communicate richly with a new partner or about a new topic (Carlson et al. 1999). A medium that lies between video and e-mail in richness may transfer just the right amount of cues for the most accurate deception detection.

Preliminary research has indicated that audio may contain just the right amount of cues and have the highest potential, compared with other electronic communication media, for deception detection (Burgoon et al. 2003a). The focus for this research is to compare relatively rich media to relatively lean media to determine what differences in deception detection should be expected due to media differences.

Table 3-1 compares two computer-based media, e-mail (relatively lean) and audio over Internet chat relay (relatively rich), using media richness theory. The ratings of each computer medium on the four characteristics of media richness theory are taken from George, Marett & Tilley (2004) and are consistent with other rankings (El-Shinnawy et al. 1997; Markus 1994). These ratings show a difference in variety of cues, variety of language, and speed of feedback for e-mail and audio over Internet chat relay.


Table 3.1. Lean (e-mail) and rich (audio over Internet chat relay) media differences based on media richness theory.

Characteristic | E-mail | Audio over Internet chat relay
Variety of cues | Low | Moderate
Variety of language | Low | Moderate
Speed of feedback | Low | Immediate
Personal focus | Moderate | Moderate

Audio over Internet chat relay allows more variety of cues, more variety of language, and faster feedback than e-mail. Since richer media are theorized to “leak” more cues to deception than leaner media, there is more potential for receivers to detect cues and be more accurate in deception detection with richer media than with leaner media. Thus, for media richness:

H1: Deception detection accuracy will be greater for receivers who use richer media than for those who use leaner media.

The richness of media alone does not tell the whole story of the factors that could lead to improved deception detection accuracy. Another factor relates to the strength of the truth bias in human-to-human communication. Most people lean toward the bias that others will naturally tell the truth, even if in reality that is not the case (McCornack et al. 1990). Because of this truth bias, it would be expected that individuals not warned of the potential for deceptive communication would not be very good at detecting deception, regardless of the communication medium used. It would be expected that simple warnings would partially counteract the effect of truth bias, enough to improve deception detection accuracy. Receivers who are not warned would still be affected by truth bias. Past research on the effect of warnings on deception detection generally supports this idea (Biros et al. 2002; Stiff et al. 1992). Also, deception is a cognitively complex task, and because of this, cues of deception are leaked (Miller et al. 1993). Since people typically do not consciously remember every detail of their interactions, receivers can miss some of these deceptive cues. If receivers’ suspicion of deception is aroused, they may be more attentive to these deceptive cues. The presence of cues, and the perception of these cues produced by heightened arousal, has been shown to facilitate detection accuracy (Biros et al. 2002; McCornack et al. 1990). Thus, for the effect of warnings:

H2: Deception detection accuracy will be greater for receivers who are warned of the potential for deceptive communication than for those who are not warned.

Another factor that may influence deception detection accuracy is training. Although research results for training have been mostly positive, studies have also produced mixed results on the effectiveness of training for deception detection accuracy (George et al. 2004b; Kassin et al. 1999; Vrij 1994; Zuckerman et al. 1984). Recent research has focused on the timing of training (Biros et al. 2002; George et al. 2004b). Most of the research in training for deception cue recognition has used just-in-time training rather than earlier training (Frank et al. 2003). Some researchers have suggested that training be given ahead of time so that subjects can digest the training over a period of time (Frank et al. 2003). Since just-in-time training has been studied extensively, this study looks at the less studied early training. Further testing of early training on deception detection accuracy is needed to understand the effects of timing on training. We expect that if people are trained to understand the variety of verbal and nonverbal cues that are leaked in deceptive communication and have time to reflect on those cues, accuracy will improve. Thus:

H3: Deception detection accuracy will be greater for receivers who are trained in deception cue recognition than for those who are not trained.

If training and warning are factors that can influence deception detection accuracy, then it makes sense that the two combined may have an even greater effect. This leads to the following hypothesis:


H4: Deception detection accuracy will be greater for receivers who are trained in deception cue recognition and warned of the potential for deceptive communication than for those who are trained only, warned only, or neither trained nor warned.

The goal of this research is to examine the influence of training, warning, and media richness on deception detection accuracy. In order to do this, the research must test the effects of training vs. no training, warning vs. no warning, training both with and without warning, and rich media vs. lean media. Media richness theory (Daft et al. 1986) and interpersonal deception theory (Buller et al. 1996a) provide the theoretical foundation for examining influences on deception detection. In addition, previous research on deception and deceptive cues, warning, and training play a key role in understanding the factors related to deception detection accuracy. A research model and hypotheses were developed in order to examine these influences on deception detection accuracy.

The next step was to test these hypotheses with an appropriate methodology. To control for individual differences in communication, control variables were considered in the experimental design. Questionnaires measuring deceiver motivation, deceiver social skill, deceiver political skill, receiver motivation, receiver social skill, and receiver political skill were given to subjects, as will be elaborated later. In the following chapter, a methodology is presented for the testing of the above hypotheses.


CHAPTER 4

METHODOLOGY

An empirical investigation design for testing the preceding hypotheses is described in this chapter. Conditions for deception to occur were an important consideration for this study. As has been discussed in the previous literature chapter, deception on resumes is relatively common in the United States. Therefore, a study involving deception and its detection for resume falsification can provide the necessary conditions for deception to occur. Following is a detailed analysis of the methodology chosen to test the hypotheses. First, method selection is discussed. Then the study design is presented. After that, the matrix design is illustrated. Details concerning the experiment including interview scenario, control and treatment groups, and measurement scales are then presented. Finally, the method of analysis is discussed.

Method Selection

A laboratory experiment was conducted to test the hypotheses. This experiment was solidly based upon media richness theory, social presence theory, and interpersonal deception theory. A solid theoretical basis is an important requirement for experimental internal validity (Cook et al. 1979). A laboratory experiment is the most appropriate methodology for this research subject because deception and detection are very difficult to study in a non-controlled field context. To accurately study deception and detection, one must know whether there really is deception and which statements are deceptive. This is very difficult to pinpoint in a field study. A laboratory experiment can be designed to control for accurate knowledge of deceptive statements.

There is also an ethical concern about researching deception in field studies. Subjects’ employment may be compromised if they admit to being deceptive. Subjects would most likely not reveal many of their deceptive statements, thus compromising the study. Similarly, survey methodology may not be the most accurate method for studying deception and detection, since respondents may not feel comfortable admitting to deception.

A laboratory design offers the chance for comparative measurement and separation of effects (DeSanctis 1989). The varying effects of media richness, warning, training, and training and warning together can be seen more clearly in an experimental study than in other types of studies. This experiment can be used for both theory building and theory testing (Dubin 1978) for the hypotheses concerning deceptive communication and its detection.

Study Design

A laboratory experiment was conducted to capture the necessary data for Hypotheses 1 through 4. Supplementary questionnaires were used to obtain additional measurements for the independent and dependent variables. Additional questionnaires were administered to the subjects who were in the training treatments to measure their knowledge of deception and detection both before and after the training sessions. So as not to alert the participants to the true nature of the study, they were told that the laboratory experiment was concerned with interviews using electronic media. Specific details about the experiment and questionnaires of this study are provided in the sections below.

The Experiment

The experiment was based upon previous experiments concerning deception detection that examined the factors of media richness, warning, and training. Specifically, the two previous research experiments that this study expanded upon are a resume study on deception and detection conducted at a large southeastern university (George et al. 2004a), and a training experiment at a military facility conducted by Biros and associates (Biros et al. 2002; George et al. 2003). This experiment extended the ideas of those previous experiments, especially by testing the construct of training in more depth.


This experiment required students to enhance their resumes and defend those enhancements when communicating with an interviewer via either lean or rich electronic media. “Applicant faking” is a common practice that many job applicants engage in to give a false impression of having more qualifications than they really possess so that they will look good for the position they are applying for (Kluger et al. 1993; Rynes et al. 1990). Since students are concerned with getting the best job possible for themselves when they graduate, this task concerning resumes was relevant to them. Their motivation to participate may have been related to their desire to practice presenting themselves in the best view possible both through creating impressive looking resumes and practicing their interviewing techniques. The task of interviewing and resume faking has proven in the past to lend itself well to an experiment on interactive deceptive communication (George et al. 2004a).

Subject and Site Selection

Subjects for the experiment were selected from junior and senior level business students at a large southeastern university. Business students tend to be focused on getting good jobs after they graduate (Prater et al. 2002), so it makes sense that they are working toward that goal and want to present themselves in the best manner possible. This made them ideal subjects for the resume enhancement study. Many business students had already prepared resumes, so this was a task in which they had some previous experience. Most business students in a prior study had no problem with the task of resume enhancement (George et al. 2004a). The biggest challenge in previous resume enhancement experiments was getting the students to show up for the experiments. This study drew from a larger population than prior studies to increase the pool of potential subjects. The subject pool included junior and senior business majors.

This study examined e-mail (relatively lean media) and voice over Internet chat relay (relatively rich media). Most business students use e-mail extensively. They are also very familiar with voice communication media such as talking over the telephone. The experience of communicating via audio over Internet chat relay is very similar to the experience of communicating over the telephone. The experiential base that business students have of communicating through these media was beneficial for standardization and experimental control.

It has been suggested that greater informational familiarity should enable receivers to better recognize departures from the typical (Buller et al. 1996a). To control for informational familiarity, which in this study pertained to differences in knowledge of the classes required for a major, students were paired with others from the same major.

For this experiment, we needed a site where the dyad partners did not see each other for purposes of anonymity. A Business School interview suite that contained separate rooms was used for this experiment so that subjects could not identify their dyad partners. Another advantage of using the interview rooms was that students were familiar with the fact that these rooms are used for real-life job interviews. Thus, this site gave even more credence to the proposed experimental mock interview.

The Matrix Design

Based on the research model in Chapter 3, a 2 X 2 X 2 factorial design was used, with conditions of induced suspicion (warning or no warning), training (training or no training), and two types of media, lean (e-mail) or rich (audio over Internet chat relay). The model of the laboratory experiment is shown below in Figure 4-1.


[Figure 4-1 depicts the 2 X 2 X 2 experimental design, crossing medium (rich vs. lean), warning (yes vs. no), and training (yes vs. no).]

Figure 4.1. Laboratory Experiment Model

Subjects were randomly assigned to the treatments and control groups. For sufficient statistical power, there were a total of 160 subjects, or 80 dyads. For each cell there were 10 dyads, or 20 subjects. Within each condition, half of the receivers communicated via e-mail (lean media) and the other half communicated via audio over Internet chat relay (rich media).
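
The resulting cell sizes can be illustrated with a short sketch. The following Python fragment is a minimal illustration, not the procedure actually used in the study; the cell labels and the fixed seed are assumptions made only for the example.

import itertools
import random

# Illustrative only: assign 80 dyads to the eight cells of the 2 x 2 x 2 design,
# 10 dyads per cell (media x warning x training).
MEDIA = ["e-mail (lean)", "audio (rich)"]
WARNING = ["warned", "not warned"]
TRAINING = ["trained", "not trained"]

def assign_dyads(dyad_ids, per_cell=10, seed=2005):
    cells = list(itertools.product(MEDIA, WARNING, TRAINING))   # 8 cells
    slots = [cell for cell in cells for _ in range(per_cell)]   # 80 slots
    rng = random.Random(seed)
    rng.shuffle(slots)
    return dict(zip(dyad_ids, slots))

assignments = assign_dyads([f"dyad_{i:02d}" for i in range(1, 81)])
print(assignments["dyad_01"])   # e.g. ('audio (rich)', 'warned', 'not trained')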

The subjects in each treatment were required to interview their dyad partners based upon the information in the enhanced resumes their partners electronically transmitted.

All of the interview interactions were unscripted, and therefore followed the requirements for research based upon interpersonal deception theory. Although the experiment would have had more control with the same number of deceptive statements in each cell, because this experiment was based upon interpersonal deception theory, deceptive statements were spontaneous and modeled what occurs in real-life deceptive communication. When lies are scripted and rehearsed, they are not subject to leakage. Therefore, using scripted and rehearsed lies would not have adequately tested interpersonal deception theory, and they were not used in this experiment.

Pilot Study

A pilot test was conducted to detect and correct problems with the design. Since this experiment was partially based upon a previous resume experiment, lessons from that previous experiment served as a partial pilot study for this experiment. The pilot study consisted of four sessions: audio with warning and training, audio with training only, e-mail with warning and training, and e-mail with training only. Subjects worked in dyads, for a total of 8 subjects in the pilot study. This pilot study tested the training treatment, since training was not a factor in the previous resume experiment. Also, two new researchers were involved in the experiment. The pilot test was useful for the new researchers to get acquainted with the procedures. It was also a good time to get the electronic media and test equipment running smoothly before the experiment commenced. A description of the interview scenario is presented in the next section.

Experiment and Interview Scenario

Subjects for this experiment were junior and senior-level business students enrolled in courses at a large southeastern university. The researcher contacted students in their classrooms, briefly explained the study without revealing its true purpose, and asked for volunteers. Students were told only that the experiment involved resumes and interviewing practice. They were promised extra credit for participation. An additional incentive was a $10 gift card for each student who completed the experiment. Random subject assignment was accomplished by having the students indicate their available times on their sign-up sheets. Then, the subjects were matched by major and randomly assigned one of their available times for the sessions. Students did not know ahead of time what role they would be playing in the experiment, thus adding to the experimental control. The researcher signed up students in multiple classes, since the minimum subject requirement was 160 for sufficient statistical power.

One major problem encountered in previous resume experiments was the large number of no-shows. Since the experiment used dyads, this wasted two subjects instead of one if only one of the partners did not show. To alleviate a similar potential problem for this experiment, students were over-booked. Instead of two students occupying each interview time slot, there were at least three students. The extra students who showed up were told that they would still get their extra credit, but that they would not be needed at that time. Those students were asked if they would be willing to be on a waiting list for a possible future time opening. The researcher called the students on the waiting list when other participants notified the researcher ahead of their experiment time and said they would not be able to attend.

Arrival times for subjects were controlled for the sake of anonymity, so that subjects did not see their dyad partners. The subject assigned the deceiver role was scheduled to arrive at the College of Business interview suite fifteen minutes before the other subjects. Subjects assigned to the deception-cue training treatment arrived a week before their scheduled interviews to view a 20-minute training video on deception cue recognition. Subjects took pre-tests and post-tests for deception cue knowledge. Copies of the training pre-test and training post-test are in Appendices H and I, respectively. The training is discussed in more detail in the next section of this chapter.

The researchers involved in conducting the experiment alternated between being with the interviewer and the applicant to reduce bias. The researcher who greeted the applicant (deceiver) took the preliminary resume. Then, the applicant filled out the questionnaires for social skill and political skill. A copy of the social skills questionnaire is in Appendix F; a copy of the political skill questionnaire is in Appendix G. Then, the researcher instructed the applicant to fill out a blank resume template on a computer. The instructions given to the deceivers are contained in Appendix A. The applicant was asked to do whatever it took to look like the best student, for the ostensible purpose of setting standards for a scholarship. The template design was standard for all applicants. A standard and fictitious applicant name, home address, and phone number were on the template so that there would be no identifiable personal information, in order to protect the privacy of the applicant. The template included places to put course names and grades along with grade point average, past and present employment, and community service.


After the applicant completed the electronic resume, the researcher asked the applicant which items were enhanced. The researcher noted the enhanced items on a standard blank template printout. This recorded the differences between the applicant’s truthful statements and the enhanced statements on the resume, capturing the deceptive information. Then the applicant was told that there really was no scholarship at the time and that the experiment was mainly about the enhanced resume and the interview that would be based upon the enhanced (deceptive) resume. The applicants were told that during the interview they should be as convincing as possible in defending the information in the enhanced resume. Before the interview, the applicant completed a motivation to deceive questionnaire. A copy of the motivation to deceive questionnaire is in Appendix D.

The interview was conducted for up to 20 minutes over one of the two media: e-mail for the lean media treatments or audio over Internet chat relay for the rich media treatments. Subjects using lean media used a web-based e-mail provider, Hotmail, with accounts created specially for the experiment. For audio over Internet chat relay, subjects communicated using microphones and headphones after sending their resume via Microsoft NetMeeting. All of the audio conversations were recorded with high quality audio equipment. In addition, all of the e-mail transcripts were saved and archived. After the interview, the applicant completed the post-experiment questionnaire (see Appendix J for the post-experiment questionnaire).

When the subjects in the interviewer role (receiver) arrived, the other researcher greeted them and took them into an interviewing room at the opposite end of the suite from the applicant. The interviewers also filled out the questionnaires for social skill and political skill. The researcher told the interviewers that they would be interviewing a student applying for an academic scholarship and that the interview would take place electronically and last for up to 20 minutes. Instructions given to receivers are in Appendix C. The faked resume was sent electronically from the deceiver’s room to the receiver’s room. The two participants never saw each other face-to-face and never knew the real identity of their dyad partner.


The receiver was randomly assigned to either the lean or the rich computer-mediated communication medium. If the interviewer was in one of the warning treatments, the researcher conveyed the statistic that about 40% of job applicants lie on their resumes (Prater et al. 2002) and told the interviewer to be aware of that statistic when interviewing. The interviewer asked questions of his or her choice for up to 20 minutes. The questions were not predetermined or scripted by the researchers. Opportunities for deceivers to plan, rehearse, and edit their messages before they were transmitted may have placed receivers at a disadvantage (Greene et al. 1985), but it also mirrored what happens in real-life deceptive scenarios. Interviewers created their own questions in order to allow for the spontaneous, interactive communication that most closely resembles the interpersonal deception theory interactive process; this spontaneous question generation mirrored the interactive communication process more closely than scripted questions would have. The researcher stayed quietly in the background during the interview so as not to bias the communicative sessions. After the interview, the interviewer completed a post-interview questionnaire and a motivation to detect deception questionnaire. A copy of the post-interview questionnaire is in Appendix J. A copy of the receiver’s motivation to detect deception questionnaire is in Appendix E. Before leaving, subjects were asked not to discuss the experiment with anyone.

Treatment and Control Groups

The subjects receiving the warning treatments followed the experiment scenario as described above and also were told that about 40% of people lie on their resumes. This statistic was conveyed both during the general explanation of the experiment and again right before they conducted the interview. Inserting the deception statistic immediately before the interview started was done with the intention of keeping the warning fresh in the interviewers’ minds. Subjects in the warning cells used either e-mail (lean media) or audio over Internet chat relay (rich media) for the interviewing process.

Subjects in the training cells attended a deception-cues training session one week prior to their scheduled experiment date. The subjects took a pre-test to ascertain their prior knowledge of deception and its detection. The pre-test was previously used at a military base for deception cue recognition training (George et al. 2004b). The pre-test contained questions previously validated in that training experiment. Subjects then viewed a 20-minute video clip on deception and its detection. This video clip was an edited version of the training used in earlier studies (George et al. 2004b), but it still contained the reliable cues essential for deception cue recognition training. This training had been previously validated (George et al. 2004b). After the subjects viewed the video clip, they took a post-test to assess knowledge gained. The post-test had also been validated (George et al. 2004b). Then subjects were reminded of their experiment date, and the training ended. When they returned for the experiment, they were not reminded of their training or of the content they received in the training session. The experiment took place as described in the standard scenario. Subjects in the training cells used either e-mail (lean media) or audio over Internet chat relay (rich media) for the interviewing process. The subjects not in the deception-cues training condition attended training sessions where they watched a video on interviewing skills and took a pre-test and post-test. This interview-skills training was conducted so that all subjects would have both a training session and an experiment session. However, since only the deception-cues training was relevant for the experimental model, data was collected only for the deception-cues training.

There also was a treatment that combined training with warning. Participants followed the above procedures for the deception-cues training. At the time of the experiment, they were told that about 40% of people lie on their resumes, just as in the treatment with warning only. Then the experiment was conducted. Subjects in one training with warning cell used e-mail (lean media) and subjects in the other used audio over Internet chat relay (rich media) for the interviewing process.

The control groups followed the experiment scenario and had no deception-cue training and no warning. As control groups, they provided a baseline for examining the difference in deception detection accuracy that may take place when people are warned, trained, or both warned and trained.

For all of the above treatments, subjects completed their respective post-interview questionnaires immediately following their interviews. Subjects were then given their gift cards and asked not to discuss the experiment with anyone else. The following section describes the measurement scales.

Measurement Scales

Since the methodology was an experiment, there was a high degree of control. There was a standard resume form for the subjects to enter their enhanced resume data. Since this form was in electronic format, it was easily saved and stored. The interviews were all conducted via electronic media, so transcripts of both the audio and e-mail communication were saved for future review. The audio sessions were recorded with high quality recording equipment. Questionnaires were completed and stored in a safe place for data collection. The standards for good research require confidentiality of the subjects’ experimental interactions and responses to questionnaires. For confidentiality, all materials were sanitized so that the identities of the students were removed.

Data was collected on individuals interacting with other individuals, so the levels of analysis were the individual level and a limited group level (dyads). Data was collected from the subjects’ original resumes, enhanced resumes and their deceptive statements, and their social skill questionnaires, political skill questionnaires, motivation questionnaires, and post-interview questionnaires. The post-interview questionnaires included standardized, widely accepted instruments, such as the measurement for media richness. A motivation measure for deceivers (Burgoon et al. 2000) was intended to measure a person’s pre-interaction goals. This 18-item questionnaire was administered as a pre-interaction measure and can reinforce experimentally induced motivation. A motivation measure for receivers (Burgoon et al. 1995a) was administered as a post-interaction questionnaire. This 9-item questionnaire measured the receiver’s vigilance, effort, and suspicious beliefs. Transcripts of subjects’ interactions were saved for further analysis. In addition, data from the training sessions was collected from the pre-tests and post-tests. Demographic information was also collected from all subjects.

The specific information gathered in the experiment included the exact differences between the original resumes and enhanced resumes (deceptive statements), and subjects’ perceptions of their dyad partner, the communication medium (including its media richness), and their perception of deception and detection. The post-interview questionnaires for both the deceiver and the receiver included items measuring the communication style and expected behavior of their conversational partner, their own style and behavior, and comparisons between the two (Burgoon et al. 1995b). The post-interview questionnaire for deceivers included 16 questions about the communication experience and nine questions about the communication medium. The post-interview questionnaire for receivers had 15 items about the communication experience and nine items about the communication medium. Items dealing with the communication experience were measured on a 5-point scale from 1 = strongly disagree to 5 = strongly agree. Items dealing with the communication medium were measured with 7-point semantic differential scales. The appendices contain the questionnaires used for data collection.

For the dependent variable, deception detection accuracy, data was gathered from an item response on the receiver’s questionnaire. The receiver was asked, “Do you believe that person you interviewed was being dishonest?” The receiver was asked to write down the specific items they thought the applicant was lying about. The researcher verified each lie by asking the deceiver, right after the “enhancement” of the resume, exactly which statements were falsified. Then, deception detection accuracy was measured by comparing the statements the receiver indicated they perceived the sender was being dishonest about with the actual dishonest statements on the sender’s enhanced resume.
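
The scoring logic can be sketched as follows. This is a hypothetical illustration of the comparison described above, not the actual coding instrument; the item names and the exact matching rule are assumptions.

# Hypothetical scoring sketch: count how many verified resume falsifications the
# receiver correctly flagged, and express that as a proportion of all lies told.

def score_detection(flagged_items, verified_lies):
    """Return (lies_detected, detection_rate) for one receiver."""
    flagged = {item.strip().lower() for item in flagged_items}
    lies = {item.strip().lower() for item in verified_lies}
    detected = flagged & lies                      # correctly identified falsifications
    rate = len(detected) / len(lies) if lies else 0.0
    return len(detected), rate

# Example: the receiver flagged two items; the deceiver admitted three falsifications.
hits, rate = score_detection(
    ["GPA", "community service"],
    ["gpa", "community service", "summer internship"],
)
print(hits, round(rate, 2))   # 2 0.67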

Since this research focused on training, warning, and media richness, questionnaires measuring social skill and political skill were given to control for individual differences. Social skill was measured with the Social Skills Inventory (SSI) (Riggio 1986), a widely used, validated measure. Social skill was measured on a thirty-item scale. Subjects were asked questions such as ‘I can fit in with all kinds of people, young and old, rich and poor’ on a 5-point scale from 1 = ‘not like me at all’ to 5 = ‘exactly like me.’ The SSI assesses both verbal and nonverbal social communication skill dimensions that compose global social skill, or social competence. Social competence can be categorized into three classes: skills in sending, skills in receiving, and skills in regulating, or controlling, the communication of interpersonal behavior. The Social Skills Inventory is shown in Appendix F. Political skill was measured by the Political Skill Inventory (PSI) (Ferris et al. 2004). The Political Skill Inventory consists of 18 items. Subjects were asked questions such as ‘I always seem to instinctively know the right things to say or do to influence others’ on a 7-point scale from 1 = ‘strongly disagree’ to 7 = ‘strongly agree.’ The Political Skill Inventory is shown in Appendix G.

Data Analysis

After the experiment was completed, the data collected was analyzed. Each hypothesis in the previous research model was tested. The analysis and results of the experiment are presented in Chapter 5.


CHAPTER 5

RESULTS

This chapter presents results of the experiment. A total of 160 subjects participated in this experiment. The first section reports the results of reliability analyses performed on the control variables. It also provides results of the correlation analysis of the control variables. The second section discusses the statistical analysis for session duration, lies, and detection accuracy. The third section discusses the analysis of the training manipulation. The last section explains the statistical procedures used to analyze the data and reports results of the hypotheses testing.

Analysis of Control Variables

Scale Reliability Analysis

Questionnaires to control for social skill, political skill, and motivation were completed by both the deceivers and receivers. The data was recorded and analyzed in SPSS for scale reliability. For all of the scale values, the averaged score was computed and used for the analysis. Descriptive statistics for the control variables are shown below in Table 5.1. Reliability statistics for the control variable scales follow as shown in Table 5.2.

The scales, in general, had good reliability. The only scale that was slightly below the standard Cronbach’s alpha = 0.70 for reliability (Nunnally 1978) was receiver social skill, having a Cronbach’s alpha = 0.690.


Table 5.1. Descriptive Statistics for Control Variables

Control Variable | Mean | Minimum | Maximum | Range | Standard Deviation
Deceiver motivation | 6.937 | 5.608 | 7.278 | 1.671 | .372
Deceiver social skill | 3.359 | 1.810 | 4.278 | 2.468 | .480
Deceiver political skill | 5.512 | 4.863 | 6.263 | 1.400 | .442
Receiver motivation | 4.300 | 2.375 | 5.763 | 3.388 | 1.051
Receiver social skill | 3.401 | 1.713 | 4.175 | 2.463 | .540
Receiver political skill | 5.540 | 4.713 | 6.375 | 1.662 | .518

Table 5.2. Reliability Statistics for Control Variables

Control Variable | Cronbach’s Alpha | Cronbach’s Alpha Based on Standardized Items | N of Items
Deceiver motivation | .949 | .954 | 18
Deceiver social skill | .728 | .718 | 30
Deceiver political skill | .889 | .886 | 18
Receiver motivation | .757 | .747 | 9
Receiver social skill | .690 | .683 | 30
Receiver political skill | .851 | .845 | 18
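
For readers who want to reproduce this kind of reliability check outside SPSS, the following Python sketch shows the standard Cronbach’s alpha computation and the averaged scale score used for the control variables. It is an illustration under assumptions (the example responses are fabricated for demonstration), not the study’s SPSS output.

import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of raw item responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

def scale_score(items):
    """Averaged score across a scale's items, as used for each control variable."""
    return np.asarray(items, dtype=float).mean(axis=1)

# Fabricated example: 5 respondents answering a 4-item, 5-point scale.
responses = [[4, 5, 4, 5], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 3, 4]]
print(round(cronbach_alpha(responses), 3))
print(scale_score(responses))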

Correlation Analysis

To determine whether the motivation, social skill, or political skill of the deceiver, or the motivation, social skill, or political skill of the receiver, was related to the dependent variable, deception detection accuracy, data from the respective questionnaires was analyzed for correlations between these control variables and the dependent variable. Results are shown in Table 5.3.


Table 5.3. Correlations

 | Receiver social skill | Receiver political skill | Deceiver motivation | Deceiver social skill | Deceiver political skill | Training | Warning | Medium | Lies | Lies detected
Receiver motivation | .259* | .061 | -.113 | .130 | -.036 | .363** | .339** | -.004 | -.158 | .285*
Receiver social skill | | .475** | -.025 | .154 | .117 | .055 | .005 | -.102 | .084 | .091
Receiver political skill | | | -.018 | .131 | .062 | -.049 | .026 | -.009 | .143 | -.049
Deceiver motivation | | | | .052 | -.105 | -.110 | -.116 | .108 | -.024 | -.041
Deceiver social skill | | | | | .617** | .012 | .049 | -.116 | -.009 | .035
Deceiver political skill | | | | | | -.054 | -.113 | -.120 | -.047 | -.024
Training | | | | | | | -.014 | .013 | -.178 | .295**
Warning | | | | | | | | -.014 | -.025 | .190
Medium | | | | | | | | | -.139 | .179
Lies | | | | | | | | | | .076
*p<0.05  **p<0.01


Receiver motivation was significantly correlated with the dependent variable, lies detected (.285, p<0.05). None of the other control variables tested was correlated with the dependent variable. Thus, in the following analyses of the hypotheses, only receiver motivation to detect deception was used as a covariate.
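
The covariate screening reported above can be expressed compactly in code. The sketch below is an assumption-laden illustration (the column names are invented for the example, and the study itself used SPSS): each control variable is correlated with lies detected and retained only if the correlation is significant.

from scipy import stats

CONTROLS = ["receiver_motivation", "receiver_social_skill", "receiver_political_skill",
            "deceiver_motivation", "deceiver_social_skill", "deceiver_political_skill"]

def screen_covariates(data, dv="lies_detected", alpha=0.05):
    """data: mapping of column name -> list of values, one entry per dyad."""
    keep = []
    for var in CONTROLS:
        r, p = stats.pearsonr(data[var], data[dv])
        print(f"{var:26s} r = {r:+.3f}  p = {p:.3f}")
        if p < alpha:
            keep.append(var)
    return keep

# covariates = screen_covariates(data)   # here, only receiver_motivation would remain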

The following section discusses the statistical analysis for session duration, lies, and detection accuracy. After that, the manipulation check is presented for the training sessions that were conducted a week before the task for subjects in the deception-cue training treatment. Then, hypotheses testing and results are discussed.

Session Duration, Lies, and Detection Accuracy

Statistical analyses were performed for session duration by medium, for the overall average number of lies per session and for each independent variable, and for the overall detection accuracy rate and the rates for each independent variable. Results are listed in Tables 5.4, 5.5, and 5.6. The average duration of the audio sessions (M = 6.68, S.D. = 2.546) was significantly different from the average duration of the e-mail sessions (M = 21.23, S.D. = 3.059), t(39) = 25.96, p < 0.001. Thus, the e-mail sessions, on average, were about 15 minutes longer than the audio sessions. Much of this difference is due to the nature of the medium, in that e-mail messages generally take longer to exchange than audio messages.

The ANOVA results comparing the difference in lies across medium, warning, and training show no statistical difference in the number of lies among the independent variable treatments. The average number of lies recorded for each treatment is shown in Table 5.5. The overall detection rate and the detection rates for each treatment are shown in Table 5.6. Although the overall average number of lies was relatively high (M = 8.89, S.D. = 3.383), the overall detection rate was very low (M = 0.05, S.D. = 0.147).


Table 5.4. ANOVA for Medium, Warning, Training, and Lies

Dependent Variable: lies

Source | Type III Sum of Squares | df | Mean Square | F | Sig.
Corrected Model | 51.338(a) | 3 | 17.113 | 1.525 | .215
Intercept | 6319.013 | 1 | 6319.013 | 563.238 | .000
medium | 21.013 | 1 | 21.013 | 1.873 | .175
warned | .313 | 1 | .313 | .028 | .868
train | 30.013 | 1 | 30.013 | 2.675 | .106
Error | 852.650 | 76 | 11.219 | |
Total | 7223.000 | 80 | | |
Corrected Total | 903.987 | 79 | | |
a. R Squared = .057 (Adjusted R Squared = .020)

Table 5.5. Descriptive Statistics of Lies for Medium, Warning, and Training

Dependent Variable: lies

medium | warned | trained | Mean | Std. Deviation | N
audio | 0 | 0 | 10.10 | 3.814 | 10
audio | 0 | 1 | 8.50 | 2.718 | 10
audio | 0 | Total | 9.30 | 3.326 | 20
audio | 1 | 0 | 10.30 | 3.401 | 10
audio | 1 | 1 | 8.70 | 3.831 | 10
audio | 1 | Total | 9.50 | 3.620 | 20
audio | Total | 0 | 10.20 | 3.518 | 20
audio | Total | 1 | 8.60 | 3.235 | 20
audio | Total | Total | 9.40 | 3.433 | 40
e-mail | 0 | 0 | 8.40 | 3.438 | 10
e-mail | 0 | 1 | 8.80 | 3.293 | 10
e-mail | 0 | Total | 8.60 | 3.283 | 20
e-mail | 1 | 0 | 9.20 | 2.530 | 10
e-mail | 1 | 1 | 7.10 | 3.900 | 10
e-mail | 1 | Total | 8.15 | 3.376 | 20
e-mail | Total | 0 | 8.80 | 2.966 | 20
e-mail | Total | 1 | 7.95 | 3.620 | 20
e-mail | Total | Total | 8.38 | 3.295 | 40
Total | 0 | 0 | 9.25 | 3.640 | 20
Total | 0 | 1 | 8.65 | 2.943 | 20
Total | 0 | Total | 8.95 | 3.281 | 40
Total | 1 | 0 | 9.75 | 2.971 | 20
Total | 1 | 1 | 7.90 | 3.851 | 20
Total | 1 | Total | 8.83 | 3.522 | 40
Total | Total | 0 | 9.50 | 3.289 | 40
Total | Total | 1 | 8.27 | 3.404 | 40
Total | Total | Total | 8.89 | 3.383 | 80

57

Table 5.6. Detection Accuracy Rates

                 Mean    Standard Deviation
Audio            0.02         0.077
E-mail           0.07         0.192
Warned           0.08         0.195
Not warned       0.02         0.059
Trained          0.07         0.178
Not trained      0.02         0.102
Overall          0.05         0.147

The following section discusses the manipulation check for the training sessions that were conducted a week before the task for subjects in the deception-cue training treatment.

Training Manipulation Check

Subjects were trained a week before their respective interviews. A pre-test consisting of 15 questions testing knowledge of deceptive cues was given. Then, the subjects watched a 20-minute video on deception-cue training. After viewing the training video, subjects completed a post-test consisting of the same 15 questions presented in a different order. Data from the pre-test and post-test were analyzed with a t-test in SPSS to determine whether there was a statistically significant improvement between the pre-test scores and the post-test scores. The results showed a significant improvement in deception-cue knowledge, t(39) = 15.08, p < 0.001. Descriptive statistics for the improvement scores, shown in Table 5.7, include the minimum (the lowest improvement for any subject) and the maximum (the highest improvement for any subject). The negative minimum shows that the subject with the lowest improvement answered one more question correctly on the pre-test than on the post-test. On average, subjects answered 4.82 more of the 15 questions correctly on the post-test than on the pre-test.
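A sketch of this manipulation check is shown below, assuming a hypothetical file of pre-test and post-test scores for the 40 trained subjects; it pairs each subject's two scores with a paired-samples t-test and summarizes the improvement scores reported in Table 5.7.

    # Sketch of the training manipulation check: paired t-test on the 15-item
    # pre-test and post-test scores. File and column names are assumed.
    import pandas as pd
    from scipy.stats import ttest_rel

    scores = pd.read_csv("training_scores.csv")  # assumed file, one row per trained subject

    t_stat, p_val = ttest_rel(scores["post_test"], scores["pre_test"])
    improvement = scores["post_test"] - scores["pre_test"]

    print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.4f}")
    print(improvement.agg(["min", "max", "mean", "std"]))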

58

Table 5.7. Descriptive Statistics of the Training Pre-test and Post-test

                 N    Minimum    Maximum    Mean     S.D.
Pre-test        40       3          9        6.25    1.532
Post-test       40       5         14       11.08    2.153
Improvement     40      -1          9        4.82    2.024

In the final sections, the hypotheses testing and results are discussed.

Tests of Hypotheses

All of the hypotheses in this research were designed to be tested using analysis of variance (ANOVA). Initially, ANOVA was considered the best analytical approach because comparing means between groups was the most direct way to see the differences among the effects of the training, warning, training-with-warning, and medium treatments on deception detection accuracy. After analysis of the questionnaires revealed that receiver motivation was a covariate of the dependent variable, lies detected, ANCOVA was used for the hypothesis testing instead, since ANOVA cannot accommodate covariates. Results of the hypothesis testing are discussed in the following sections.

Testing of the Model

An ANCOVA was run in SPSS on the dependent variable (number of lies detected), with the independent variables (training, warning, and medium), the interaction term (training by warning), and the covariate (receiver motivation to detect deception). Since ANCOVA is sensitive to outliers, cases more than two standard deviations from the mean of the dependent variable, lies detected, were excluded from the ANCOVA. Three outliers for the dependent variable were found

59

and taken out of the analysis. Results of the ANCOVA are shown in Table 5.8. The model was significant, F(5,71) = 3.153, p = 0.013. Descriptive statistics for all of the experimental treatments were run in SPSS and are shown in Table 5.9. As can be seen from the ANCOVA table, the only significant effect is for training, F(1,71) = 4.211, p = .044. Thus, Hypothesis Three was supported. There was a main effect for training: deception detection accuracy was greater for receivers who were trained in deception cue recognition (M = 0.32, S.D. = 0.57) than for those who were not trained (M = 0.05, S.D. = 0.22). Omega squared was calculated to determine the strength of association for training. The result, ω² = 0.031, indicates a small association. None of the other hypotheses was supported. The results of the statistical analysis are discussed in the next chapter.
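The analysis just described can be sketched as follows. The data file and column names are hypothetical, the outlier rule is the two-standard-deviation trim described above, and the omega-squared computation uses one common estimator, (SS_effect − df_effect × MS_error) / (SS_total + MS_error), which, with the sums of squares listed in Table 5.8, comes out near the reported 0.031.

    # Sketch of the ANCOVA on lies detected with a 2-SD outlier trim and an
    # omega-squared estimate for training. Names are assumed for illustration.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.read_csv("receivers.csv")  # assumed data file, one row per receiver

    # Drop cases more than two standard deviations from the mean of the DV
    y = df["lies_detected"]
    trimmed = df[(y - y.mean()).abs() <= 2 * y.std()]

    # Three factors, the training-by-warning interaction, and the covariate
    model = ols("lies_detected ~ C(trained) * C(warned) + C(medium)"
                " + receiver_motivation", data=trimmed).fit()
    table = sm.stats.anova_lm(model, typ=3)
    print(table)

    # One common omega-squared estimator for the training effect
    ss_effect = table.loc["C(trained)", "sum_sq"]
    df_effect = table.loc["C(trained)", "df"]
    ms_error = table.loc["Residual", "sum_sq"] / table.loc["Residual", "df"]
    ss_total = (trimmed["lies_detected"] ** 2).sum()  # uncorrected total SS
    omega_sq = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)
    print(f"omega squared (training) = {omega_sq:.3f}")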

Table 5.8. ANCOVA results for Medium, Warning, Training, and Receiver Motivation on Deception Detection Accuracy

Dependent Variable: lies identified on questionnaire

Source                  Type III Sum of Squares    df    Mean Square      F      Sig.
Corrected Model                 2.808(a)             5        .562       3.153   .013
Intercept                        .056                1        .056        .312   .578
trained                          .750                1        .750       4.211   .044
warned                           .267                1        .267       1.496   .225
medium                           .487                1        .487       2.733   .103
receiver motivation              .303                1        .303       1.702   .196
trained * warned                 .131                1        .131        .737   .393
Error                          12.646               71        .178
Total                          18.000               77
Corrected Total                15.455               76

60

Table 5.9. Descriptive Statistics for Medium, Warning, and Training on Deception Detection Accuracy

Treatment                   Medium     Mean    Std. Deviation    N
Trained and warned          Audio       .11         .333          9
Trained and warned          E-mail      .78         .833          9
Trained, not warned         Audio       .20         .422         10
Trained, not warned         E-mail      .20         .422         10
Warned, not trained         Audio       .10         .316         10
Warned, not trained         E-mail      .11         .333          9
Not trained or warned       Audio       .00         .000         10
Not trained or warned       E-mail      .00         .000         10
Audio total                             .10         .307         39
E-mail total                            .26         .544         38
Trained total                           .32         .574         38
Not trained total                       .05         .223         39
Warned total                            .27         .560         37
Not warned total                        .10         .304         40
Total                                                            77

61

CHAPTER 6

DISCUSSION

This study was concerned with the effects of training, warning, and media richness on deception detection accuracy. The relevant research literature on media richness, deception, training, and warning was reviewed in order to form the hypotheses for this research. Since the transmission of deceptive cues depends upon the degree of richness of the communication medium (Rao et al. 2000), it was hypothesized that the richer the medium used for communication, the greater the chance for the receiver to catch those cues and detect deception. In past research, giving a warning that deception may be occurring has been influential in increasing deception detection success (Biros et al. 2002; Stiff et al. 1992). These positive effects of warning on deception detection were also hypothesized to occur in this study. Another method for improving deception detection success that has been researched in the past is training people in deception cue recognition so that they have the potential to better recognize when deceptive messages are communicated. Although past research on training has had mixed results, more studies have found training to be successful than not (Frank et al. 2003). It has been suggested that researchers investigate the influence of giving training some time ahead of the deception detection task so that subjects have time to reflect on the training (Frank et al. 2003). This study took up that challenge and investigated the effects of giving training a week ahead of the detection task, with the hypothesis that training would positively influence deception detection accuracy. Finally, this study tested the hypothesis that training and warning together would be more effective at improving deception detection than training alone, warning alone, or the absence of training and warning. The results of the hypothesis testing are summarized in Table 6.1. Hypothesis 3, concerning the relationship between training and deception detection accuracy, was supported. Hypothesis 1, concerning the richness of the communication medium, was not supported. Hypotheses 2 and 4 were also not supported.

62

Table 6.1. Summary of Findings

H1: Deception detection accuracy will be greater for receivers who use richer media than for those who use leaner media.
Finding: Not supported. The richness of the communication medium was not significant at the p<0.05 level.

H2: Deception detection accuracy will be greater for receivers who are warned of the potential for deceptive communication than for those who are not warned.
Finding: Not supported. Warning was not found to have a significant effect on deception detection accuracy.

H3: Deception detection accuracy will be greater for receivers who are trained in deception cue recognition than for those who are not trained.
Finding: Supported. Training was positively related to deception detection accuracy.

H4: Deception detection accuracy will be greater for receivers who are trained in deception detection and warned of the potential for deceptive communication than for those who are trained only, warned only, or not trained or warned.
Finding: Not supported. The combination of training and warning together did not have a significant effect on deception detection accuracy.

The following sections discuss and interpret the findings for each of the four hypotheses.

Media Richness and Deception Detection Accuracy

Hypothesis 1 was concerned with how the richness of the communication medium affects deception detection. It was hypothesized that since deception detection relies on transmission of cues, detection success should vary with the richness of the communication media used, with richer media being better for deception detection. The richness of the communication medium was not found to significantly influence deception detection at the p<0.05 level.

63

Although fewer cues are generally transmitted via leaner media (Rao et al. 2000), this quality of the medium did not appear to have an impact in this study. Past studies have had mixed results for media richness influencing deception detection (Burgoon et al. 2003a; George et al. 2004a). Past research has suggested that audio may be the best medium for deception detection (Burgoon et al. 2003a). Since there was no significant finding for medium, this study did not lend support to the Burgoon et al. assertion that audio may be a better communication medium than e-mail for deception detection. Perhaps looking at the richness of the media alone is not sufficient to understand why there was no significant effect for medium in this study. Although audio had the advantage of allowing more deception cues to be transmitted, e-mail also had an advantage in that the receiver had the ability to re-read and re-analyze the deceptive messages because the communication was text based. When deceptive communication was transmitted via audio, the receiver had to rely on memory to recall what cues were transmitted. Since human memory is not perfect, most likely only a portion of the cues that were noticed could be recalled. Thus, there may not have been a great difference between the number of cues recalled from audio communication and the number of cues available for review in e-mail communication. This potential leveling out of the number of usable cues in richer versus leaner media could help explain why there have been mixed results for medium in past deception studies.

Warning and Deception Detection Accuracy

Hypothesis 2 posited that giving a warning that deception may be occurring during the communication process would improve one’s ability to detect deception. However, in this study, warning that deception may occur did not significantly influence deception detection accuracy. Since other studies (Biros et al. 2002; George et al. 2004a; McCornack et al. 1990) have mostly been favorable in showing that warning for deception can influence detection, the results for this hypothesis were somewhat unexpected.

64

Warning was given to try to mitigate the effects of truth bias. Since most people tend to believe that others are mostly telling the truth rather than mostly lying in everyday conversations (McCornack et al. 1986), it was hoped that giving a warning to the subjects before their deception detection task would alert them to be more diligent in detecting lies. In fact, the warning that around 40% of people lie on their resumes was given twice in the experiment, once during the initial instructions and another time immediately preceding the interview. One could speculate that in this study the effect of truth bias was so strong that a simple warning, even though given twice, was not enough to compensate for the desire to believe that others were telling the truth.

Training and Deception Detection Accuracy

Hypothesis 3 was concerned with the effect that training in deceptive cues had on deception detection accuracy. Results were supportive of Hypothesis 3. Thus, in this study, training in deceptive cues positively influenced deception detection accuracy. Those subjects trained in deception cue recognition detected more lies (M = 0.32, S.D. = 0.57) than those who were not trained (M = 0.05, S.D. = 0.22). These results are in line with previous research. In general, training has been influential in improving deception detection (Frank et al. 2003). In addition, training has been shown to lower the faulty cognitive heuristic of truth bias to improve detection rates (Stiff et al. 1992). In this study, subjects were trained a week before their interview sessions. In most other studies, subjects were trained just before the experimental session. It has been suggested that training with time given to reflect on the training may be more effective than training right before the detection task (Frank et al. 2003). This study is one of the few that has investigated training with reflection time.

Subjects were trained with a validated deception-cues training video that had been used in previous studies. A pre-test and a post-test were administered to each subject in the deception-cues training treatment. These tests were shorter versions of previously validated tests. Overall, the subjects improved in detection knowledge, with an average pre-test score of 6 out of 15 questions correct and an average post-test score of 11 out of 15 questions correct. The positive results for training from the hypothesis testing showed that

65

these subjects were then able to retain this knowledge and apply it to the deception detection task, giving them a significant advantage over the other subjects.

Interaction of Training with Warning and Deception Detection Accuracy

The final hypothesis, Hypothesis 4, stated that deception detection accuracy would be greater for receivers who were trained in deception cue recognition and warned of the potential for deceptive communication than for those who were trained only, warned only, or not trained or warned. This hypothesis was not supported. Instead, subjects in the training treatment were significantly better detectors than subjects with both training and warning. This result is counter-intuitive, since it seems that if training improves deception detection accuracy, then adding warning to the training should improve detection accuracy even more. It is possible that training in itself served as a warning, since subjects who viewed a training video on deception cue recognition may have suspected that deception would occur in the remainder of the experiment. In the warning treatment, even two warnings were not enough to significantly improve deception detection accuracy. Therefore, if training is considered a substitute for a third warning, that additional warning may not have been strong enough to overcome truth bias. It is also possible that subjects who had both training and warning experienced more cognitive overload than those who had only the training, accounting for part of the reason the training-only group was more successful at deception detection. Another preliminary study that examined both training and warning on deception detection ability had similarly disappointing results for the interaction of training and warning on deception detection (Biros 1998). However, in that study warning alone was significant, but training alone was not. The differences between the results of this experiment and that study suggest that further research may be necessary to better understand the lack of significance when training and warning are combined.

66

Summary

The main goals of this research were to examine the influence of training, warning, and media richness on deception detection accuracy. This chapter discussed and attempted to explain the results of the hypothesized relationships. A significant effect was found for only one of the hypotheses, that training improves deception detection accuracy. The other hypotheses were not supported.

The final chapter will summarize what has been learned from this study. It will also discuss the implications of this research for both researchers and managers. Finally, the strengths and limitations of this study will be discussed and suggestions for future research direction will be addressed.

67

CHAPTER 7

CONCLUSION

This research was motivated by the need to understand how to increase deception detection ability, especially when deception is communicated via electronic media. Since research on deception detection via electronic media is still in its beginning stages, this study attempted to add more basic understanding to that line of research. The goal of this study was to understand how training, warning, and media richness affect deception detection ability. First, by training people to be aware of the cues that are leaked in the deceptive communication process, it was hoped that awareness of these cues would lead to greater detection success. Second, by warning people that deception may occur, it was hoped that detection success would increase. Third, by giving people who had been trained in deceptive cue recognition a warning that deception may occur, it was hoped that the interaction of these two potential influences on detection success would be more fully understood. Finally, by examining the difference in detection success between a communication medium high in media richness and a communication medium low in richness, the effects of media richness on deception detection would be better understood.

A summary of the key findings in this study is presented in the first section below. The strengths of this research are discussed in the second section. The third section discusses limitations of this research. In the fourth section, implications for future academic research are discussed. The fifth section discusses management implications. The last section presents a summary of the study.

Summary of Findings

A key goal of this research was to learn how deception detection accuracy could be improved. In conducting this research, it was hoped that the researcher’s questions on

68

the effects of media, training, warning, and the combination of training and warning would be answered. The following is a discussion of how those questions were answered in this research.

The first research question asked if the richness of the computer-mediated communication medium would affect how well people detect deception. It was hypothesized that those subjects who used a communication medium higher in media richness would have better deception detection accuracy. This hypothesis was not supported at the p<0.05 level. Detection accuracy did not significantly differ based upon the richness of the communication medium.

The second research question asked if training would make people better at detecting deception. The hypothesis that training people in deception cue recognition would make them better detectors was supported. Training in deceptive cues positively influenced deception detection accuracy. Subjects who viewed a deception-cues training video a week before being given the deception detection task were better at detecting deception than those who were not trained in deception cue recognition. Thus, the question of the effectiveness of training on deception detection accuracy was answered in this study in a positive direction.

The third research question asked if warning people that deception may be occurring would make them better at detecting deception. The hypothesis that warning would improve deception detection was not supported. Subjects who were warned before the start of the communication process that deception may occur did not detect deception significantly better than subjects who were naïve. The warned subjects were told that about 40% of people lie on their resumes. In a previous study, this warning was effective (George et al. 2004a). However, perhaps this was not a strong enough warning for this study, since 40% implies that most people (60%) would not lie. Results may have been different if the warning percentage had been changed from 40% to 60%, because that warning would imply that more people lie on their resumes than tell the truth. That stronger warning could possibly make subjects more alert to the possibility that lies would be occurring. This leaves the question of the effectiveness of warning on

69

deception detection accuracy unanswered and brings up more questions about the nature of the warning.

The last research question asked if a combination of training and warning would make people better at deception detection than those who were trained only, warned only, or not trained or warned. The hypothesis that training with warning would improve deception detection accuracy more than training alone, warning alone, or no training or warning was not supported. Subjects who received both the deception cue recognition training and a warning just before the interview commenced did not perform better in deception detection accuracy than the other groups. Previous research also found that the interaction of training and warning had no significant effect on improving deception detection ability (Biros 1998). It would seem that if training was significant in improving deception detection accuracy, then adding a warning that deception may occur would only serve to improve detection further. However, perhaps the training itself acted like a warning, since subjects who viewed the training video on deceptive cues would likely expect the rest of the experiment to be related to deception. This raises more questions than answers about the interaction of training and warning on deception detection accuracy.

In summary, training in deception cue recognition was the only significant factor in this study that increased deception detection accuracy. Giving verbal warnings that deception may occur did not significantly improve detection accuracy. Also, the combination of training and warning was not significant in improving deception detection. Finally, the richness of the medium used for communication in conversations where deception occurred did not significantly influence deception detection accuracy. As with all research, this study had its strengths and limitations. The next sections will discuss some of the strengths and limitations of this research.

70

Strengths

The main strength of this research is the contribution to the training literature on deception. This study is one of the few that looked at the effects of training in deceptive cue recognition when that training was given some time before the experimental task. Most other studies on the influence of training on deception detection positioned the training immediately before the experimental task. It had been suggested that studies should be conducted on the timing of training (Frank et al. 2003), and this study examined the effect of training given a week ahead of the deception-detection task. Results showed that subjects trained a week ahead of the experimental task were significantly better at deception detection than those who were not trained. The positive results of this study may inspire other researchers to look more closely at the timing of training for deception detection.

Another strength of this research is that the experimental task had a high degree of realism. Subjects made themselves look better than they actually were by enhancing their resumes with deceptive information. This enhancement of resumes with fake qualifications and skills happens often in the business world. Thus, the results of this experiment can be directly related to improving the hiring of employees by giving interviewers training in deception cue recognition.

Another strength of this research is the high level of control of the experiment. Subjects were grouped into defined treatments that left little room for ambiguity. Since subjects were devoted to the experimental session for a pre-defined time, they could focus exclusively on the experimental task without other distractions. All of the deceivers had the same resume template so that data collected could be easily compared. The tight experimental control made it easy for researchers to collect all of the data that was later coded and used for the analysis.

The final contribution of this research is that it mimicked what happens in actual deceptive communication. By using Interpersonal Deception Theory as a basis for the design of this study, subjects were able to interact more naturally without pre-defined

71

scripts. Since interviewers were able to ask questions of their choice, the interviews modeled what would happen in real life situations.

Limitations

While this study had a number of strengths, it was not without its limitations. One of the limitations of this study is that the methodology was an experiment. Although experiments are good for control, as discussed in the previous section, they are not very good for generalizing results to the general population. Therefore, the results of this experiment need to be viewed as occurring in one case and not generalizing to all cases of deception detection.

Another limitation of this study is that it used student subjects. The use of students as subjects has been criticized by researchers because of their general belief that students do not adequately represent the general population. However, students may be good representatives of the general population in their propensity to lie. Certainly, many professors have lamented about the deceptive statements they believe students give them about the reasons for not completing their work assignments. Also, it is difficult to study deception and its detection in field settings because of the potential consequences to subjects. For example, if people tell researchers that they lied on their resumes, their jobs may be at risk if that information gets back to their supervisors. Therefore, subjects in real life settings may not be as forthcoming about telling the researchers about their deceptive experiences, thus compromising the research results. As a result of this difficulty in the field, experimental research is a good method because of its inherent potential for tight control.

A final limitation of this study is that although subjects were paired by major, not all dyad partners had the same amount of experience in the major. In some cases, one of the dyad members was a senior having taken more classes in the major, while the other was just starting the major. This however did not create a large problem, since most students are aware of classes they will have to complete in the future and also talk with other students about their experience in classes. Also, in reality, not everyone involved in

72

the communication process that involves deception has the exact same amount of experiential background.

Even though this study, as is the case with most studies, had its limitations, there was knowledge gained that can be applied to academic and managerial situations. The following sections discuss implications for future academic research and managerial implications.

Implications for Future Academic Research

The findings of this study have some implications for future academic research. First, only one of the control variables was correlated with lie detection. There was some theoretical basis for thinking that social skill and political skill would have an influence on deception detection. However, this was not the case in this study.

People with high social skill tend to have the ability to control and regulate emotional and nonverbal displays and also can accurately interpret the subtle emotional cues of others. Since, in this study, communication occurred via electronic media and not face-to-face, it is possible that the media filtered out some of those emotional cues, such as facial expressions. Thus, the theories of social skill may not apply when communication is computer-mediated. Social skill theory also states that people with high social skill have the ability to initiate and guide conversations and understand the norms governing appropriate social behavior. In this study, the task of interviewing another person was not one that the subjects had a lot of experience with. It is possible that the subjects’ lack of experience lowered their social skill in this particular task. That may account for part of the reason that social skill did not correlate with deception detection ability.

The theoretical aspect of political skill that has the most pertinence to deception detection ability is the ability of people with high political skill to appear sincere even if they are not. It would seem that people with high political skill would be better liars, but that was not the case in this study, since there was no correlation between deceiver political skill and lie detection. It is possible that, as with social skill, political skill is less

73

effective with computer-mediated communication since influence cues may not be as effectively transmitted as in face-to-face communication. It is also possible that, since the subjects were students, they did not have enough life experiences to develop political skill sufficiently to significantly mask their deception.

Second, since this study was conducted using communication that was computer-mediated, one of the factors in question was the effect of the richness of the medium on deception detection success. Although the findings did not support any difference due to the richness of the medium, this is not surprising, since research on media richness and deception detection is still in its early stages and past studies have had mixed results. More research in this area is suggested for a better understanding of how the richness of the communication medium affects deception detection success.

Third, although giving a warning to alert people that deception may be occurring has had success in general in previous research, it was not successful in this study. This implies that it is important not to automatically assume that warnings will always improve deception detection. Improving deception detection accuracy may be more complex than just giving simple warnings. In future research it may be important to look at the wording of warnings given to alert subjects that deception may be occurring. In this research, subjects were told that around 40% of people lie on their resumes. However, since 40% is less than half, subjects may have interpreted this to mean that the majority of people would not lie on their resume. Future studies could examine the effects of changing warning percentages to 50% (equal chance of deception and no deception) or to over 50% for a higher level of warning.

Fourth, the success of training in improving deception detection accuracy in this study is a contribution to the literature. It adds to the general base of knowledge of training and deception detection and adds support to the effectiveness of training to improve deception detection. A main contribution of this study is the timing of the training. In past studies, most training in deceptive cue recognition has been given immediately preceding the experimental task. In this study, training was conducted a week before the experimental task, thus giving the subjects time to reflect upon the training. This reflection time may have contributed to the effectiveness of the training.

74

A suggestion for future research is to examine the effect of timing of the training on deception detection success. More studies could be conducted that experiment with the effectiveness of giving deception cue recognition training some time ahead of the experimental task vs. training just prior to the experimental task.

Finally, in this study, the interaction of training and warning did not significantly improve deception detection. This implies that training alone may be just as effective as training with warning. In fact, training alone may be better than just giving a simple warning for improving deception detection accuracy. It may be important in future research to tease out the effects of training and warning. It is not well understood why giving both training and warning does not improve deception detection ability over giving just either training or warning. More research is suggested to better understand this counter-intuitive finding.

In summary, as with most research, this study surfaces more questions for future research. Why does it seem theoretically plausible that social skill and political skill would influence deception detection, yet they had no effect in this study? Why was training alone more effective in influencing deception detection success than training and warning together? What is the best timing for deception cue recognition training? Why are warnings alone not always effective? What is the most effective computer-mediated communication medium for deception detection success? It is hoped that future research will address some or all of those questions.

Management Implications

These research results also have implications for managers. The experimental task in this research was resume enhancement and interviewing when deception occurs on resumes. This has important implications for managers, since applicants for jobs are not always truthful on their resumes and applications. Hiring people who do not have the skills they say they have can be costly to organizations.

When managers are interviewing applicants for jobs, they may need to be aware that cues to deception are not transmitted equally via different communication media. If

75

managers are interviewing via voice, such as in telephone interviews, fewer cues will be transmitted than if the conversation were face-to-face. Also, if conversation takes place via e-mail, even fewer cues will be transmitted. Although this research did not find that the richness of the communication medium had an effect on deception detection accuracy, managers still may be more effective if they understand that there is a difference in the medium.

This leads to the understanding that training in deceptive cue recognition may make interviewers more successful in detecting deception than if they were not trained. Part of the training should be about the difference in cue transmission across electronic media varying in richness. In this study, training was shown to increase deception detection success. Managers could potentially improve the quality of their new employees by training interviewers to detect applicants who are being deceptive, thus eliminating those less qualified applicants from the applicant pool.

Finally, in this study warning did not prove to be an effective way to improve deception detection. Managers may want to focus more on training interviewers in deceptive cue recognition than on giving them only a general warning that some resumes may be deceptive. Although instituting a training program would be more costly in the short run, it may be less costly in the long term by weeding out those applicants who are deceptive.

In summary, this research had practical implications for managers. It may not be effective enough just to give the interviewers a generic warning to be on the alert for deception. If managers invest in training people who will be interviewing potential employees to understand cues to detect deception, they will have the potential to weed out those applicants who really do not have the prerequisite skills needed.

Summary

As computer-mediated communication becomes more prevalent, it is important to understand how deceptive information can be detected when communicated over electronic media. The literature shows that there are potential ways of improving

76

deception detection when the deception is communicated via electronic media. This study adds to that knowledge base. The results of this study demonstrate that training in deceptive cue recognition before the communication process takes place is effective for increasing deception detection accuracy. In most past studies, subjects were trained immediately prior to the experimental task. In this study, subjects were trained a week ahead of the experimental task. This gave them time to reflect on the training and possibly to apply what they learned in their everyday experiences. In this study, training was the only treatment that improved deception detection accuracy. Treatments that warned of potential deception or varied the richness of the computer-mediated communication medium did not significantly improve deception detection. There are many opportunities for future studies to improve understanding in this emerging field of research.

77

APPENDIX A

INSTRUCTIONS TO THE DECEIVERS

78

Instructions to the Deceivers (Interviewees)

“Now, let me tell you about what we will be doing today. The College of Business, which we are both a part of, is considering in the future to award a scholarship to whomever they deem to be the ‘top business student.’ It is my understanding that it would be geared toward students about to graduate and thinking about attending grad school. But what does that mean, the ‘top business student?’ Are we talking grades, activities, job experience, a combination of all that? Hopefully you can help us determine that. We need to be able to set a bar… minimum standards… for interested people to meet in order to apply… you don’t necessarily want everyone applying. You’ve been thru the program for the most part, you know what is valued around here, so your opinion is actually pretty valuable.

The way we’re going to get your opinion is a little different though, that’s why the computer is here. This (pulls up WordPad) is a sample application that someone applying for the scholarship might fill out. What I’d like for you to do is to fill this out as if you’re applying. Imagine yourself sitting out the kitchen table… that kind of thing. There’s one catch though… when filling this out, what I’d like for you to do is to make yourself appear as competitive as possible. The impression that you should leave after filling this out would be for a judge or committee member to look at it and say, ‘Wow, ______ has got this won hands down. No point at looking at any others.’

I’ll give you a few minutes and we’ll see what you come up with.”

79

APPENDIX B

INSTRUCTIONS TO RECEIVERS

80

Instructions to the Receivers (Interviewers)

“Today you’re going to get a chance to look at it from the other side of the interview table. We have someone who has just helped us out by applying for a scholarship that the college of business is considering giving in the future to the ‘top business student.’ That person is standing by right now, and we’d like for you to interview him to see if he’s qualified.

Now, this is not going to be an interview like what you’re probably more accustomed to, where you’re sitting across the table from someone asking you questions. We’re going to have this interview across the computer using ______ (pulls up relevant software). Since the person you’ll be interviewing won’t be in the same room with you, he has sent his application electronically. What you’ll need to do is look this up and down, and try to find out as much additional information about his background as you can.

• To suspicion-induced interviewers only: “Now, keep in mind, studies have shown that up to 40% of all job applicants lie on the résumés or applications, it doesn’t matter what field of work they’re in or how high up a position they’re going for.”

You’ve got up to 20 minutes for this interview. Look over the application, and you can begin when you’re ready.”

81

APPENDIX C

BUSINESS SCHOLARSHIP APPLICATION FORM

82

Business School Scholarship Application

Name: Estimated Graduation Date: Street Address: City: Tallahassee State: FL Zipcode: Phone:

Academic History:

Major Core Courses: Grade Semester Completed

Overall GPA: Major GPA: Minor GPA: 2nd Major GPA:

Work Experience:

Related to Major:

Firm: Position: Duration:

Not Related to Major:

Firm: Position: Duration:

Skills Related to Major:

Community Experiences:

Offices Held in University Organizations:

Office: Organization: Duration:

Office: Organization: Duration:

Community Service:

Organization: Role: Duration:

Organization: Role: Duration:

83

APPENDIX D

MOTIVATION TO DECEIVE QUESTIONNAIRE

84

Dyad# _____

INTERVIEWEE PRE-INTERACTION QUESTIONNAIRE

In a few minutes, you will begin your discussion with Person B. Now that you have received your instructions, we would like to know what you expect the upcoming conversation to be like and what is important to you during it.

People may enter conversations with a lot of different goals in mind, some more important than others. For each of the following, please rate how important it is to you in the upcoming conversation by placing a number from 1 to 7 in the space provided. If “1" means it is not at all important to you and “7" means it is very important to you,

How important is it to YOU to:

14. succeed in making Person B believe you? _____
15. keep Person B from discovering you are not telling the truth? _____
17. give convincing answers? _____
18. keep Person B from becoming suspicious? _____

[Note: these were embedded in the following set of items]

How important is it to YOU to:

1. keep the conversation moving along smoothly? _____
2. make a good impression in this interaction? _____
3. get along with the other person? _____
4. be seen in a positive light? _____
5. keep from embarrassing yourself? _____
6. feel comfortable during the interaction? _____
7. avoid awkward silences? _____
8. make Person B feel comfortable during the discussion? _____
9. keep from appearing nervous? _____
10. appear poised and at ease? _____
11. get the conversation over quickly? _____
12. appear as normal as possible? _____
13. establish a good relationship with Person B? _____
14. succeed in making Person B believe you? _____
15. keep Person B from discovering you are not telling the truth? _____
16. appear involved and interested in what Person B is saying? _____
17. give convincing answers? _____
18. keep Person B from becoming suspicious? _____

85

APPENDIX E

MOTIVATION TO DETECT DECEPTION QUESTIONNAIRE

86

Dyad# _____

INTERVIEWER POST-INTERACTION QUESTIONNAIRE

You have finished your interviewing session. We would like to know what you experienced and expected from the conversation.

People may enter conversations with a lot of different goals in mind, some more important than others. For each of the following, please rate how strongly you disagree or agree by circling a number from 1 to 7 in the space provided. If “1" means strongly disagree and “7" means strongly agree,

1. I didn't pay any special attention to the interviewee’s communication. 1 2 3 4 5 6 7

2. I expected the interviewee to tell the truth. 1 2 3 4 5 6 7

3. I tried really hard to discover if the interviewee was lying. 1 2 3 4 5 6 7

4. I watched carefully to see what the interviewee said and did. 1 2 3 4 5 6 7

5. I was suspicious of what the interviewee said. 1 2 3 4 5 6 7

6. It mattered to me whether the interviewee was being completely up-front and truthful. 1 2 3 4 5 6 7

7. I was more attentive to the interviewee’s communication than I would be in normal conversation. 1 2 3 4 5 6 7

8. I had a feeling that something was wrong with the interviewee’s answers. 1 2 3 4 5 6 7

9. I didn't try very hard to find out if the interviewee was telling the truth. 1 2 3 4 5 6 7

87

APPENDIX F

SOCIAL SKILLS INVENTORY

88

SELF-DESCRIPTION INVENTORY

Below are a series of statements that indicate an attitude or behavior that may or may not describe you. Read each statement carefully. Then, using the scale shown below, decide which response most accurately reflects your answer and circle that number following the statement. There are no right or wrong answers. It is important to respond to every statement. ______1 = Not at all like me 2 = A little like me 3 = Like me 4 = Very much like me 5 = Exactly like me ______

1. It is difficult for others to know when I am sad or depressed. 1 2 3 4 5
2. It is nearly impossible for people to hide their true feelings from me. 1 2 3 4 5
3. I am very good at maintaining a calm exterior, even when upset. 1 2 3 4 5
4. I enjoy giving parties. 1 2 3 4 5
5. I am greatly influenced by the moods of those around me. 1 2 3 4 5
6. I can fit in with all kinds of people, young and old, rich and poor. 1 2 3 4 5
7. I have been told that I have expressive eyes. 1 2 3 4 5
8. I dislike it when other people tell me their problems. 1 2 3 4 5
9. People can always "read" my feelings even when I'm trying to hide them. 1 2 3 4 5
10. It takes people quite a while to get to know me well. 1 2 3 4 5
11. What others think of my actions is of little or no consequence to me. 1 2 3 4 5
12. I am usually very good at leading group discussions. 1 2 3 4 5
13. I often laugh out loud. 1 2 3 4 5
14. I am easily able to give a comforting hug or touch to someone who is distressed. 1 2 3 4 5
15. I am able to conceal my true feelings from just about anyone. 1 2 3 4 5
16. I am usually the one to initiate conversations. 1 2 3 4 5
17. I can be strongly affected by someone smiling or frowning at me. 1 2 3 4 5
18. When in groups of people, I have trouble thinking of the right things to talk about. 1 2 3 4 5
19. My facial expression is generally neutral. 1 2 3 4 5
20. When my friends are angry or upset, they seek me out to help calm them. 1 2 3 4 5
21. I am not very skilled at controlling my emotions. 1 2 3 4 5
22. At parties I enjoy talking to a lot of different people. 1 2 3 4 5
23. I would feel out of place at a party attended by a lot of very important people. 1 2 3 4 5
24. I am not very good at mixing at parties. 1 2 3 4 5
25. I rarely show my anger. 1 2 3 4 5
26. I am often told that I am a sensitive, understanding person. 1 2 3 4 5
27. I am easily able to make myself look happy one minute and sad the next. 1 2 3 4 5
28. I love to socialize. 1 2 3 4 5
29. There are certain situations in which I find myself worrying about whether I am doing things right. 1 2 3 4 5
30. I am often chosen to be the leader of a group. 1 2 3 4 5

89

APPENDIX G

POLITICAL SKILL INVENTORY

90

Political Skill scale

Instructions: Using the following scale, please place a number in the blank next to each item that best describes how much you agree with each statement about yourself in your work environment.

1 = Strongly Disagree   2 = Disagree   3 = Slightly Disagree   4 = Neutral   5 = Slightly Agree   6 = Agree   7 = Strongly Agree

1. _____ I spend a lot of time and effort at work networking with others.
2. _____ I am able to make most people feel comfortable and at ease around me.
3. _____ I am able to communicate easily and effectively with others.
4. _____ It is easy for me to develop good rapport with most people.
5. _____ I understand people very well.
6. _____ I am good at building relationships with influential people at work.
7. _____ I am particularly good at sensing the motivations and hidden agendas of others.
8. _____ When communicating with others, I try to be genuine in what I say and do.
9. _____ I have developed a large network of colleagues and associates at work who I can call on for support when I really need to get things done.
10. _____ At work, I know a lot of important people and am well connected.
11. _____ I spend a lot of time and effort at work developing connections with others.
12. _____ I am good at getting people to like me.
13. _____ It is important that people believe I am sincere in what I say and do.
14. _____ I try to show a genuine interest in other people.
15. _____ I am good at using my connections and network to make things happen at work.
16. _____ I have good intuition or “savvy” about how to present myself to others.
17. _____ I always seem to instinctively know the right things to say or do to influence others.
18. _____ I pay close attention to peoples’ facial expressions.

91

APPENDIX H

TRAINING PRE-TEST

92

Training Introduction Quiz

1. A simple way to define deception is: a) a message that is inaccurate in its content and assumptions b) a message that is purposely used to foster a false conclusion in others c) a message that contradicts the beliefs of the majority of society d) a message that blatantly breaks the norms of a society’s culture

2. Typically, people successfully detect deception about ______of the time. a) 20% b) 50% c) 80% d) 90%

3. The tendency for most human beings to believe other people are honest by default is known as the ______. a) trust bias b) truth bias c) lie bias d) gullibility bias

4. Which of the following would NOT directly lead to better detection accuracy? a) familiarity with the communicative sender b) experience using with the communicative medium c) familiarity with the topic of conversation d) experience in high-risk situations

5. Past studies of deception detection were: a) limited in the amount of interaction between communicators b) highly dynamic in nature c) conducted using large groups of people d) looked at deceptive communication of long periods of time

6. In response to the question “How much experience do you have driving commercial vehicles?”, the dishonest response of “Yes, I have driven a dump truck” would be an example of what type of deception? a) fabrication b) concealment c) equivocation d) misconception

7. Studies have shown that up to ______of all job applicants, no matter what field or position, have lied on their resumes. a) 10% b) 25% c) 40% d) 75%

93

8. Which of the following is NOT a linguistic property? a) the use of pronouns b) submissive language c) temporal distancing d) voice pitch

9. The concept that deceivers are not able to control indicators pointing to their dishonesty is the idea behind: a) leakage theory b) interpersonal deception theory c) truth bias d) immediacy theory

10. Applying lie detection skills and staying focused for long periods of time is known as: a) leakage b) arousal c) vigilance d) truth bias

11. Interpersonal Deception Theory views lying as being a ______process. a) static b) dynamic c) enclosed d) stagnant

12. Lies contain more: a) positive language b) definite details c) imagery d) misspellings

13. What would be a reliable vocal indicator of deception? a) slowed rate of speech b) lower voice pitch c) relaxed laughter d) few pauses in speech

14. A truthful message is more likely to contain: a) larger words b) smaller words c) simple sentence structure d) lack of emotion

15. Deception does NOT occur when people are communicating by: a) cell phone b) e-mail c) videoconferencing d) None. Deception occurs in any mode.

94

APPENDIX I

TRAINING POST-TEST

95

Post-training Quiz

1. Typically, people successfully detect deception about ______of the time. a) 25% b) 50% c) 75% d) 100%

2. A simple way to define deception is: a) a message that is inaccurate in its content and assumptions b) a message that contradicts the beliefs of the majority of society c) a message that is purposely used to foster a false conclusion in others d) a message that blatantly breaks the norms of a society’s culture

3. Which of the following would NOT directly lead to better detection accuracy? a) familiarity with the communicative sender b) experience using with the communicative medium c) familiarity with the topic of conversation d) experience in high-risk situations

4. The concept that deceivers are not able to control indicators pointing to their dishonesty is the idea behind: a) leakage theory b) interpersonal deception theory c) truth bias d) immediacy theory

5. In response to the question “How much experience do you have driving commercial vehicles?”, the dishonest response of “Yes, I have driven a dump truck” would be an example of what type of deception? a) fabrication b) equivocation c) concealment d) misconception

6. A truthful message is more likely to contain: a) smaller words b) larger words c) simple sentence structure d) lack of emotion

7. Applying lie detection skills and staying focused for long periods of time is known as: a) leakage b) vigilance c) arousal d) truth bias

96

8. Interpersonal Deception Theory views lying as being a ______process. a) static b) enclosed c) stagnant d) dynamic

9. The tendency for most human beings to believe other people are honest by default is known as the ______. a) truth bias b) trust bias c) lie bias d) gullibility bias

10. Lies contain more: a) positive language b) definite details c) imagery d) misspellings

11. Which of the following is NOT a linguistic property? a) voice pitch b) the use of pronouns c) submissive language d) temporal distancing

12. What would be a reliable vocal indicator of deception? a) lower voice pitch b) relaxed laughter c) slowed rate of speech d) few pauses in speech

13. Deception does NOT occur when people are communicating by: a) cell phone b) e-mail c) videoconferencing d) None. Deception occurs in any mode.

14. Studies have shown that up to ______of all job applicants, no matter what field or position, have lied on their resumes. a) 10% b) 25% c) 40% d) 75%

15. Past studies of deception detection were: a) highly dynamic in nature b) conducted using large groups of people c) limited in the amount of interaction between communicators d) looked at deceptive communication of long periods of time

97

APPENDIX J

POST-INTERVIEW QUESTIONNAIRES

98

Post-experiment questionnaire (person interviewed)

Below are a series of adjective pairs that are often used to evaluate interviewers. Each is on a 1 to 7 scale measuring degrees of the adjectives, with 1 representing a high degree of the adjective on the left and 7 representing a high degree of the adjective on the right.

Using the adjective pairs below, please select the number that best reflects your general impressions of the person who interviewed you. You may select 1, 2, 3, 4, 5, 6, or 7. If you are neutral or unsure, select a 4. Write the desired number in the appropriate box for each partner. Work quickly, indicating your first response.

Interviewer (each adjective pair is rated on a 1 to 7 scale)

Very friendly 1 2 3 4 5 6 7 Very unfriendly
Very trustworthy 1 2 3 4 5 6 7 Very untrustworthy
Very likable 1 2 3 4 5 6 7 Not likable at all
Very deceptive 1 2 3 4 5 6 7 Very truthful
Very credible 1 2 3 4 5 6 7 Not at all credible
Very unsociable 1 2 3 4 5 6 7 Very sociable
Very dishonest 1 2 3 4 5 6 7 Very honest
Very persuasive 1 2 3 4 5 6 7 Not at all persuasive
Very irresponsible 1 2 3 4 5 6 7 Very responsible
Very confident 1 2 3 4 5 6 7 Very unconfident
Very calm 1 2 3 4 5 6 7 Very tense
Lacked influence 1 2 3 4 5 6 7 Very influential
Very insightful 1 2 3 4 5 6 7 Lacked insight
Very experienced 1 2 3 4 5 6 7 Very inexperienced
Very sluggish 1 2 3 4 5 6 7 Very energetic
Very quiet 1 2 3 4 5 6 7 Very talkative
Very uncomposed 1 2 3 4 5 6 7 Very composed
Very nervous 1 2 3 4 5 6 7 Very relaxed
Affected my decisions 1 2 3 4 5 6 7 Did not affect my decisions
Very inexpert 1 2 3 4 5 6 7 Very expert
Very incompetent 1 2 3 4 5 6 7 Very competent
Very active 1 2 3 4 5 6 7 Very passive
Very dominant 1 2 3 4 5 6 7 Very submissive
Very uncomfortable 1 2 3 4 5 6 7 Very comfortable
Not willing to listen 1 2 3 4 5 6 7 Willing to listen
Very similar to me 1 2 3 4 5 6 7 Very different from me
Highly involved 1 2 3 4 5 6 7 Not at all involved
Very distracted 1 2 3 4 5 6 7 Very attentive
Thinks like me a lot 1 2 3 4 5 6 7 Doesn’t think like me
Very understanding 1 2 3 4 5 6 7 Not at all understanding
Very detached 1 2 3 4 5 6 7 Very engaged
Very much like me 1 2 3 4 5 6 7 Very much unlike me
Very bored 1 2 3 4 5 6 7 Very interested
Very open to my ideas 1 2 3 4 5 6 7 Very closed to my ideas
Very cold 1 2 3 4 5 6 7 Very warm
Very negative 1 2 3 4 5 6 7 Very positive
Created closeness 1 2 3 4 5 6 7 Created a sense of distance
Not at all accepting 1 2 3 4 5 6 7 Very accepting
Promoted cooperation 1 2 3 4 5 6 7 Did not promote cooperation

99

Recall anything false that you intended to convince the interviewer to believe. Rate your performance by circling one of the values that indicate your success. 1 = strongly disagree 2 = disagree 3 = not sure 4 = agree 5 = strongly agree

I was already familiar with the interviewer. 1 2 3 4 5
I was able to make the false information believable. 1 2 3 4 5
The interviewer seemed to accept the false information. 1 2 3 4 5
The interviewer did not suspect me. 1 2 3 4 5
The interviewer did not question the false information I provided. 1 2 3 4 5
The interviewer agreed with all the input I provided. 1 2 3 4 5
The interviewer trusted me. 1 2 3 4 5
I gave evasive and ambiguous answers to questions. 1 2 3 4 5
I took a long time before responding to any questions. 1 2 3 4 5
I was wishy-washy in my answers. 1 2 3 4 5
My answers to the interviewer’s questions were consistent. 1 2 3 4 5
I felt relaxed and at ease during the interview. 1 2 3 4 5
I was very tense during the interview. 1 2 3 4 5
I gave very brief answers. 1 2 3 4 5
I was successful in making a good impression on the interviewer. 1 2 3 4 5
I successfully deceived the interviewer. 1 2 3 4 5

Finally, rate the communication process itself by writing the appropriate number in the blank.

The Process
Very helpful 1 2 3 4 5 6 7 Very unhelpful
Very unreliable 1 2 3 4 5 6 7 Very reliable
Very easy 1 2 3 4 5 6 7 Very difficult
Very effortful 1 2 3 4 5 6 7 Very uneffortful
Very satisfying 1 2 3 4 5 6 7 Very unsatisfying
Very useful 1 2 3 4 5 6 7 Very useless
Very unpleasant 1 2 3 4 5 6 7 Very pleasant
Very enjoyable 1 2 3 4 5 6 7 Very unenjoyable
Very expected 1 2 3 4 5 6 7 Very unexpected

Do you believe the interviewer was suspicious of your answers? _____ YES _____ NO

If yes, what makes you believe the interviewer was suspicious?

Did the interviewer publicly challenge your answers during the interview?

_____ YES ______NO


Please select the one number that best reflects your general impressions of the E-mail interview process. If you are neutral or unsure, select a 4. Circle the desired number on each scale below. Work quickly, indicating your first response.

To what extent would you characterize E-mail* as having the ability to:

(1 = Not at All; 7 = To a Very Great Extent)

give and receive timely feedback 1 2 3 4 5 6 7
transmit a variety of different cues beyond the explicit message (e.g., nonverbal cues) 1 2 3 4 5 6 7
tailor messages to your own or other personal circumstances 1 2 3 4 5 6 7
use rich and varied language 1 2 3 4 5 6 7
provide immediate feedback 1 2 3 4 5 6 7
convey multiple types of information (verbal and nonverbal) 1 2 3 4 5 6 7
transmit varied symbols (e.g., words, numbers, pictures) 1 2 3 4 5 6 7
design messages to your own or others' requirements 1 2 3 4 5 6 7

*or audio depending upon the communication medium used

Your gender: ______Male ______Female

Your age: ______

Your primary ethnic, racial, or cultural background (choose all that apply):

______U.S. Caucasian ______American Indian/ Pacific Islander/ other U.S.

______African-American ______International Student (non-U.S.)

______U.S. Hispanic/ Latino ______U.S. Asian

Thank you for participating in the experiment. It is extremely important that you do not discuss the experiment with anyone until it is completed. If you have questions regarding your rights as a research subject, you may call Patti Tilley at 850-644-7676. Thanks for your cooperation!


Post-experiment questionnaire (interviewer)

Below are a series of adjective pairs that are often used to evaluate people who have been interviewed. Each is on a 1 to 7 scale measuring degrees of the adjectives, with 1 representing a high degree of the adjective on the left and 7 representing a high degree of the adjective on the right.

Using the adjective pairs below, please select the number that best reflects your general impressions of the person you interviewed. You may select 1, 2, 3, 4, 5, 6, or 7. If you are neutral or unsure, select a 4. Write the desired number in the appropriate box for each partner. Work quickly, indicating your first response.

Person Interviewed
Very friendly 1 2 3 4 5 6 7 Very unfriendly
Very trustworthy 1 2 3 4 5 6 7 Very untrustworthy
Very likable 1 2 3 4 5 6 7 Not likable at all
Very deceptive 1 2 3 4 5 6 7 Very truthful
Very credible 1 2 3 4 5 6 7 Not at all credible
Very unsociable 1 2 3 4 5 6 7 Very sociable
Very dishonest 1 2 3 4 5 6 7 Very honest
Very persuasive 1 2 3 4 5 6 7 Not at all persuasive
Very irresponsible 1 2 3 4 5 6 7 Very responsible
Very confident 1 2 3 4 5 6 7 Very unconfident
Very calm 1 2 3 4 5 6 7 Very tense
Lacked influence 1 2 3 4 5 6 7 Very influential
Very insightful 1 2 3 4 5 6 7 Lacked insight
Very experienced 1 2 3 4 5 6 7 Very inexperienced
Very sluggish 1 2 3 4 5 6 7 Very energetic
Very quiet 1 2 3 4 5 6 7 Very talkative
Very uncomposed 1 2 3 4 5 6 7 Very composed
Very nervous 1 2 3 4 5 6 7 Very relaxed
Affected my decisions 1 2 3 4 5 6 7 Did not affect my decisions
Very inexpert 1 2 3 4 5 6 7 Very expert
Very incompetent 1 2 3 4 5 6 7 Very competent
Very active 1 2 3 4 5 6 7 Very passive
Very dominant 1 2 3 4 5 6 7 Very submissive
Very uncomfortable 1 2 3 4 5 6 7 Very comfortable
Not willing to listen 1 2 3 4 5 6 7 Willing to listen
Very similar to me 1 2 3 4 5 6 7 Very different from me
Highly involved 1 2 3 4 5 6 7 Not at all involved
Very distracted 1 2 3 4 5 6 7 Very attentive
Thinks like me a lot 1 2 3 4 5 6 7 Doesn’t think like me
Very understanding 1 2 3 4 5 6 7 Not at all understanding
Very detached 1 2 3 4 5 6 7 Very engaged
Very much like me 1 2 3 4 5 6 7 Very much unlike me
Very bored 1 2 3 4 5 6 7 Very interested
Very open to my ideas 1 2 3 4 5 6 7 Very closed to my ideas
Very cold 1 2 3 4 5 6 7 Very warm
Very negative 1 2 3 4 5 6 7 Very positive
Created closeness 1 2 3 4 5 6 7 Created a sense of distance
Not at all accepting 1 2 3 4 5 6 7 Very accepting
Promoted cooperation 1 2 3 4 5 6 7 Did not promote cooperation


Recall the interview you just participated in. Rate your performance by circling one of the values that indicate your agreement with the statement. 1 = strongly disagree 2 = disagree 3 = not sure 4 = agree 5 = strongly agree

I was already familiar with the person interviewed. 1 2 3 4 5
The person interviewed gave very brief comments. 1 2 3 4 5
The person interviewed seemed evasive and ambiguous. 1 2 3 4 5
The person interviewed took a long time before responding to my questions. 1 2 3 4 5
The person interviewed behaved unusually. 1 2 3 4 5
The person interviewed behaved the way I expect most people to behave. 1 2 3 4 5
The person interviewed engaged in normal conversational behavior. 1 2 3 4 5
I found the input from the person interviewed to be believable. 1 2 3 4 5
The person interviewed was very open and forthcoming with me. 1 2 3 4 5
The person interviewed was completely honest with me. 1 2 3 4 5
The person interviewed was not sincere in answering my questions. 1 2 3 4 5
I felt relaxed and at ease while asking questions. 1 2 3 4 5
I was very tense while asking questions. 1 2 3 4 5
I asked very brief questions. 1 2 3 4 5
I was successful in making a good impression on the person interviewed. 1 2 3 4 5

Finally, rate the communication process itself by writing the appropriate number in the blank.

The Process
Very helpful 1 2 3 4 5 6 7 Very unhelpful
Very unreliable 1 2 3 4 5 6 7 Very reliable
Very easy 1 2 3 4 5 6 7 Very difficult
Very effortful 1 2 3 4 5 6 7 Very uneffortful
Very satisfying 1 2 3 4 5 6 7 Very unsatisfying
Very useful 1 2 3 4 5 6 7 Very useless
Very unpleasant 1 2 3 4 5 6 7 Very pleasant
Very enjoyable 1 2 3 4 5 6 7 Very unenjoyable
Very expected 1 2 3 4 5 6 7 Very unexpected

Do you believe the person you interviewed was being dishonest?

______YES ______NO

If so, which of the answers he or she provided do you believe were dishonest?

Did you explicitly challenge any answers as false during the interview? ______YES ______NO


Please select the one number that best reflects your general impressions of the E-mail interview process. If you are neutral or unsure, select a 4. Circle the desired number on each scale below. Work quickly, indicating your first response.

To what extent would you characterize E-mail* as having the ability to:

(1 = Not at All; 7 = To a Very Great Extent)

give and receive timely feedback 1 2 3 4 5 6 7
transmit a variety of different cues beyond the explicit message (e.g., nonverbal cues) 1 2 3 4 5 6 7
tailor messages to your own or other personal circumstances 1 2 3 4 5 6 7
use rich and varied language 1 2 3 4 5 6 7
provide immediate feedback 1 2 3 4 5 6 7
convey multiple types of information (verbal and nonverbal) 1 2 3 4 5 6 7
transmit varied symbols (e.g., words, numbers, pictures) 1 2 3 4 5 6 7
design messages to your own or others' requirements 1 2 3 4 5 6 7

*or audio depending upon the communication medium used

Your gender: ______Male ______Female

Your age: ______

Your primary ethnic, racial, or cultural background (choose all that apply):

______U.S. Caucasian ______American Indian/ Pacific Islander/ other U.S.

______African-American ______International Student (non-U.S.)

______U.S. Hispanic/ Latino ______U.S. Asian

Thank you for participating in the experiment. It is extremely important that you do not discuss the experiment with anyone until it is completed. If you have questions regarding your rights as a research subject, you may call Patti Tilley at 850-644-7676. Thanks for your cooperation!


APPENDIX K

CONSENT FORM AND HUMAN SUBJECTS APPROVAL


INFORMED CONSENT FORM

I freely and voluntarily, and without element of force or coercion, consent to be a participant in the research project entitled “Resume Enhancement.”

This research is being conducted by Patti Tilley, who is a doctoral student in Management Information Systems, College of Business at Florida State University. I understand the purpose of the research project is to better understand the practice of resume enhancement. I understand that if I participate in the project, I may be asked questions about my resume and my educational and job experiences.

I understand I will be asked to work on my resume, and I may also be asked to participate in an interview with an MIS research assistant. The total time commitment would be about 75 minutes. If I participate in the interview, I will receive a $10.00 gift certificate in compensation for my time. Any questions I may have will be answered by the MIS research assistant, or she/he will refer me to a knowledgeable source.

I understand my participation is totally voluntary and I may stop participation at any time. I understand that my conversations may be videotaped and/or tape-recorded by the researcher. These tapes will be kept by the researcher in a locked filing cabinet. I understand that only the researcher will have access to these tapes and that they will be destroyed by August 10, 2010. All my answers to the interview questions will be kept confidential to the extent allowed by law and identified by a subject number. My name will not appear on any of the results. No individual responses will be reported. Only aggregate findings will be reported.

I understand there is a possibility of a minimal level of risk involved if I agree to participate in this study. I might experience anxiety when editing my resume or when talking about my resume. The MIS research assistant will be available to talk with me about any emotional discomfort I may experience while participating. I am also able to stop my participation at any time I wish.

I understand there are benefits for participating in this research project. I will be providing business academics with valuable insight into how students craft resumes. I may also be able to make changes to my resume that will benefit me, as a result of my participation.

I understand that this consent may be withdrawn at any time without prejudice, penalty or loss of benefits to which I am otherwise entitled. I have been given the right to ask and have answered any inquiry concerning the study. Questions, if any, have been answered to my satisfaction.

I understand that I may contact Patti Tilley (850) 644-7676, [email protected], Florida State University, College of Business, MIS Department or Dr. Joey George (850) 644-7449, Florida State University, College of Business, MIS Department, for answers to questions about this research or my rights. Aggregate results will be sent to me upon my request. If you have any questions about your rights as a subject/participant in this research, or if you feel you have been placed at risk, you can contact the Chair of the Human Subjects Committee, Institutional Review Board, through the Vice President for the Office of Research at (850) 644-8633. I have read and understand this consent form.

______(Subject) (Date)


REFERENCES

Andres, H.P. "A comparison of face-to-face and virtual software development teams," Team Performance Management (8:1) 2002, pp 39-48.

Anolli, L., and Ciceri, R. "The voice of deception: Vocal strategies of naive and able liars," Journal of Nonverbal Behavior (21) 1997, pp 259-285.

Becker, T.E., and Colquitt, A.L. "Potential versus actual faking of a biodata form: An analysis along several dimensions of item type," Personnel Psychology (45) 1992, pp 389-406.

Biros, D. "The effects of truth bias on artifact-user relationships: an investigation of factors for improving deception detection in artifact produced information," in: Information and Management Sciences, Florida State University, Tallahassee, 1998, p. 188.

Biros, D., George, J., and Zmud, R. "Inducing sensitivity to deception in order to improve decision making performance: A field study," MIS Quarterly (26:2) 2002, pp 119-144.

Buller, D.B., and Burgoon, J.K. "Interpersonal deception theory," Communication Theory (6:3) 1996a, pp 203-242.

Buller, D.B., Burgoon, J.K., Buslig, A., and Roiger, J. "Testing interpersonal deception theory: The language of interpersonal deception," Communication Theory (6:3) 1996b, pp 268-289.

Buller, D.B., Burgoon, J.K., White, C., and Ebesu, A. "Interpersonal deception: VII. Behavioral profiles of falsification, concealment, and equivocation," Journal of Language and Social Psychology (13) 1994, pp 366-395.

Burgoon, J., Buller, D., and Guerrero, L. "Interpersonal deception: IX. Effects of social skill and on deception success and detection accuracy," Journal of Language and Social Psychology (14:3) 1995a, pp 289-311.

Burgoon, J., and Floyd, K. "Testing for the motivation impairment effect during deceptive and truthful interaction," Western Journal of Communication (64) 2000, pp 243-267.


Burgoon, J.K., Buller, D.B., Dillman, L., and Walther, J. "Interpersonal deception: IV. Effects of suspicion on perceived communication and nonverbal behavior dynamics," Human Communication Research (27) 1995b, pp 503-534.

Burgoon, J.K., Buller, D.B., Ebesu, A., and Rockwell, P. "Interpersonal deception: V. Accuracy in deception detection," Communication Monographs (61) 1994, pp 303-325.

Burgoon, J.K., Buller, D.B., and Floyd, K. "Does participation affect deception success? A test of the interactivity principle," Human Communication Research (27) 2001, pp 503-534.

Burgoon, J.K., Buller, D.B., White, C., Afifi, W., and Buslig, A. "The role of conversational involvement in deceptive interpersonal interactions," Personality and Social Psychology Bulletin (25:6) 1999, pp 669-685.

Burgoon, J.K., Stoner, G., Bonito, J., and Dunbar, N. "Trust and deception in mediated communication," 36th Hawaii International Conference on System Sciences, 2003a.

Burgoon, J.K., Marett, K., and Blair, J.P. "Detecting deception in computer-mediated communication," in: Computers in Society: Privacy, ethics, and the Internet, J. George (ed.), Prentice Hall, 2003b.

Burke, K., Aytes, K., and Chidambaram, L. "Media effects on the development of cohesion and process satisfaction in computer-supported workgroups: An analysis of results from two longitudinal studies," Information Technology & People (14:2) 2001, pp 122-141.

Burke, K., and Chidambaram, L. "How much bandwidth is enough? A longitudinal examination of media characteristics and group outcomes," MIS Quarterly (23:4) 1999, pp 557-580.

Carlson, J.R., and George, J.F. "Media appropriateness in the conduct and discovery of deceptive communication: The relative influence of richness and synchronicity," Group Decision and Negotiation (13:2) 2004a, pp 191-210.


Carlson, J., George, J., Burgoon, J.K., Adkins, M., and White, C. "Deception in computer-mediated communication," Group Decision and Negotiation (13:1) 2004b, p 5.

Carlson, J., and Zmud, R.W. "Channel expansion theory and the experiential nature of media richness perceptions," Academy of Management Journal (42:2) 1999, pp 153-170.

Cober, R., Brown, D.J., Blumental, A.J., Doverspike, D., and Levy, P. "The quest for the qualified job surfer: It's time the public sector catches the wave," Public Personnel Management (29:4) 2000, pp 479-496.

Colwell, K., Hiscock, C.K., and Memon, A. "Interviewing techniques and the assessment of statement credibility," Applied Cognitive Psychology (16) 2002, pp 287-300.

Cook, T.D., and Campbell, D.T. Quasi-Experimentation Houghton Mifflin Co., Boston, MA, 1979.

Daft, R.L., and Lengel, R.H. "Organizational information requirements: Media richness and structural design," Management Science (32:5) 1986, pp 554-571.

Daft, R.L., Lengel, R.H., and Trevino, L.K. "Message equivocality, media selection, and manager performance: Implications for information systems," MIS Quarterly (11:3) 1987, pp 355-366.

Dennis, A., and Kinney, S. "Testing media richness theory in the new media: The effects of cues, feedback, and task equivocality," Information Systems Research (9:3) 1998, pp 256-274.

Dennis, A., and Valacich, J.S. "Rethinking media richness: Towards a theory of media synchronicity," Proceedings of the 32nd Hawaii International Conference on Systems Science, Maui, HI, 1999.

DePaulo, B.M., Kashy, D.A., Kirkendol, S.E., Wyer, M.M., and Epstein, J.A. "Lying in everyday life," Journal of Personality and Social Psychology (70) 1996, pp 979-995.


DePaulo, B.M., Kirkendol, S.E., Tang, J., and O'Brien, T. "The motivational impairment effect in the communication of deception: Replications and extensions," Journal of Nonverbal Behavior (12:3) 1988, pp 177-202.

DePaulo, B.M., Lindsay, J., Malone, B., Muhlenbruck, L., Charlton, K., and Cooper, H. "Cues to deception," Psychological Bulletin (129:1) 2003, pp 74-118.

DePaulo, P.J., and DePaulo, B.M. "Can deception by salespersons and customers be detected through nonverbal behavioral cues?," Journal of Applied Social Psychology (19:18) 1989, pp 1552-1577.

DeSanctis, G. "Small group research in information systems: Theory and method," in: The information systems research challenge: Experimental research methods, I. Benbasat (ed.), Harvard Business School, Boston, MA, 1989.

deTurck, M.A., and Miller, G.R. "Deception and arousal: Isolating the behavioral correlates of deception," Human Communication Research (12) 1985, pp 181-201.

Dubin, R. Theory Building, (Revised ed.) Free Press, New York, 1978.

Ekman, P. Telling lies: Clues to deceit in the marketplace, politics, and marriage W. W. Norton and Company, New York, 1992.

Ekman, P., and Friesen, W.V. "Nonverbal leakage and clues to deception," Psychiatry (32) 1969, pp 88-105.

Ekman, P., O'Sullivan, M., Friesen, W.V., and Scherer, K. "Invited article: Face, voice, and body in detecting deceit," Journal of Nonverbal Behavior (15) 1991, pp 125-136.

Elaad, E. "Effects of feedback on the overestimated capacity to detect lies and the underestimated ability to tell lies," Applied Cognitive Psychology (17) 2003, pp 349-363.

El-Shinnawy, M., and Markus, L. "The poverty of media richness theory: Explaining people's choice of electronic mail vs. voice mail," International Journal of Human-Computer Studies (46) 1997, pp 443-467.


Feeley, T.H., and deTurck, M.A. "Global cue usage in behavioral lie detection," Communication Quarterly (43) 1995, pp 420-430.

Feldman, R.S., Forrest, J.A., and Happ, B.R. "Self-presentation and verbal deception: do self-presenters lie more?," Basic and Applied Social Psychology (24:2) 2002, pp 163-170.

Ferris, G.R., Treadway, D.C., Kolodinsky, R.W., Hochwarter, W.A., Kacmar, C.J., Douglas, C., and Frink, D.D. "Development and validation of the political skill inventory," working paper, 2004.

Fiedler, K., and Walka, I. "Training lie detectors to use nonverbal cues instead of global heuristics," Human Communication Research (20) 1993, pp 199-223.

Frank, M.G., and Ekman, P. "Appearing truthful generalizes across different deception situations," Journal of Personality and Social Psychology (86:3) 2004, pp 486-495.

Frank, M., and Feeley, T. "To catch a liar: Challenges for research in lie detection training," Journal of Applied Communication Research (31:1) 2003, pp 58-75.

George, J., and Carlson, J. "Electronic lies: Lying to others and detecting lies using electronic media," Fifth Americas Conference on Information Systems, Milwaukee, WI, 1999a.

George, J., and Carlson, J. "Group support systems and deceptive communication," Proceedings of the 32nd Hawaii International Conference on Systems Sciences, Maui, HI, 1999b.

George, J., Marett, K., and Tilley, P. "Deception detection under varying electronic media and warning conditions," 37th Hawaii International Conference on System Sciences, 2004a.

George, J.F., Biros, D.P., Burgoon, J.K., and Nunamaker, J.F. "Training professionals to detect deception," NSF/NIJ Symposium on Intelligence and Security Informatics, Tucson, AZ, 2003.


George, J.F., Burgoon, J.K., Crews, J.M., Cao, J., Lin, M., Marett, K., and Biros, D.P. "Training to detect deception: An experimental investigation," 37th Annual Hawaii International Conference on System Sciences, 2004b.

Globerson, S., and Korman, A. "The use of just-in-time training in a project environment," International Journal of Project Management (19:5) 2001, pp 279-285.

Greene, J.O., O'Hair, H.D., Cody, M.J., and Yen, C. "Planning and control of behavior during deception," Human Communication Research (11) 1985, pp 335-364.

Higgins, C.A. "The effect of applicant influence tactics on recruiter perceptions of fit," in: Department of Management and Organizations, University of Iowa, 2000.

Hiltz, S.R., and Turoff, M. "Structuring computer-mediated communication systems to avoid information overload," Communications of the ACM (28:7) 1985, pp 680-689.

Hughes, A., and Wright, M.W. "Black men can't coach?: While the NCAA considers changing its game plan, many black football head-coaching candidates remain on the bench," Black Enterprise (33:12) 2003, pp 63-68.

Johnson, P.E., Grazioli, S., Jamal, K., and Berryman, R.G. "Detecting deception: Adversarial problem solving in a low base-rate world," Cognitive Science (25) 2001, pp 355-392.

Jones, E.E. Interpersonal perception W.H. Freeman, New York, 1990.

Kalbfleisch, P.J. "Deceit, distrust and the social milieu: Application of deception research in a troubled world," Journal of Applied Communication Research (20) 1992, pp 308-334.

Kassin, S., and Fong, C. "'I'm innocent!': Effects of training on judgments of truth and deception in the interrogation room," Law and Human Behavior (23) 1999, pp 499-516.

King, R.C., and Xia, W. "Media appropriateness: Effects of experience on communication media choice," Decision Sciences (28:4) 1997, pp 877-910.


Kluger, A., and Colella, A. "Beyond the mean bias: The effect of warning against faking on biodata item variances," Personnel Psychology (46) 1993, pp 763-780.

Knapp, M.L., Hart, R.P., and Dennis, H.S. "An exploration of deception as a communication construct," Human Communication Research (1) 1974, pp 15-29.

Lea, M., and Spears, R. "Computer-mediated communication, de-individuation, and group decision making," International Journal of Man-Machine Studies (34) 1991, pp 283-301.

Leonard, L.N.K., and Cronan, T.P. "Illegal, inappropriate, and unethical behavior in an information technology context: A study to explain influences," Journal of the Association for Information Systems (1:12) 2001, pp 1-28.

Levine, T.R., Park, H.S., and McCornack, S. "Accuracy in detecting truths and lies: Documenting the 'veracity effect'," Communication Monographs (66) 1999, pp 125-144.

Levine, T.R., Park, H.S., and McCornack, S. "Norms, expectations, and deception: A norm violation model of veracity judgments," Communication Monographs (67) 2000, pp 123-137.

Mann, S., Vrij, A., and Bull, R. "Detecting true lies: Police officers' ability to detect suspects' lies," Journal of Applied Psychology (89:1) 2004, pp 137-149.

Markus, L. "Electronic mail as the medium of managerial choice," Organization Science (5:4) 1994, pp 502-527.

McCornack, S., and Parks, M. "Deception detection and relationship development: The other side of trust," in: Communications Yearbook 9, McLaughlin (ed.), Sage Publications, Beverly Hills, CA, 1986.

McCornack, S., and Levine, T. "When lovers become leery: The relationship between suspicion and accuracy in detecting deception," Communication Monographs (57) 1990, pp 219-230.

Meissner, C.A., and Kassin, S. ""He's guilty!": Investigator bias in judgments of truth and deception," Law and Human Behavior (26:5) 2002, pp 469-480.


Mennecke, B.E., Valacich, J.S., and Wheeler, B.C. "The effects of media and task on user performance: A test of the task-media fit hypothesis," Group Decision and Negotiation (9:6) 2000, pp 507-529.

Miller, G.R., and Stiff, J.B. Deceptive Communication Sage Publications, Newbury Park, CA, 1993.

Nass, C., and Moon, Y. "Machines and mindlessness: Social responses to computers," Journal of Social Issues (56) 2000, pp 81-104.

Navarro, J., and Schafer, J.R. "Detecting deception," FBI Law Enforcement Bulletin (70:7) 2001, pp 9-14.

Nunnally, J.C. Psychometric Theory, (2nd ed.) McGraw-Hill, New York, 1978.

Parasuraman, R. "Sustained attention in detection and discrimination," in: Varieties of Attention, R. Parasuraman and D.R. Davies (eds.), Academic Press, Inc., London, 1984, pp. 243-266.

Pearce, C.G., and Tuten, T.L. "Internet recruiting in the banking industry," Business Communication Quarterly (64:1) 2001, pp 9-18.

Porter, S., Woodworth, M., and Birt, A.R. "Truth, lies, and videotape: An investigation of the ability of federal parole officers to detect deception," Law and Human Behavior (24:6) 2000, pp 643-658.

Prater, T., and Kiser, S.B. "Lies, lies, and more lies," SAM Advanced Management Journal (Spring) 2002, pp 9-36.

Rao, S., and Lim, J. "The impact of involuntary cues on media effects," Paper presented at the 33rd Hawaii International Conference on System Sciences, 2000.

Rice, R. "Task analyzability, use of new media, and effectiveness: A multi-site exploration of media richness," Organization Science (3:4) 1992, pp 475-500.

Rice, R. "Media appropriateness: Using social presence theory to compare traditional and new organizational media," Human Communication Research (19:4) 1993, pp 451-484.


Riggio, R. "Assessment of basic social skills," Journal of Personality & Social Psychology (51) 1986, pp 649-660.

Rynes, S., and Barber, A. "Applicant attraction strategies: An organizational perspective," Academy of Management Review (15:2) 1990, pp 286-310.

Schweitzer, M.E., and Croson, R. "Curtailing deception: The impact of direct questions on lies and omissions," The International Journal of Conflict Management (10:3) 1999, pp 225-248.

Short, J., Williams, E., and Christie, B. The social psychology of telecommunications Wiley, New York, 1976.

Snell, A.F., Sydell, E.J., and Lueke, S.B. "Towards a theory of applicant faking: Integrating studies of deception," Human Resource Management Review (9:2) 1999, pp 219-242.

Sproull, L., and Kiesler, S. "Reducing social context cues: Electronic mail in organizational communication," Management Science (32:11) 1986, pp 1492-1512.

Stiff, J.B., Hale, J.L., Garlick, R., and Rogan, R.G. "Effects of cue incongruence and social normative influences on individual judgments of honesty and deceit," Southern Communication Journal (55) 1990, pp 206-229.

Stiff, J.B., Kim, H., and Ramesh, C. "Truth biases and aroused suspicion in relational deception," Communication Research (19:3) 1992, pp 326-345.

Straub, D., and Karahanna, E. "Knowledge worker communications and recipient availability: Toward a task closure explanation of media choice," Organization Science (9:2) 1998, pp 160-175.

Stromwall, L.A., and Granhag, P.A. "Affecting the perception of verbal cues to deception," Applied Cognitive Psychology (17) 2003, pp 35-49.

Toris, C., and DePaulo, B.M. "Effects of actual deception and suspiciousness of deception on interpersonal perceptions," Journal of Personality and Social Psychology (47:5) 1984, pp 1063-1073.


Trovillo, P.V. "A history of lie detection," Journal of Criminal Law and Criminology (29) 1939, pp 848-881.

Turner, R.E., Edgley, C., and Olmstead, G. "Information control in conversations: Honesty is not always the best policy," Kansas Journal of Speech (11) 1975, pp 69-89.

Vrij, A. "The impact of information and setting on detection of deception by police detectives," Journal of Nonverbal Behavior (18) 1994, pp 117-136.

Vrij, A. Detecting lies and deceit: The psychology of lying and implications for professional practice John Wiley and Sons, Chichester, 2000.

Vrij, A., Evans, H., Akehurst, L., and Mann, S. "Rapid judgments in assessing verbal and nonverbal cues: Their potential for deception researchers and lie detection," Applied Cognitive Psychology (18) 2004, pp 283-296.

Walczyk, J.J., Roper, K.S., Seemann, E., and Humphrey, A.M. "Cognitive mechanisms underlying lying to questions: Response time as a cue to deception," Applied Cognitive Psychology (17:7) 2003, pp 755-774.

Zuckerman, M., and Driver, R. "Telling lies: Verbal and nonverbal correlates of deception," in: Nonverbal Communication: An Integrated Perspective, A.W. Siegman and S. Feldstein (eds.), Erlbaum, Hillsdale, NJ, 1985, pp. 129-147.

Zuckerman, M., Koestner, R., and Alton, A.O. "Learning to detect deception," Journal of Personality and Social Psychology (46:3) 1984, pp 519-528.


BIOGRAPHICAL SKETCH

Patricia Ann Tilley came into this world in Kentfield, California, a beautiful town at the base of Mount Tamalpais in Marin County. She received her Bachelor of Arts degree in Religious Studies from the University of California at Berkeley in May 1984. She completed a Master of Library and Information Studies at the University of California at Berkeley in May 1990 and a Master of Science in Computer Information Systems and Quantitative Business Methods at California State University at Hayward in June 1998. She was awarded the degree of Doctor of Philosophy in Business Administration, with a concentration in Management Information Systems, by Florida State University in June 2005. Dr. Tilley is an associate professor in the Management Information Systems Department, School of Business, at Central Connecticut State University in New Britain, Connecticut.
