Detection-Action Sequence in Vigilance: Effects on Workload and Stress
A dissertation submitted to the
Division of Research and Advanced Studies of the University of Cincinnati
in partial fulfillment of the requirements for the degree of
DOCTOR OF PHILOSOPHY (Ph.D.)
In the Department of Psychology of the College of Arts and Sciences
2007
by
Kelley S. Parsons, M.A.
B.S. University of Cincinnati, 1996 M.A. University of Cincinnati, 2001
Committee Chair: Joel S. Warm, Ph.D.
Abstract
In a recent study by Gunn et al. (2005), participants simulating UAV controllers
monitored a vigilance display for signals alerting them to the presence of enemy aircraft, which they could then track down and destroy. A key finding in that study was the low workload rating that participants gave the combined vigilance/tracking task on the NASA-TLX. That finding contrasts with many previous investigations
indicating that the cost of mental operations in vigilance is substantial, as reflected in
high workload ratings on the NASA-TLX scale (Warm, Dember, & Hancock, 1996). In
the previous studies, participants were typically required to detect signals without the need for subsequent action based upon their detections. The Gunn et al. (2005) study was the first to feature a detection/action link in which performance on a vigilance task had immediate consequences for actions to follow. The vigilance task could have taken on
greater importance in this more dynamic context, leading to lower ratings of perceived workload, a possibility that accords with the view that the manner in which participants interpret a situation may have an impact upon the perceived workload of vigilance assignments (Hancock & Warm, 1989). This study tested that possibility along with three others under conditions in which UAV controllers were or were not afforded an opportunity to search for and destroy enemy threats on the basis of successful signal detections on a monitored warning display.
The results did not confirm Gunn et al.’s (2005) findings. Workload ratings in
this study were in the upper level of the NASA-TLX scale in all cases except the one in which participants rated only the search/destroy component of the detection/action composite. Nevertheless, the detection/action link did have significant effects upon the quality of vigilance performance and upon task-induced stress as measured by the
Dundee Stress State Questionnaire (Matthews et al., 2002). Participants who were
afforded a detection/action link detected significantly more signals in the vigilance task,
were more task-engaged, and reported less distress than those whose assignment was
limited to vigilance alone. It would appear imperative that detection/action linkages be an integral part of future vigilance studies.
ACKNOWLEDGEMENTS
First and foremost I need to thank my mother and my grandmother for always
believing in me and providing endless support throughout this entire process. Their love
(and occasional ‘soft’ boot in the pants) kept me going when I no longer thought I could.
I would also like to thank my little Khyla. Without knowing it, she has made me a better
person. My relationship with her has inspired me to be the best role model I can be.
I would like to thank Dr. Joel Warm for his belief in me when I expressed my
desire to attend graduate school. He provided me with the opportunity to pursue my
dream and for that I will be forever grateful. His guidance and understanding (and
occasional ‘not-so-soft’ boot in the pants) came when I needed them most.
I would also like to thank my committee, Dr. Gerald Matthews, Dr. Mike Riley,
and Dr. Todd Nelson for their expertise and assistance during this project.
I also must thank some very special people, Vicki, Christina, and Traci for always
being there when I needed a shoulder to lean on or lunch and Dr. Donald Schumsky, not
only for his shoulder but also for his time, knowledge, and words of wisdom.
Finally, I want to thank Dr. William Dember who passed away before I could tell
him how his kind and gentle words of encouragement always meant so much to me. Thank you, Dr. Dember.
DEDICATION
For Khyla, my little ‘sister’
Determine your dream and pursue it with spirit and passion
Never forget, the power is yours to accomplish anything to which you set your mind
TABLE OF CONTENTS
Page
ABSTRACT…………………………………………………………… ii
TABLE OF CONTENTS……………………………………………… 1
LIST OF TABLES..…………………………………………………… 5
LIST OF FIGURES…………………………………………………… 6
CHAPTER 1 Introduction…………………………………………………… 7
THE VIGILANCE QUANDARY……………………………… 7
  Vigilance and Automation……………………………… 7
  The Origins of Vigilance Research…………………… 8
VIGILANCE AND WORKLOAD…………………………… 10
  The Arousal and Mindlessness Models………………. 10
  The NASA-TLX Studies………………………………... 11
CHALLENGES TO THE WORKLOAD EVIDENCE……… 14
  Boredom…………………………………………………… 14
  The Gunn et al. (2005) Study…………………………… 15
VIGILANCE AND STRESS………………………………… 18
  An Additional Component in the Fabric of Vigilance.. 18
  Stress Defined…………………………………………… 18
  Physiological Measures of Stress in Vigilance………. 22
  Mood Measures of Stress in Vigilance……………….. 23
  Stress and the DSSQ…………………………………… 23
  The Gunn et al. (2005) Study Revisited………………. 25
CHAPTER 2…………………………………………………… 26
  Method…………………………………………………… 26
  Participants……………………………………………… 26
  Design…………………………………………………… 26
  Apparatus………………………………………………… 27
  Procedure………………………………………………… 32
CHAPTER 3…………………………………………………… 34
  Results…………………………………………………… 34
VIGILANCE PERFORMANCE…………………………….. 34
  Correct Detections……………………………………… 34
  False Alarms……………………………………………. 36
WORKLOAD: NASA-TLX……………………………….… 37
  Global Workload……………………………………….. 37
  Weighted Workload Ratings…………………………… 38
STRESS: DSSQ……………………………………………... 40
CHAPTER 4…………………………………………………… 45
  Discussion……………………………………………….. 45
References………………………………………………………….... 52
APPENDIX A: NASA-Task Load Index (NASA-TLX)…………… 68
APPENDIX B: Instruction Sets for all Experimental Conditions….. 72
APPENDIX C: Informed Consent Form…………………………… 106
APPENDIX D: The Dundee Stress State Questionnaire (DSSQ)….. 109
APPENDIX E: Summary Tables of Statistical Analysis…….……... 119
E1. Analysis of Variance for Practice Detection Rate Scores………………………………………………… 120
E2. Analysis of Variance for Percent Correct Detections within the Detection/Action Composite Group……… 120
E3. Analysis of Variance for Percent Correct Detections within the Detection-Only Composite Group………… 120
E4. Analysis of Variance for Correct Detection Scores….. 120
E5. Analysis of Variance for the NASA-TLX Global Workload Scores……………………………………… 121
E6. Analysis of Variance for the NASA-TLX Subscale Scores…………………………………………………. 121
E7. Analysis of Variance for Groups within the NASA-TLX Mental Demand Subscale……………….. 121
E8. Analysis of Variance for Groups within the NASA-TLX Temporal Demand Subscale…………… 121
E9. Analysis of Variance for Groups within the NASA-TLX Performance Subscale…………………. 122
E10. Analysis of Variance for Groups within the NASA-TLX Effort Subscale………………………… 122
E11. Analysis of Variance for Groups within the NASA-TLX Frustration Subscale…………………… 122
E12. Analysis of Variance for Groups within the Pre-DSSQ Worry Dimension………………………… 122
E13. Analysis of Variance for Groups within the Pre-DSSQ Engagement Dimension………………….. 122
E14. Analysis of Variance for Groups within the Pre-DSSQ Distress Dimension………………………. 123
E15. Analysis of Variance of the Change Scores for the Detection/Action Composite Group Members within the DSSQ Worry Dimension…………………. 123
E16. Analysis of Variance of the Change Scores for the Detection/Action Composite Group Members within the DSSQ Engagement Dimension…………… 123
E17. Analysis of Variance of the Change Scores for the Detection/Action Composite Group Members within the DSSQ Distress Dimension………………... 123
E18. Analysis of Variance of the Change Scores for the Detection-Only Composite Group Members within the DSSQ Worry Dimension………….………. 123
E19. Analysis of Variance of the Change Scores for the Detection-Only Composite Group Members within the DSSQ Engagement Dimension………….... 124
E20. Analysis of Variance of the Change Scores for the Detection-Only Composite Group Members within the DSSQ Distress Dimension………………… 124
E21. Analysis of Variance for the Composite Groups on the DSSQ Dimensions……………………………. 124
E22. Analysis of Variance for the Composite Groups on the DSSQ Worry Dimension…………………….. 124
E23. Analysis of Variance for the Composite Groups on the DSSQ Engagement Dimension………………. 125
E24. Analysis of Variance for the Composite Groups on the DSSQ Distress Dimension…………………… 125
LIST OF TABLES
Table Page
1. Mean percentages of correct detections and standard errors for all groups within each period of watch ……………………. 34
2. Mean percentages of false alarms and standard errors for all groups within each period of watch …………………….. 37
3. Means and standard errors for the weighted subscales of the NASA-TLX for all experimental conditions………………. 39
4. Mean standardized pre-vigil scores for all experimental groups on each DSSQ dimension……………………….….….. 41
5. Mean standardized change scores for all experimental groups on each DSSQ dimension..…………………………….. 42
LIST OF FIGURES
Figure Page
1. Examples of neutral (safe) and critical (threat) signals………..… 28
2. Example of a search/destroy screen………………………..……. 29
3. Search/destroy screen following a correct detection………..…… 29
4. Search/destroy screen following target acquisition and destruction………………………………………………..…. 29
5. Search/destroy screen following an incorrect guess at missile launcher acquisition……………………………….…..… 30
6. Examples of knowledge of results (correct detection and miss)... 31
7. Mean percentages of correct detections for the detection/action and detection-only groups as a function of periods of watch…………………………………………..….. 35
8. Mean global workload ratings on the NASA-TLX for the seven experimental groups…………………………….………... 38
9. Mean workload ratings for the subscales of the TLX for the seven experimental groups.……………………………… 40
10. Mean standardized change scores for the detection/action and detection-only groups on the dimensions of the DSSQ……. 43
CHAPTER 1
Introduction
The Vigilance Quandary
Vigilance and Automation. Vigilance, or sustained attention, concerns the ability
of participants to detect critical stimulus events occurring in an infrequent and
unpredictable manner over a protracted period of time (Davies & Parasuraman, 1982;
Warm, 1984; Warm & Dember, 1998). Operator performance during tasks requiring
sustained attention is a key area of interest for human factors/ergonomic specialists
because of the role that vigilance plays in many automated human-machine systems
including military surveillance, air-traffic control and cockpit operations, industrial
quality control, cytological screening, and instrument monitoring during surgery (Gill,
1996; Howell, 1993; Proctor & Van Zandt, 1994; Satchell, 1993; Warm, Dember, &
Hancock, 1996; Weinger & Englund, 1990; Wickens, 1992). More recently, with
increased threats to homeland security and public safety, the study of the factors that influence human performance is of principal importance. This is particularly true of
performance during sustained attention tasks because of the role that vigilance plays in airport baggage inspection and passenger screening (Hancock & Hart, 2002).
The fallibility of the human operator often increases as the information-processing load of a task increases (Matthews, Davies, Westerman, & Stammers, 2000). Therefore, one of the presumed benefits of the development and utilization of automation in the acquisition, storage, and processing of information is a reduction of the information-processing demands placed upon the operator. As Sheridan (1970; 1987) has pointed out, automation has transformed the role of human operators from that of active controllers to that of executive/supervisors serving in a fail-safe capacity who are required to intervene only in the event of system failure. However, this “peripheralization of the operator” (Satchell, 1993) can lead to an over-reliance on mechanized system safeguards and to operator complacency, often with dire consequences (Molloy & Parasuraman, 1996). Accordingly, the need to understand those aspects of the operational environment and the physical stimuli that facilitate optimum vigilance performance remains especially critical today.
The Origins of Vigilance Research. The term “vigilance” was first formally used in 1923 when the British neurologist Sir Henry Head adopted it to describe a state of extreme preparedness of the central nervous system to detect and react to unpredictably occurring critical cues in the environment (Davies & Parasuraman, 1982;
Matthews et al., 2000; Parasuraman, 1984; Parasuraman, Warm, & See, 1999; Warm,
1984; Warm & Dember, 1998). However, experimentally controlled studies of vigilance did not truly begin until World War II, when well-trained and highly motivated British airborne radar operators on anti-submarine patrol were found, after only about 30 minutes on watch, to begin missing radar signals indicating the presence of German submarines positioned on the surface of the sea below. As a consequence, the enemy was left free to prey upon Allied ships (Warm, 1984).
The unexpected drop in performance efficiency over time on watch on the part of the airborne participants prompted the Royal Air Force to commission Norman
Mackworth (1948; 1950/1961) to study the problem experimentally. In response to that assignment, Mackworth devised a simulated radar display known as the clock test. It consisted of a round clock face devoid of tic markings within which a black pointer moved about its inner circumference in 0.3 inch jumps once per second. At infrequent
and unpredictably occurring intervals, the pointer made a 0.6 inch jump signifying a critical signal requiring participant response. While the critical signals did not produce dramatic changes in the operational environment, they were clearly perceptible once participants were familiarized with them, and the probability of critical signal occurrence was independent of their responses. Participants were tested individually for a prolonged and continuous period of time (two hours in Mackworth’s experiments). The clock test established the experimental parameters for most subsequent vigilance studies (Warm,
1984).
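The event structure of the clock test can be sketched as a simple simulation. The jump sizes and the one-jump-per-second rate are taken from the description above; the signal probability, session length, and random seed are arbitrary illustrative choices, not Mackworth’s actual signal schedule.

```python
import random

# Illustrative simulation of the event stream in Mackworth's clock test:
# the pointer jumps 0.3 in. once per second; a rare 0.6-in. double jump
# constitutes a critical signal. The 2% signal probability and the seed
# are hypothetical values chosen only for illustration.

def clock_test_events(duration_s=7200, p_signal=0.02, seed=1):
    """Yield (time_s, jump_inches, is_critical) once per second of watch."""
    rng = random.Random(seed)
    for t in range(duration_s):
        critical = rng.random() < p_signal
        yield t, (0.6 if critical else 0.3), critical

# A two-minute excerpt of the two-hour vigil used in the experiments:
events = list(clock_test_events(duration_s=120))
print(len(events), sum(1 for _, _, c in events if c))
```

The key properties of the paradigm are visible in the sketch: events arrive at a fixed rate, critical signals are infrequent and unpredictable, and signal occurrence is independent of the observer’s responses.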
Mackworth’s research confirmed the field-derived suspicion that sustained attention is indeed fragile and that performance on a vigilance task declines over time. In
Mackworth’s study, the initial level of signal detections by his participants was high, around 85%, but after only 30 minutes on watch, signal detection dropped by 10% and continued to drop more gradually thereafter. This decline in performance efficiency over time, referred to as the vigilance decrement or decrement function, is the most ubiquitous finding in vigilance research, having been confirmed in many studies employing varied display arrangements, including auditory and tactual as well as visual signals (Ballard, 1996; Davies & Parasuraman, 1982; Dember & Warm, 1979; Warm,
1984; Warm & Jerison, 1984). The decrement has been found in both real-life operational settings as well as in controlled studies conducted in the laboratory (Baker,
1962; Colquhoun, 1967; 1977; Hancock & Hart, 2002; Pigeau, Angus, O’Neill, & Mack,
1995). It is usually complete within the first 20 to 35 minutes of vigil onset with half of the decrement usually occurring within the first 15 minutes (Teichner, 1974). Under extreme conditions, however, such as when signal discriminability is especially poor, i.e.
low signal salience, the decrement can occur within the first five minutes of watch
(Helton, Dember, Warm, & Matthews, 2000; Jerison, 1963; Nuechterlein, Parasuraman,
& Jiang, 1983; Temple et al., 2000). One thing is clear; understanding the factors that
drive human performance efficiency downward during a sustained attention task and also
determine the overall level of performance efficiency is of central importance for
continued public safety and national security.
Vigilance and Workload
The Arousal and Mindlessness Models. Due to the need to continuously
monitor a repetitive array of cascading neutral stimulus events for rare target signals,
vigilance tasks have typically been characterized as tedious, under-stimulating
assignments in which workload demands are minimal (Nachreiner & Hänecke, 1992;
Proctor & Van Zandt, 1994). This view has prompted two theories of vigilance
performance known as the arousal and mindlessness models. The former is founded on
the classic work of Moruzzi and Magoun (1949) indicating that the brainstem reticular
formation is a key system in the maintenance of cortical activation or arousal. According
to the arousal model, a certain degree of stimulation and excitation is necessary for
components of the central nervous system such as the ascending reticular formation,
locus coeruleus, and the diffuse thalamic projections, to facilitate the alertness required
for performance maintenance during a task (Aston-Jones, 1985; Frankmann & Adams,
1962; Loeb & Alluisi, 1984; Mackworth, 1969; Nachreiner & Hänecke, 1992; Proctor &
Van Zandt, 1994). With vigilance being viewed as an understimulating task requiring
minimal effort, it was postulated that the vigilance decrement was the result of inadequate
levels of cortical stimulation or underarousal.
More recently, Robertson and his colleagues (Manly, Robertson, Galloway, &
Hawkins, 1999; Robertson, Manly, Andrade, Baddeley, & Yiend, 1997) have also offered
a model of vigilance based upon understimulation termed the mindlessness model.
According to these investigators, the tedious and understimulating nature of vigilance
tasks leads participants to withdraw attentional effort from the task and deal with it in a
thoughtless, routinized manner. Dickman (2002) and Helton et al. (2005) have been
careful to point out that the Robertson approach reflects a purposive modulation of
attention rather than a spontaneous decline in wakefulness and vigor that is a
consequence of lowered activity in the arousal centers of the brain.
Both of these models are founded upon the assumption of understimulation
inherent in vigilance tasks. However, Warm, Dember, and Hancock (1996) have affirmed
that the view of vigilance tasks as imposing little information-processing demand upon
operators is founded on a potentially flawed task analysis, one based upon investigators’
surface impressions of the workload demands imposed upon participants rather than upon
measurements of the actual degree of perceived mental workload or the information
processing load/resource demands (Eggemeier, 1988) associated with the task. In an
effort to rectify that situation, Warm and his associates (1996) uncovered evidence
indicating that the understimulation assumption was incorrect. Rather than being a classic
case of understimulation, the cost of mental operations in vigilance is substantial.
The NASA-TLX Studies. Warm and his colleagues (1996) assessed the
workload of sustained attention by means of the NASA-Task-Load-Index (NASA-TLX),
a subjective rating scale of the mental workload demanded by a task (Hart & Staveland,
1988). The instrument is one of the most effective measures of perceived mental
workload currently available (Farmer & Brownson, 2003; Hill, Iavecchia, Byers, Zaklad,
& Christ, 1992; Lysaght et al., 1989; Nygren, 1991; Proctor & Van Zandt, 1994; Wickens
& Hollands, 2000). It is a highly reliable (test-retest reliability = .83), multidimensional scale that provides an index of overall workload on a scale from 0 to 100 while identifying the relative contributions of six sources of workload. Three of these sources reflect the demands that tasks place upon operators (Mental Demand, Temporal Demand, and
Physical Demand) whereas the remainder characterizes the interaction between the operator and the task (Performance, Effort, and Frustration). The scale is reproduced in
Appendix A.
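As a sketch of how the instrument’s weighted global score is derived (the subscale names are those listed above; the weighting procedure follows Hart and Staveland’s 1988 method of 15 pairwise comparisons among the six subscales, and all of the specific numbers below are hypothetical):

```python
# Illustrative sketch of NASA-TLX weighted (global) workload scoring.
# Subscale ratings run from 0 to 100; the weights come from 15 pairwise
# comparisons among the six subscales, so each weight is 0-5 and the
# weights sum to 15. All numbers here are hypothetical.

def global_workload(ratings, weights):
    """Weighted mean of the six subscale ratings, on a 0-100 scale."""
    assert ratings.keys() == weights.keys()
    assert sum(weights.values()) == 15  # one winner per pairwise comparison
    return sum(ratings[s] * weights[s] for s in ratings) / 15.0

# Hypothetical ratings echoing the "workload signature" described later in
# the text, in which Mental Demand and Frustration dominate:
ratings = {"Mental Demand": 85, "Physical Demand": 10, "Temporal Demand": 55,
           "Performance": 40, "Effort": 70, "Frustration": 75}
weights = {"Mental Demand": 5, "Physical Demand": 0, "Temporal Demand": 2,
           "Performance": 2, "Effort": 3, "Frustration": 3}

print(global_workload(ratings, weights))  # prints 70.0
```

A global score of 70 would fall in the upper level of the scale, the region in which the vigilance workload findings discussed below typically lie.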
Recognizing that the NASA-TLX is essentially a subjective scale, Warm and his associates (1996) followed a strategy suggested by Natsoulas (1967) in which the validity of subjective reports is bolstered by linking them to psychophysical factors known to drive performance. In an extensive series of investigations, they found that counter to what might be anticipated from the arousal and mindlessness models, the vigilance decrement is not accompanied by a decline in mental workload. To the contrary, it is paralleled by a linear increase in the global workload score. In addition, they reported that global workload increases as signal conspicuity is reduced and that it also increases with increments in the rate of presentation of stimulus events that must be scanned in the hunt for critical signals (the background event rate), and increments in the number of displays to be monitored in that hunt. Workload in these studies was also elevated by the need to search wide areas of the monitored display in order to detect target signals and by the need to deal with the distracting effects of loud background noise. Individually, each of these parameters degrades the efficiency of vigilance performance (Ballard, 1996; Davies
& Parasuraman, 1982; Warm, 1993; Warm & Jerison, 1984). In all of the workload studies, the global workload scores fell within the upper level of the NASA-TLX and exceeded those typically found with other types of experimental tasks including mental arithmetic, memory search, grammatical reasoning, choice reaction time, and simple tracking, and were on par with those found with a motion-based flight simulator
(Hancock, 1988; Hancock, Rodenburg, Matthews, & Vercruyssen, 1988; Hart &
Staveland, 1988; Liu & Wickens, 1987; Sanderson & Woods, 1987). Moreover, the vigilance studies described by Warm and his associates (1996) revealed a consistent workload signature among the NASA-TLX subscales wherein Mental Demand and
Frustration were the principal components of the workload imposed by the vigilance tasks.
The high level of information-processing demand associated with vigilance tasks described by Warm et al. (1996) has been confirmed in a wide variety of subsequent investigations (Deaton & Parasuraman, 1993; Dittmar, Warm, Dember, & Ricks, 1993;
Finomore, 2006; Grier et al., 2003; Grubb, Warm, Dember, & Berch, 1995; Helton et al.,
2005; Hollander et al., 2004; Matthews, 1996; Miller, Warm, Dember, & Schumsky,
1998; Parsons et al., 2000; Scerbo, Greenwald, & Sawin, 1993; Schoenfeld & Scerbo,
1997; 1999; Szalma et al., 2004; Temple et al., 2000; Warm, Dember, & Parasuraman,
1991). These findings have led Johnson and Proctor (2004) to conclude in a recent review of attention that the high level of workload in vigilance challenges arousal theory and supports Parasuraman and his colleague’s view (Davies & Parasuraman, 1982;
Parasuraman, 1984; Parasuraman & Davies, 1977; Parasuraman, Warm, & Dember,
1987; Warm & Dember, 1998) that the information processing demanded by vigilance
tasks is high and that the vigilance decrement reflects the depletion of information-processing resources over time. A similar argument regarding focused mental effort has been made by Grier et al. (2003) and Helton et al. (2005) with respect to the mindlessness model.
Challenges to the Workload Evidence
Boredom. A major aspect of the workload studies described above is the assumption that the elevated workload scores arise from the information-processing demands made by the vigilance task itself, a direct cost model. However, as Scerbo
(1998) and Sawin and Scerbo (1995) have suggested, there is another possibility – the high workload scores emanate not from task demands directly but indirectly from participants’ efforts to overcome the tedium and boredom inherent in the vigilance environment, an indirect cost model. In an effort to test these models, Hitchcock, Dember,
Warm, Moroney, and See (1999) performed an experiment in which one group of participants was given cues to the imminent arrival of critical signals while a control group was denied that information. Hitchcock et al. (1999) expected that the cueing procedure would permit participants to husband their information-processing resources because they would only have to monitor the vigilance display when prompted about signal arrival. On the other hand, since the tedious and repetitive nature of the task remained unchanged in the cueing condition, boredom was expected to be unaffected by cueing. Consequently, they anticipated that if the direct cost model were appropriate, the workload scores for the cued participants would be significantly lower than those for the uncued controls but that cueing would have little effect upon boredom ratings.
However, if the indirect cost model was operative, both the workload and boredom
ratings should remain high. Consistent with the direct cost model, cueing significantly reduced the workload of the vigilance task but had no effect upon boredom. In this way,
Hitchcock and his associates (1999) were able to uncouple workload and boredom and provide substantial support for the direct cost model of the workload of sustained attention.
A second experiment along this line by Alikonis, Warm, Matthews, Dember, and
Kellaris (2002) was designed to disengage workload and boredom by focusing on a procedure calculated to suppress the boredom element. Toward that end, they took advantage of the fact that music has been shown to be effective in modifying participants’ moods and emotions (Hargreaves & North, 1999; Lewis, Dember, Schefft, &
Radenhausen, 1995). Accordingly, they asked participants to listen to a pleasant musical selection in the course of a vigilance task with the expectation that the music would lower the rated boredom of the task. However, since the musical background afforded no benefits in discriminating signal and non-signal events, it was anticipated that workload would remain high in the presence of music. Both expectations were confirmed in the
Alikonis et al. (2002) study: music significantly reduced the rated boredom of the vigilance task but had no effect upon the workload scores, which were in the upper levels of the NASA-TLX scale. The Hitchcock et al. (1999) and Alikonis et al. (2002) studies serve as converging operations (see Kramer, Coles, & Logan, 1996) supporting the direct cost view of the workload of sustained attention.
The Gunn et al. (2005) Study. The view that vigilance tasks impose high levels of workload upon participants and the support that view provides for a resource theory interpretation of vigilance performance have survived a serious challenge from the
indirect cost/boredom position. Recently, however, a new challenge has emerged in a
study conducted by Gunn and his associates (Gunn et al., 2005) which examined the
workload of sustained attention in a situation involving the simulated control of an
unmanned aerial vehicle (UAV) in which the vigilance signal warned participants of the
presence of an enemy aircraft in the UAV’s airspace. Upon noting that signal, they were required to make a manual detection response and then visually search a display of the surrounding airspace to locate and destroy the enemy aircraft. Given that the study employed a vigilance display that had previously been shown to be highly demanding
(Deaton & Parasuraman, 1993), and that participants were required not only to detect the vigilance signal but to perform a subsequent and simultaneous search and destroy task as well, one might anticipate that the perceived mental workload in this study would be equivalent to, or greater than, that found in prior vigilance investigations. To the contrary, workload scores in the study fell at the low end of the NASA-TLX scale, a result that was inconsistent with the previous vigilance studies and one that was incompatible with the resource model of sustained attention.
What can account for this unexpected drop in workload? Gunn and his colleagues
(2005) pointed out that the laboratory studies responsible for uncovering the high workload associated with vigilance tasks focused solely on the variables influencing signal detection and ignored the subsequent actions that might have to be taken by participants after detecting such signals, as is likely to be the case in an operational environment. Consequently, their study was the initial experimental investigation into the detection/action scenario whereby performance on a vigilance task had immediate consequences for action to follow. They suggested that the quality of vigilant behavior
could have taken on greater importance in this more dynamic context and led to lower
ratings of perceived workload, a suggestion consistent with the view that the manner in
which participants interpret a situation may have substantial impact upon the perceived
mental workload of sustained attention tasks (Hancock & Warm, 1989).
One goal for the present investigation was to test the detection/action possibility
along with three others: (1) the chance that the search/destroy task was less demanding
than the vigilance task and that participants averaged the two in assessing the workload
they experienced, a mode of responding that would lower the overall workload score; (2)
the beneficial effect of scene variation, since the search component in the Gunn et al.
(2005) study required participants to be exposed to natural ground-to-air scenes that were more variegated and interesting than those of traditional vigilance studies; and (3) the beneficial effect of knowledge of results (KR) about the accuracy of signal detections inherent in the Gunn et al. (2005) study, since KR has been found to be effective in lowering the workload of a vigilance task (Becker, Warm, Dember, & Hancock, 1995).
Insight into the potential contribution of KR in the study conducted by Gunn and his associates comes from the fact that the role of the vigilance task in that study was to eliminate the participant’s need to continually scan the surrounding air space by making such scanning necessary only when an enemy aircraft was noted by an automated program that scrutinized the sky and alerted the participant through the vigilance display.
Consequently, participants in that study monitored a display at the bottom of an otherwise blank computer screen. When a participant detected a critical signal, the screen then displayed the surrounding air space, thereby indicating a correct detection on the vigilance task. The surrounding air space was also displayed when the participant missed a critical signal,
indicating a detection failure on the vigilance task, while the airspace was not displayed
when the participant executed a detection response when no critical signal was present,
indicating a false alarm on the vigilance task.
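The display contingencies just described can be summarized in a short sketch. This is a hypothetical reconstruction of the logic for illustration, not code from the Gunn et al. (2005) study:

```python
# Hypothetical reconstruction of the knowledge-of-results (KR) contingencies
# described above: the surrounding airspace appears after a hit (correct
# detection) and after a miss, but is withheld after a false alarm.

def vigilance_outcome(signal_present, detection_response):
    """Classify the trial and decide whether the airspace screen appears."""
    if signal_present and detection_response:
        return "hit", True           # correct detection: proceed to search/destroy
    if signal_present:
        return "miss", True          # airspace shown, indicating a detection failure
    if detection_response:
        return "false alarm", False  # airspace withheld, signaling the error
    return "correct rejection", False

print(vigilance_outcome(True, True))
print(vigilance_outcome(True, False))
print(vigilance_outcome(False, True))
```

Because the airspace display follows both hits and misses, its mere appearance tells participants how they performed on the vigilance task, which is what makes KR an inherent feature of the paradigm.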
Vigilance and Stress
An Additional Component in the Fabric of Vigilance. Along with the early
view that vigilance tasks were understimulating was the belief that they were benign assignments that, except for boredom, placed little stress upon participants (Frankmann &
Adams, 1962; Nachreiner & Hänecke, 1992; Welford, 1968). Subsequent research has shown, however, that in addition to boredom (Hitchcock et al., 1999; O’Hanlon, 1981;
Sawin & Scerbo, 1995; Scerbo, 1998; Thackray, Bailey, & Touchstone, 1977), vigilance tasks induce a variety of stress responses in participants (Warm, 1993). These effects are not inconsequential. Stress, a well-documented antecedent of mental and physical declines in worker health and productivity, has become a broad research focus for the National Institute for Occupational Safety and Health (Nickerson, 1992;
Sauter, Murphy, & Hurrell, 1990). Moreover, as described by Strauch (2002), the impact of stress on worker performance is a key concern for error and accident investigation.
Given that sustained attention tasks are often an unavoidable component of many human-machine systems, an understanding of the aspects of vigilance tasks that induce high levels of stress in human operators is of considerable importance.
Stress Defined. Any examination of the stress of sustained attention is predicated on a consideration of the general elements that give rise to stress reactions and a conceptual definition of the nature of those reactions. For the most part, stress has been viewed conventionally as arising from the operator’s physical and social environment
(Hockey 1984; 1986; Wickens & Hollands, 2000) and the roles of these factors,
particularly those of the physical environment, such as sensory isolation, temperature,
noise, and vibration, have been explored in depth in regard to vigilance (Ballard, 1996;
Davies & Parasuraman, 1982; Hancock, 1984). However, the finding that individuals
may also be stressed by the tasks they need to perform led Hancock and Warm (1989) to argue that understanding the general role of stress in human performance requires revising the narrow view of stress as an independent, environmentally and/or socially derived agent that affects performance, and recognizing that tasks themselves can be a significant source of stress. In that regard, vigilance is a case in point.
The term stress has evolved from a 17th-century physics parameter describing a heavy load or weight impinging upon a man-made structure (Lazarus, 1993) to a commonly used lay term implying a generalized state of psychological or mental discomfort. Despite this oversimplification of the term in everyday vernacular, providing an empirically grounded definition of
stress in the context of human performance has proven to be a challenging endeavor
because it has been conceptualized in several different ways (Alluisi, 1982; Asterita,
1985).
Influential early views were those of Cannon (1927; 1932) and Selye (1976) in
which stress was considered to stem from the somatic reactions that occur as a
consequence of disruptions to homeostasis. According to Cannon, threats to homeostatic
equilibrium generated by external conditions (e.g., heat or cold) and internal deficiencies
(e.g., low blood sugar) result in sympathetic activation of the adrenal medulla and the release of several hormones which, in turn, give rise to stressful emotions (see Asterita,
1985, and Dunbar, 1954 for reviews). Selye (1976) proposed the existence of a broad
physiological response called the General Adaptation Syndrome (GAS), a series of
nervous and endocrine gland activities that permit the body to cope with stressful stimuli
in the most effective way possible. Within that model, antecedents that provoke such a
broad physiological response are considered stressors. Still another view considers stress as a deviation from an optimum level of arousal (Hockey, 1984, 1986; Hockey &
Hamilton, 1983). Within that model, stressors such as extremes of temperature, noise, or lack of sleep increase or decrease the degree of arousal relative to an optimum level for a given task, and thus have a negative impact on performance efficiency. The arousal view of stress has, however, received considerable criticism on the grounds of a lack of correlation among different physiological indicators, difficulties in defining the effects of stressors on arousal independent of their effects on performance, and complexities arising from findings that the same stimulus elements elicit different stress responses across individuals (Hancock, 1984; Hockey, 1986; Hockey & Hamilton, 1983; Matthews, 2001;
Szalma, 1999). More recently, researchers have suggested that the physiological and emotional reactions to stress are not so much a consequence of homeostatic imbalance as a result of the interaction of multiple brain systems. For example, animal models of stress, such as those employed by Gray (1987) and LeDoux (1996), have provided evidence through the study of drug effects and brain lesions that brain systems involving the hippocampus, the septum, and the amygdala may act to produce patterns of emotional behavior (Matthews, 2000, 2001). These models, while providing valuable insight into the complexities of stress, have also produced some potentially perplexing results. For example, in drug studies involving the use of nicotine, O’Connor (1985)
found that the ways in which individuals experienced the drug as relaxing or stimulating
were mediated by their perceptions of the gains and costs associated with smoking,
implicating a cognitive component to an otherwise physiologically centered model.
Indeed, cognitive models provide a current alternative to those linked solely to physiology.
As described by Matthews et al. (2000), contemporary cognitive approaches to stress affirm that the traditional physiologically based approaches may be overly simplistic: they do not provide a satisfactory explanation either for inter-relating the effects of different stressors or for the cognitive patterning of the stressor as a
whole. As an alternative, stress can be described in terms of person-environment
interactions. Two such models include Hockey’s cognitive state model (1984, 1986) and
Lazarus and Folkman’s transactional model of stress (1984). In Hockey’s cognitive state model (1984), stressors are seen as giving rise to cognitive patterns of activity which can produce changes in operator performance. The cognitive patterns and subsequent operational changes in performance brought about by various environmental stressors are thought to reflect shifts in the information-processing demands of a task (Hockey, 1984).
Within the Lazarus and Folkman (1984) transactional model, stress is considered to arise from individuals’ appraisals of their environment as taxing or exceeding their coping skills or threatening their physical well-being. Within that model, stress states can be seen as emanating from the dynamic interactions between individuals and the external demands that are placed upon them (Matthews, 2001; Matthews et al., 1999a). For the purposes of the present investigation, stress was viewed as a complex, multidimensional construct for which employment of the transactional model was the most appropriate.
Physiological Measures of Stress in Vigilance. One of the ways in which the
stress of sustained attention tasks has been assessed is through the measurement of
physiological indices such as catecholamine levels (adrenaline/epinephrine and norepinephrine) and corticosteroids, both of which are released into the bloodstream in elevated amounts under conditions of stress (Parasuraman, 1984; Warm, 1993). In early studies along this line conducted by Frankenhaeuser and her associates (1971, 1979), participants were assigned to one of three conditions: a challenging sensorimotor task in which they were asked to press buttons, push pedals, and pull levers; a supposedly understimulating sustained attention condition; and a control condition in which they were asked to read magazines. In both experimental conditions, participants experienced
an increase in epinephrine and norepinephrine levels over time while catecholamine
levels for participants in the control condition decreased. Additionally, results from the
Frankenhaeuser, Nordheden, Myrsten, and Post (1971) study revealed a positive
correlation between adrenaline (epinephrine) levels and performance on the vigilance
task leading the researchers to suggest that performance efficiency comes at a
physiological cost in terms of stress. Similar results have been reported in other studies from Frankenhaeuser’s laboratory (Lundberg & Frankenhaeuser, 1979; Frankenhaeuser
& Patkai, 1964). Moreover, a study by O’Hanlon (1965) found elevated adrenaline levels among participants who had prior experience with a vigilance task while they waited to begin performing that task again on another day.
Along with increases in circulating catecholamines and corticosteroids, other studies of sustained attention have revealed increases over time in muscle tension
(Carriero, 1977; Hovanitz, Chin, & Warm, 1989), physiological tremor (Galinsky, Rosa,
Warm, & Dember, 1993), and general restlessness (Galinsky et al., 1993; Thackray,
Bailey, & Touchstone, 1977). In addition, vigilance tasks have been reported to produce tension headaches in sensitive participants (Hovanitz et al., 1989; Temple et al., 1997).
Mood Measures of Stress in Vigilance. While the stress of sustained attention has been examined through the use of physiological indices, self-report measures offer another option, one which Thayer (1989) suggests may be preferable to physiological indices due to the natural bridging between self-report measures and cognitive states.
In an early attempt at assessing the subjective states associated with sustained attention, Thackray et al. (1977), using a simple 9-point scale, found that participants in a vigilance task reported feeling much less attentive and more strained, irritated, and fatigued after a vigil than before its start. These results have been replicated in numerous subsequent studies (Hovanitz et al., 1989; Lundberg, Warm, Seeman, & Porter, 1980;
Thieman, Warm, Dember, & Smith, 1989; Warm, Rosa, & Colligan, 1989). Moreover, using alternative scales such as the Yoshitake Fatigue Scale (Yoshitake, 1978) and the
Stanford Sleepiness Scale (Hoddes, Zarcone, Smythe, Phillips, & Dement, 1973), several studies have reported increased feelings of fatigue and sleepiness following a sustained attention task (Dittmar, Warm, Dember, & Ricks, 1993; Galinsky et al., 1993; Thieman et al., 1989; Warm, Dember, & Parasuraman, 1991).
Stress and the DSSQ. While these subjective assessment techniques have been valuable in uncovering some of the mood states associated with vigilance, they are limited in that they typically consider only unitary dimensions of stress. However, as Hockey (1997) and Matthews and his associates have affirmed (Matthews, 2001; Matthews et al., 2002), the structure of subjective mood states,
such as stress, is complex and a multi-dimensional framework is necessary for an
adequate description of them. Accordingly, Matthews et al. (1999b, 2002) developed an
instrument for the comprehensive assessment of stress, arousal, and fatigue in
performance contexts known as the Dundee Stress State Questionnaire (DSSQ). The
DSSQ consists of 85 items that yield eleven factor-analytically determined scales
measuring Energetic Arousal (alertness-sluggishness), Tense Arousal (nervousness-relaxation), Hedonic Tone (happiness-cheerfulness), Intrinsic Task Motivation (interest in task content), Success Motivation (performance motivation), Self-Focused Attention (self-awareness, daydreaming, etc.), Self-Esteem, Concentration, Confidence and Control,
Task Relevant Cognitive Interference (worry about task performance), and Task
Irrelevant Cognitive Interference (self-oriented thoughts that are not task-related). A second factor analysis conducted on the eleven primary factors generated three secondary factors, Worry, Engagement, and Distress. The Worry factor encompasses Self-Focused
Attention, Self-Esteem, Task-Relevant Interference, and Task-Irrelevant Interference;
Engagement includes Energetic Arousal, Intrinsic and Success Motivation, and
Concentration, while the Distress factor incorporates Tense Arousal, Hedonic Tone, and
Confidence and Control (Matthews et al., 1999a). To date, a large series of studies
utilizing the DSSQ has revealed a consistent profile in which participation in a vigilance
task results in a loss in task engagement and a heightened level of distress along with a
decline in worry (Grier et al., 2003; Helton, Dember, Warm, & Matthews, 2000; Helton
et al., 2004, 2005; Matthews, 2001; Matthews et al., 2002; Parsons et al., 2000; Szalma et
al., 2004; Szalma, Hancock, Dember, & Warm, 2006; Temple et al., 2000). Other types
of tasks reveal different profiles. For example, a reading task was found to produce an
increase in engagement and a decline in distress along with a decline in worry (Matthews et al., 2002).
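The hierarchical DSSQ structure just described can be summarized as a simple lookup table. The sketch below mirrors the groupings reported by Matthews et al. (1999a) as restated in the text; it is illustrative and not an official definition of the instrument:

```python
# Secondary DSSQ factors and the primary scales they encompass,
# following the groupings described in the text (Matthews et al., 1999a).
DSSQ_SECONDARY = {
    "Worry": [
        "Self-Focused Attention",
        "Self-Esteem",
        "Task Relevant Cognitive Interference",
        "Task Irrelevant Cognitive Interference",
    ],
    "Engagement": [
        "Energetic Arousal",
        "Intrinsic Task Motivation",
        "Success Motivation",
        "Concentration",
    ],
    "Distress": [
        "Tense Arousal",
        "Hedonic Tone",
        "Confidence and Control",
    ],
}

# The eleven primary scales are distributed across the three secondary factors.
n_scales = sum(len(scales) for scales in DSSQ_SECONDARY.values())  # 11
```

The 4 + 4 + 3 grouping recovers all eleven factor-analytically determined scales.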
The Gunn et al. (2005) Study Revisited. The Gunn et al. (2005) study, which forms the basis for this investigation, focused only on workload and did not consider the stress of sustained attention. However, it may have potentially important implications for the stress issue as well. A key aspect of the transactional model of stress (Lazarus & Folkman, 1984) featured in this investigation is the idea that stress arises from individuals’ appraisal of their inability to cope with demands placed upon them. Within that context, two aspects of the research on the stress of sustained attention are important to note: (1) as in the workload studies, all of the investigations of stress in vigilance have studied vigilance in an “abstract” format in which a detection/action sequence was absent, and (2) data are available to indicate that in many situations, task-induced stress appears to result from participants’ perceived lack of control over what transpires (Averill, 1973;
Hancock & Warm, 1989; Nickerson, 1992; Scerbo, 1998). Since the vigilance assignment in the scenario employed in the Gunn et al. (2005) study provided participants with a degree of situational control by enabling them to detect enemy threats, the stress of sustained attention in that scenario might be reduced in comparison to a traditional vigilance task in which the detection/action sequence is absent. A second goal for the present study was to test that possibility in regard to the Worry, Engagement, and
Distress dimensions of the DSSQ.
CHAPTER 2
Method
Participants
One-hundred-forty undergraduate psychology students from the University of
Cincinnati (70 women and 70 men) participated to fulfill a course requirement. They
ranged in age from 18 to 50 years, with a mean age of 20.3 years. All had normal or corrected-to-normal vision and were right-handed, as determined by self-report in conjunction with the hand used during informed consent signing.
Design
A between-subjects design was employed. Twenty participants (10 women and 10 men) were assigned at random to each of the seven experimental conditions needed to assess the detection/action, composite averaging, scene variation, and KR explanations of the workload results in the Gunn et al. (2005) study. The conditions were:

(1) a detection/action composite condition (DAC-WVS/d) in which participants performed both the vigilance and search/destroy tasks and rated workload based on the composite of both tasks, as in the Gunn et al. (2005) study;

(2) a detection/action composite condition (DAC-WV) in which participants performed both the vigilance and search/destroy tasks and rated workload only on the vigilance component, to assess the component averaging possibility;

(3) a detection/action composite condition (DAC-WS/d) in which participants performed the vigilance and search/destroy tasks and rated workload only on the search/destroy component, to assess the component averaging possibility;

(4) a vigilance only random scene condition (V-RS) in which the search/destroy component was absent but the scenes associated with that component appeared in a random manner that was independent of vigilance responses, to assess the scene variation possibility;

(5) a vigilance only KR condition (V-KR) in which participants performed only the vigilance task, the search/destroy task and its associated scenes were absent, and KR was given via differential light flashes, to assess the effects of KR;

(6) a KR control condition (V-KRC) in which participants performed only the vigilance task, the search/destroy task and its associated scenes were absent as was KR, but the differential light flashes used to provide KR appeared at random intervals during the experimental session, to control for the effects of accessory stimulation in the KR condition; and

(7) a vigilance only control condition (V-C) in which participants performed only the vigilance task and the search/destroy, scene change, KR, and random light flash elements were absent.
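The design above can be summarized compactly. The sketch below restates the seven conditions and checks the sample-size arithmetic; the condition labels come from the text, while the dictionary itself is illustrative rather than study code:

```python
# The seven between-subjects conditions (labels as defined in the text).
# Illustrative summary only, not part of the study software.
CONDITIONS = {
    "DAC-WVS/d": "vigilance + search/destroy; rated the task composite",
    "DAC-WV": "vigilance + search/destroy; rated the vigilance component",
    "DAC-WS/d": "vigilance + search/destroy; rated the search/destroy component",
    "V-RS": "vigilance only; scenes appeared at random",
    "V-KR": "vigilance only; KR via differential light flashes",
    "V-KRC": "vigilance only; random light flashes (KR control)",
    "V-C": "vigilance only; no supplementary elements",
}

PER_CONDITION = 20  # 10 women and 10 men assigned at random to each condition
total_participants = len(CONDITIONS) * PER_CONDITION  # 7 conditions x 20 = 140
```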
Apparatus
All participants served in a 32.4-min vigil divided into three consecutive 10.8-min
periods of watch during which they assumed the role of a member of a control team in
charge of a simulated squadron of UAVs flying over enemy territory. All participants,
regardless of condition, continuously monitored the repetitive presentation of digit pairs
on a video display terminal (VDT) for the occurrence of threat signals indicating ground-based enemy missile launchers that could fire on the squadron as it flew over enemy air
space. The warning display featured in this investigation was a cognitive-type vigilance
task developed by Deaton and Parasuraman (1993) and was identical to that employed in
the Gunn et al. (2005) study. Digit pairs were drawn from the set 0, 2, 3, 5, 6, and 9 and were displayed along a horizontal vector at the bottom center of a black CRT screen, as shown in Figure 1. In all experimental conditions, stimuli were exposed for 300 ms, resulting in a digit pair every 1.5 s. Participants responded to the appearance of critical
signals defined by odd/even or even/odd digit pairings by pressing the spacebar on the
computer keyboard using only their left hand. Odd/odd or even/even digit pairings were considered neutral signals and required no response from the participants. In all experimental conditions, 18 critical signals (half odd-even, half even-odd) appeared at
random intervals within each of the three watchkeeping periods (signal probability
= .042). Responses occurring within 1.25 s of the onset of a critical signal were recorded
as correct detections (hits); those outside this temporal window were considered errors of
omission (misses). Responses to non-signal events were considered errors of commission
(false alarms). As in the Gunn et al. (2005) study, each block digit was contained within a 21 mm × 33 mm rectangular area and was separated from its pair-mate at the closest point by 25 mm.
Figure 1. Examples of a neutral (safe) signal (26) and a critical (threat) signal (30).
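The reported signal probability of .042 follows directly from the timing parameters above. The quick arithmetic check below uses only values taken from the text:

```python
# Consistency check on the reported signal probability (.042).
# All values come from the Apparatus description in the text.
period_s = 10.8 * 60              # each watch period: 10.8 min = 648 s
event_interval_s = 1.5            # one digit pair every 1.5 s
events_per_period = period_s / event_interval_s   # 432 digit pairs per period
signals_per_period = 18           # critical signals per period
signal_probability = signals_per_period / events_per_period
# 18 / 432 = 0.04166..., which rounds to the reported .042
```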
For the three detection/action composite conditions (regardless of the workload rating required), a search/destroy screen appeared immediately following a correct warning detection or 1.5 s following a miss. The search/destroy screen did not appear following a false alarm. As can be seen in Figure 2, the search screen (dwell time 4.5 s) depicted an aerial view of sandy terrain (gold) with the top border depicting ocean (blue). Six black Xs representing enemy missile launchers were positioned on the sandy terrain at fixed locations, one at each corner and two on a horizontal vector located in the
middle of the screen with a 9 cm separation between the midpoints of the mid-screen
elements.
Figure 2. Example of a search/destroy screen.
Following a correct warning detection, the search/destroy screen would appear
with one of the six missile launchers highlighted in blue (see Figure 3). This was the
enemy missile launcher identified as the immediate threat to the UAV squadron. The
participant was required to lock onto (acquire) the target by placing the mouse cursor directly on the highlighted X and left-clicking, at which point the launcher would become circled in red, indicating its destruction (see Figure 4). The mouse was located immediately to the right of the computer keyboard and required the use of the index finger of the right hand.
Figure 3. Search/destroy screen following a correct detection.
Figure 4. Search/destroy screen following target acquisition and destruction.
If a participant failed to detect the warning signal, the search/destroy screen would appear as in the case of a correct warning detection. However, in this case, all six Xs (missile launchers) appeared black and the correct missile launcher for destruction was not highlighted in blue. Thus, the participant was required to guess which of the six enemy missile launchers (Xs) was the critical one for destruction. If the participant guessed correctly, the X became blue and was encompassed by a red circle signifying that a correct guess had been made (see Figure 4). If the participant guessed incorrectly, the acquired missile launcher (X) remained black and was encompassed in a red circle indicating its destruction, while the enemy missile launcher that was the actual threat became blue, signifying to the participant that an incorrect guess had been made (Figure 5). When a participant committed an error of omission, there was only a one in six chance of identifying the correct enemy missile launcher and a five in six chance of failure. Consequently, there was an advantage to the participant for continued performance of the warning detection component of the task, a parameter employed to encourage continuous task engagement by participants.
Figure 5. Search/destroy screen following an incorrect guess at missile launcher acquisition.
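With six identical launchers and a single true threat, a blind guess after a missed warning succeeds one time in six (about .167). The brief simulation below illustrates the guessing odds; it is illustrative only and not part of the study software:

```python
import random

# Monte Carlo illustration of the guessing odds after a missed warning:
# one of six identical launchers is the true threat, so a blind guess
# succeeds with probability 1/6.
random.seed(0)
N_LAUNCHERS = 6
trials = 100_000
hits = sum(
    random.randrange(N_LAUNCHERS) == random.randrange(N_LAUNCHERS)
    for _ in range(trials)
)
hit_rate = hits / trials  # converges on 1/6 as the number of trials grows
```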
Participants in the V-RS condition were informed that their only responsibility was to alert the mission control’s commanding officer of threat warnings presented on the vigilance display and that the occasional appearances of colored scenes on their display were computer system checks requiring no response from them. The “system check” scenes were identical to the search/destroy scenes illustrated in Figure 2. They appeared at random intervals 18 times during each watchkeeping period with a dwell time of 4.5 s.
Participants in the remaining conditions were also informed that their only responsibility was to signify their detection of threat warnings on the vigilance display to the mission control commander. Those in the V-KR condition were informed that they would be receiving KR in regard to their performance efficiency and that KR would appear in the form of differential green or red light flashes to signify correct detections or misses, respectively. They were also advised that the absence of a green light flash after a detection response signified a false alarm. As shown in Figure 6, the differential light flashes were presented as rectangular boxes (9.0 cm x 6.5 cm) that appeared for a brief duration (200 ms) directly over the digit pairs immediately following a correct detection or 1.26 s after signal onset in the case of a miss.
Figure 6. Examples of knowledge of results (correct detection and miss, respectively).
Participants in the V-KRC condition were told that the differential light flashes were computer system checks requiring no response from them. In this
condition, the light flashes occurred at random intervals 18 times per watchkeeping
period with dwell times of 200 ms. Participants in the V-C condition were only exposed
to the Deaton and Parasuraman (1993) vigilance task described above and none of the
supplementary instructions given to the other groups were necessary. Complete
descriptions of the instructions given to each experimental group are presented in
Appendix B.
Procedure
Upon arrival, participants were provided a brief overview of the study and asked
to complete an informed consent form (see Appendix C) before being administered a
paper/pencil pre-task version of the DSSQ. Upon completion of the pre-DSSQ,
participants were tested individually in a 2.0 × 1.9 × 1.9 m Industrial Acoustics Sound Chamber. They were seated in front of the VDT, which was mounted at eye level at a viewing distance of 60 cm. Ambient illumination in the chamber was 3.48 cd/m², provided by a 25-watt light bulb housed in a parabolic reflector positioned above and behind the participant to minimize glare on the VDT. Following the presentation of computerized task instructions, participants were given up to three practice trials with the vigilance task only (3.15 min; signal probability = .14). Practice trials ended when the participant achieved an 83% correct detection rate while committing no more than 5% false alarms. Mean percentages of final qualifying detection rates among the seven experimental groups varied from 89% to 93% and did not differ significantly across groups, F (6, 133) = 1.094, p > .05. A fourth practice trial on an abbreviated version
(3.15 min; signal probability = .14) of the specific condition they would experience during the main portion of the testing session was given to participants in each of the
experimental groups except those in the V-C group. The fourth practice trial was not necessary for participants in the latter group since no new information was introduced to them beyond that which occurred during the initial vigilance practice trials.
Upon completion of the final practice session, participants were given a three-minute break before beginning the main portion of the experimental session. Instructions to the participant indicated that threat signals would occur less frequently than they did
during the practice sessions. Stimulus presentation and response recording were
orchestrated by a Dell personal computer. The software was written in C++ using
OpenGL and Microsoft’s Visual Studio for display generation.
Following completion of the experimental session, participants were administered
a computerized version of the NASA-TLX (the NASA-TLX and instructions for its
completion can be found in Appendix A) followed by a paper/pencil post-task version of
the DSSQ. Copies of the pre- and post-versions of the DSSQ can be found in Appendix D.
All participants were asked to surrender timepieces, cell phones, and pagers upon arriving
for the study. They had no knowledge of the duration of the experimental session other
than it would not exceed 120 minutes.
CHAPTER 3
Results
Vigilance Performance
Correct Detections. Mean percentages of correct detections and associated
standard errors for the seven experimental groups are presented in Table 1.
Table 1. Mean percentages of correct detections and standard errors for all groups within each period of watch (standard errors are in parentheses).
Groups        Period 1        Period 2        Period 3        M
DAC-WVS/d     90.40 (1.81)    86.85 (1.94)    89.85 (1.74)    89.03
DAC-WV        90.55 (1.63)    86.55 (2.09)    86.90 (2.33)    88.00
DAC-WS/d      87.95 (1.82)    85.60 (2.41)    89.30 (1.84)    87.62
V-RS          85.20 (2.28)    81.50 (2.93)    80.95 (3.91)    82.55
V-KR          91.80 (1.46)    85.15 (2.70)    88.75 (2.30)    88.57
V-KRC         89.55 (2.02)    83.85 (2.78)    83.20 (2.87)    85.53
V-C           86.60 (2.67)    81.15 (2.83)    84.10 (3.44)    83.95
M             88.86           84.38           86.15           86.46
Recall that the seven experimental groups were formulated to test possible
explanations for the workload differences in the Gunn et al. study (2005). In terms of
performance efficiency, however, the groups fall into two general categories: those in
which signal detection on the vigilance warning task was accompanied by a subsequent
action component (the first three groups in the table) and those for which a subsequent
action component was absent (the last four groups in the table). Given that the
detection/action scenario was a major innovation in the Gunn et al. (2005) study and a principal research theme in this study, the detection data were analyzed in terms of that scenario. Toward that end, preliminary analyses of variance (ANOVAs) were conducted
to determine if the three detection/action groups differed significantly from each other
and if the four groups in which only vigilance was involved differed significantly from each other. In both cases, the between-groups ANOVAs were not significant across periods of watch: F (2, 57) = 0.264, p > .05, for the detection/action groups, and F (3, 76) = 1.360, p > .05, for the detection-only groups.
Consequently, the scores were combined and then averaged across groups for the first
three conditions and for the last four conditions in Table 1 to yield two groups, a
detection/action condition and a detection-only condition, respectively. Mean percentages of correct detections for the two derived groups reflecting the detection/action scenario are plotted as a function of periods of watch in Figure 7.
Figure 7. Mean percentages of correct detections for the detection/action and detection-only groups as a function of periods of watch. Error bars are standard errors.
A 2 (groups) × 3 (periods) mixed-ANOVA of the data of Figure 7 revealed significant main effects for groups, F (1, 138) = 4.362, p < .05, and periods, F (1.99, 274.51) = 9.560, p < .001. The interaction between these factors lacked significance, F (1.99, 274.51) = 1.250, p > .05. It is evident in the figure that participants in the
detection/action group detected significantly more warning signals (M = 88.22, SE =
1.11) than those in the detection-only group (M = 85.15, SE = 0.96). Supplementary
Scheffé tests with an alpha level of .05 revealed that across groups the mean percentage
of correct detections during period one (M = 88.96, SE = 0.77) was significantly greater
than that for period two (M = 84.62; SE = 0.96) and for period three (M = 86.47, SE =
1.05), while the latter two periods did not differ significantly from each other.
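Because each of the seven groups contained the same number of participants (n = 20), the derived-group means reported above are simple averages of the per-group means in Table 1. A quick check (values from Table 1):

```python
# Derived-group means as simple averages of the per-group means in Table 1
# (equal n = 20 per group, so unweighted averaging is appropriate).
detection_action = [89.03, 88.00, 87.62]        # DAC-WVS/d, DAC-WV, DAC-WS/d
detection_only = [82.55, 88.57, 85.53, 83.95]   # V-RS, V-KR, V-KRC, V-C

da_mean = sum(detection_action) / len(detection_action)  # about 88.22
d_only_mean = sum(detection_only) / len(detection_only)  # 85.15
```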
The between-groups factor in this analysis was based upon unequal Ns, which is a problem in utilizing the ANOVA (Kirk, 1995). However, the computer program employed for calculating degrees of freedom in this ANOVA and all subsequent ANOVAs in this study (SPSS 14.0) featured a Type III sum of squares procedure, which has the advantage of being invariant to the cell frequencies and therefore obviates the problem of unequal cell frequencies (Field, 2005; Maxwell & Delaney, 2004). Where appropriate, Box’s epsilon was employed in this and subsequent analyses to correct for violations of the sphericity assumption (Maxwell & Delaney, 2004). Complete summaries of all ANOVAs in this study are provided in Appendix E.
False Alarms. Mean percentages of false alarms and associated standard errors
for the seven groups are shown for each period of watch in Table 2. It is evident in the
table that false alarms were rare, not exceeding one percent in any experimental condition.
Consequently, false alarm data were not analyzed further.
Table 2. Mean percentages of false alarms and standard errors for all groups within each period of watch (standard errors are in parentheses).
Groups        Period 1       Period 2       Period 3       M
DAC-WVS/d     0.60 (0.14)    0.48 (0.08)    0.36 (0.08)    0.48
DAC-WV        0.47 (0.07)    0.47 (0.11)    0.42 (0.09)    0.45
DAC-WS/d      0.45 (0.09)    0.46 (0.09)    0.36 (0.07)    0.42
V-RS          0.47 (0.14)    0.39 (0.06)    0.27 (0.06)    0.38
V-KR          0.46 (0.09)    0.34 (0.08)    0.35 (0.08)    0.38
V-KRC         0.34 (0.06)    0.25 (0.05)    0.25 (0.04)    0.28
V-C           0.47 (0.06)    0.30 (0.06)    0.24 (0.04)    0.34
M             0.47           0.38           0.32           0.39
Workload: NASA-TLX
Global Workload. Mean global workload scores on the NASA-TLX and their associated standard errors are presented for the seven experimental groups in Figure 8. It is clear in the figure that the global workload scores were nearly identical and fell at the upper end of the TLX for all groups but the DAC-WS/d group in which participants
performed both the vigilance and search/destroy tasks but only rated the search/destroy
element of the task composite. The mean global workload rating for that group (M =
28.15) fell at the low end of the TLX scale.
Figure 8. Mean global workload ratings on the NASA-TLX for the seven experimental groups. Error bars are standard errors.
A one-way between-groups ANOVA of the global workload data revealed a statistically significant groups effect, F (6, 133) = 14.330, p < .001. Supplementary Scheffé tests performed on the means with an alpha level of .05 revealed that all six of the groups that fell at the high end of the TLX differed significantly from the DAC-WS/d group but not from one another.
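The between-groups F reported above can be computed directly from raw global-workload scores. A minimal pure-Python sketch with invented scores for three hypothetical groups (the actual analysis was run on the ratings of all 140 participants across the seven groups):

```python
def one_way_anova_f(groups):
    """Between-groups F statistic for a one-way ANOVA.

    `groups` is a list of groups, each a list of individual scores.
    Returns F = MS_between / MS_within.
    """
    k = len(groups)                            # number of groups
    n_total = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-groups SS: size-weighted squared deviations of the
    # group means from the grand mean.
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-groups SS: squared deviations of each score from its
    # own group mean.
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )

    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Invented global-workload scores for three hypothetical groups.
scores = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]
print(one_way_anova_f(scores))  # F(2, 6) = 13.0 for these scores
```

A significant F would then be followed, as above, by post hoc comparisons such as Scheffé tests.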
Weighted Workload Ratings. Mean weighted workload ratings for the six NASA-TLX subscales were also determined for the seven experimental groups. These values and their corresponding standard errors are presented in Table 3. It is evident in the table that, overall, Mental Demand (M = 238.4) and Temporal Demand (M = 180.8) were the principal determinants of workload. Physical Demand contributed the least (M = 18.5) and consequently was dropped from an ANOVA of the subscale scores in order to meet the independence assumption of the analysis.
Table 3. Means and standard errors (in parentheses) for the weighted subscales of the NASA-TLX for all experimental conditions (MD = Mental Demand, PD = Physical Demand, TD = Temporal Demand, P = Performance, E = Effort, F = Frustration).

Condition    MD              PD             TD              P               E               F               M
DAC-WVS/d    267.25 (30.2)   15.25 (9.0)    174.25 (31.1)   112.50 (23.1)   182.25 (26.9)   147.75 (30.3)   149.88
DAC-WV       296.75 (19.3)   30.00 (18.9)   197.25 (28.2)    79.50 (15.9)   155.75 (21.8)   181.50 (36.8)   156.79
DAC-WS/d      58.00 (16.2)   16.00 (9.3)     73.75 (22.5)    87.75 (19.4)    92.75 (23.1)    94.25 (23.3)    70.42
V-RS         245.50 (29.4)   19.25 (16.0)   209.00 (32.1)   111.75 (22.8)   154.50 (21.0)   219.50 (40.8)   159.92
V-KR         241.75 (30.9)    8.50 (5.5)    175.25 (29.8)    91.25 (15.5)   149.75 (23.5)   208.00 (31.7)   145.75
V-KRC        305.25 (27.4)   25.50 (13.6)   178.50 (26.7)    95.50 (20.9)   178.25 (20.0)   153.50 (23.4)   156.08
V-C          254.25 (26.1)   15.00 (14.0)   257.75 (29.5)    81.75 (14.1)   149.75 (22.0)   153.75 (31.2)   152.04
M            238.39          18.50          180.82           94.29          151.86          165.46          141.55
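The weighted subscale scores in Table 3 follow the standard NASA-TLX procedure (Hart & Staveland, 1988): each subscale rating is multiplied by a weight of 0-5 obtained from 15 pairwise comparisons, and the global workload score is the sum of the six weighted scores divided by 15 (e.g., the DAC-WS/d row sums to 422.5, and 422.5/15 ≈ 28.2, close to that group's reported global rating of 28.15). A minimal sketch, assuming the conventional 0-100 rating scale; the ratings and weights below are invented for illustration:

```python
# NASA-TLX subscales: Mental Demand, Physical Demand, Temporal Demand,
# Performance, Effort, Frustration.
SUBSCALES = ("MD", "PD", "TD", "P", "E", "F")

def tlx_weighted(ratings, weights):
    """Per-subscale weighted scores and the global workload score.

    `ratings` maps subscale -> 0-100 rating; `weights` maps
    subscale -> 0-5 pairwise-comparison weight (weights sum to 15).
    """
    assert sum(weights[s] for s in SUBSCALES) == 15, "weights must sum to 15"
    weighted = {s: ratings[s] * weights[s] for s in SUBSCALES}
    global_workload = sum(weighted.values()) / 15
    return weighted, global_workload

# Invented ratings and weights for one hypothetical participant.
ratings = {"MD": 70, "PD": 10, "TD": 55, "P": 40, "E": 50, "F": 45}
weights = {"MD": 5, "PD": 0, "TD": 4, "P": 2, "E": 3, "F": 1}
weighted, gw = tlx_weighted(ratings, weights)
print(weighted["MD"])  # 350: a heavily weighted, highly rated subscale
print(gw)              # global workload on the 0-100 scale
```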
The data of Table 3 were subjected to a 7 (groups) × 5 (subscales) mixed ANOVA with repeated measures on the last factor. The ANOVA revealed significant main effects for groups, F (6, 133) = 14.075, p < .001, and subscales, F (3.40, 451.64) = 26.218, p < .001, and a significant interaction between these factors, F (20.38, 451.64) = 1.981, p < .01 (the fractional degrees of freedom reflect a sphericity correction to the repeated-measures factor). The Groups × Subscales interaction is displayed in Figure 9, wherein the scores for each experimental group are presented for each subscale.
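The fractional degrees of freedom in the subscales and interaction effects (e.g., F (3.40, 451.64) rather than the nominal F (4, 532)) are the signature of a sphericity correction, in which the nominal dfs are multiplied by an epsilon estimated from the covariance matrix of the repeated measures. A minimal pure-Python sketch of the Greenhouse-Geisser epsilon, one common choice; the covariance matrix below is illustrative, not the study's data:

```python
def gg_epsilon(cov):
    """Greenhouse-Geisser epsilon from a k x k sample covariance
    matrix of the repeated measures (list of lists).  epsilon = 1
    means sphericity holds; both ANOVA dfs are multiplied by it."""
    k = len(cov)
    row = [sum(r) / k for r in cov]
    col = [sum(cov[i][j] for i in range(k)) / k for j in range(k)]
    grand = sum(row) / k
    # Double-center the matrix: t_ij = s_ij - row_i - col_j + grand.
    t = [[cov[i][j] - row[i] - col[j] + grand for j in range(k)]
         for i in range(k)]
    trace = sum(t[i][i] for i in range(k))
    trace_sq = sum(t[i][j] * t[j][i] for i in range(k) for j in range(k))
    return trace ** 2 / ((k - 1) * trace_sq)

# A spherical covariance matrix (here, the identity) gives epsilon
# of about 1.0, i.e., no correction; an epsilon of 0.85 with k = 5
# subscales would shrink the nominal numerator df of 4 to the 3.40
# reported in the text.
identity = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
print(gg_epsilon(identity))  # approximately 1.0
```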
Figure 9. Mean workload ratings for the subscales of the TLX for the seven experimental groups. Error bars are standard errors.
To explore the nature of the Groups × Subscales interaction, separate one-way ANOVAs were computed for the groups within each of the TLX subscales. Significant group differences were found only within the Mental Demand and Temporal Demand subscales (p < .05). No significant group differences were obtained within the Performance, Effort, and Frustration subscales (p > .05). It is evident in the figure that the significant groups effect found within the Mental Demand and Temporal Demand subscales stemmed primarily from low scores for the DAC-WS/d group, which rated workload only on the search/destroy component of the detection/action composite.
Stress: DSSQ
To measure task-related changes in mood, arousal and fatigue, standardized
scores for individual participants on the pre-test and post-test administrations of the
Worry, Engagement, and Distress dimensions of the DSSQ were obtained. Factor scores
for these three higher-order factors were estimated from regression equations using
weights derived from a previous study providing normative data obtained from a large
sample of British participants (Matthews et al., 2002). That is, each factor score is estimated as a weighted sum of first-order scales, using scale values that were standardized against the normative data. Factor scores are distributed with a mean of 0 and standard deviation of 1, so that the values calculated represent deviations from normative values in standard deviation units. Pre-task and post-task values for the three stress state factors were calculated separately.
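The regression-based estimation described above reduces to z-scoring each first-order scale against the normative mean and standard deviation and summing the z scores with the factor's regression weights. A minimal sketch in Python; the scale names, norms, and weights below are invented for illustration (the actual weights are those derived by Matthews et al., 2002):

```python
def factor_score(raw_scores, norms, weights):
    """Estimate a higher-order stress-state factor score as a
    weighted sum of first-order scale scores standardized against
    normative data.  Result is in normative SD units (mean 0, SD 1)."""
    score = 0.0
    for scale, raw in raw_scores.items():
        mean, sd = norms[scale]
        z = (raw - mean) / sd          # deviation from the norm in SDs
        score += weights[scale] * z    # regression weight for the factor
    return score

# Invented example: two first-order scales feeding a "Distress" factor.
norms = {"tension": (12.0, 4.0), "hedonic_tone": (20.0, 5.0)}
weights = {"tension": 0.6, "hedonic_tone": -0.5}

pre = factor_score({"tension": 16.0, "hedonic_tone": 20.0}, norms, weights)
post = factor_score({"tension": 20.0, "hedonic_tone": 15.0}, norms, weights)
print(post - pre)  # post-minus-pre change score, as in Table 5
```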
For each of the seven experimental groups, mean standardized pre-vigil scores and their associated standard errors on each DSSQ dimension are presented in Table 4.
Table 4. Mean standardized pre-vigil scores for all experimental groups on each DSSQ dimension (standard errors are in parentheses).

Groups        Worry          Engagement     Distress       M
DAC-WVS/d     .297 (.203)    .554 (.158)    -.210 (.184)   .214
DAC-WV        .253 (.176)    .288 (.165)    -.299 (.195)   .080
DAC-WS/d      .478 (.240)    .098 (.184)    -.263 (.278)   .104
V-RS          .303 (.242)    .323 (.151)    -.226 (.245)   .133
V-KR          .283 (.227)    .545 (.160)    -.752 (.144)   .025
V-KRC         .252 (.170)    .570 (.162)    -.374 (.192)   .149
V-C           .448 (.223)    .211 (.180)    -.605 (.170)   .018
M             .331           .370           -.390          .103
One-way ANOVAs conducted on the data for each DSSQ dimension indicated that, at the outset of the study, the seven experimental groups did not differ significantly from each other on any of the dimensions (p > .05 in each case).
Table 5 presents mean post – pre difference scores and their associated standard errors for each of the seven experimental groups on each DSSQ dimension.
Table 5. Mean standardized change scores for all experimental groups on each DSSQ dimension (standard errors are in parentheses).

Groups        Worry          Engagement     Distress       M
DAC-WVS/d     -.101 (.16)    -.021 (.21)    1.000 (.24)    .293
DAC-WV        -.310 (.19)     .016 (.21)     .964 (.14)    .223
DAC-WS/d      -.346 (.24)     .027 (.23)     .738 (.30)    .140
V-RS          -.136 (.21)    -.229 (.22)    1.110 (.26)    .248
V-KR           .039 (.21)    -.498 (.21)    1.330 (.20)    .290
V-KRC         -.245 (.19)    -.327 (.17)    1.180 (.19)    .203
V-C           -.368 (.19)    -.243 (.19)    1.360 (.17)    .250
M             -.228          -.182          1.097          .230
To explore the possibility that the opportunity for control would moderate task-induced stress, the stress data were grouped in the same way as the performance data: participants who experienced the detection/action imperative formed one category, and those who were denied the action imperative formed the other. As in the case of performance efficiency, the former category consisted of the first three groups of Table 5, while the last four groups in the table composed the latter category. Preliminary one-way ANOVAs performed on the data for each detection/action category within each of the DSSQ dimensions indicated that there were no significant post-pre differences among composite group members within each category on any of the DSSQ scales, p > .05 in each case.
Mean post-pre difference scores for the two derived groups reflecting the
detection/action scenarios are presented for each of the DSSQ dimensions in Figure 10.
Figure 10. Mean standardized change scores for the detection/action and detection-only groups on the dimensions of the DSSQ. Error bars are standard errors.
It is evident in the figure that both groups were less worried after the experimental session than prior to its start, as indicated by the fact that the error bars for each group did not encompass zero (no change). Similarly, it is evident that both groups were more distressed at the end of the session than at the beginning and that the level of distress was greater among the detection-only participants than among those afforded a detection/action opportunity. It is also evident in the figure that participants in the detection-only group lost engagement over the course of the session, so that they were less engaged at the completion of the session than their detection/action cohorts. A 2 (groups) × 3 (dimensions) mixed ANOVA of the data of Figure 10 revealed a significant main effect for dimensions, F (1.749, 241.393) = 92.106, p < .001, and a significant Groups × Dimensions interaction, F (1.749, 241.393) = 4.994, p < .01. Separate one-way ANOVAs computed on the data for each of the DSSQ dimensions revealed that participants in the detection/action group were significantly more engaged and significantly less distressed than those in the detection-only group, F (1, 138) > 4.0, p < .05 in both cases. The groups did not differ significantly on the worry dimension, F (1, 138) < 1.00.
CHAPTER 4
Discussion
In a recent study, Gunn and his associates (2005) asked participants to simulate
the role of UAV operators and use a vigilance task to alert them to the presence of enemy
aircraft in the vicinity of their vehicles, which they could then track down and destroy. A
key finding in that study was the low workload ratings that the participants gave on the
NASA-TLX to the combined vigilance/tracking task. A finding of that sort was in stark
contrast to the results of a substantial number of studies indicating that the cost of mental
operations in vigilance is substantial, as reflected in high workload ratings on the NASA-
TLX scale (Deaton & Parasuraman, 1993; Dittmar, Warm, Dember, & Ricks, 1993;
Finomore, 2006; Grier et al., 2003; Grubb et al., 1995; Helton et al., 2005; Hollander et
al., 2004; Matthews, 1996; Miller et al., 1998; Parsons et al., 2000; Scerbo, Greenwald,
& Sawin, 1993; Schoenfeld & Scerbo, 1997; 1999; Szalma et al., 2004; Temple et al.,
2000; Warm et al., 1996; Warm, Dember, & Parasuraman, 1991).
The primary purpose of this investigation was to test several hypotheses regarding the low level of perceived mental workload reported in the Gunn et al. (2005) study. The debate about the sources of the low workload in that study was set aside in the present case, however, by the inability to confirm the findings of the earlier investigation.
Consistent with past research, and in contrast to Gunn et al. (2005), workload ratings in
all conditions in this study except one were in the upper level of the NASA-TLX scale.
The sole exception was in the DAC-WS/d condition wherein participants were asked to
focus only upon the search/destroy task itself in rating workload. At first glance one
might conclude that this finding supports the contention that the observers in the Gunn et
al. (2005) study could have made their workload ratings based only upon the search
component of the task. However, that consideration can be dispelled, given that the participants in this study who rated the workload of the task in its entirety, the condition paralleling that of the Gunn et al. (2005) study, rated that workload as high.
Moreover, for the purposes of this study, the low workload obtained in the DAC-WS/d condition is potentially quite important: it indicates that the current participants did not simply have a generalized response bias toward high ratings on the NASA-TLX scale and that the workload scale was indeed sensitive to differential experimental conditions.
Also consistent with the body of past research was the finding that Mental Demand was generally one of the major determinants of workload in this experiment (Warm et al., 1996).
The inability to confirm the Gunn et al. (2005) finding was unanticipated, since the present study was similar to the earlier one in what would appear to be major
task parameters. Both studies featured a simulated UAV control assignment. The warning
vigilance task and its dimensions of signal probability and watch duration were identical
in both studies. As in the earlier study, the present participants were college students who
were required to meet a strong performance screening criterion before being exposed to
the main task. Perhaps the major differences between the two experiments were the
contexts in which they were performed and the nature of the search/destroy displays
employed. The present study was conducted in a university laboratory setting while the
earlier experiment took place in a military setting at Wright-Patterson Air Force Base.
Moreover, the search/destroy display in the present study utilized X’s as abstract
representations of ground-based enemy missile launchers and a mouse-click to destroy
the target, while Gunn and his associates (2005) utilized a more representative display
involving sky scenes and an aiming reticle for target destruction. It is possible that
differences in realism and consequent face validity may be the factors that underlie the
disparate outcomes of the two investigations.
A key feature of the Gunn et al. (2005) study was the fact that participants were
able to use their detection of critical signals on the vigilance task to accomplish a
corresponding goal: to protect their UAVs from enemy action. As noted in Chapter One,
this was the initial experiment on the workload of sustained attention to utilize a
detection/action scenario. In all of the other investigations in this area signal detection
was more abstract; it had no meaning beyond the act of detection itself. Although the
detection/action theme did not have a differential effect upon the groups that rated
workload in this study, it did have a substantial effect upon performance efficiency and
upon task-induced stress.
In order to explore the effects of the detection/action link on performance efficiency, the seven experimental conditions that comprised this investigation were combined into two broad groupings: those that were required to perform a task-relevant concrete act based upon the detection of critical signals on the vigilance display and those
whose responsibility was limited to the vigilance task alone. The overall level of threat
warnings (signal detections) was significantly higher in the former group than in the latter.
On a theoretical level, a result of this sort is consistent with the ecological approach to
human factors which emphasizes the information transactions between human operators
and the systems that they encounter, especially as they relate to perceiving situations for
planning and action (Flach, Hancock, Caird, & Vicente, 1995). A central aspect of this
approach is the notion that perceivers not only detect the properties of environmental
stimuli, they are also aware of what those properties afford or furnish for potential action
and that they pay closest attention to stimulus information that specifies action
possibilities (Gibson, 1979). Along this line, signals on the vigilance display would have
greater potential for subsequent action in the detection/action group than in the detection- only group, leading to greater levels of attention to the vigilance display and perhaps
greater levels of motivation to perform the vigilance task in the detection/action condition.
Support for a view of this sort comes from the finding that participants in the detection/action condition retained task engagement, as reflected by the DSSQ scale, throughout the experiment, while those in the detection-only group became less engaged in the task over the course of the vigil.
An alternate explanation of the higher level of performance in the detection/action group is that the opportunity to destroy the enemy missile launchers gave participants in that group something else to do and, in that way, relieved the tedium often associated with vigilance tasks, especially since performance efficiency in sustained attention tasks varies inversely with the degree of boredom experienced by the participants (Hitchcock et al., 1999; Jerison, Pickett, & Stenson, 1965; Sawin & Scerbo, 1995; Scerbo, 1998; Thackray, Bailey, & Touchstone, 1977). This seems unlikely, however, since, as was the case in the
Gunn et al. (2005) study, participants in the detection/action condition in this investigation were never relieved of the need to attend to the vigilance display even when engaged in the search and destroy aspect of their assignment.
Although the overall level of performance in the detection/action group was greater than that of the detection-only group, both groups showed a similar decrement in
signal detections over time. As described by Johnson and Proctor (2004) and by Warm and his associates (Grier et al., 2003; Helton et al., 2005; Warm et al., 1996), a temporal decline in performance efficiency coupled with high workload scores on the NASA-TLX supports the idea advocated by Parasuraman and his colleagues (Davies & Parasuraman,
1982; Parasuraman, 1984; Parasuraman & Davies, 1977; Parasuraman, Warm, & Dember,
1987; Warm & Dember, 1998) that the information-processing demanded by vigilance tasks is high and that the vigilance decrement reflects the depletion of information-processing resources over time. Currently, the ecological and the information-processing approaches are considered to be competing theoretical models in the general realm of perception (Coren, Ward, & Enns, 2004). However, when taken together, these approaches may provide a blended account for the present results: the meaningful affordances offered by the detection/action link may have led to increased attention and motivation on an overall level, while the consumption of attentional resources without replenishment explains the generalized decline in detection efficiency over time on task.
In addition to exploring the sources of low workload in the Gunn et al. (2005) study, the present investigation was designed to test the possibility that the detection/action link would serve to reduce the stress associated with the need to sustain attention in comparison to a traditional vigilance case in which the detection/action sequence was absent. The logic for that prediction was tied to the finding that task- induced stress in a variety of situations appears to result from participants’ perceived lack of control over what transpires and their consequent inability to develop adequate coping strategies to meet the demands placed upon them (Averill, 1973; Hancock & Szalma,
2007; Hancock & Warm, 1989; Nickerson, 1992; Scerbo, 1998; Szalma et al., 2004).
Thus, since participants in the detection/action condition of this study were provided with
a degree of situational control that would enable them to counter enemy threats, a degree
of control that was not available to participants in the detection-only group, it was
anticipated that stress responses would be attenuated in the detection/action group
relative to their detection-only cohorts. The results with the DSSQ showed that performing the experimental task elicited changes in subjective state and, consistent with expectation, participants in the detection/action group reported the task to be less stressful
than those in the detection-only group. The differential stress reactions were indexed by
lower ratings of distress on the part of the detection/action group and by the fact that
participants in that group maintained task engagement over the course of the vigil while those in the detection-only group became less engaged over time on task. In regard to the task engagement dimension, previous work by Matthews and Falconer (2002) has shown
that appraisal of tasks as challenging predicts higher task engagement. In addition to its
relevance to perceived control, the capacity in the detection/action group to thwart enemy
threats through close scrutiny of the vigilance display may have increased the challenge
of the detection/action scenario, which in turn, served to increase task engagement in the
detection/action group.
Given the general tendency for participants to find vigilance tasks to be stressful
(Hancock & Warm, 1989; Szalma et al., 2004; Warm, 1993), the fact that participants in
both the detection/action and the detection-only groups were significantly less worried at
the end of the experimental session than at its outset may seem anomalous. However, this
result is typical in studies using the DSSQ in a wide variety of experimental tasks and
most likely reflects non-specific learning effects in which participants become familiar
with task requirements and come to understand that they will not suffer physical or
emotional harm (Matthews et al., 2002).
In sum, although this investigation did not resolve the reasons for the low
workload ratings in the Gunn et al. (2005) study, it was successful in highlighting the importance of the detection/action scenario for the level of signal detection and the stress associated with the performance of vigilance tasks. Based upon the present results, it would appear imperative to focus on the implications of that scenario in future vigilance studies.
References
Alikonis, C.R., Warm, J.S., Matthews, G., Dember, W.N., & Kellaris, J.J. (2002, March).
Effects of music on the workload and boredom of sustained attention. Paper
presented at the annual meeting of the Southern Society for Philosophy and
Psychology, Nashville, TN.
Alluisi, E.A. (1982). Stress and stressors, commonplace and otherwise. In E.A. Alluisi &
E.A. Fleishman, (Eds.), Human performance and productivity: Vol. 3. Stress and
performance effectiveness (pp. 1-10). New York: Erlbaum.
Asterita, M.F. (1985). The physiology of stress. New York: Human Sciences Press, Inc.
Aston-Jones, G. (1985). Behavioral functions of locus coeruleus derived from cellular
attributes. Physiological Psychology, 13, 118-126.
Averill, J.R. (1973). Personal control over aversive stimuli and its relationship to stress.
Psychological Bulletin, 80, 286-303.
Baker, C.H. (1962). Man and radar displays. New York: Macmillan.
Ballard, J.C. (1996). Computerized assessment of sustained attention: A review of factors
affecting vigilance performance. Journal of Clinical and Experimental
Neuropsychology, 18, 843-863.
Becker, A.B., Warm, J.S., Dember, W.N., & Hancock, P.A. (1995). Effects of jet engine
noise and performance feedback on perceived workload in a monitoring task. The
International Journal of Aviation Psychology, 5, 49-62.
Cannon, W.B. (1927). The James-Lange theory of emotions: A critical examination and
an alternative theory. American Journal of Psychology, 39, 106-124.
Cannon, W.B. (1932). The wisdom of the body. New York: W.W. Norton & Co.
Carriero, N. (1977). Physiological correlates of performance in a long duration repetitive
task. In R.R. Mackie (Ed.), Vigilance: Theory, operational performance, and
physiological correlates (pp. 307-330). New York: Plenum Press.
Colquhoun, W. P. (1967). Sonar target detection as a decision process. Journal of Applied
Psychology, 51, 187-190.
Colquhoun, W.P. (1977). Simultaneous monitoring of a number of auditory sonar outputs.
In R.R. Mackie (Ed.), Vigilance: Theory, operational performance, and
physiological correlates (pp. 163-188). New York: Plenum Press.
Coren, S., Ward, L.M., & Enns, J.T. (2004). Sensation and perception (6th ed.). Hoboken,
NJ: Wiley.
Davies, D.R., & Parasuraman, R. (1982). The psychology of vigilance. London:
Academic Press.
Deaton, J.E., & Parasuraman, R. (1993). Sensory and cognitive vigilance: Effects of age
on performance and subjective workload. Human Performance, 6, 71-97.
Dember, W.N., & Warm, J.S. (1979). Psychology of perception (2nd ed.). New York: Holt,
Rinehart & Winston.
Dickman, S.J. (2002). Dimensions of arousal: Wakefulness and vigor. Human Factors,
44, 429-442.
Dittmar, M.L., Warm, J.S., Dember, W.N., & Ricks, D.F. (1993). Sex differences in
vigilance performance and perceived workload. Journal of General Psychology,
120, 309-322.
Dunbar, F. (1954). Emotion and bodily changes. New York: Columbia University Press.
Eggemeier, F.T. (1988). Properties of workload assessment techniques. In P.A. Hancock
& N. Meshkati (Eds.), Human mental workload (pp. 41-61). Amsterdam: North-
Holland.
Farmer, E., & Brownson, A. (2003). Review of workload measurement, analysis, and
interpretation methods. Brussels: European Organization for the Safety of Air
Navigation (Eurocontrol): Report for Eurocontrol Integra Program.
Field, A.P. (2005). Discovering statistics using SPSS (2nd ed.). Thousand Oaks, CA: Sage.
Finomore, V.S. (2006). Effects of feature presence/absence and event asynchrony on
vigilance performance and perceived mental workload. Unpublished master's
thesis, University of Cincinnati, Cincinnati, Ohio.
Flach, J., Hancock, P., Caird, J., & Vicente, K. (Eds.). (1995). Global perspectives on the
ecology of human-machine systems: Vol. 1. Hillsdale, NJ: Erlbaum.
Frankenhaeuser, M., Nordheden, B., Myrsten, A.L., & Post, B. (1971).
Psychophysiological reactions to understimulation and overstimulation. Acta
Psychologica, 35, 298-308.
Frankenhaeuser, M., & Patkai, P. (1964). Catecholamine excretion and performance
under stress. Perceptual and Motor Skills, 19, 13-14.
Frankmann, J.P., & Adams, J.A. (1962). Theories of vigilance. Psychological Bulletin, 59,
257-272.
Galinsky, T.L., Rosa, R.R., Warm, J.S., & Dember, W.N. (1993). Psychophysical
determinants of stress in sustained attention. Human Factors, 34, 603-614.
Gibson, J.J. (1979). The ecological approach to visual perception. Boston: Houghton
Mifflin.
Gill, G.W. (1996). Vigilance in cytoscreening: Looking without seeing. Advance for
Medical Laboratory Professionals, 8, 14-15, 21.
Gray, J.A. (1987). The psychology of fear and stress (2nd ed.). Cambridge: Cambridge
University Press.
Grier, R.A., Warm, J.S., Dember, W.N., Matthews, G., Galinsky, T.L., Szalma, J.L., &
Parasuraman, R. (2003). The vigilance decrement reflects limitations in effortful
attention, not mindlessness. Human Factors, 45, 349-359.
Grubb, P.L., Warm, J.S., Dember, W.N., & Berch, D.B. (1995). Effects of multiple signal
discrimination on vigilance performance and perceived workload. Proceedings of
the Human Factors and Ergonomics Society, 39, 1360-1364.
Gunn, D.V., Warm, J.S., Nelson, W.T., Bolia, R.S., Schumsky, D.A., & Corcoran, K.J.
(2005). Target acquisition with UAVs: Vigilance displays and advanced cueing
interfaces. Human Factors, 47, 488-497.
Hancock, P.A. (1984). Environmental stressors. In J.S. Warm (Ed.), Sustained attention
in human performance (pp. 103-142). Chichester, UK: Wiley.
Hancock, P.A. (1988). The effects of gender and time of day upon the subjective estimate
of mental workload during the performance of a simple task. In P.A. Hancock &
N. Meshkati (Eds.), Human mental workload (pp. 239-250). Amsterdam: North-
Holland.
Hancock, P.A., & Hart, G. (2002). Defeating terrorism: What can human
factors/ergonomics offer? Ergonomics and Design, 10, 6-16.
Hancock, P.A., Rodenburg, C.J., Mathews, W.D., & Vercruyssen, M. (1988). Estimation of
duration and mental workload at differing times of the day by males and females.
Proceedings of the Human Factors and Ergonomics Society, 32, 857-861.
Hancock, P.A., & Szalma, J.L. (2007). Stress and neuroergonomics. In R. Parasuraman &
M. Rizzo (Eds.), Neuroergonomics: The brain at work (pp. 195-206). UK: Oxford
University Press.
Hancock, P.A., & Warm, J.S. (1989). A dynamic model of stress and sustained attention.
Human Factors, 31, 519-537.
Hargreaves, D.J., & North, A.C. (1999). The functions of music in everyday life:
Redefining the social in music psychology. Psychology of Music, 27, 71-83.
Hart, S.G., & Staveland, L.E. (1988). Development of NASA-TLX (Task Load Index):
Results of empirical and theoretical research. In P.A. Hancock & N. Meshkati
(Eds.), Human mental workload (pp. 139-183). Amsterdam: North-Holland.
Helton, W.S., Dember, W.N., Warm, J.S., & Matthews, G. (2000). Optimism-
pessimism and false failure feedback: Effects on vigilance performance. Current
Psychology, 18, 311-325.
Helton, W.S., Hollander, T.D., Warm, J.S., Matthews, G., Dember, W.N., Wallaart, M.,
Beauchamp, G., Parasuraman, R., & Hancock, P.A. (2005). Signal regularity and the
mindlessness model of vigilance. British Journal of Psychology, 96, 249-261.
Helton, W.S., Shaw, T.H., Warm, J.S., Matthews, G., Dember, W.N., & Hancock, P.A.
(2004). Workload demand transitions in vigilance: Effects on performance
efficiency and stress. In D.A. Vincenzi, M. Mouloua, & P.A. Hancock (Eds.)
Human performance, situation awareness and automation: Current research and
trends, HPSAA II, Vol. 1 (pp. 258-262). Mahwah, NJ: Erlbaum.
Hill, S.G., Iavecchia, H.P., Byers, A.C., Zaklad, A.L., & Christ, R.E. (1992). Comparison of
four subjective workload rating scales. Human Factors, 34, 429-439.
Hitchcock, E.M., Dember, W.N., Warm, J.S., Moroney, B.W., & See, J.E. (1999). Effects
of cueing and knowledge of results on workload and boredom in sustained
attention. Human Factors, 41, 365-372.
Hockey, G.R.J. (1984). Varieties of attentional state: The effects of environment. In R.
Parasuraman & D.R. Davies (Eds.), Varieties of attention (pp. 449-483). New
York: Academic Press.
Hockey, G.R.J. (1986). Changes in operator efficiency as a function of environmental
stress, fatigue, and circadian rhythms. In K.R. Boff, L. Kaufman, & J.P. Thomas
(Eds.), Handbook of perception and human performance. Vol. II: Cognitive
processes and performance (pp. 44:1 – 44:49). New York: Wiley.
Hockey, G.R.J. (1997). Compensatory control in the regulation of human performance
under stress and high workload: A cognitive-energetic framework. Biological
Psychology, 45, 73-93.
Hockey, G.R.J., & Hamilton, P. (1983). The cognitive patterning of stress states. In G.R.J.
Hockey (Ed.), Stress and fatigue in human performance (pp. 331-362).
Chichester: Wiley.
Hoddes, E., Zarcone, V., Smythe, H., Phillips, R., & Dement, W.C. (1973).
Quantification of sleepiness: A new approach. Psychophysiology, 10, 431-436.
Hollander, T., Warm, J.S., Matthews, G., Shockley, K., Dember, W.N., Tripp, L.D.,
Weiler, E., & Scerbo, M. (2004). Feature presence/absence modifies the event rate
effect and cerebral hemovelocity in vigilance performance. Proceedings of the
Human Factors and Ergonomics Society, 48, 1943-1947.
Hovanitz, C.A., Chin, K., & Warm, J.S. (1989). Complexities in life stress-dysfunction
relationships: A case in point-Tension headache. Journal of Behavioral Medicine,
12, 55-75.
Howell, W.C. (1993). Engineering psychology in a changing world. Annual Review of
Psychology, 44, 231-263.
Jerison, H.J. (1963). On the decrement function in human vigilance. In D.N. Buckner &
J.J. McGrath (Eds.), Vigilance: A symposium (pp. 199-212). New York: McGraw-
Hill.
Jerison, H.J., Pickett, R.M., & Stenson, H.H. (1965). The elicited observing rate and
decision processes in vigilance. Human Factors, 7, 107-128.
Johnson, A., & Proctor, R.W. (2004). Attention: Theory and practice. Thousand Oaks,
CA: Sage.
Kirk, R.E. (1995). Experimental design: Procedures for the behavioral sciences (3rd ed.).
Pacific Grove, CA: Brooks/Cole.
Kramer, A.F., Coles, M.G.H., & Logan, G.D. (Eds.). (1996). Converging operations in
the study of visual selective attention. Washington, DC: American Psychological
Association.
Lazarus, R.S. (1993). From psychological stress to the emotions: A history of changing
outlooks. Annual Review of Psychology, Vol. 44, 1-21.
Lazarus, R.S., & Folkman, S. (1984). Stress, appraisal, and coping. New York: Springer.
LeDoux, J.E. (1996). The emotional brain: The mysterious underpinnings of emotional
life. New York: Simon & Schuster.
Lewis, L.M., Dember, W.N., Schefft, B.K., & Radenhausen, R.A. (1995). Can
experimentally induced mood affect optimism and pessimism scores? Current
Psychology: Developmental, Learning, Personality, Social, 14, 29-41.
Liu, Y., & Wickens, C.D. (1987). Mental workload and cognitive task automation: An
evaluation of subjective and time estimation metrics (Report No. EPL-87-
02/NASA-87-2). Urbana, IL: Engineering Psychology Research Laboratory,
Department of Mechanical and Industrial Engineering, University of Illinois.
Loeb, M., & Alluisi, E.A. (1984). Theories of vigilance. In J.S. Warm (Ed.), Sustained
attention in human performance (pp. 179-205). Chichester, UK: Wiley.
Lundberg, U., & Frankenhaeuser, M. (1979). Pituitary-adrenal and sympathetic-adrenal
correlates of distress and effort (Report 548). Stockholm, Sweden: University of
Stockholm, Department of Psychology.
Lundberg, P.K., Warm, J.S., Seeman, W., & Porter, W.K. (1980, May). Vigilance and the
type-A individual: Attentive, aroused, and able. Paper presented at the meeting of
the Midwestern Psychological Association, Chicago, IL.
Lysaght, R.J., Hill, S.G., Dick, A.O., Plamondon, B.D., Linton, P.M., Wierwille, W.W.,
Zaklad, A.L., Bittner, A.C., & Wherry, R.J. (1989). Operator workload:
Comprehensive review and evaluation of operator workload methodologies
(Report 851). U.S. Army Research Institute for the Behavioral and Social
Sciences. Alexandria, VA.
Mackworth, J.F. (1969). Vigilance and habituation. Baltimore, MD: Penguin.
Mackworth, N.H. (1948). The breakdown of vigilance during prolonged visual search.
Quarterly Journal of Experimental Psychology, 1, 6-21.
Mackworth, N.H. (1950/1961). Researches on the measurement of human performance.
In H.W. Sinaiko (Ed.), Selected papers in the design and use of control systems
(pp. 174-331). (Reprinted from Medical Research Council Special Report Series
268, London: H.M. Stationery Office, 1950).
Manly, T., Robertson, I.H., Galloway, M., & Hawkins, K. (1999). The absent mind:
Further investigations of sustained attention to response. Neuropsychologia, 37,
661-670.
Matthews, G. (1996). Signal probability effects on high-workload vigilance tasks.
Psychonomic Bulletin and Review, 3, 339-343.
Matthews, G. (2000). Stress and emotion: Physiology, cognition and health. In D.S.
Gupta & R.M. Gupta (Eds.), Psychology for psychiatrists (pp. 143-174). London:
Whurr Publishers.
Matthews, G. (2001). Levels of transaction: A cognitive science framework for operator
stress. In P.A. Hancock & P.A. Desmond (Eds.), Stress, workload, and fatigue
(pp. 5-33). Mahwah, NJ: Erlbaum.
Matthews, G., Campbell, S.E., Desmond, P.A., Huggins, J., Falconer, S., & Joyner, L.A.
(1999a). Assessment of task-induced state change: Stress, fatigue and workload
components. In M. Scerbo (Ed.), Automation technology and human
performance: Current research and trends (pp. 199-203). Hillsdale, NJ: Erlbaum.
Matthews, G., Campbell, S.E., Falconer, S., Joyner, L., Huggins, J., Gilliland, K., Grier,
R., & Warm, J.S. (2002). Fundamental dimensions of subjective state in
performance settings: Task engagement, distress and worry. Emotion, 2, 315-340.
Matthews, G., Davies, D.R., Westerman, S.J., & Stammers, R.B. (2000). Human
performance: Cognition, stress, and individual differences. Philadelphia, PA:
Taylor & Francis Group.
Matthews, G., & Falconer, S. (2002). Personality, coping and task-induced stress in
customer service personnel. Proceedings of the Human Factors and Ergonomics
Society 46th Annual Meeting, pp. 963-967. Santa Monica, CA: Human Factors
and Ergonomics Society.
Matthews, G., Joyner, L., Gilliland, K., Campbell, S., Falconer, S., & Huggins, J.
(1999b). Validation of a comprehensive stress state questionnaire: Towards a state
‘big three’? In I. Mervielde, I.J. Deary, F. DeFruyt, & F. Ostendorf (Eds.),
Personality psychology in Europe (Vol. 7, pp. 335-350). Tilburg: Tilburg
University Press.
Maxwell, S.E., & Delaney, H.D. (2004). Designing experiments and analyzing data: A
model comparison perspective (2nd ed.). Mahwah, NJ: Erlbaum.
Miller, L.C., Warm, J.S., Dember, W.N., & Schumsky, D.A. (1998). Sustained attention
and feature-integrative displays. Proceedings of the Human Factors and
Ergonomics Society, 42, 1585-1589.
Molloy, R., & Parasuraman, R. (1996). Monitoring an automated system for a single
failure: Vigilance and task complexity effects. Human Factors, 38, 311-322.
Moruzzi, G., & Magoun, H.W. (1949). Brain stem reticular formation and activation of
the EEG. Electroencephalography and Clinical Neurophysiology, 1, 455-473.
Nachreiner, F., & Hänecke, K. (1992). Vigilance. In A.P. Smith & D.M. Jones (Eds.),
Handbook of human performance, Vol.3: State & trait (pp. 261-288). London:
Academic Press.
Natsoulas, T. (1967). What are perceptual reports about? Psychological Bulletin, 67,
249-272.
Nickerson, R.S. (1992). Looking ahead: Human factors challenges in a changing world.
Hillsdale, NJ: Erlbaum.
Nuechterlein, K.H., Parasuraman, R., & Jiang, Q. (1983). Visual sustained attention:
Image degradation produces rapid sensitivity decrement over time. Science, 220,
327-329.
Nygren, T.E. (1991). Psychometric properties of subjective workload measurement
techniques: Implications for their use in the assessment of perceived mental
workload. Human Factors, 33, 17-33.
O’Connor, K. (1985). A model of situational preference amongst smokers. Personality
and Individual Differences, 6, 151-160.
O’Hanlon, J.F. (1965). Adrenaline and noradrenaline: Relation to performance in a visual
vigilance task. Science, 150, 507-509.
O’Hanlon, J.F. (1981). Boredom: Practical consequences and a theory. Acta
Psychologica, 49, 53-82.
Parasuraman, R. (1984). The psychobiology of sustained attention. In J.S. Warm (Ed.),
Sustained attention in human performance (pp. 61-101). Chichester, UK: Wiley.
Parasuraman, R., & Davies, D.R. (1977). A taxonomic analysis of vigilance. In R.R.
Mackie (Ed.), Vigilance: Theory, operational performance, and physiological
correlates (pp. 559-574). New York: Plenum.
Parasuraman, R., Warm, J.S., & Dember, W.N. (1987). Vigilance: Taxonomy and utility.
In L.S. Mark, J.S. Warm, & R.L. Huston (Eds.), Ergonomics and human factors:
Recent research (pp.12-32). New York: Springer-Verlag.
Parasuraman, R., Warm, J.S., & See, J.E. (1999). Brain systems of vigilance. In R.
Parasuraman (Ed.), The attentive brain (pp. 221-256). London, England: MIT
Press.
Parsons, K.S., Warm, J.S., Matthews, G., Dember, W.N., Galinsky, T.L., & Hitchcock,
E.M. (2000, November). Changes in signal probability and vigilance
performance. Paper presented at the annual meeting of the Psychonomic Society,
New Orleans, LA.
Pigeau, R.A., Angus, R.G., O’Neil, P., & Mack, I. (1995). Vigilance latencies to aircraft
detection among NORAD surveillance operators. Human Factors, 37, 622-634.
Proctor, R.W., & Van Zandt, T. (1994). Human factors in simple and complex systems.
Needham Heights, MA: Allyn & Bacon.
Robertson, I.H., Manly, T., Andrade, J., Baddeley, B.T., & Yiend, J. (1997). “Oops!”
Performance correlates of everyday attentional failures in traumatic brain injured
and normal subjects. Neuropsychologia, 35, 747-758.
Sanderson, P.M., & Woods, M.D. (1987). Subjective mental workload and locus of
control (Report No. EPL-87-03). Urbana, IL: Engineering Psychology Research
Laboratory, Department of Mechanical and Industrial Engineering.
Satchell, P.M. (1993). Cockpit monitoring and alerting systems. Brookfield, VT: Ashgate.
Sauter, S.L., Murphy, L.R., & Hurrell, J.J. (1990). Prevention of work-related
psychological disorders. A national strategy proposed by the National Institute for
Occupational Safety and Health. American Psychologist, 45, 1146-1158.
Sawin, D.A., & Scerbo, M.W. (1995). Effects of instruction type and boredom proneness
in vigilance: Implications for boredom and workload. Human Factors, 37, 752-
765.
Scerbo, M.W. (1998). What’s so boring about vigilance? In R.R. Hoffman, M.F. Sherrick,
& J.S. Warm (Eds.), Viewing psychology as a whole: The integrative science of
William N. Dember. Washington, DC: American Psychological Association.
Scerbo, M.W., Greenwald, C.Q., & Sawin, D.A. (1993). The effects of subjective-
controlled pacing and task type on sustained attention and subjective workload.
The Journal of General Psychology, 120, 293-307.
Schoenfeld, V.S., & Scerbo, M.W. (1997). Search differences for the presence and
absence of features in sustained attention. Proceedings of the Human Factors and
Ergonomics Society, 41, 1288-1292.
Schoenfeld, V.S., & Scerbo, M.W. (1999). The effects of search differences for the
presence and absence of features on vigilance performance and mental workload.
In M.W. Scerbo & M. Mouloua (Eds.), Automation technology and human
performance: Current research and trends (pp. 177-182). Mahwah, NJ: Erlbaum.
Selye, H. (1976). The stress of life. New York: McGraw-Hill.
Sheridan, T. (1970). On how often the supervisor should sample. IEEE Transactions on
Systems Science and Cybernetics, SSC-6, 140-145.
Sheridan, T. (1987). Supervisory control. In G. Salvendy (Ed.), Handbook of human
factors (pp. 1243-1268). New York: Wiley.
Strauch, B. (2002). Investigating human error: Incidents, accidents, and complex systems.
Burlington, VT: Ashgate.
Szalma, J.L. (1999). Sensory and temporal determinants of workload and stress in
sustained attention. Unpublished doctoral dissertation, University of Cincinnati,
Cincinnati, OH.
Szalma, J.L., Hancock, R.A., Dember, W.N., & Warm, J.S. (2006). Training for
vigilance: The effect of KR format and dispositional optimism and pessimism on
performance and stress. British Journal of Psychology, 97, 115-135.
Szalma, J.L., Warm, J.S., Matthews, G., Dember, W.N., Weiler, E.M., Meier, A., &
Eggemeier, F.T. (2004). Effects of sensory modality and task duration on
performance, workload, and stress in sustained attention. Human Factors, 46,
219-233.
Teichner, W.H. (1974). The detection of a simple visual signal as a function of time on
watch. Human Factors, 16, 339-353.
Temple, J.G., Warm, J.S., Dember, W.N., Hovanitz, C.A., LaGrange, C.M., McNutt, B.,
Bierer, D.W., & Van Osdell, J. (1997, March). Effects of headache on vigilance
and arithmetic performance. Paper presented at the annual meeting of the
Southern Society for Philosophy and Psychology, Atlanta, GA.
Temple, J.G., Warm, J.S., Dember, W.N., Jones, K.S., LaGrange, C.M., & Matthews,
G. (2000). The effects of caffeine and signal salience on performance, workload,
and stress in an abbreviated vigilance task. Human Factors, 42, 183-194.
Thackray, R.I., Bailey, J.P., & Touchstone, R.M. (1977). Physiological, subjective, and
performance correlates of reported boredom and monotony while performing a
simulated radar control task. In R.R. Mackie (Ed.), Vigilance: Theory, operational
performance, and physiological correlates (pp. 203-215). New York: Plenum.
Thayer, R.E. (1989). The biopsychology of mood and arousal. New York: Oxford
University Press.
Thieman, J.A., Warm, J.S., Dember, W.N., & Smith, E.D. (1989, March). Effects of
caffeine on vigilance performance and task-induced stress. Paper presented at the
annual meeting of the Southern Society for Philosophy and Psychology, New
Orleans, LA.
Warm, J.S. (1984). An introduction to vigilance. In J.S. Warm (Ed.), Sustained attention in
human performance (pp. 1-14). Chichester, UK: Wiley.
Warm, J.S. (1993). Vigilance and target detection. In B.M. Huey & C.D. Wickens (Eds.),
Workload transition: Implications for individual and team performance (pp.139-
169). Washington DC: National Academy Press.
Warm, J. S., & Dember, W. N. (1998). Tests of vigilance taxonomy. In R. R. Hoffman,
M. F. Sherrick, & J. S. Warm (Eds.), Viewing psychology as a whole: The
integrative science of William N. Dember (pp. 87-112). Washington, DC:
American Psychological Association.
Warm, J.S., Dember, W.N., & Hancock, P.A. (1996). Vigilance and workload in
automated systems. In R. Parasuraman & M. Mouloua (Eds.), Automation and
human performance: Theory and applications (pp. 183-200). Mahwah, NJ:
Erlbaum.
Warm, J.S., Dember, W.N., & Parasuraman, R. (1991). Effects of olfactory stimulation
on performance and stress in a visual sustained attention task. Journal of the
Society of Cosmetic Chemists, 42, 199-210.
Warm, J.S., & Jerison, H.J. (1984). The psychophysics of vigilance. In J.S. Warm (Ed.),
Sustained attention in human performance (pp. 15-59). Chichester, UK: Wiley.
Warm, J.S., Rosa, R.R., & Colligan, M.J. (1989). Effects of auxiliary load on vigilance
performance in a simulated work environment. Proceedings of the Human Factors
and Ergonomics Society, 33, 1419-1421.
Weinger, M.B., & Englund, C.E. (1990). Ergonomic and human factors affecting
anesthetic vigilance and monitoring performance in the operating room
environment. Anesthesiology, 73, 995-1021.
Welford, A.T. (1968). Fundamentals of skill. London: Methuen & Co. Ltd.
Wickens, C.D. (1992). Engineering psychology and human performance (2nd ed.). New
York: Harper-Collins.
Wickens, C.D., & Hollands, J.G. (2000). Engineering psychology and human
performance (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Yoshitake, H. (1978). Three characteristic patterns of subjective fatigue symptoms.
Ergonomics, 21, 231-233.
Appendix A
NASA-Task Load Index (NASA-TLX)
INSTRUCTIONS: RATINGS
We are not only interested in assessing your performance but also the experiences you had during the different task conditions. Right now we are going to describe the technique that will be used to examine your experiences. In the most general sense we are examining the "Workload" you experienced. Workload is a difficult concept to define precisely, but a simple one to understand generally. The factors that influence your experience of workload may come from the task itself, your feelings about your own performance, how much effort you put in, or the stress and frustration you felt. The workload contributed by different task elements may change as you get more familiar with a task, perform easier or harder versions of it, or move from one task to another. Physical components of workload are relatively easy to conceptualize and evaluate. However, the mental components of workload may be more difficult to measure.
Since workload is something that is experienced individually by each person, there are no effective “rulers” that can be used to estimate the workload of different activities. One way to find out about workload is to ask people to describe the feelings they experienced. Because workload may be caused by many different factors, we would like you to evaluate several of them individually rather than lumping them into a single global evaluation of overall workload. This set of six rating scales was developed for you to use in evaluating your experiences during different tasks. Please read the descriptions of the scales carefully. If you have a question about any of the scales in the table, please ask the experimenter about it. It is extremely important that they be clear to you. You may keep the descriptions with you for reference during the experiment.
After performing the task, six rating scales will be displayed. You will evaluate the task by marking each scale at the point which matches your experience. Each line has two endpoint descriptors that describe the scale. Note that "Performance" goes from “good” on the left to “bad” on the right. This order has been confusing for some people. Left-click the mouse in the desired location. Please consider your responses carefully in distinguishing among the task conditions. Consider each scale individually. Your ratings will play an important role in the evaluation being conducted, thus, your active participation is essential to the success of this experiment, and is greatly appreciated.
RATING SCALE DEFINITIONS
Title (Endpoints): Description

MENTAL DEMAND (Low/High): How much mental and perceptual activity was required (e.g., thinking, deciding, calculating, remembering, looking, searching, etc.)? Was the task easy or demanding, simple or complex, exacting or forgiving?

PHYSICAL DEMAND (Low/High): How much physical activity was required (e.g., pushing, pulling, turning, controlling, activating, etc.)? Was the task easy or demanding, slow or brisk, slack or strenuous, restful or laborious?

TEMPORAL DEMAND (Low/High): How much time pressure did you feel due to the rate or pace at which the tasks or task elements occurred? Was the pace slow and leisurely or rapid and frantic?

PERFORMANCE (Good/Poor): How successful do you think you were in accomplishing the goals of the task set by the experimenter (or yourself)? How satisfied were you with your performance in accomplishing these goals?

EFFORT (Low/High): How hard did you have to work (mentally and physically) to accomplish your level of performance?

FRUSTRATION (Low/High): How insecure, discouraged, irritated, stressed and annoyed versus secure, gratified, content, relaxed and complacent did you feel during the task?
INSTRUCTIONS: SOURCES-OF-WORKLOAD EVALUATION
Throughout this experiment the rating scales are used to assess your experiences in the different task conditions. Scales of this sort are extremely useful, but their utility suffers from the tendency people have to interpret them in individual ways. For example, some people feel that mental or temporal demands are the essential aspects of workload regardless of the effort they expended or the performance they achieved. Others feel that if they performed well the workload must have been low, and vice versa. Yet others feel that effort or feelings of frustration are the most important factors in workload; and so on. The results of previous studies have already found every conceivable pattern of values. In addition, the factors that create levels of workload differ depending on the task. For example, some tasks might be difficult because they must be completed very quickly. Others may seem easy or hard because of the intensity of mental or physical effort required. Yet others seem difficult because they cannot be performed well, no matter how much effort is expended.
The evaluation you are about to perform is a technique that has been developed by NASA to assess the relative importance of six factors in determining how much workload you experienced. The procedure is simple: You will be presented with a series of pairs of rating scale titles (for example, Effort vs. Mental Demands) and asked to choose which of the items was more important to your experience of workload in the task(s) that you just performed. Each pair of scale titles will appear separately on the screen.
Left-click the mouse to select the Scale Title that represents the more important contributor to workload for the Specific task you performed in this experiment.
After you have finished the entire series we will be able to use the pattern of your choices to create a weighted combination of the ratings from that task into a summary workload score. Please consider your choices carefully and make them consistent with how you used the rating scales during the particular task you were asked to evaluate. Don't think that there is any correct pattern; we are only interested in your opinions. If you have any questions, please ask them now.
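The weighting procedure described above can be summarized computationally. The sketch below is illustrative only (it is not the software used in this experiment); it assumes ratings on a 0-100 scale and a list naming the winner of each of the 15 pairwise comparisons among the six scales:

```python
# Illustrative sketch of NASA-TLX weighted scoring (hypothetical helper,
# not the experimental software used in this study).
SCALES = ["Mental Demand", "Physical Demand", "Temporal Demand",
          "Performance", "Effort", "Frustration"]

def tlx_score(ratings, choices):
    """ratings: dict mapping each scale name to its 0-100 rating.
    choices: the winning scale from each of the 15 pairwise comparisons.
    Returns the weighted summary workload score (0-100)."""
    # A scale's weight is the number of its five comparisons it won.
    wins = {scale: 0 for scale in SCALES}
    for winner in choices:
        wins[winner] += 1
    # Weighted combination: the weights sum to 15 across the six scales.
    return sum(ratings[s] * wins[s] for s in SCALES) / 15.0
```

A scale chosen in all five of its comparisons thus contributes five times the weight of a scale chosen only once, and a scale never chosen drops out of the summary score entirely.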
Appendix B
Instruction Sets for all Experimental Conditions
INSTRUCTION SET FOR THE DETECTION/ACTION CONDITIONS
INSTRUCTIONS This study is focused upon the control of unmanned aerial vehicles or UAVs. These are aircraft that fly with no humans onboard and are maneuvered by operators located in distant ground-control centers. In comparison to conventional aircraft, UAVs are capable of longer flights at much higher altitudes, and they can tolerate greater levels of gravitational force. They also eliminate the risk of pilot loss due to capture or injury. Consequently, the Air Force is using UAVs in places like Iraq to monitor enemy activities, for bombing enemy targets, and for other combat missions.
[Image: UAV]
In this study, I would like you to assume you are a member of a distant ground control team in charge of a flight of UAVs flying over enemy territory.
Your particular task is to DETECT AND DESTROY enemy ground vehicles BEFORE they launch missiles at your planes.
To protect your flight of UAVs, the Air Force has developed a computer system that scans the ground over which your planes are flying and identifies those enemies that you can destroy. The system does this through a numerical code represented by green digits at the bottom of the computer screen in front of you.
Pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Safe signals are those in which the digits in a pair are odd/odd (for example: 35, 59, 33) or even/even (for example: 02, 26, 66).
In those cases, all is well and you need make NO response.
Following are examples of SAFE SIGNALS
Safe Signal with ODD/ODD Digit Pair
You would do nothing
35
Safe Signal with ODD/ODD Digit Pair
You would do nothing
59
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
02
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
26
Recall that pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Threat signals are those in which the digits are even/odd (for example: 03, 25, 69) or odd/even (for example: 30, 52, 96). Whenever you spot an even/odd or an odd/even pair you are to press the spacebar on the computer keyboard…
as QUICKLY as you can with your LEFT hand.
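The safe/threat rule above amounts to a parity comparison between the two digits. As a minimal illustrative sketch (not part of the experimental software):

```python
def classify(pair):
    """Classify a two-character digit pair per the rule above:
    same parity (odd/odd or even/even) is a safe signal;
    mixed parity (even/odd or odd/even) is a threat signal."""
    first, second = int(pair[0]), int(pair[1])
    return "safe" if first % 2 == second % 2 else "threat"
```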
Following are examples of THREAT SIGNALS
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
03
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
25
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
30
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
52
SOME PRACTICE SESSIONS WILL NOW FOLLOW. YOU WILL PRACTICE DETECTING THREAT SIGNALS.
DO YOU HAVE ANY QUESTIONS?
When you press the spacebar to a threat signal, an air-ground scene will appear on your screen showing six enemy missile launchers denoted by X’s.
One of the launchers will be HIGHLIGHTED IN BLUE (X). That launcher is the one identified by the computer system as an IMMEDIATE THREAT TO YOUR PLANES
You are to move your mouse over it with your right hand and left-click. At that point the launcher will be circled in red ( X ), indicating that you have DESTROYED IT AND PROTECTED YOUR FLIGHT OF UAVS.
Following is an example of a highlighted launcher (X) with a red circle around it ( X ) denoting correct threat signal detection
[Example screens: an air-ground scene of six launchers (X's) with the digit pair 36, shown first with the threat launcher highlighted in blue, and then with that launcher circled in red after a correct left-click.]
In the event that you MISS a threat signal (even/odd or odd/even digit pair), the air-ground scene will come up anyway….
but the launcher that is the immediate threat WILL NOT be highlighted in blue
INSTEAD, ALL 6 ENEMY MISSILE LAUNCHERS WILL BE BLACK (X)
You will have to GUESS which of the six missile launchers is the critical one that you must destroy with the mouse. If you guess correctly, the enemy launcher will be circled in red and turn blue.
But if you guess incorrectly, the launcher you chose will still be circled in red, while the launcher that is the immediate threat will turn blue, indicating that you have failed to protect your planes. In these cases (when you have missed the threat signal) you will have a one in six chance of successfully destroying the launcher and a five in six chance of failure.
Obviously, it is important that you correctly detect Threat Signals.
Next is an example of a correct guess and then an example of an incorrect guess.
[Example screens: an air-ground scene of six black launchers with the digit pair 65; in the correct-guess example the chosen launcher is circled in red and turns blue; in the incorrect-guess example the chosen launcher is circled in red while the correct target turns blue.]
PLEASE NOTE THAT SPEED IS IMPORTANT HERE.
If you do not press the spacebar quickly enough, you will miss the threat signals, and if you do not destroy the enemy launcher with the mouse quickly enough, the vehicle will escape and be free to harm your planes.
Also note: Threat signals may appear even while you are executing a destroy action with the mouse. THUS you need to watch for threat signals ALL THE TIME!
The system will not show air-ground scenes when you have pressed the spacebar to a safe signal.
Performance will be logged throughout the study thus providing a score indicating your competence as a controller. Therefore, DO NOT press the spacebar indiscriminately, for that will lower your overall score.
A FINAL PRACTICE SESSION WILL NOW FOLLOW DURING WHICH YOU WILL PRACTICE THE TASK IN ITS ENTIRETY - PERFORMING BOTH THE DETECTION OF THREAT SIGNALS AND THE ENEMY LAUNCHER DESTRUCTION COMPONENT AT THE SAME TIME.
DO YOU HAVE ANY QUESTIONS?
(3 MINUTE BREAK)
YOU WILL NOW BEGIN THE MAIN TASK
WHICH WILL BE SOMEWHAT LONGER THAN
THE PRACTICE SESSIONS YOU JUST PERFORMED.
THREAT SIGNALS WILL OCCUR LESS FREQUENTLY
THAN DURING THE PRACTICE SESSIONS.
DO YOU HAVE ANY QUESTIONS?
INSTRUCTIONS FOR THE VIGILANCE-RANDOM SCENE CONDITION
INSTRUCTIONS This study is focused upon the control of unmanned aerial vehicles or UAVs. These are aircraft that fly with no humans onboard and are maneuvered by operators located in distant ground-control centers. In comparison to conventional aircraft, UAVs are capable of longer flights at much higher altitudes, and they can tolerate greater levels of gravitational force. They also eliminate the risk of pilot loss due to capture or injury. Consequently, the Air Force is using UAVs in places like Iraq to monitor enemy activities, for bombing enemy targets, and for other combat missions.
[Image: UAV]
In this study, I would like you to assume you are a member of a distant ground control team in charge of a flight of UAVs flying over enemy territory.
Your particular task is to WARN the commanding officer in charge of the flight of UAVs of the presence of enemy ground vehicles BEFORE they launch missiles at your planes.
To protect your flight of UAVs, the Air Force has developed a computer system that scans the ground over which your planes are flying and identifies those enemies that must be destroyed. The system does this through a numerical code represented by green digits at the bottom of the computer screen in front of you.
Pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Safe signals are those in which the digits in a pair are odd/odd (for example: 35, 59, 33) or even/even (for example: 02, 26, 66).
In those cases, all is well and you need make NO response.
Following are examples of SAFE SIGNALS
Safe Signal with ODD/ODD Digit Pair
You would do nothing
35
Safe Signal with ODD/ODD Digit Pair
You would do nothing
59
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
02
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
26
Recall that pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Threat signals are those in which the digits are even/odd (for example: 03, 25, 69) or odd/even (for example: 30, 52, 96). Whenever you spot an even/odd or an odd/even pair you are to press the spacebar on the computer keyboard…
as QUICKLY as you can with your LEFT hand.
Following are examples of THREAT SIGNALS
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
03
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
25
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
30
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
52
SOME PRACTICE SESSIONS WILL NOW FOLLOW. YOU WILL PRACTICE DETECTING THREAT SIGNALS.
DO YOU HAVE ANY QUESTIONS?
When you press the spacebar to a threat signal, you will have informed the control system that you have detected the threat
THUS
SENDING A MESSAGE TO THE COMMANDING OFFICER OF THE FLIGHT.
Obviously, it is important that you correctly detect Threat Signals.
From time to time, air-ground scenes depicting the terrain over which the UAVs are operating will appear on your computer screen.
These scenes are computer system checks and require NO response from you.
Following is an example of an air-ground scene
[Example screen: an air-ground scene of the terrain showing six launchers (X's) with the digit pair 35.]
PLEASE NOTE THAT SPEED IS IMPORTANT HERE.
If you do not press the spacebar quickly enough, you will miss the threat signal and the control system will not record that you detected the signal.
Also note: Threat signals may appear even when the system checks (air-ground scenes depicting the terrain) flash on the screen. THUS you need to watch for threat signals ALL THE TIME!
Performance will be logged throughout the study thus providing a score indicating your competence as a controller. Therefore, DO NOT press the spacebar indiscriminately, for that will lower your overall score.
A FINAL PRACTICE SESSION WILL NOW FOLLOW DURING WHICH YOU WILL PRACTICE THE TASK IN ITS ENTIRETY – PERFORMING THE DETECTION OF THREAT SIGNALS WITH THE APPEARANCE OF THE AIR-GROUND SCENE SYSTEM CHECKS.
DO YOU HAVE ANY QUESTIONS?
(3 MINUTE BREAK)
YOU WILL NOW BEGIN THE MAIN TASK
WHICH WILL BE SOMEWHAT LONGER THAN
THE PRACTICE SESSIONS YOU JUST PERFORMED.
THREAT SIGNALS WILL OCCUR LESS FREQUENTLY
THAN DURING THE PRACTICE SESSIONS.
DO YOU HAVE ANY QUESTIONS?
INSTRUCTIONS FOR THE VIGILANCE -KR CONDITION
INSTRUCTIONS This study is focused upon the control of unmanned aerial vehicles or UAVs. These are aircraft that fly with no humans onboard and are maneuvered by operators located in distant ground-control centers. In comparison to conventional aircraft, UAVs are capable of longer flights at much higher altitudes, and they can tolerate greater levels of gravitational force. They also eliminate the risk of pilot loss due to capture or injury. Consequently, the Air Force is using UAVs in places like Iraq to monitor enemy activities, for bombing enemy targets, and for other combat missions.
[Image: UAV]
In this study, I would like you to assume you are a member of a distant ground control team in charge of a flight of UAVs flying over enemy territory.
Your particular task is to WARN the commanding officer in charge of the flight of UAVs of the presence of enemy ground vehicles BEFORE they launch missiles at your planes.
To protect your flight of UAVs, the Air Force has developed a computer system that scans the ground over which your planes are flying and identifies those enemies that must be destroyed. The system does this through a numerical code represented by green digits at the bottom of the computer screen in front of you.
Pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Safe signals are those in which the digits in a pair are odd/odd (for example: 35, 59, 33) or even/even (for example: 02, 26, 66).
In those cases, all is well and you need make NO response.
Following are examples of SAFE SIGNALS
Safe Signal with ODD/ODD Digit Pair
You would do nothing
35
Safe Signal with ODD/ODD Digit Pair
You would do nothing
59
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
02
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
26
Recall that pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Threat signals are those in which the digits are even/odd (for example: 03, 25, 69) or odd/even (for example: 30, 52, 96). Whenever you spot an even/odd or an odd/even pair you are to press the spacebar on the computer keyboard…
as QUICKLY as you can with your LEFT hand.
Following are examples of THREAT SIGNALS
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
03
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
25
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
30
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
52
SOME PRACTICE SESSIONS WILL NOW FOLLOW. YOU WILL PRACTICE DETECTING THREAT SIGNALS.
DO YOU HAVE ANY QUESTIONS?
When you press the spacebar to a threat signal, you will have informed the control system that you have detected the threat
THUS
SENDING A MESSAGE TO THE COMMANDING OFFICER OF THE FLIGHT.
Obviously, it is important that you correctly detect Threat Signals.
When you correctly identify a threat signal, a GREEN BOX will briefly flash on the screen above the digits to let you know you have successfully identified a threat. When you fail to detect a threat signal, a RED BOX will briefly flash on the screen above the digits to let you know you have missed a threat signal. Should you respond to a safe signal, NO change will occur.
Following are examples of both GREEN and RED BOXES
[Example screens: the digit pair 36 shown first with a green box flashing above the digits (correct detection) and then with a red box (missed threat signal).]
PLEASE NOTE THAT SPEED IS IMPORTANT HERE.
If you do not press the spacebar quickly enough, you will miss the threat signal and the control system will not record that you detected the signal.
Also note: Threat signals may appear even when the colored boxes are flashing on the screen. THUS you need to watch for threat signals ALL THE TIME!
Performance will be logged throughout the study, thus providing a score indicating your competence as a controller. Therefore, DO NOT press the spacebar indiscriminately, for that will lower your overall score.
A FINAL PRACTICE SESSION WILL NOW FOLLOW DURING WHICH YOU WILL PRACTICE THE TASK IN ITS ENTIRETY - PERFORMING THE DETECTION OF THREAT SIGNALS WITH THE APPEARANCE OF THE FLASHING COLORED BOXES.
DO YOU HAVE ANY QUESTIONS?
(3 MINUTE BREAK)
YOU WILL NOW BEGIN THE MAIN TASK
WHICH WILL BE SOMEWHAT LONGER THAN
THE PRACTICE SESSIONS YOU JUST PERFORMED.
THREAT SIGNALS WILL OCCUR LESS FREQUENTLY
THAN DURING THE PRACTICE SESSIONS.
DO YOU HAVE ANY QUESTIONS?
INSTRUCTIONS FOR THE KR-CONTROL CONDITION
INSTRUCTIONS

This study is focused upon the control of unmanned aerial vehicles, or UAVs. These are aircraft that fly with no humans onboard and are maneuvered by operators located in distant ground-control centers. In comparison to conventional aircraft, UAVs are capable of longer flights at much higher altitudes, and they can tolerate greater levels of gravitational force. They also eliminate the risk of pilot loss due to capture or injury. Consequently, the Air Force is using UAVs in places like Iraq to monitor enemy activities, bomb enemy targets, and carry out other combat missions.
[Picture of a UAV]
In this study, I would like you to assume you are a member of a distant ground control team in charge of a flight of UAVs flying over enemy territory.
Your particular task is to WARN the commanding officer in charge of the flight of UAVs of the presence of enemy ground vehicles BEFORE they launch missiles at your planes.
To protect your flight of UAVs, the Air Force has developed a computer system that scans the ground over which your planes are flying and identifies those enemies that must be destroyed. The system does this through a numerical code represented by green digits at the bottom of the computer screen in front of you.
Pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Safe signals are those in which both digits in a pair are odd (odd/odd; for example: 35, 59, 33) or both even (even/even; for example: 02, 26, 66).
In those cases, all is well and you need to make NO response.
Following are examples of SAFE SIGNALS
Safe Signal with ODD/ODD Digit Pair
You would do nothing
35
Safe Signal with ODD/ODD Digit Pair
You would do nothing
59
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
02
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
26
Recall that pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Threat signals are those in which the digits are even/odd (for example: 03, 25, 69) or odd/even (for example: 30, 52, 96). Whenever you spot an even/odd or an odd/even pair you are to press the spacebar on the computer keyboard…
as QUICKLY as you can with your LEFT hand.
Following are examples of THREAT SIGNALS
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
03
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
25
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
30
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
52
SOME PRACTICE SESSIONS WILL NOW FOLLOW. YOU WILL PRACTICE DETECTING THREAT SIGNALS.
DO YOU HAVE ANY QUESTIONS?
When you press the spacebar to a threat signal, you will have informed the control system that you have detected the threat,
THUS
SENDING A MESSAGE TO THE COMMANDING OFFICER OF THE FLIGHT.
Obviously, it is important that you correctly detect Threat Signals.
From time to time, briefly flashing colored boxes (GREEN or RED) will appear on the computer screen above the digit pairs.
These flashes are computer system checks and require NO response from you.
Following are examples of both GREEN and RED BOXES
[Two example screens appeared here: a GREEN box above the digit pair 36 and a RED box above the digit pair 35.]
PLEASE NOTE THAT SPEED IS IMPORTANT HERE.
If you do not press the spacebar quickly enough, you will miss the threat signal and the control system will not record that you detected the signal.
Also note: Threat signals may appear even when the system checks (flashing colored boxes) occur on the screen. THUS you need to watch for threat signals ALL THE TIME!
Performance will be logged throughout the study, thus providing a score indicating your competence as a controller. Therefore, DO NOT press the spacebar indiscriminately, for that will lower your overall score.
A FINAL PRACTICE SESSION WILL NOW FOLLOW DURING WHICH YOU WILL PRACTICE THE TASK IN ITS ENTIRETY – PERFORMING THE DETECTION OF THREAT SIGNALS WITH THE APPEARANCE OF THE FLASHING COLORED BOX SYSTEM CHECKS.
DO YOU HAVE ANY QUESTIONS?
(3 MINUTE BREAK)
YOU WILL NOW BEGIN THE MAIN TASK
WHICH WILL BE SOMEWHAT LONGER THAN
THE PRACTICE SESSIONS YOU JUST PERFORMED.
THREAT SIGNALS WILL OCCUR LESS FREQUENTLY
THAN DURING THE PRACTICE SESSIONS.
DO YOU HAVE ANY QUESTIONS?
INSTRUCTIONS FOR THE VIGILANCE-CONTROL CONDITION
INSTRUCTIONS

This study is focused upon the control of unmanned aerial vehicles, or UAVs. These are aircraft that fly with no humans onboard and are maneuvered by operators located in distant ground-control centers. In comparison to conventional aircraft, UAVs are capable of longer flights at much higher altitudes, and they can tolerate greater levels of gravitational force. They also eliminate the risk of pilot loss due to capture or injury. Consequently, the Air Force is using UAVs in places like Iraq to monitor enemy activities, bomb enemy targets, and carry out other combat missions.
[Picture of a UAV]
In this study, I would like you to assume you are a member of a distant ground control team in charge of a flight of UAVs flying over enemy territory.
Your particular task is to WARN the commanding officer in charge of the flight of UAVs of the presence of enemy ground vehicles BEFORE they launch missiles at your planes.
To protect your flight of UAVs, the Air Force has developed a computer system that scans the ground over which your planes are flying and identifies those enemies that must be destroyed. The system does this through a numerical code represented by green digits at the bottom of the computer screen in front of you.
Pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Safe signals are those in which both digits in a pair are odd (odd/odd; for example: 35, 59, 33) or both even (even/even; for example: 02, 26, 66).
In those cases, all is well and you need to make NO response.
Following are examples of SAFE SIGNALS
Safe Signal with ODD/ODD Digit Pair
You would do nothing
35
Safe Signal with ODD/ODD Digit Pair
You would do nothing
59
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
02
Safe Signal with EVEN/EVEN Digit Pair
You would do nothing
26
Recall that pairs of digits from the numbers 0, 2, 3, 5, 6, and 9 will flash on the screen. Threat signals are those in which the digits are even/odd (for example: 03, 25, 69) or odd/even (for example: 30, 52, 96). Whenever you spot an even/odd or an odd/even pair you are to press the spacebar on the computer keyboard…
as QUICKLY as you can with your LEFT hand.
Following are examples of THREAT SIGNALS
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
03
Threat Signal with EVEN/ODD Digit Pair
You would press the spacebar as quickly as you can with your left hand
25
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
30
Threat Signal with ODD/EVEN Digit Pair
You would press the spacebar as quickly as you can with your left hand
52
SOME PRACTICE SESSIONS WILL NOW FOLLOW. YOU WILL PRACTICE DETECTING THREAT SIGNALS.
DO YOU HAVE ANY QUESTIONS?
When you press the spacebar to a threat signal, you will have informed the control system that you have detected the threat, THUS SENDING A MESSAGE TO THE COMMANDING OFFICER OF THE FLIGHT.
Obviously, it is important that you correctly detect Threat Signals.
PLEASE NOTE THAT SPEED IS IMPORTANT HERE.
If you do not press the spacebar quickly enough, you will miss the threat signal and the control system will not record that you detected the signal.
Also note: You must continue to look for threat signals ALL THE TIME.
Performance will be logged throughout the study, thus providing a score indicating your competence as a controller. Therefore, DO NOT press the spacebar indiscriminately, for that will lower your overall score.
(3 MINUTE BREAK)
YOU WILL NOW BEGIN THE MAIN TASK
WHICH WILL BE SOMEWHAT LONGER THAN
THE PRACTICE SESSIONS YOU JUST PERFORMED.
THREAT SIGNALS WILL OCCUR LESS FREQUENTLY
THAN DURING THE PRACTICE SESSIONS.
DO YOU HAVE ANY QUESTIONS?
Appendix C
Informed Consent Form
University of Cincinnati McMicken College of Arts and Sciences, Department of Psychology Consent to Participate in a Research Study
Principal Investigator: Kelley S. Parsons, M.A. E-Mail: [email protected] Phone: (513) 556-5529
Title of Study: Detection-Action Sequence in Vigilance: Effects on Workload and Stress
Introduction: Before agreeing to participate in this study, it is important that the following explanation of the proposed procedures be read and understood. It describes the purpose, procedures, risks, and benefits of the study. It also describes the right to withdraw from the study at any time. It is important to understand that no guarantee or assurance can be made as to the results of the study.
Purpose: The purpose of this study is to examine the effects of a detection-action sequence in vigilance on workload and stress. This information may have potentially important implications for worker comfort and performance during tasks of this nature, as well as the safety of the public. You will be one of approximately 112 participants taking part in this study.
Duration: Your participation in this study will last approximately one hour and 45 minutes. If you are participating in this study to satisfy a course requirement for an introductory psychology class, participation will earn 2 hours of experimental credit towards the completion of the course (you will receive one hour of experimental credit for each partial hour, up to a full hour, of participation). All other requirements pertaining to the completion of this course must be discussed with the instructor.
Procedures: When you arrive, the researcher will begin by briefly explaining the study to you. If you consent to participate in this study, you will be given a questionnaire asking you to rate how strongly you agree with different statements that relate to your current mood, motivations, thinking style, thinking content, and other general questions about yourself. After completing this questionnaire, you will be familiarized with a computerized task and then asked to perform the task. The task you will be performing is analogous to those that confront air-traffic controllers, airplane pilots, airport baggage inspectors, people engaged in nuclear power plant regulation, industrial quality control inspectors, and physicians monitoring patient data during surgery. Upon task completion, you will be given an additional questionnaire, similar to the one completed at the beginning of the study. You will also be given a questionnaire asking you to rate how demanding the task was along specific dimensions. At the completion of the study, the researcher will answer any questions you may have about the study in general or about specific information regarding the task you completed.
Exclusion: Since this task involves perceptual discriminations, participants must have either normal or corrected-to-normal vision. Therefore, if you wear glasses or contacts, you will be asked to wear them during the study. If you have forgotten to wear your contacts or glasses, you will be asked to re-schedule your participation in this study. Additionally, to maintain participant consistency, you must be right-handed to participate.
Risk/Discomforts: The task you will be asked to complete will be performed on a computer. Therefore, you may experience short-term eye strain, similar to that which you would experience from working on the computer for an hour. However, glare-free lighting will be used to minimize this discomfort.
Benefits: You may receive no direct benefit from your participation in this study, but your participation will provide information about how various factors can influence performance in these types of tasks.
Confidentiality: Confidentiality will be protected as follows. Participants in this study will not be individually identifiable. Aside from the informed consent, your name will not appear on any other document in this study. Each participant will be allocated an arbitrary identifier to ensure confidentiality. These participant identifiers will be discarded after data have been entered into computer files. The informed consent form will be kept separate from other study materials, under lock. Agents of the University of Cincinnati will be allowed to inspect sections of the research records related to this study. The data from the study may be published; however, you will not be identified by name.
Right to refuse or withdraw: Your participation is voluntary and you may refuse to participate, or may discontinue participation at any time, without penalty or loss of benefits to which you are otherwise entitled. The investigator has the right to withdraw you from the study at any time. Your withdrawal from the study may be for reasons related solely to you (for example, not following study-related directions from the investigator, etc.) or because the entire study has been terminated.
Offer to answer questions: If you have any other questions about this study, you may call Kelley S. Parsons at 556-5529 or Dr. Joel Warm at 556-5529. If you have questions about your rights as a research participant, you may contact the chair of the Institutional Review Board – Social and Behavioral Sciences, at 558-5784.
Legal Rights: Nothing in this consent form waives any legal rights you may have nor does it release the investigator, the sponsor, the institution, or its agents from liability for negligence.
I HAVE READ THE INFORMATION PROVIDED ABOVE. I VOLUNTARILY AGREE TO PARTICIPATE IN THIS STUDY. I WILL RECEIVE A COPY OF THIS CONSENT FORM FOR MY INFORMATION.
______Participant (Print) Date
______Participant (Signature) Date
______Signature of Person Obtaining Consent (role in Study) Date
(Revised 7-25-05)
Appendix D
Dundee Stress State Questionnaire (DSSQ)
PRE-STATE QUESTIONNAIRE
General Instructions. This questionnaire is concerned with your feelings and thoughts at the moment. We would like to build up a detailed picture of your current state of mind, so there are quite a few questions, divided into four sections. Please answer every question, even if you find it difficult. Answer, as honestly as you can, what is true of you. Please do not choose a reply just because it seems like the 'right thing to say'. Your answers will be kept entirely confidential. Also, be sure to answer according to how you feel AT THE MOMENT. Don't just put down how you usually feel. You should try and work quite quickly: there is no need to think very hard about the answers. The first answer you think of is usually the best.
Before you start, please provide some general information about yourself.
Age: ...... (years)
Sex (circle one): M   F
Occupation: ......
If student, state your course: ......
Date today: ......
Time of day now: ......
1. MOOD STATE
First, there is a list of words which describe people's moods or feelings. Please indicate how well each word describes how you feel AT THE MOMENT. For each word, circle the answer from 1 to 4 which best describes your mood.
Definitely = 1   Slightly = 2   Slightly Not = 3   Definitely Not = 4

1. Happy 1 2 3 4
2. Dissatisfied 1 2 3 4
3. Energetic 1 2 3 4
4. Relaxed 1 2 3 4
5. Alert 1 2 3 4
6. Nervous 1 2 3 4
7. Passive 1 2 3 4
8. Cheerful 1 2 3 4
9. Tense 1 2 3 4
10. Jittery 1 2 3 4
11. Sluggish 1 2 3 4
12. Sorry 1 2 3 4
13. Composed 1 2 3 4
14. Depressed 1 2 3 4
15. Restful 1 2 3 4
16. Vigorous 1 2 3 4
17. Anxious 1 2 3 4
18. Satisfied 1 2 3 4
19. Unenterprising 1 2 3 4
20. Sad 1 2 3 4
21. Calm 1 2 3 4
22. Active 1 2 3 4
23. Contented 1 2 3 4
24. Tired 1 2 3 4
25. Impatient 1 2 3 4
26. Annoyed 1 2 3 4
27. Angry 1 2 3 4
28. Irritated 1 2 3 4
29. Grouchy 1 2 3 4
2. MOTIVATION
Please answer some questions about your attitude to the task you are about to do. Rate your agreement with the following statements by circling one of the following answers:
Extremely = 4 Very much = 3 Somewhat = 2 A little bit = 1 Not at all = 0
1. I expect the content of the task will be interesting 0 1 2 3 4
2. The only reason to do the task is to get an external reward (e.g. payment) 0 1 2 3 4
3. I would rather spend the time doing the task on something else 0 1 2 3 4
4. I am concerned about not doing as well as I can 0 1 2 3 4
5. I want to perform better than most people do 0 1 2 3 4
6. I will become fed up with the task 0 1 2 3 4
7. I am eager to do well 0 1 2 3 4
8. I would be disappointed if I failed to do well on the task 0 1 2 3 4
9. I am committed to attaining my performance goals 0 1 2 3 4
10. Doing the task is worthwhile 0 1 2 3 4
11. I expect to find the task boring 0 1 2 3 4
12. I feel apathetic about my performance 0 1 2 3 4
13. I want to succeed on the task 0 1 2 3 4
14. The task will bring out my competitive drives 0 1 2 3 4
15. I am motivated to do the task 0 1 2 3 4
3. THINKING STYLE
In this section, we are concerned with your thoughts about yourself: how your mind is working, how confident you feel, and how well you expect to perform on the task. Below are some statements which may describe your style of thought RIGHT NOW. Read each one carefully and indicate how true each statement is of your thoughts AT THE MOMENT. To answer, circle one of the following answers:

Extremely = 4 Very much = 3 Somewhat = 2 A little bit = 1 Not at all = 0

1. I'm trying to figure myself out. 0 1 2 3 4
2. I'm very aware of myself. 0 1 2 3 4
3. I'm reflecting about myself. 0 1 2 3 4
4. I'm daydreaming about myself. 0 1 2 3 4
5. I'm thinking deeply about myself. 0 1 2 3 4
6. I'm attending to my inner feelings. 0 1 2 3 4
7. I'm examining my motives. 0 1 2 3 4
8. I feel that I'm off somewhere watching myself. 0 1 2 3 4
9. I feel confident about my abilities. 0 1 2 3 4
10. I am worried about whether I am regarded as a success or failure. 0 1 2 3 4
11. I feel self-conscious. 0 1 2 3 4
12. I feel as smart as others. 0 1 2 3 4
13. I am worried about what other people think of me. 0 1 2 3 4
14. I feel confident that I understand things. 0 1 2 3 4
15. I feel inferior to others at this moment. 0 1 2 3 4
16. I feel concerned about the impression I am making. 0 1 2 3 4
17. I feel that I have less scholastic ability right now than others. 0 1 2 3 4
18. I am worried about looking foolish. 0 1 2 3 4
19. My attention is directed towards things other than the task. 0 1 2 3 4
20. I am finding physical sensations such as muscular tension distracting. 0 1 2 3 4
21. I expect my performance will be impaired by thoughts irrelevant to the task. 0 1 2 3 4
22. I have too much to think about to be able to concentrate on the task. 0 1 2 3 4
23. My thinking is generally clear and sharp. 0 1 2 3 4
24. I will find it hard to maintain my concentration for more than a short time. 0 1 2 3 4
25. My mind is wandering a great deal. 0 1 2 3 4
26. My thoughts are confused and difficult to control. 0 1 2 3 4
27. I expect to perform proficiently on this task. 0 1 2 3 4
28. Generally, I feel in control of things. 0 1 2 3 4
29. I can handle any difficulties I encounter. 0 1 2 3 4
30. I consider myself skillful at the task. 0 1 2 3 4
4. THINKING CONTENT
This set of questions concerns the kinds of thoughts that go through people's heads at particular times, for example while they are doing some task or activity. Below is a list of thoughts, some of which you might have had recently. Please indicate roughly how often you had each thought DURING THE LAST TEN MINUTES or so, by circling a number from the list below.
1= Never 2= Once 3= A few times 4= Often 5= Very often
1. I thought about how I should work more carefully. 1 2 3 4 5
2. I thought about how much time I had left. 1 2 3 4 5
3. I thought about how others have done on this task. 1 2 3 4 5
4. I thought about the difficulty of the problems. 1 2 3 4 5
5. I thought about my level of ability. 1 2 3 4 5
6. I thought about the purpose of the experiment. 1 2 3 4 5
7. I thought about how I would feel if I were told how I performed. 1 2 3 4 5
8. I thought about how often I get confused. 1 2 3 4 5
9. I thought about members of my family. 1 2 3 4 5
10. I thought about something that made me feel guilty. 1 2 3 4 5
11. I thought about personal worries. 1 2 3 4 5
12. I thought about something that made me feel angry. 1 2 3 4 5
13. I thought about something that happened earlier today. 1 2 3 4 5
14. I thought about something that happened in the recent past (last few days, but not today). 1 2 3 4 5
15. I thought about something that happened in the distant past. 1 2 3 4 5
16. I thought about something that might happen in the future. 1 2 3 4 5
POST-STATE QUESTIONNAIRE
General Instructions
This questionnaire is concerned with your feelings and thoughts while you were performing the task. We would like to build up a detailed picture of your current state of mind, so there are quite a few questions, divided into six sections. Please answer every question, even if you find it difficult. Answer, as honestly as you can, what is true of you. Please do not choose a reply just because it seems like the 'right thing to say'. Your answers will be kept entirely confidential. Also, be sure to answer according to how you felt WHILE PERFORMING THE TASK. Don't just put down how you usually feel. You should try and work quite quickly: there is no need to think very hard about the answers. The first answer you think of is usually the best.
1. MOOD STATE
First, there is a list of words which describe people's moods or feelings. Please indicate how well each word describes how you felt WHILE PERFORMING THE TASK. For each word, circle the answer from 1 to 4 which best describes your mood.
Definitely = 1   Slightly = 2   Slightly Not = 3   Definitely Not = 4

1. Happy 1 2 3 4
2. Dissatisfied 1 2 3 4
3. Energetic 1 2 3 4
4. Relaxed 1 2 3 4
5. Alert 1 2 3 4
6. Nervous 1 2 3 4
7. Passive 1 2 3 4
8. Cheerful 1 2 3 4
9. Tense 1 2 3 4
10. Jittery 1 2 3 4
11. Sluggish 1 2 3 4
12. Sorry 1 2 3 4
13. Composed 1 2 3 4
14. Depressed 1 2 3 4
15. Restful 1 2 3 4
16. Vigorous 1 2 3 4
17. Anxious 1 2 3 4
18. Satisfied 1 2 3 4
19. Unenterprising 1 2 3 4
20. Sad 1 2 3 4
21. Calm 1 2 3 4
22. Active 1 2 3 4
23. Contented 1 2 3 4
24. Tired 1 2 3 4
25. Impatient 1 2 3 4
26. Annoyed 1 2 3 4
27. Angry 1 2 3 4
28. Irritated 1 2 3 4
29. Grouchy 1 2 3 4
2. MOTIVATION
Please answer the following questions about your attitude to the task you have just done. Rate your agreement with the following statements by circling one of the following answers:
Extremely = 4 Very much = 3 Somewhat = 2 A little bit = 1 Not at all = 0
1. The content of the task was interesting 0 1 2 3 4
2. The only reason to do the task is to get an external reward (e.g. payment) 0 1 2 3 4
3. I would rather have spent the time doing the task on something else 0 1 2 3 4
4. I was concerned about not doing as well as I can 0 1 2 3 4
5. I wanted to perform better than most people do 0 1 2 3 4
6. I became fed up with the task 0 1 2 3 4
7. I was eager to do well 0 1 2 3 4
8. I would be disappointed if I failed to do well on this task 0 1 2 3 4
9. I was committed to attaining my performance goals 0 1 2 3 4
10. Doing the task was worthwhile 0 1 2 3 4
11. I found the task boring 0 1 2 3 4
12. I felt apathetic about my performance 0 1 2 3 4
13. I wanted to succeed on the task 0 1 2 3 4
14. The task brought out my competitive drives 0 1 2 3 4
15. I was motivated to do the task 0 1 2 3 4
3. THINKING STYLE
In this section, we are concerned with your thoughts about yourself: how your mind is working, how confident you feel, and how well you believed you performed on the task. Below are some statements which may describe your style of thought during task performance. Read each one carefully and indicate how true each statement was of your thoughts WHILE PERFORMING THE TASK. To answer, circle one of the following answers:

Extremely = 4 Very much = 3 Somewhat = 2 A little bit = 1 Not at all = 0

1. I tried to figure myself out. 0 1 2 3 4
2. I was very aware of myself. 0 1 2 3 4
3. I reflected about myself. 0 1 2 3 4
4. I daydreamed about myself. 0 1 2 3 4
5. I thought deeply about myself. 0 1 2 3 4
6. I attended to my inner feelings. 0 1 2 3 4
7. I examined my motives. 0 1 2 3 4
8. I felt that I was off somewhere watching myself. 0 1 2 3 4
9. I felt confident about my abilities. 0 1 2 3 4
10. I was worried about whether I am regarded as a success or failure. 0 1 2 3 4
11. I felt self-conscious. 0 1 2 3 4
12. I felt as smart as others. 0 1 2 3 4
13. I was worried about what other people think of me. 0 1 2 3 4
14. I felt confident that I understood things. 0 1 2 3 4
15. I felt inferior to others. 0 1 2 3 4
16. I felt concerned about the impression I was making. 0 1 2 3 4
17. I felt that I had less scholastic ability than others. 0 1 2 3 4
18. I was worried about looking foolish. 0 1 2 3 4
19. My attention was directed towards things other than the task. 0 1 2 3 4
20. I found physical sensations such as muscular tension distracting. 0 1 2 3 4
21. My performance was impaired by thoughts irrelevant to the task. 0 1 2 3 4
22. I had too much to think about to be able to concentrate on the task. 0 1 2 3 4
23. My thinking was generally clear and sharp. 0 1 2 3 4
24. I found it hard to maintain my concentration for more than a short time. 0 1 2 3 4
25. My mind wandered a great deal. 0 1 2 3 4
26. My thoughts were confused and difficult to control. 0 1 2 3 4
27. I performed proficiently on this task. 0 1 2 3 4
28. Generally, I felt in control of things. 0 1 2 3 4
29. I was able to handle any difficulties I encountered. 0 1 2 3 4
30. I consider myself skillful at the task. 0 1 2 3 4
4. THINKING CONTENT
This set of questions concerns the kinds of thoughts that go through people's heads at particular times, for example while they are doing some task or activity. Below is a list of thoughts, some of which you might have had recently. Please indicate roughly how often you had each thought during THE LAST TEN MINUTES (while performing the task), by circling a number from the list below. 1= Never 2= Once 3= A few times 4= Often 5= Very often
1. I thought about how I should work more carefully. 1 2 3 4 5 2. I thought about how much time I had left. 1 2 3 4 5 3. I thought about how others have done on this task. 1 2 3 4 5 4. I thought about the difficulty of the problems. 1 2 3 4 5 5. I thought about my level of ability. 1 2 3 4 5 6. I thought about the purpose of the experiment. 1 2 3 4 5 7. I thought about how I would feel if I were told how I performed. 1 2 3 4 5 8. I thought about how often I get confused. 1 2 3 4 5 9. I thought about members of my family. 1 2 3 4 5 10. I thought about something that made me feel guilty. 1 2 3 4 5 11. I thought about personal worries. 1 2 3 4 5 12. I thought about something that made me feel angry. 1 2 3 4 5 13. I thought about something that happened earlier today. 1 2 3 4 5 14. I thought about something that happened in the recent past 1 2 3 4 5 (last few days, but not today). 15. I thought about something that happened in the distant past 1 2 3 4 5 16. I thought about something that might happen in the future. 1 2 3 4 5
5. OPINIONS OF THE TASK
Next, please answer some questions about the task. Please indicate what you thought of the task while you were performing it. Please try to rate the task itself rather than your personal reactions to it. For each adjective or sentence circle the appropriate number, on the six point scales provided (where 0 = not at all to 5 = very much so).
Threatening 0 1 2 3 4 5
Enjoyable 0 1 2 3 4 5
Fearful 0 1 2 3 4 5
Exhilarating 0 1 2 3 4 5
Worrying 0 1 2 3 4 5
Informative 0 1 2 3 4 5
Frightening 0 1 2 3 4 5
Challenging 0 1 2 3 4 5
Terrifying 0 1 2 3 4 5
Stimulating 0 1 2 3 4 5
Hostile 0 1 2 3 4 5
Exciting 0 1 2 3 4 5
The task was a situation:
Which was likely to get out of control 0 1 2 3 4 5
In which you were unsure of how much influence you have 0 1 2 3 4 5
In which somebody else was to blame for difficulties 0 1 2 3 4 5
In which you had to hold back from doing what you really want 0 1 2 3 4 5
Which you could deal with effectively 0 1 2 3 4 5
In which efforts to change the situation tended to make it worse 0 1 2 3 4 5
In which other people made it difficult to deal with the problem 0 1 2 3 4 5
Which was just too much for you to cope with 0 1 2 3 4 5
6. DEALING WITH PROBLEMS
Finally, think about how you dealt with any difficulties or problems which arose while you were performing the task. Below are listed some options for dealing with problems such as poor performance or negative reactions to doing the task. Please indicate how much you used each option, specifically as a deliberately chosen way of dealing with problems. To answer circle one of the following answers:
Extremely = 4 Very much = 3 Somewhat = 2 A little bit = 1 Not at all = 0
I ...
1. Worked out a strategy for successful performance 0 1 2 3 4
2. Worried about what I would do next 0 1 2 3 4
3. Stayed detached or distanced from the situation 0 1 2 3 4
4. Decided to save my efforts for something more worthwhile 0 1 2 3 4
5. Blamed myself for not doing better 0 1 2 3 4
6. Became preoccupied with my problems 0 1 2 3 4
7. Concentrated hard on doing well 0 1 2 3 4
8. Focused my attention on the most important parts of the task 0 1 2 3 4
9. Acted as though the task wasn't important 0 1 2 3 4
10. Didn't take the task too seriously 0 1 2 3 4
11. Wished that I could change what was happening 0 1 2 3 4
12. Blamed myself for not knowing what to do 0 1 2 3 4
13. Worried about my inadequacies 0 1 2 3 4
14. Made every effort to achieve my goals 0 1 2 3 4
15. Blamed myself for becoming too emotional 0 1 2 3 4
16. Was single-minded and determined in my efforts to overcome any problems 0 1 2 3 4
17. Gave up the attempt to do well 0 1 2 3 4
18. Told myself it wasn't worth getting upset 0 1 2 3 4
19. Was careful to avoid mistakes 0 1 2 3 4
20. Did my best to follow the instructions for the task 0 1 2 3 4
21. Decided there was no point in trying to do well 0 1 2 3 4
Appendix E
Summary Tables of Statistical Analysis
Table E1. Analysis of Variance for Practice Detection Rate Scores

Source   df    MS       F       p
Groups   6     37.874   1.094   >.05
S/G      133   34.615
Table E2. Analysis of Variance for Percent Correct Detections within the Detection/Action Composite Group
Source   df   MS        F      p
Groups   2    32.217    .264   >.05
S/G      57   122.107
Table E3. Analysis of Variance for Percent Correct Detections within the Detection-Only Composite Group
Source   df   MS        F       p
Groups   3    400.411   1.359   >.05
S/G      76   294.457
Table E4. Analysis of Variance for Correct Detection Scores
Source             df    df adj    MS        F       p
Between Subjects
  Groups (A)       1     ----      967.314   4.362   <.05
  S/G              138   ----      221.772
Within Subjects
  Periods (B)      2     1.989     653.410   9.558   <.001
  AB               2     1.989     85.396    1.249   >.05
  B × S/G          276   274.508   68.361
Note: dfadj. = degrees of freedom obtained when Box’s ε is used to correct for violations of sphericity. Box’s ε = .995.
Table E5. Analysis of Variance for the NASA-TLX Global Workload Scores

Source   df    MS         F        p
Groups   6     3233.774   14.328   <.001
S/G      133   225.691
Table E6. Analysis of Variance for the NASA-TLX Subscale Scores
Source              df     df adj.        MS           F        p
Between Subjects
  Groups (A)         6      ----       142212.476    14.075    <.001
  S/G              133      ----        10104.070
Within Subjects
  Subscales (B)      4      3.396      445399.810    26.218    <.001
  AB                24     20.375       33657.116     1.981    <.01
  B × S/G          532    451.642       16988.467
Note: df adj. = degrees of freedom obtained when Box's ε is used to correct for violations of sphericity. Box's ε = .849.
Table E7. Analysis of Variance for Groups within the NASA-TLX Mental Demand Subscale
Source        df        MS          F       p
Groups         6    138543.274   10.115   <.05
S/G          133     13696.645
Table E8. Analysis of Variance for Groups within the NASA-TLX Temporal Demand Subscale
Source        df        MS          F       p
Groups         6     61752.798    3.747   <.05
S/G          133     16482.434
Table E9. Analysis of Variance for Groups within the NASA-TLX Performance Subscale

Source        df        MS          F       p
Groups         6      3553.095     .487   >.05
S/G          133      7296.316
Table E10. Analysis of Variance for Groups within the NASA-TLX Effort Subscale
Source        df        MS          F       p
Groups         6     17149.940    1.667   >.05
S/G          133     10290.733
Table E11. Analysis of Variance for Groups within the NASA-TLX Frustration Subscale
Source        df        MS          F       p
Groups         6     35506.429    1.773   >.05
S/G          133     20027.491
Table E12. Analysis of Variance for Groups within the Pre-DSSQ Worry Dimension
Source        df        MS        F       p
Groups         6       .173      .190    >.05
S/G          133       .910
Table E13. Analysis of Variance for Groups within the Pre-DSSQ Engagement Dimension
Source        df        MS        F       p
Groups         6       .709     1.290    >.05
S/G          133       .550
Table E14. Analysis of Variance for Groups within the Pre-DSSQ Distress Dimension

Source        df        MS        F       p
Groups         6       .869     1.028    >.05
S/G          133       .845
Table E15. Analysis of Variance of the Change Scores for the Detection/Action Composite Group Members within the DSSQ Worry Dimension
Source        df        MS        F       p
Groups         2       .351      .438    >.05
S/G           57       .802
Table E16. Analysis of Variance of the Change Scores for the Detection/Action Composite Group Members within the DSSQ Engagement Dimension
Source        df        MS        F       p
Groups         2       .013      .014    >.05
S/G           57       .945
Table E17. Analysis of Variance of the Change Scores for the Detection/Action Composite Group Members within the DSSQ Distress Dimension
Source        df        MS        F       p
Groups         2       .411      .367    >.05
S/G           57      1.121
Table E18. Analysis of Variance of the Change Scores for the Detection-Only Composite Group Members within the DSSQ Worry Dimension
Source        df        MS        F       p
Groups         3       .597      .736    >.05
S/G           76       .811
Table E19. Analysis of Variance of the Change Scores for the Detection-Only Composite Group Members within the DSSQ Engagement Dimension

Source        df        MS        F       p
Groups         3       .305      .380    >.05
S/G           76       .803
Table E20. Analysis of Variance of the Change Scores for the Detection-Only Composite Group Members within the DSSQ Distress Dimension
Source        df        MS        F       p
Groups         3       .273      .309    >.05
S/G           76       .884
Table E21. Analysis of Variance for the Composite Groups on the DSSQ Dimensions
Source              df     df adj.      MS        F        p
Between Subjects
  Groups (A)         1      ----        .083     .081     >.05
  S/G              138      ----       1.019
Within Subjects
  Dimensions (B)     2      1.749     83.080   92.106    <.001
  AB                 2      1.749      4.505    4.994    <.01
  B × S/G          276    241.393       .902
Note: df adj. = degrees of freedom obtained when Box's ε is used to correct for violations of sphericity. Box's ε = .875.
Table E22. Analysis of Variance for the Composite Groups on the DSSQ Worry Dimension
Source        df        MS        F       p
Groups         1       .193      .242    >.05
S/G          138       .796
Table E23. Analysis of Variance for the Composite Groups on the DSSQ Engagement Dimension

Source        df        MS        F       p
Groups         1      3.768     4.491    <.05
S/G          138       .839
Table E24. Analysis of Variance for the Composite Groups on the DSSQ Distress Dimension
Source        df        MS        F       p
Groups         1      4.003     4.161    <.05
S/G          138       .962